Why do robots have smiley faces?

By SMU City Perspectives team

There is a reason why engineers and designers provide machines with the semblance of friendliness, but it takes more than that to establish trust between AI and humans.

I was at Promenade MRT Station waiting for a train to work when my attention was drawn to a little boy pointing at a cleaning robot moving in my direction. It was making comforting gurgling noises, its eyes were blinking gently, and soft music accompanied its movements.

"Look at its smiley face!" said the boy, and we all boarded the driverless train without a worry.

Remember the days before the pandemic when we could taxi to Changi Airport to go on a holiday or business trip? Did it ever cross your mind that once we were airborne, the pilot switched to autopilot and automation flew the plane?

But when someone suggests rolling out an autonomous vehicle to transport us securely in the city, many of us fear for our own safety and that of other road users, despite the fact that most road accidents are caused by human error. Fear associated with AI is often based more on imagined than actual risks, but it is fear all the same.

And AI poses very real concerns for the future of work and the privacy of personal data, concerns that deserve much informed discussion.

There is no doubt that AI has made life more convenient in many ways. Who uses cash anymore? Digital payment systems have become the norm, and online platforms get our food delivered and allow shopping to continue when we worry about visiting malls during the ongoing pandemic.

Yet the interaction between humans and AI is not always easy to predict. How often have you been frustrated communicating with a chatbot? But we do not want to wait in queues, and we are much happier when we can gain entry to a concert with the flash of a card.

More fundamental issues face citizens in Singapore, as this smart city becomes more dependent on mass data sharing and AI-assisted tech. Sensors are being widely positioned in housing estates to better manage usage and ensure safety. But would we be happy having a sensor in every room in the house?

A matter of trust

This takes us back to the robot and the smiley face. It is no coincidence that engineers and designers give robots a human appearance and have them display reassuring emotions as we come face-to-face with the future. The reason is to establish that elusive bond called trust.

Unfortunately, AI is often applied and data shared without including communities in the decision process. It might generally be a good thing to have robots assist in complicated surgery and lighten the nurses' load in caring for patients, but there may be situations in health services so private and personal that we want only humans involved. Who should determine that?

In an effort to improve trust between AI and humans, governments, including Singapore's, have policies to ensure that the technology is trustworthy and that data use is ethical.

Recently the European Union proposed a detailed set of guidelines for trustworthy robotics and ethical design. But sometimes, ethics and promises of trustworthiness may not be enough. When it was revealed that TraceTogether data could be shared with the police, many in Singapore doubted the privacy assurances regarding the technology, and trust was damaged.

What do people want?

The Centre for AI and Data Governance (CAIDG) at the Singapore Management University (SMU) wants to know how and why trust is established or challenged when humans and machines come together.

We believe that the most effective way to promote and sustain a trusted relationship is to locate AI in communities. With this in mind, we have launched our ground-breaking AI in community research and policy initiative.

This project aims to place people at the centre of the AI revolution and to give those most affected by technology the information they need to participate in a trusted AI future.

So this means much more than smiley faces on robots.

The United Nations is promoting what it calls its Sustainable Development Goals (SDGs). These range from physical concerns like access to clean water, to equitable universal healthcare, to better opportunities for a decent education, for girls in particular. Big-picture issues like climate change are also addressed.

In its programme to achieve and sustain these goals, the UN is looking to experts such as CAIDG to advise it on how AI might be applied effectively.

Experts have said that AI's influence will be trusted and maximised only if communities are included in decisions about AI's role before it is positioned to help realise the SDGs.

But is this just a lot of nice-sounding words about humans being in the loop? How can communities of people who know little about AI or its consequences be involved in its applications?

There are two answers. First, before AI is introduced, people need to be listened to about their fears and concerns, and these need to be relayed to AI designers and policymakers.

Second, the motivation for AI cannot just be about profit, as the Google AI for Social Good project, which CAIDG is a part of, recognises. It is true that AI can stimulate economic development. But communities need to trust and be confident that there is a public good purpose for AI from which they can collectively benefit.

CAIDG is giving special research emphasis to the following contexts in which AI and communities can come together:

- trustworthy innovation,

- ethical AI ecosystems,

- responsible Covid-19 surveillance tech,

- fair platform economies,

- safe autonomous vehicles,

- open finance,

- care robotics and the "Internet of bodies", that is, any tech that connects to the Internet and does diagnostic measurement of the body,

- citizen-centric smart cities, and

- people-focused personal data access and management.

It is also open to other areas of research focus that communities consider as important concerns.

If trust is at the heart of placing AI within communities, then researchers need to understand what makes trust tick, much more than just smiley faces on robots.

Professor Mark Findlay is director of the Centre for AI and Data Governance at Singapore Management University. This research is supported by the National Research Foundation Singapore under its Emerging Areas Research Projects Funding Initiative.