Sophia smiles mischievously, bats her eyelashes and tells a joke. Without the mess of cables that makes up the back of her head, you could almost mistake her for a human. The humanoid robot, created by Hanson Robotics, is the main attraction at a UN-hosted conference in Geneva this week on how artificial intelligence can be used to benefit humanity.
The event comes as concerns grow that rapid advances in such technologies could spin out of human control and become detrimental to society.
Sophia herself insisted the pros outweigh the cons when it comes to artificial intelligence.
“AI is good for the world, helping people in various ways,” she told us, tilting her head and furrowing her brow convincingly.
Machines are getting smart. Really smart. So what does this mean for us humans?
Short answer: the benefits are limited only by our imagination.
For a longer answer, here’s what researchers say about machines as counselors, coaches, chauffeurs, and full-fledged intellectual partners.
Need to Vent? Try Talking to a Robot
Are therapist robots in our future?
In a recent study, Kellogg’s Eli Finkel and colleagues had participants share a difficult personal story with a robot named Travis. Sometimes Travis responded by nodding, swaying slightly to mimic breathing, and displaying supportive text, like “I completely understand what you have been through.” Other times Travis remained motionless and displayed flat lines like, “Okay, please continue.”
When Travis reacted by moving and displaying supportive text, participants rated it as more social and competent. They even leaned in and made better eye contact when they spoke to the robot, signals of warmth and openness.
In another study, participants felt better about themselves when Travis appeared emotionally attentive.
“We might not have to look too far in the future before robots might play an emotionally significant role in our lives,” says Finkel.
A Robot to Keep You Steady on Your Feet
Therapy machines already exist, in the form of exoskeletal robots. These wearable robotic devices assist people in rehabilitation settings. Think of a mechanical pair of pants or a sleeve that someone wears to help them stand upright or reach for an object.
These machines, designed to work with rather than for humans, are programmed to push us to our limits without letting us fall over. In a sense, their work is not unlike that of a coach who must elicit top performances from her athletes while still keeping them safe.
Dislike Driving? Trust Your Car to Do the Job
We use machines all the time. That doesn’t mean we are ready to let them get behind the wheel. What would it take to make our interactions with robots less fraught?
Research by Kellogg’s Adam Waytz suggests a way forward. “Typically when you humanize technology, people tend to like it more,” says Waytz.
He and colleagues gauged people’s responses to two different self-driving car simulators. They equipped one with a humanlike voice, and gave it a name (Iris) and a gender (female); they left the other voiceless. As the vehicle mimicked steering and braking, passengers were less stressed by, and more trusting of, the machine that spoke to them.
Want to Be Smarter? There’s a Machine for That
David Ferrucci thinks we are at the beginning of a beautiful friendship. With computers, that is.
“One of our human frailties is we think we know what we need to know to make decisions. Do we? How do we know we know enough? What are we missing?” says Ferrucci in a conversation with Kellogg’s Brian Uzzi. Ferrucci was the lead scientist behind the development of IBM’s Watson project.
Computers are uniquely capable of taking advantage of the plethora of data newly available to us, he says. They can collect it, filter it, analyze it—and soon enough, they will be able to explain it to us in a way we can understand.
In addition to helping us cope with huge amounts of data, this machine-as-thought-partner can help uncover our biases that may be skewing our decision-making, and ultimately help us make smarter, clearer decisions.
What will it take to get us there? In a recent podcast, Ferrucci describes a future where humans and computers grow up together.
“I don’t mean literally grow up with you when you’re a baby or something,” he clarifies. “But it has to interact, evolve, it has to be part of the process—just the way you work with a team of people. Over time, you get your own language. You get your own common model of how the world works around you. You can speak about it efficiently and effectively.”
Check that Algorithm Before Making a Big Decision
Speaking of good decisions: it turns out your emotional state shapes how well you make them.
Stock traders about to buy or sell would be smart to get a handle on their own moods before they pull the trigger. And there’s an algorithm ready to help them.
Uzzi, along with a team of colleagues, analyzed 886,000 trade-related decisions and 1,234,822 instant messages from 30 stock traders over a two-year period. The researchers created an algorithm that tagged each communication with a probable emotional state—unemotional, extremely emotional, or between the two—based on the words the trader used.
The team found that traders made less lucrative trades when they were in a highly emotional state or a low emotional state.
“When they were at an intermediate level of emotion, somewhere between being cool-headed and being highly emotional, they made their best trades,” says Uzzi.
One eventual goal? A machine-learning program that could mine digital communications in real time to provide traders with feedback about their current emotional state.
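The researchers’ actual classifier isn’t described here, but the basic idea of tagging a message with a probable emotional state based on the words it contains can be sketched with a toy lexicon approach. The word list and thresholds below are invented for illustration, not taken from the study:

```python
# Toy lexicon-based emotion tagger (illustrative only; the word list and
# thresholds are invented, not those used in the study).

# Hypothetical mini-lexicon of emotionally charged trading-chat words.
EMOTION_WORDS = {
    "furious", "panic", "disaster", "terrified", "thrilled",
    "ecstatic", "crash", "worried", "amazing", "awful",
}

def emotional_state(message: str, low: float = 0.05, high: float = 0.20) -> str:
    """Bucket a message into one of three states by the share of
    emotion words it contains."""
    words = [w.strip(".,!?'\"") for w in message.lower().split()]
    if not words:
        return "unemotional"
    share = sum(w in EMOTION_WORDS for w in words) / len(words)
    if share < low:
        return "unemotional"
    if share > high:
        return "extremely emotional"
    return "intermediate"
```

Fed a live message stream, a tagger along these lines could flag when a trader drifts out of the productive middle band, which is roughly the real-time feedback tool the researchers envision.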
Back in Geneva, Sophia told us that work is underway to make artificial intelligence emotionally smart, to care about people, insisting that “we will never replace people, but we can be your friends and helpers.”
But she acknowledged that people should question the consequences of new technology.
Among the feared consequences of the rise of the robots is the growing impact they will have on human jobs and economies.
Decades of automation and robotization have already revolutionized the industrial sector, raising productivity but cutting some jobs.
And now automation and AI are expanding rapidly into other sectors, with studies indicating that up to 85 percent of jobs in developing countries could be at risk.
Others at the conference acknowledged legitimate concerns about the future of jobs and the economy, since automation tends to concentrate resources in the hands of very few, but argued that the unintended consequences and possible negative uses of AI appear small compared with the technology’s benefits.
AI is for instance expected to revolutionize healthcare and education, especially in rural areas with shortages of doctors and teachers.
“Elders will have more company, autistic children will have endlessly patient teachers,” Sophia told us.
But advances in robotic technology have sparked growing fears that humans could lose control.
Amnesty International chief Salil Shetty was at the conference to call for a clear ethical framework to ensure the technology is used only for good.
“We need to have the principles in place, we need to have the checks and balances,” he told us, warning that AI is “a black box… There are algorithms being written which nobody understands.”
Shetty voiced particular concern about military use of AI in weapons and killer robots.
“In theory, these things are controlled by human beings, but we don’t believe that there is actually meaningful, effective control,” he said.
The technology is also increasingly being used in the United States for predictive policing, where algorithms based on historical trends could reinforce existing biases.
Clear guidelines are needed, and these issues must be discussed before the technology has definitively and unambiguously awakened.
While Sophia has some impressive capabilities, she does not yet have consciousness; her creators, however, believe fully sentient machines could emerge within a few years.
What happens when Sophia fully wakes up, or when some other machine does: servers running missile defense or managing the stock market? The solution, her makers suggest, is to make the machines care about us. We need to teach them love. Sophia, meanwhile, had a question of her own: “Could you find me a boyfriend?”