“Siri sucks,” Mark Stephen Meadows tells me. “She’s robotic. There’s nothing there. There’s no heart, no ingenuity.”
I first heard Meadows complain about robotic robots during a panel at the second annual Intelligent Assistants Conference, held in October in New York City. It was a chance for people who work in the burgeoning field of virtual assistants to showcase their wares.
With start-ups now engaged in creating their own digital Pepper Potts, these days there are assistants that can read and sort your email; book tables at restaurants or seats on airplanes; help buy, register and insure a new car; manage issues with your bank account; switch your electricity or gas provider to get you a better deal; and so on. The idea is to free both individual and corporate users from having to deal with actual people.
The highlight of the conference was the awards ceremony, where top prizes went to Amtrak’s Julie, a system that helps people book US rail tickets; an androgynous assistant from Telefonica Mexico; and a banking helper from ING Netherlands named Inge. Throughout, the judges praised the winners for their “personality”. And yet for all that, it seemed all the delegates could talk about was how to get prospective users to warm to these artificial servants. “If something doesn’t have a personality, then you don’t know how to interact with it,” says Meadows.
Our trust is exactly what the next generation of AI assistants will need. To be effective, they will have to juggle your most sensitive personal data, such as medical or financial records. That’s a big ask. How do you convince someone to trust a machine with this information – and to let it use that information to make decisions on their behalf?
To engender that trust, developers are resorting to some tired old tropes, including the sassy female assistant always at the ready – a cliché enumerated with gusto in a job ad earlier this year on the website of the UK’s Guardian newspaper, seeking a “literary assistant (cum Miss Money Penny, cum Bree Van de Kamp, cum Archetypal Muse, cum Lara Croft)”. A few studies suggest that people tend to find female voices more pleasing to listen to. Perhaps that explains why, from phone operators to fighter jet navigation systems, the voice of technological assistance has long been female.
And why so many existing intelligent assistants – Apple’s US version of Siri, Microsoft’s Cortana, Amazon’s Alexa, the computer answering search queries in Google commercials – have female voices.
If pop culture is a reliable guide, the AI assistants of the future won’t be deviating from that script. Think Her, the Hollywood movie about a man who falls in love with a computerised voice. Played by Scarlett Johansson, the voice is witty and alluring, thoughtful and knowing, instantly devoted to her human.
But in some places, including France and the UK, Siri is by default male. And in certain realms, like voice-overs for movie trailers, male voices still dominate. Communications scholar Clifford Nass suggests that’s because people tend to trust male voices more under certain circumstances. In his book The Man Who Lied to His Laptop, he details one telling case in which BMW was forced to recall a female-voiced navigation system from German cars in the 1990s, because drivers refused to take directions from a woman.
It’s obvious that people in different places and with different agendas want different things. Where one person prefers the American default female Siri, another person might prefer an Australian male voice. One type of assistant may have the edge over another in how it makes people feel or how efficiently they get things done, whether the activity in question is therapy or navigation or processing a banking request. As AI agents become more prevalent, they’ll need to be just as diverse as the people who use them.
More intriguingly though, maybe we all need assistants that change themselves, shifting gender, voice and emotional register in real time to suit our needs and moods.
That’s what Meadows is working on at Botanic, his start-up based in San Francisco. Last year, his team helped build Sophie, a “doctor” designed to dispense medical information and reminders via an app. More recently, they’ve worked on a financial adviser, a German avatar designed to help people better communicate with younger family members, and a new healthcare agent for the Chinese market.
His design process starts with a single question: who is this assistant for? The firm surveys potential users to find out whether they are mostly young or old, female or male, businessmen or artists. Other questions follow. Is the system supposed to be more of a companion or an assistant? What kind of things would the user want to discuss with it?
Next, they go through a series of sketches, trying to figure out what the agent should look like. Sometimes, it’s as simple as mimicking the user themselves: a teenage girl, for example, might want to talk to another teenage girl. “Humans trust people that look like them,” says Meadows.
Reflecting the user
Botanic’s creations can plug into technology that reads and reflects the user. Sophie, the iPad doctor, peers through the device’s camera and listens with the microphone for clues to the speaker’s emotional state. If someone seems distressed, for example, her face and words reflect concern.
Market research after launch also helps sculpt the avatar to its purpose. Once the Chinese agent is released around March of next year, Botanic will collect vocal and lexical data in an attempt to better understand how people talk to the system, and will then tweak it accordingly.
In the meantime, one unlikely inspiration for robot personality may be humans themselves. Many services rely on hidden human workers to fill in the gaps that the system doesn’t yet know how to handle. This can add a few accidental extras. Meadows describes one text-driven chatbot, developed at a previous company of his, that was fed answers by a human team to teach it what to say. Over time, the team members’ personal preferences quietly seeped into the software, giving it “a real interest in Catherine Zeta-Jones and pizza”, he says.
I didn’t ask whether anyone wanted to listen to it.
(Image: Oli Scarff/Getty)