(Image: SeongJoon Cho/Bloomberg via Getty Images)
It was like playing against a wall. That’s how European Go champion Fan Hui described the experience of losing to AlphaGo, an artificial intelligence built by Google’s DeepMind team. “The problem is humans sometimes make very big mistakes, because we are human,” he said. “The program is not like this.”
Game-playing AIs are everywhere these days. As well as Go, the DeepMind team have pitted their algorithms against dozens of classic Atari video games. There are AI competitions involving first-person shooters like Unreal Tournament. And there are regular showdowns between bots and human players, such as the annual StarCraft AI competition.
Games are a great way to test AIs because they offer a range of challenges, says Julian Togelius at New York University. But as computers hone their gaming skills, we will need new ways to make them fun opponents.
Overall, we’re still winning. “The best StarCraft-playing programs can barely beat a beginner,” says Togelius. Computers are quickly closing the gap, though.
Togelius is interested in what’s known as general artificial intelligence – the kind of smarts that can be applied to many different problems. The problem is that you can’t achieve it by training an AI on one game. “You can’t just take AlphaGo and apply it to another problem, not even another game,” he says. “Deep Blue beat Kasparov in chess but can’t play checkers. The best StarCraft bot is worthless at Super Mario Bros.” The AI simply gets good at a specific task, and its skills aren’t transferable.
A general AI would be able to play many different games, even ones it has never seen before. Together with colleagues at the University of Essex, UK, and DeepMind, Togelius runs the General Video Game AI Competition – now in its third year – testing AIs across a variety of different arcade games. This year, events are planned for July, September and October.
To err is human
But to be truly entertaining opponents – as Fan found out – AIs need another human trait: the ability to make mistakes. “This is actually one of the biggest problems with AI for games,” says games developer Chris Hecker. “It’s hugely important to make them fallible.”
Hecker is working on a two-player game called SpyParty. One player controls a spy who must blend into a small crowd of computer-controlled guests at a cocktail party to avoid being identified. The other player controls a sniper trying to pick the spy off. The spy tries to act like a bot while the sniper looks for human slips.
Hecker wants to add a single-player mode in which an AI takes on either the spy or sniper role. “The big challenge is going to be making it feel like it’s fair and not cheating,” says Hecker.
The spy role will require the AI to occasionally make a slip so that it stands out from the other computer-controlled guests. The sniper role will be even trickier to get right. “It’s relatively trivial to make an AI that can kill a player every time, but making it feel like a worthy competitor and, more importantly, fun and interesting to play, is hard.”
One solution is to have the AI sniper let players know what tipped it off. “If it can remember and tell you, ‘I shot you because I saw you bug the ambassador,’ then that’s starting to be a conversation between the human and AI player that feels fair and natural,” says Hecker.
But knowing why you lost doesn’t help if you lose every time. So Hecker wants the AI to play like a human would – with mistakes. The idea is to have the AI sniper gradually build up a case against each of the guests based on their actions, but then have it forget things. “It is very hard as a human player to remember all the things each guest does, and I’ll have to model that,” he says.
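The mechanic Hecker describes can be sketched in a few lines. The code below is a hypothetical illustration, not taken from his game: the sniper accumulates remembered evidence against each guest, but each observation has a chance of never registering, so its case is imperfect in a human-like way, and it can always explain the evidence it does remember.

```python
import random

# Hypothetical sketch of a fallible AI sniper (illustrative only):
# it builds a case against each guest but sometimes fails to register
# what it saw, mimicking human forgetfulness.

class FallibleSniper:
    def __init__(self, forget_chance=0.3, seed=None):
        self.evidence = {}                 # guest -> remembered observations
        self.forget_chance = forget_chance
        self.rng = random.Random(seed)

    def observe(self, guest, action):
        # With some probability, the observation simply doesn't stick.
        if self.rng.random() < self.forget_chance:
            return
        self.evidence.setdefault(guest, []).append(action)

    def prime_suspect(self):
        # The guest with the most remembered suspicious actions.
        if not self.evidence:
            return None
        return max(self.evidence, key=lambda g: len(self.evidence[g]))

    def explain(self, guest):
        # Supports the "I shot you because..." conversation:
        # report exactly the evidence the sniper still remembers.
        return self.evidence.get(guest, [])
```

Tuning `forget_chance` trades competence for fairness: at 0 the sniper is the unbeatable machine Hecker calls trivial to build; higher values make it miss clues the way a human player would.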
The idea of fallible AI could open up whole new ways to play. James Ryan and colleagues at the University of California, Santa Cruz, are developing a game called Talk of the Town, which simulates a small community. The player has to investigate a death by interviewing characters who misremember and lie.
Unreliable characters like this are common in novels and TV dramas, but not in video games. When they appear, as in L.A. Noire, released in 2011, their false memories and lies are scripted.
This will not be the case in Talk of the Town. Each character is played by an AI agent with a mental model of the town and townsfolk. As the game proceeds, characters pick up information, some of it incorrect. They also share information with each other and have to choose whether or not to believe what they hear.
On top of this, characters’ memories fade or get muddled as the game progresses. “If one agent believes another works at a certain bar in town, they might come to believe that the character works at a different bar,” says Ryan – or even at a dentist’s.
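A belief system like the one Ryan describes can be sketched as follows. This is a hypothetical toy model, not code from Talk of the Town: each character stores a believed workplace per person, beliefs can drift to a different plausible value over time, and characters decide whether to believe what they are told.

```python
import random

# Hypothetical sketch of fallible character memory, loosely inspired
# by the Talk of the Town description (not the game's actual code).

class CharacterMind:
    def __init__(self, seed=None):
        self.beliefs = {}              # person -> believed workplace
        self.rng = random.Random(seed)

    def learn(self, person, workplace):
        # Direct observation: store the belief as-is.
        self.beliefs[person] = workplace

    def muddle(self, all_workplaces, drift_chance=0.2):
        # As time passes, each belief may drift to a different
        # plausible workplace - a bar becomes another bar, or a dentist's.
        for person, workplace in self.beliefs.items():
            if self.rng.random() < drift_chance:
                options = [w for w in all_workplaces if w != workplace]
                if options:
                    self.beliefs[person] = self.rng.choice(options)

    def hears(self, person, workplace, trust=0.5):
        # Gossip: the character chooses whether to believe what it hears.
        if self.rng.random() < trust:
            self.beliefs[person] = workplace
```

Because every character runs its own copy of this model and shares beliefs through `hears`, a single wrong memory can spread through the town, which is what gives the player's interviews their unreliable-witness flavour.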
The AI characters can also lie. “Lying is the most challenging aspect to model because lying is a very complex and nuanced human phenomenon,” says Ryan. “People lie about all sorts of things for all sorts of reasons.”
Fallibility will, of course, not be part of the job specification when AlphaGo takes on the world Go champion in Seoul, South Korea, next month. How might the outcome reflect on the human champion, in a part of the world where the game is taken very seriously?
“In China, Go is not just a game,” Fan told reporters after his defeat. “It is also a mirror on life. We say if you have a problem with your game, maybe you also have a problem in life.”