
Computers built to mimic the brain can now recognise images and speech, and even create art – all because they are learning from the data we churn out online

Do androids dream of electric squid? (Image: Reservoir Lab at Ghent University)

I AM watching it have a very odd dream – psychedelic visions of brain tissue folds, interspersed with chunks of coral reef. The dreamer in question is an artificial intelligence, one that live-streams from a computer on the ground floor of the Technicum building at Ghent University, Belgium. The vision was conjured up after a viewer in the chat sidebar suggested “brain coral” as a topic.

It’s a fun distraction – and thousands of people have logged on to watch. But beyond that, the bot is a visual demonstration of a technology that is finally coming of age: neural networks.

The bot is called 317070, a name it shares with the Twitter handle of its creator, Ghent graduate student Jonas Degrave. It is based on a neural network that can recognise objects in images, except that Degrave runs it in reverse: starting from static noise, it tweaks the image until the network’s output tallies with whatever viewers are requesting online.

The bot’s live-stream page says it is “hallucinating”, although Degrave says “imagining” is a little more accurate.

Degrave’s experiment plays off recent Google research aimed at tackling one of the core issues with neural networks: no one really knows how they come up with their answers. The images a network creates to satisfy simple instructions can give us some insight.
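
In practice, “running a recogniser in reverse” means freezing the network’s weights and nudging the input image, step by step, until a chosen output lights up. Below is a minimal sketch of that idea in PyTorch; the tiny untrained classifier, the class index and the step count are illustrative stand-ins, not Degrave’s actual system.

import torch
import torch.nn as nn

# A small, untrained image classifier standing in for the real recogniser
classifier = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(8 * 4 * 4, 10),                  # 10 made-up object classes
)
for p in classifier.parameters():
    p.requires_grad_(False)                    # the network's weights stay fixed

target_class = 3                               # e.g. "brain coral" in a real system
image = torch.randn(1, 3, 64, 64, requires_grad=True)    # start from static noise
optimiser = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimiser.zero_grad()
    score = classifier(image)[0, target_class]
    (-score).backward()                        # climb the gradient of the class score
    optimiser.step()

# 'image' now holds whatever pattern most excites the chosen output

After a few hundred steps, whatever pattern most excites the chosen output is what remains in the image – roughly what viewers of the live stream see taking shape.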

Neural networks have been racing ahead of late. They can recognise different kinds of tumours in medical images. They have learned to play Super Mario World and can hold their own in the complex board game Go, performing as well as a moderately advanced human without planning ahead. Trained on a database of moves, the network takes the board layout as its input and outputs the best possible move.
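
That description amounts to a supervised “policy” network: the board goes in, a score for every point comes out, and training pushes the scores towards the move a strong player actually made from that position. Here is a rough sketch with made-up layer sizes and a single fake example – an illustration of the idea, not the published system.

import torch
import torch.nn as nn

BOARD = 19
policy_net = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * BOARD * BOARD, BOARD * BOARD),    # one score per board point
)
loss_fn = nn.CrossEntropyLoss()
optimiser = torch.optim.SGD(policy_net.parameters(), lr=0.01)

# One fake training example: a position and the move the expert actually played
position = torch.randn(1, 1, BOARD, BOARD)           # +1 black, -1 white, 0 empty
expert_move = torch.tensor([72])                      # index of the played point

logits = policy_net(position)                         # a score for every point
loss = loss_fn(logits, expert_move)                   # penalise disagreement with the expert
loss.backward()
optimiser.step()

best_move = policy_net(position).argmax(dim=1)        # the point the net would now play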

These days neural networks are involved in many of your interactions with your smartphone or any large internet company. “The first one we had was in Android phones in 2012 when they put in speech recognition,” says Yoshua Bengio of the University of Montréal in Quebec, Canada. “Now all the major speech recognition software uses them.”

In a few short years neural networks have overtaken established technologies to become the best way to automatically perform face recognition, read and understand text and interpret what’s happening in photographs and videos. And they are learning it all from us.

Whenever we use the internet or a smartphone, we are almost certainly contributing data to a deep learning system, one probably relying on neural networks that our data helped train in the first place. The most remarkable property of such systems is that they can process new kinds of data without having to be tinkered with (see “How do neural networks work?“).
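
In outline, a neural network is a stack of simple units: each takes a weighted sum of its inputs and pushes the result through a nonlinearity, and training means adjusting those weights to fit data. A toy forward pass in plain NumPy, with sizes chosen purely for illustration:

import numpy as np

rng = np.random.default_rng(0)

# Weights for two layers: 4 inputs -> 5 hidden units -> 3 output scores
W1, b1 = rng.normal(size=(4, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 3)), np.zeros(3)

def forward(x):
    hidden = np.maximum(0, x @ W1 + b1)   # weighted sum, then a nonlinearity (ReLU)
    return hidden @ W2 + b2               # raw scores, one per class

x = rng.normal(size=4)                    # any four numbers will do as input
print(forward(x))                         # three class scores; training would tune W1, W2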

Google was the first company to bring a neural network into our everyday lives. Deluged with data collected through its internet services, the company had every reason to build one. A lot of the cutting-edge work has been done by UK firm DeepMind since it was bought by Google, which is now believed to be using the firm’s technology in seven of its products.

Free training

Other internet companies like Facebook also have troves of data ripe for a neural network to analyse: billions of photos of faces, if tagged accurately, can be used to train a powerful face recognition system. The hallmark of Google’s and Facebook’s success is that the actions of ordinary users train the networks for free.

New hardware has helped too. “Ten years ago we were using regular computers and it wasn’t great,” says Bengio. “Then we realised we could use graphical processing units designed for playing video games and get a 20-fold speed-up. Specifically designed chips can give you a 100-fold increase.”

Neural networks already underpin state-of-the-art speech and image recognition, and are now tackling the sonic building blocks of speech to improve recognition of less common languages. Bengio thinks the next frontier will be in human-computer interaction. Neural networks will be the interface, learning and interpreting our behaviour and translating it into instructions the computer can carry out efficiently.

Next-generation smartphones could hold chips customised to run neural networks, putting adaptable learning systems in our pockets. Wearables from Fitbits to the Apple Watch will all feed data into new AI models that can recognise healthy behaviour such as regular exercise, or gauge walking speed.

The startling progress of neural networks raises other, more philosophical, questions. Is this how the first machine consciousness will be born? Watching 317070’s livestream already gives the uncanny sense that you are looking at a human-like consciousness at work. Each of its images is unique, generated through a process that even its creator doesn’t really understand.

But 317070’s dreaming is for its audience, not itself. It has no idea that it is having these dreams, nor the capacity to have ideas about anything that it is not told to think about.
