VICTORY to the machines – again. Google’s AlphaGo software has defeated human Go grandmaster Lee Sedol 4-1 in a five-game series. Despite Lee coming back to win the fourth game (see page …), for many the realisation of what was taking place was stark. “I didn’t think AlphaGo would play the game in such a perfect manner,” Lee admitted in shock.
“My 5-year-old is more intelligent than AlphaGo. Any child is more able to deal with novel situations”
The showdown has drawn eyes from around the world – 30 million people watched it in China alone. Like Deep Blue checkmating chess grandmaster Garry Kasparov, or Watson answering questions on Jeopardy!, it represents a milestone in our relationship with machines.
But it is also a sign of things to come. The machine learning techniques behind AlphaGo are driving breakthroughs in many fields. Neural networks are software models, built from multiple layers of interlinked artificial neurons, that can learn and adapt based on the data they process. They drive everything from facial recognition software on your phone to virtual assistants like Apple’s Siri and software that diagnoses disease.
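The basic unit those networks stack into layers can be sketched in a few lines of code. This is only an illustrative toy (nothing like AlphaGo’s actual architecture): a single artificial neuron that adapts its weights from data via gradient descent, the “learn and adapt” step described above.

```python
import math

def sigmoid(x):
    # Squashing nonlinearity: maps any weighted sum into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def predict(weights, bias, inputs):
    # A neuron's output: weighted sum of inputs, passed through the nonlinearity.
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

def train_step(weights, bias, inputs, target, lr=0.5):
    # One gradient-descent update: nudge weights and bias to reduce the error
    # between the neuron's output and the target for this example.
    out = predict(weights, bias, inputs)
    grad = (out - target) * out * (1 - out)  # derivative of squared error w.r.t. the pre-activation
    new_weights = [w - lr * grad * x for w, x in zip(weights, inputs)]
    return new_weights, bias - lr * grad

# "Learning from data": show the neuron two labelled examples repeatedly.
weights, bias = [0.0, 0.0], 0.0
for _ in range(1000):
    weights, bias = train_step(weights, bias, [1.0, 0.0], 1.0)
    weights, bias = train_step(weights, bias, [0.0, 1.0], 0.0)

print(predict(weights, bias, [1.0, 0.0]))  # moves towards 1
print(predict(weights, bias, [0.0, 1.0]))  # moves towards 0
```

Real networks chain thousands of such units across many layers and train them on vast datasets, but the principle – adjust weights to shrink prediction error – is the same.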
And now software is learning to interact with physical things – one thing we are still better at. While DeepMind has been prepping for the big game, another Google team has been working on a more humble win.
In a video released last week, robotic claws dip and grab at household objects like scissors or sponges. They repeat …