AI passes Go but has no monopoly on intelligence

by Edward Cone

People love to anthropomorphize machines. Usually, that’s no problem. You might name your car and sweet-talk it all the way to the gas station when the fuel light goes on, but you probably won’t mistake it for a person. Things get trickier as machines get smarter, so it’s important to remember that artificial intelligence is not human intelligence; for AI to live up to its promise, we need to understand how it thinks like us, and how it doesn’t.

Take last week’s first-ever victory of a computer over a top-level player in the ancient game of Go. “AlphaGo is clearly a form of highly tuned intelligence,” says David Krakauer, president of the Santa Fe Institute, about the program that defeated grandmaster Lee Se-dol, 4-1, in a five-game series.

Yet the software approaches the game differently from its human opponents. AlphaGo deals with the huge number of possible moves, says Krakauer, by randomly simulating “possible futures from any given position.” And it learns from experience.
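
Krakauer’s phrase is, in plain words, the random-playout idea behind Monte Carlo tree search, the search technique AlphaGo pairs with deep neural networks. The Python sketch below is not AlphaGo’s code; it only illustrates the bare idea on a deliberately tiny stand-in game (a single Nim pile, and hypothetical names like `NimState` and `rollout_value`, chosen here just to keep the example self-contained): rate each candidate move by playing many random games to the end and counting how often it wins.

```python
import random

# Toy stand-in game, used only to keep the sketch runnable: a single Nim pile
# where players alternately take 1-3 stones and whoever takes the last stone
# wins. (AlphaGo searches Go positions and guides its playouts with neural
# networks; only the random-playout idea is illustrated here.)
class NimState:
    def __init__(self, stones=5, player=0, last_mover=None):
        self.stones, self.player, self.last_mover = stones, player, last_mover

    def copy(self):
        return NimState(self.stones, self.player, self.last_mover)

    def legal_moves(self):
        return [n for n in (1, 2, 3) if n <= self.stones]

    def play(self, n):
        self.stones -= n
        self.last_mover, self.player = self.player, 1 - self.player

    def is_over(self):
        return self.stones == 0

    def winner(self):
        return self.last_mover  # whoever took the last stone


def rollout_value(state, playouts=200):
    """Estimate how good `state` is for its player to move by simulating
    many random games to the end and averaging the results."""
    wins = 0
    player = state.player
    for _ in range(playouts):
        sim = state.copy()
        while not sim.is_over():
            sim.play(random.choice(sim.legal_moves()))  # one random "possible future"
        if sim.winner() == player:
            wins += 1
    return wins / playouts


def choose_move(state, playouts=200):
    """Pick the move whose random playouts end in a win most often."""
    best_move, best_value = None, -1.0
    for move in state.legal_moves():
        nxt = state.copy()
        nxt.play(move)
        value = 1.0 - rollout_value(nxt, playouts)  # the opponent moves next
        if value > best_value:
            best_move, best_value = move, value
    return best_move


if __name__ == "__main__":
    # With 5 stones the winning move is to take 1, leaving a multiple of 4;
    # the random playouts reliably find it.
    print(choose_move(NimState(stones=5)))
```

Real systems replace the purely random playouts with learned policies and keep statistics over a search tree, but the core of simulating possible futures and scoring moves by how those futures turn out is the same.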

People can’t play the same way. Instead, we understand how a variety of possible board configurations can force game action in particular directions. “Humans construct higher-order patterns in order to determine successful moves,” says Krakauer. “We learn classes, not instances; we like the genera, not the species.”

As it happened, writes Cade Metz in Wired, Lee took a game from AlphaGo by thinking in a way his opponent did not – just as the computer had done to him earlier:

“In Game Two, the Google machine made a move that no human ever would. And it was beautiful. As the world looked on, the move so perfectly demonstrated the enormously powerful and rather mysterious talents of modern artificial intelligence. But in Game Four, the human made a move that no machine would ever expect. And it was beautiful too. Indeed, it was just as beautiful as the move from the Google machine—no less and no more.”

Complementary intelligences

Krakauer sees forms of intelligence everywhere – not just in humans and computers but in plants, microorganisms, even societies. His definition includes three key elements: inference, representation, and strategy. As explained about halfway through the first video here, intelligent entities can “adapt and learn and predict” (inference); encode information about their environments (representation); and use inference and representation to outcompete others (strategy).

So winning at strategy-intensive Go is a big step beyond beating humans at the famously tactical game of chess. But an AI that masters one sort of strategy may not be able to apply it to other challenges. AlphaGo, says Krakauer, “is exquisitely crafted to perform one task. It is like a precision tool, a digital caliper, and not at all like a hand. The complex world demands these more general solutions: nature favors hands over calipers.”

For the foreseeable future, these specialized machines will need human hands to guide them. Understanding their form of intelligence will help us use them wisely and well. AI will do many jobs better than people do them, and that’s going to cause some pain and dislocation. But as we work with the technology, our intelligence should grow, too; the greatest promise of AI is making humans smarter. Says Krakauer, “Great tools do not replace us, they extend us.”

Edward Cone is deputy director of Thought Leadership and head of the technology practice at Oxford Economics.