Longterm Wiki

Year in review: AlphaGo scores a win for artificial intelligence

web

Data Status

Not fetched

Cited by 1 page

Page | Type | Quality
Deep Learning Revolution Era | Historical | 44.0

Cached Content Preview

HTTP 200 · Fetched Feb 22, 2026 · 6 KB
Year in review: AlphaGo scores a win for artificial intelligence
 By Thomas Sumner 

 December 14, 2016 at 8:33 am 

 
 

 
 
 In a hotel ballroom in Seoul, South Korea, early in 2016, a centuries-old strategy game offered a glimpse into the fantastic future of computing.

 The computer program AlphaGo bested a world champion player at the Chinese board game Go, four games to one (SN Online: 3/15/16). The victory shocked Go players and computer gurus alike. “It happened much faster than people expected,” says Stuart Russell, a computer scientist at the University of California, Berkeley. “A year before the match, people were saying that it would take another 10 years for us to reach this point.”

 
 
 

 
 
 
 
 
 
 
 




 

 The match was a powerful demonstration of the potential of computers that can learn from experience. Elements of artificial intelligence are already a reality, from medical diagnostics to self-driving cars (SN Online: 6/23/16), and computer programs can even find the fastest routes through the London Underground. “We don’t know what the limits are,” Russell says. “I’d say there’s at least a decade of work just finding out the things we can do with this technology.”

 AlphaGo’s design mimics the way human brains tackle problems and allows the program to fine-tune itself based on new experiences. The system was trained using 30 million positions from 160,000 games of Go played by human experts. AlphaGo’s creators at Google DeepMind honed the software even further by having it play games against slightly altered versions of itself, a sort of digital “survival of the fittest.”
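
 The self-play refinement described above can be pictured as a simple selection loop: pit a slightly altered copy against the current best version and keep whichever wins. The sketch below is a toy illustration of that loop only, not DeepMind's actual training pipeline — here a "policy" is just a hypothetical scalar strength, and `play_game` is a stand-in for a full Go engine.

```python
import random

def play_game(strength_a, strength_b, rng):
    """Toy game: the winner is drawn in proportion to the two
    strengths. (Hypothetical stand-in for playing a real game.)"""
    return "a" if rng.random() < strength_a / (strength_a + strength_b) else "b"

def self_play_select(champion, challenger, games, rng):
    """Play an odd number of games and keep the majority winner --
    the 'survival of the fittest' idea from the article, in miniature."""
    wins = sum(play_game(champion, challenger, rng) == "a"
               for _ in range(games))
    return champion if wins * 2 >= games else challenger

rng = random.Random(0)
best = 1.0                                 # current champion's strength
for _ in range(5):                         # rounds of mutation + selection
    altered = best * (1 + rng.uniform(-0.1, 0.1))  # slightly altered copy
    best = self_play_select(best, altered, games=101, rng=rng)
```

 Because each altered copy differs only slightly from the champion, the loop drifts toward stronger play over many rounds rather than jumping to it in one step.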

 These learning experiences allowed AlphaGo to more efficiently sweat over its next move. Programs aimed at simpler games play out every single hypothetical game that could result from each available choice in a branching pattern — a brute-force approach to computing. But this technique becomes impractical for more complex games such as chess, so many chess-playing programs sample only a smaller subset of possible outcomes. That was true of Deep Blue, the computer that beat world chess champion Garry Kasparov in 1997.
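
 The brute-force approach works exactly as described for a game as small as tic-tac-toe: exhaustively play out every hypothetical game from the current position. The minimax sketch below does this for tic-tac-toe — a few hundred thousand positions — which is precisely what becomes hopeless at the scale of chess or Go.

```python
def winner(board):
    """Return 'X' or 'O' if a line is completed, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Exhaustively play out every hypothetical game from this
    position -- feasible here, impractical for chess, hopeless for Go."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if all(board):          # board full, no winner: a draw
        return 0
    scores = []
    for i in (i for i, cell in enumerate(board) if not cell):
        board[i] = player
        scores.append(minimax(board, "O" if player == "X" else "X"))
        board[i] = None     # undo the move before trying the next
    return max(scores) if player == "X" else min(scores)

# Perfect play from the empty board is a draw.
print(minimax([None] * 9, "X"))  # 0
```

 Sampling-based programs like Deep Blue's successors avoid this full enumeration by scoring only a subset of lines of play, trading exhaustiveness for depth.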

 But Go offers players many more choices than chess does. A full-sized Go board includes 361 playing spac

... (truncated, 6 KB total)
Resource ID: 94c1a5b48dd89194 | Stable ID: ZGRkYzI0Yz