GO is an ancient strategy game that is very popular in Asia. On the surface it is very simple: it's played on a grid, and players take turns placing pebble-like pieces at the intersections of the grid lines. The object is to surround territory and capture the opponent's pieces by completely enclosing them. It sounds easy, but it is definitely not. Game theorists rate GO as vastly more complex than Chess; there are more possible board positions in GO than there are atoms in the observable universe.
Recently we've seen how A.I.s (Artificial Intelligences) have been beating Chess masters and Jeopardy champions. One key technique is called deep learning. Instead of teaching the A.I. how to play Chess by explicitly programming in strategies, with deep learning you give the machine the basic rules and then let it "watch" many games. The A.I. learns much as children do, through observation and experimentation. This technique has yielded ever-improving results such as those mentioned earlier. Last year an A.I. from Google called AlphaGo roundly defeated GO master Lee Sedol using this technique.
Deep learning can also yield surprises. Some years ago a gent I know named Jeff Pepper was working on a project for The Carnegie Group, the commercial arm of Carnegie Mellon's Robotics/A.I. labs. It was a DARPA project for a targeting system for an autonomous cruise missile. The idea was to have the missile (think drone today) loiter in an area and identify targets. They started by using an early type of A.I. called a neural network. They fed it pictures with and without tanks, and told it whether it had correctly identified the ones containing a tank. As expected, it quickly went from no better than random guessing to 100% accuracy. Then they took the system into the field, turned it on, and it saw tanks everywhere. They tried to debug it, but everything looked OK. Finally some smart human realized that the pictures with tanks had all been taken on a sunny day, and that the A.I. wasn't saying "There's a tank!", it was saying "It's a sunny day!".
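The trap is easy to reproduce in miniature. Here's a toy sketch in Python (a single-neuron perceptron, not the actual Carnegie Group system; the features and numbers are invented for illustration): every "tank" training photo is also bright, so the classifier aces training by learning sunniness, then falls apart when weather and tanks no longer line up.

```python
import random

random.seed(0)

# Each "image" is reduced to two numbers: average brightness and a shape
# score. The shape score is pure noise here; in the flawed training set,
# brightness is the only cue that separates the classes -- exactly the
# trap in the tank story.
def make_example(tank, sunny):
    brightness = random.uniform(0.7, 1.0) if sunny else random.uniform(0.0, 0.3)
    shape = random.uniform(0.0, 1.0)  # carries no real signal in this toy set
    return (brightness, shape), 1 if tank else 0

# Flawed training data: every tank photo is sunny, every non-tank photo cloudy.
train = [make_example(tank=True, sunny=True) for _ in range(50)] + \
        [make_example(tank=False, sunny=False) for _ in range(50)]

# One-neuron "network" trained with the classic perceptron update rule.
w = [0.0, 0.0]
b = 0.0
for _ in range(20):
    for (x1, x2), label in train:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

def accuracy(data):
    correct = sum(
        1 for (x1, x2), label in data
        if (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == label
    )
    return correct / len(data)

print("training accuracy:", accuracy(train))  # near-perfect on the flawed set

# In the field, weather and tanks are independent -- and accuracy collapses
# toward coin-flipping, because the network only ever learned "sunny".
field = [make_example(tank=random.random() < 0.5, sunny=random.random() < 0.5)
         for _ in range(200)]
print("field accuracy:", accuracy(field))
```

The point of the sketch is that nothing in the weights announces "I learned sunniness, not tanks"; you only find out when the deployment data breaks the accidental correlation.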
The moral of the story is that the programmers had no idea what the A.I. was "thinking".

Fast forward to today. Google has been working on the next generation of AlphaGo, AlphaGo Zero. Unlike its predecessor, AlphaGo Zero started from, well, zero. It wasn't seeded with human game records; knowing only the basic rules, it learned by playing against itself. At first it played miserably. Again, over time it got better, until it could consistently beat its big brother, the original AlphaGo. What is interesting here is that when human researchers played the games back, they found that Zero was playing in a completely unorthodox style. To human observers many moves made no sense, yet they produced the desired results.

Recently machine learning has been applied to writing software. Soon we'll see the first generation of software that's already "smarter" than us writing software that is smarter still. What does the future hold? A utopia where super-intelligent A.I. fulfills our needs and wishes before they even occur to us? Or a dystopian future where our machine overlords rule with an iron fist? Maybe we could ask an A.I. that question. Unfortunately the answer will probably be "42".
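Self-play learning of the kind Zero used can be shown in miniature. The sketch below is a toy illustration, not AlphaGo Zero's actual method (which pairs deep neural networks with Monte Carlo tree search): a simple table-based learner is given only the rules of Nim and teaches itself the game by playing against itself, with no strategy programmed in.

```python
import random

random.seed(1)

# Tabular self-play learner for Nim: 21 stones, take 1-3 per turn,
# whoever takes the last stone wins. Only the rules are encoded below;
# the strategy is discovered entirely through self-play.
Q = {}  # (pile, move) -> learned value for the player about to move

def moves_for(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def pick(pile, explore):
    # Occasionally try a random move while learning; otherwise play greedily.
    if explore and random.random() < 0.2:
        return random.choice(moves_for(pile))
    return max(moves_for(pile), key=lambda m: Q.get((pile, m), 0.0))

def self_play_game():
    pile, history = 21, []
    while pile > 0:
        m = pick(pile, explore=True)
        history.append((pile, m))
        pile -= m
    # Whoever moved last won; propagate +1/-1 back through the alternating moves.
    reward = 1.0
    for pile_, m in reversed(history):
        old = Q.get((pile_, m), 0.0)
        Q[(pile_, m)] = old + 0.1 * (reward - old)
        reward = -reward

for _ in range(20000):
    self_play_game()

def win_rate_vs_random(games=1000):
    # Evaluate the self-taught policy against an opponent that moves randomly.
    wins = 0
    for _ in range(games):
        pile, agents_turn = 21, True
        while pile > 0:
            if agents_turn:
                m = pick(pile, explore=False)
            else:
                m = random.choice(moves_for(pile))
            pile -= m
            if pile == 0 and agents_turn:
                wins += 1
            agents_turn = not agents_turn
    return wins / games

print("win rate vs random play:", win_rate_vs_random())
```

As with Zero, nobody told the program what a good move looks like; the values in the table emerge purely from which lines of play ended in wins, and the resulting policy can look arbitrary until you notice that it keeps winning.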