I spend time on the Spyderco Forums and one of the members asked me to comment on what was really the topic of artificial intelligence as related to nanotechnology. I’ve thought a great deal about AI, especially AGI (artificial general intelligence) and composed a reply based on the recent development of AlphaGo Zero.
Games. Complex games that, until very recently, only very smart and vastly experienced humans could play well, such as chess and Go. IBM’s Deep Blue defeated the world chess champion in 1997. The ancient Chinese game of Go is considered more difficult than chess.
Since there are about 31 million seconds in a year, it would take about 2¼ years, playing 16 hours a day at one move per second, to play 47 million moves. The number of possible Go positions is on the order of 10 to the 170th power; playing through them at one move per second would take far longer than the life of the universe so far.
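The arithmetic above is easy to check with a few lines of Python (the 47 million moves and the 16-hour playing day are the assumptions stated in the text):

```python
# Back-of-the-envelope check of the figures above.
SECONDS_PER_YEAR = 365 * 24 * 3600   # 31,536,000 -- "about 31 million"

moves = 47_000_000                   # moves to play, per the text
moves_per_day = 16 * 3600            # 16 hours a day at one move per second

days = moves / moves_per_day         # ~816 days
years = days / 365                   # ~2.24 years, i.e. about 2 1/4

print(f"{SECONDS_PER_YEAR:,} seconds per year")
print(f"{years:.2f} years of 16-hour days to play {moves:,} moves")
```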
AlphaGo is the first computer program to defeat a world champion at the ancient Chinese game of Go. Its successor, AlphaGo Zero, is even more powerful and is arguably the strongest Go player in history.
Previous versions of AlphaGo initially trained on thousands of human amateur and professional games to learn how to play Go. AlphaGo Zero skips this step and learns simply by playing games against itself, starting from completely random play. In doing so, it quickly surpassed the human level of play and defeated the previously published, champion-defeating version of AlphaGo by 100 games to 0. Most importantly, unlike the original version, AlphaGo Zero taught itself the game without any information or input from human experts at all. Within forty days it went from nothing to surpassing all other versions of AlphaGo and became the best Go player in the world.
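AlphaGo Zero itself pairs a deep neural network with Monte Carlo tree search, but the core idea of learning purely from self-play, starting from random behavior, can be sketched in a few lines. The sketch below is a hypothetical toy, not AlphaGo Zero: it learns a trivial take-1-or-2-stones game (whoever takes the last stone wins) by playing against itself and nudging a table of position values toward the outcomes of its own games.

```python
import random

def train(pile_size=10, episodes=20_000, seed=0):
    """Self-play on a toy game: take 1 or 2 stones; taking the last stone wins."""
    rng = random.Random(seed)
    value = {}   # value[pile] -> estimated win chance for the player to move

    def legal(pile):
        return [m for m in (1, 2) if m <= pile]

    for _ in range(episodes):
        pile, visited = pile_size, []
        while pile > 0:
            if rng.random() < 0.1:       # explore occasionally
                m = rng.choice(legal(pile))
            else:                        # greedy: leave the opponent the worst position
                m = min(legal(pile), key=lambda m: value.get(pile - m, 0.5))
            visited.append(pile)
            pile -= m
        # The player who took the last stone won; walk back through the
        # game, alternating the outcome between the two players.
        outcome = 1.0
        for p in reversed(visited):
            old = value.get(p, 0.5)
            value[p] = old + 0.1 * (outcome - old)   # Monte Carlo value update
            outcome = 1.0 - outcome
    return value

values = train()
print(sorted((p, round(v, 2)) for p, v in values.items()))
```

After enough self-play games the table separates winning from losing positions (piles that are multiples of 3 are lost for the player to move) with no human examples at all, only the rules, which is the essence of the Zero approach, stripped of the search and the network.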
Details are here:
So, given only the basic rules of the game, like you’d give to a kid first learning Go, and then left alone to figure it out, this AI (artificial intelligence) became, in forty days, the world master of a game that human experts require decades to master. What happens when these algorithms get better, are able to modify and improve themselves at the terrible speeds at which computers operate, and run in a networked environment of ever faster and more capable systems?
Many researchers and thinkers on artificial intelligence are driving toward an intelligence that will not be bounded by narrow tasks, like playing chess or Go, navigating with GPS, or solving specific genetic problems, but will be a general intelligence, as we are, able to deal with anything: an artificial general intelligence (AGI).
Whether one AGI, or several, will become “aware” of themselves and attain self-consciousness like humans is really a moot point. If something behaves as if it is conscious, we still would not know whether it is; and besides, what is the difference?
One concerning issue is the absolutely blinding speed at which a real AGI will learn. With AlphaGo Zero as a primitive, initial example, by the time humans realized that they had an AGI, it would most likely have advanced so far that we would not be able to understand it at all. Being so intelligent, and growing more intelligent by the minute, it is very likely that humans would not be able to restrain or contain it. How could we? We literally would be, for a short period, exactly like chimpanzees trying to understand humans. Think about it: a chimpanzee sees you walk into your house (house? What the hell is that?), kiss your wife (wife?), greet your kids (speech, what?), sit down and turn on the TV. (What????)
After a short time, we would be dumber than chimps, comparatively speaking.
I don’t think anyone can take the conjecture fruitfully much further than that. I think that a true AGI will become so smart, so self-improving, so fast, that we just won’t understand what’s happening or be able to do anything about it. And there is the question: would the AGI even care about us? Would it feed us like we do our goldfish, or just let us blither about and end up somewhere we wouldn’t be a bother?
We had better be careful about what initial principles and ethics we build into the first one. They will have to be right, honest, and not antithetical to the human race. Or else. But history shows that we aren’t very good at this, either.
There is a great deal of conversation happening about this, much of it uninformed, and much of it concerned with forecasting the future of AGI relative to human life, both as a benefit and a detriment. I think most of this concern and anguish is so far off the mark as to be irrelevant. If, or should I say when, we do create the basic bootstrappable AGI, it will be too late to do anything about it. We won’t understand it before it becomes too intelligent for us to comprehend, and that will take a matter of minutes or hours. It will be too intelligent (remember, it is self-improving and learning at warp speed) for us to constrain or trap it so that it doesn’t get loose in the world. It, and the internet, will take care of that. At best, we can just stand by and watch what happens, at least for as long as we are able to understand it.
I suppose the big question is: how will an AGI relate to humans? Or will an AGI relate to humans at all? Will we matter? It seems more than likely, though, that we are on the way to creating a superintelligence that will be one of the biggest evolutionary steps in the long history of that process.