Foom! I

Published on February 1, 2017 at 10:47 AM

According to The Register (remember when it was good?), Eric Schmidt recently opined that

“We are nowhere near that in real life, we’re still in baby stages of conceptual learning,” he said. “In order to worry about that you have to think 10, 20, 30, or 40 years into the future.”

"That" being the Singularity, natch. The Reg said that Schmidt said, in the words of The Reg "there’s no sign that the Singularity is on the horizon." I'm not sire what I could 40 years much less 10, but there you go, that's The Reg for you. It wasn't an Orlowski article at least.

AI has been around for a long time. Neural nets were invented in the 1940s, the term "artificial intelligence" was coined in the 1950s, a lot of the basic ideas and algorithms have been around since the 1960s, and people were saying in the 1970s much of what they're saying now. AI has always been 20 years away. So why is it different this time?

Moore's law, for one. 50 years is about 30 Moore's law doublings, or roughly a billionfold increase in the "power" of computers. Quantity has a quality all its own. A lot of things were simply impossible 50 years ago even if the algorithm existed. Today you can get a computer for less than $2000 that is as powerful as the most powerful computer on the planet 15 years ago, so the capacity for people to do stuff is vastly greater than it was only a couple of decades ago, even leaving aside the improved algorithms and out-of-the-box tools that exist now.

It is not as though A(G)I is suddenly going to become a bad idea. It is one of the best ideas anyone ever had. And there are vastly more research dollars and hours going into A(G)I R&D now than was the case 5 years ago, much less 15. Even if the current bubble bursts, it seems unlikely that A(G)I research won't continue. And we have seen many important breakthroughs in recent years: AlphaGo, Google Translate, that poker program, improved computer vision, pointers towards solutions of the frame problem and the grounding problem. We are awash with data, and for a lot of A(G)I problems lack of data was an issue in the past. Just having Wikipedia, much less the whole of the (Deep) Web, is something that researchers could only have dreamt of in 1987.
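As a back-of-the-envelope check on that billionfold figure above, here's the arithmetic (a sketch assuming a 20-month doubling period, which is one common reading of Moore's law):

```python
# Rough Moore's-law arithmetic: how many doublings fit in 50 years,
# and what multiplier that implies. The 20-month doubling period is
# an assumption; quoted figures range from 18 to 24 months.
DOUBLING_PERIOD_MONTHS = 20
YEARS = 50

doublings = YEARS * 12 / DOUBLING_PERIOD_MONTHS   # 600 / 20 = 30
multiplier = 2 ** doublings                       # 2^30

print(f"{doublings:.0f} doublings -> {multiplier:,.0f}x")
# 30 doublings -> 1,073,741,824x, i.e. about a billionfold
```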

So even if we extrapolate linearly from where we are in 2017 compared to where we were in 2012 or 2007 or 2002, we would get to quite an interesting place in 2022, 2027, and 2032. Think of all those graduate students who are going into A(G)I now. If only I were 25 years younger! (A blog for another day.) Glomming together a bunch of advanced technologies could produce at least an interesting demo.

Foom! comes from self-modifying systems whose speed of thought keeps increasing exponentially. There are (potential) constraints on a system going to "infinity" "quickly". But if we get powerful neuromorphic chips, some clever insights and new algorithms from all those researchers sucked into A(G)I, and tools to bring together a lot of existing algorithms and data, it's not that hard to imagine having a borderline seed AI by 2037.
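A toy model of that dynamic, purely illustrative: each "generation" the system redesigns itself and comes out faster, but a hard resource ceiling (the constraint) eventually caps the run-up. The growth factor and ceiling here are made-up numbers, not predictions.

```python
# Toy recursive self-improvement: speed multiplies each generation
# until a resource ceiling bites. All numbers are made up for
# illustration; nothing here is a prediction.
speed = 1.0       # "thoughts per unit time", arbitrary units
GROWTH = 1.5      # assumed per-generation improvement factor
CEILING = 1e6     # assumed hard physical/resource limit

for generation in range(40):
    speed = min(speed * GROWTH, CEILING)
    if speed >= CEILING:
        print(f"hit the ceiling at generation {generation}")
        break
else:
    print(f"still climbing after 40 generations: speed {speed:.3g}")
```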

At the 1995 Worldcon, I was on a panel where I opined that for AGI (we didn't call it that then) and molecular nanotechnology, 1995-2005 would be the research period, 2005-2015 the consolidation period, 2015-2025 the implementation period, and after 2025 all bets were off. That was 22 years ago this year. MNT hasn't amounted to much yet, though it's still a brilliant idea, but I was on the money (so far) with A(G)I.

The Singularity could come from someone in their parents' garage, although more likely from the GRU. I don't expect it before 2025, but technology can surprise us at times. I can easily imagine it in 2038. 40 years out is 2057. That's a lot of Moore's law even if we hit the limit on components per unit area soon (MNT, anyone?). So, from the Traveller/Vernor Vinge/Transhuman Space perspective, we have to assume that A(G)I is much harder than it might be. And given that we had 200,000+ years of H. sapiens before language, it's easy to think that language might be a software hack. Imagine another 40 years of NLP. There's a point at which you have enough powerful narrow domain systems that, if you string them together, you get something interesting (that can assimilate every book and paper ever written). I find it hard to imagine something won't come of that.
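To make the "string them together" idea concrete, here's a minimal sketch of the shape I have in mind: a pipeline of narrow systems, each good at one thing, composed into something broader. Every stage name below is a hypothetical stub, not a real system.

```python
# Minimal sketch: composing narrow domain systems into one pipeline.
# Each stage below is a hypothetical stub standing in for a real
# narrow system (OCR, translation, claim extraction, and so on).
from typing import Callable, List

Stage = Callable[[str], str]

def pipeline(stages: List[Stage]) -> Stage:
    """Chain narrow systems so each one's output feeds the next."""
    def run(text: str) -> str:
        for stage in stages:
            text = stage(text)
        return text
    return run

# Hypothetical narrow systems; each would be a large project in itself.
def ocr(page: str) -> str:             # scanned page -> raw text
    return page

def translate(text: str) -> str:       # any language -> English
    return text

def extract_claims(text: str) -> str:  # raw text -> structured claims
    return text

read_everything = pipeline([ocr, translate, extract_claims])
print(read_everything("every book and paper ever written"))
```

The point isn't the trivial plumbing; it's that once the individual stages are good enough, the composition can do something none of them can do alone.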
