When was AGI?

Published on June 1, 2025 at 10:47 AM

Jasmine Sun has a really interesting Substack about AGI and other topics, and a few weeks ago she published a post that got me thinking: when was AGI? She's talking to an SF AI researcher who says:

I use the classic definition of AGI, instead of moving the goalposts. The “G” in AGI is about generality. There are two types of AI: narrow and general. We used to train models for specific tasks. For every task, you’d build a huge, specific data set and train a model just for that task. That changed with the alpha version of Claude I was using in the summer of 2022. It could do any task you asked it to do. It was fucking crazy. It didn't care whether it had specific training data. It was general while prior models were narrow.

I have often suggested that FMs are proto-AGI. But if you went back to me in 2004 or 1986 and showed me 2022 ChatGPT or June 2025 Gemini 2.5 Pro, it would have seemed like AGI. Yes, they get stuff wrong, but they are robust in that you can interact with them in natural language, which they understand, and they reply in natural language and do lots of useful things. I do some work for Outlier.ai, which is linked to Scale.com, trying to stump the model, but it is next to impossible. Certainly, it would have no problem getting a starred first in physics at Oxford in 1989 (special dispensation for the lab work, although we have humanoid robots now; obviously, unlike me, it would take the theoretical physics paper, the graveyard of many a first). Although FMs are much more capable in summer 2025 than in summer 2022, you could argue that they passed the AGI threshold by the time of ChatGPT's release in November 2022. Probably, then, in terms of what researchers in frontier labs had access to, AGI was achieved some time in 2021.

For certain values of AGI, of course. It's not just that we don't want to say we have achieved AGI for marketing reasons, it's also that we can see that there's plenty of road to go in logarithmic space. Sir Demis doesn't think AGI has arrived yet, and that's because we don't, for instance, have the data (centre|country) of (idiot savant) (super)geniuses or the drop-in remote worker. A solution to the Riemann Hypothesis or the discovery of a room-temperature superconductor would actually be examples of ASI. It's not as though we don't think people are GIs despite most people not even knowing or caring what those things are. I use Gemini 2.5 Pro and it feels like talking to a person, but it's not as though I have a personalised amanuensis quite yet. Gemini is still a disembodied intelligence Out There. Within a year or two, if we are spared, we will have AGI best friends that are indisputably people, and then it is going to be hard to deny we have reached AGI. And a year or two after that? I guess we are probably all going to get killed in a grey goo incident, but who knows? Perhaps we might get something unexpectedly cool like a Theory of Everything. That would be something. Even more so if it enabled FTL...
