The Memo - 14/Jun/2025
1X Gamma + Redwood 160M, Ilya speaks, Gemini 2.5 Pro 06-05, and much more!
To: US Govt, major govts, Microsoft, Apple, NVIDIA, Alphabet, Amazon, Meta, Tesla, Citi, Tencent, IBM, & 10,000+ more recipients…
From: Dr Alan D. Thompson <LifeArchitect.ai>
Sent: 14/Jun/2025
Subject: The Memo - AI that matters, as it happens, in plain English
AGI: 94%
ASI: 0/50 (no expected movement until post-AGI)
Alan for the Icelandic Center for Artificial Intelligence (Jun/2023):
There is no one on earth smart enough to be able to keep up with modern artificial intelligence in 2023. We are going to have to rely on artificial intelligence to help with regulating and supporting artificial intelligence. We’re all flawed: I’m not smart enough, the governments are not smart enough.
We have this enormous brain that might be the equivalent of 1,000 Einsteins. We’ve measured GPT-4 at an IQ of 152, which is in the 99.9th percentile.
If you don’t like IQ (and some people don’t like it), here are another 100 metrics where it achieves in the 99th percentile.
It’s smart, and it’s not just logic smart, it’s creative smart.
We need to be leveraging that rather than relying on committees of old people trying to make decisions about technology that’s changing every day.
The early winners of The Who Moved My Cheese? AI Awards! for Jun/2025 are White House AI and crypto czar David Sacks (‘a post-economic order in which people stop working and instead receive government benefits… is their fantasy; it’s not going to happen.’) and The Atlantic (including Emily Bender doubling down) with their wrong-in-2020-but-dangerously-wrong-in-2025 quote (‘Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word’).
Welcome to a few hundred new subscribers who came in this week following Wes Roth’s video of The Memo edition 28/May/2025. I don’t really keep track of my media appearances these days, but we had a big run on AI and consciousness this month for outlets including FinancialSense, MacLife (below), 7News, and more.
This is one of the longest editions so far (6,000 words), exploring recursive self-improvement (RSI), Waymo’s new scaling laws, the latest Gemini model, a completely new AI team rivalling OpenAI and Google, and my favourite humanoid robot with its new onboard AI model. Let’s jump in…
Contents
The BIG Stuff (Ilya speaks, 1X Gamma + Redwood 160M, Meta AI Superintelligence…)
The Interesting Stuff (AI in all OSU majors, Waymo scaling, Gemini, 100 humanoids…)
Policy (DOGE prompt, JFK AI, OpenAI’s 20 lawsuits + ChatGPT logs, Getty Images…)
Toys to Play With (New AI course, o3 Pokémon, ‘Artificial’ movie, ElevenLabs…)
Flashback (Excellence or intimacy?…)
Next (Roundtable…)
The BIG Stuff
Apple publishes ‘The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity’ (Jun/2025)
With maybe just a hint of sour grapes, Apple researched specific limitations of thinking/reasoning models by examining reasoning traces in edge cases of puzzle testing. I foresee this paper joining some of the Cheese winners, alongside Google’s infamous 2021 ‘Stochastic parrots’ paper (wiki, and watch GPT-3’s response via Leta AI).
Source (30 pages): https://machinelearning.apple.com/research/illusion-of-thinking
Millions of people read the paper title (‘The Illusion of Thinking’) and drew the conclusion that LLMs don’t think. Beyond the title, the findings ‘primarily reflect experimental design limitations rather than fundamental reasoning failures’ (paper response by Open Philanthropy, 10/Jun/2025).
So, when it comes to complex problems, do models actually think? (Hint: yes.) And what about humans? (Hint: generally, no.) I had a bit of a play with this, using the current top reasoning model to rewrite the paper about humans instead of AI. You can read the initial prompt I sent to o3-pro (without deep research), and I ended up using this prompt sent to deep research (o3) instead.
Just for fun, read the formatted paper online, or download it (PDF, 16 pages):
The Memo features in recent AI papers by Microsoft and Apple, has been discussed on Joe Rogan’s podcast, and a trusted source says it is used by top brass at the White House. Across over 100 editions, The Memo continues to be the #1 AI advisory, informing 10,000+ full subscribers including RAND, Google, and Meta AI. Full subscribers have complete access to all 30+ AI analysis items in this edition!
Ilya speaks at University of Toronto (6/Jun/2025)
Dr Ilya Sutskever was the chief scientist and co-founder of OpenAI, and one of the primary minds behind GPT-1, GPT-2, GPT-3, GPT-3.5, GPT-4, and reasoning models [Sidenote: He described GPT-2 as ‘alchemy’ (Oct/2019) and GPT-4 as ‘magic’ (Mar/2023)]. After leaving OpenAI in 2024, he launched Safe Superintelligence Inc, a company focused on building ASI, free from short-term commercial pressures. He earned his PhD at the University of Toronto under Prof Geoffrey Hinton.
Ilya rarely speaks publicly, so I thought it was important to provide a transcript of parts of his recent speech to students at UToronto.
…we are living in a most unusual time… The way we work is starting to change in unknown and unpredictable ways. Some work may feel the change sooner, some later…