The Memo - 12/Apr/2025
ASI milestone 31/Mar/2025, 1X NEO autonomous updates, agents + brain, and much more!
To: US Govt, major govts, Microsoft, Apple, NVIDIA, Alphabet, Amazon, Meta, Tesla, Citi, Tencent, IBM, & 10,000+ more recipients…
From: Dr Alan D. Thompson <LifeArchitect.ai>
Sent: 12/Apr/2025
Subject: The Memo - AI that matters, as it happens, in plain English
AGI: 92% ➜ 94%
ASI: 0/50 (no expected movement until post-AGI)
My keynote ‘GPT-5 & brain-machine interfacing’ was recently delivered to over 2,000 psychiatrists and professors at the 31st International Symposium on Controversies in Psychiatry. Featuring demonstration videos from Synchron and Neuralink, the rehearsal version is now available, and references my GPT-5 paper, both available exclusively to full subscribers of The Memo. Slide highlights are available at: LifeArchitect.ai/bmi.
Contents.
The BIG Stuff (ASI, 1X NEO autonomous, agents + brain, Turing test again…)
The Interesting Stuff (OpenAI open weight model, Ilya using TPUs…)
Policy (US tariff math via LLM, China minerals, NYT copyright case proceeds…)
Toys to Play With (MCP/A2A, DALL-E 3 link, infographics, Runway Gen-4…)
Flashback (The MIT AI Memos…)
Next (o4-mini, IBM Granite-3.3, ‘Quasar Alpha’, roundtable #28 in 24 hours…)
The BIG Stuff
Exclusive: ASI milestone may have been hit on 31/Mar/2025
Okay, this is a big one. This update presents a groundbreaking development in artificial superintelligence (ASI), a system whose intelligence surpasses that of the brightest and most gifted human minds.
My ASI checklist documents some of the visible milestones we’ll see after achieving artificial superintelligence, and includes two very specific items:
Item #4: First major new mathematical proof achieved by ASI
Item #5: First major mathematical conjecture resolved by ASI
Dr Weiguo Yin of Brookhaven National Laboratory earned his PhD in strongly correlated systems from Nanjing University, China, in 1998, where his dissertation received the National Outstanding Doctoral Dissertation Award. He has around 5,000 citations and an h-index of 29, indicating substantial influence in his field (Google Scholar).
The Potts model (wiki) is a mathematical framework from statistical physics that describes how complex systems—such as magnets, biological networks, or social groups—organize themselves. Picture a grid where each cell can take one of several states, similar to coloring squares on a checkerboard. The model shows how neighboring cells influence each other, encouraging similar states—like aligned magnetic spins or shared opinions. Solving the Potts model means predicting how these simple interactions create larger patterns, offering insights into phenomena like phase transitions and collective behavior. One variant of the model had, until now, resisted exact solution by humans.
Dr Yin used the OpenAI reasoning model o3-mini-high to solve the one-dimensional J1-J2 q-state Potts model for arbitrary values of q, a problem not previously solved exactly. The breakthrough provides new clarity on complex physical phenomena, from superconductors to novel material states, and marks a significant step in AI-driven scientific discovery.
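For readers who want the formal object: the Hamiltonian of the one-dimensional J1-J2 q-state Potts model can be written in standard textbook form (my transcription of the usual conventions, not taken directly from Dr Yin's paper) as:

```latex
% One-dimensional J1-J2 q-state Potts chain, standard textbook form
% (notation is mine; see Dr Yin's paper for his exact conventions):
H = -\sum_{i} \Bigl[ J_1\,\delta_{s_i,\,s_{i+1}} + J_2\,\delta_{s_i,\,s_{i+2}} \Bigr],
\qquad s_i \in \{1, 2, \dots, q\}
```

Here δ is the Kronecker delta: nearest-neighbor sites lower the energy by J1 when they share a state, and next-nearest neighbors by J2. The competition between the two couplings is what makes the model hard to solve exactly for general q.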
This scientific breakthrough was publicly acknowledged by Greg Brockman, co-founder of OpenAI (4/Apr/2025).
This development suggests that, as of Monday, 31 March 2025, AI may have reached a critical threshold: performance surpassing the abilities of the most accomplished human scientists. However, Dr Yin’s breakthrough does not fully satisfy my ASI checklist criteria. On item #4, the Potts model solution arguably doesn’t constitute a complete original mathematical proof in the strict sense, though I’m open to being wrong on this one. On item #5, it would not be recognized as a major mathematical conjecture as listed on the wiki page. The ASI progress checklist therefore currently remains at 0/50. I discussed these points in my recent livestream (timecode, 6/Apr/2025).
Read Dr Yin’s paper (6 pages): https://arxiv.org/abs/2503.23758
Read the ASI checklist: https://lifearchitect.ai/asi/
Exclusive: 1X NEO robot new autonomous updates (5/Apr/2025)
1X has released several short videos of its NEO robot in the home and garden.
Eric Jang, VP AI at 1X, Twitter:
‘ChatGPT showed the world that autonomy does not have to be solved one task at a time… the home is where you find the ultimate treasure of diverse data necessary to create a general intelligence. Diverse data across many tasks and environments is a necessary ingredient of making general purpose robots… NEO is a lot more reliable now, and I think the rate of progress will be very steep in 2025.’
This one bumped up my countdown from 92% to 94%: https://lifearchitect.ai/agi/
Watch 1X’s three 10-sec videos joined together and upscaled to 4K using Topaz via fal.ai on Poe.com (video link):
Gen Z [born 1997–2012] is still anxiously using AI: Poll (8/Apr/2025)

A recent survey conducted by the Walton Family Foundation, GSV Ventures, and Gallup reveals that a significant portion of Gen Z [ages 13–28, born 1997–2012] feels anxious about AI, despite its prevalent use. Approximately 41% reported anxiety, while 36% expressed excitement about AI.
Read more via Axios.
Advances and challenges in foundation agents: From brain-inspired intelligence to evolutionary, collaborative, and safe systems (31/Mar/2025)

Researchers from Stanford, Google DeepMind, Microsoft, Yale, Duke, and many more institutions discussed the transformative impact of large language models (LLMs) on AI, focusing on the development of advanced intelligent agents that integrate brain-inspired architectures. They emphasized a modular approach mapping cognitive, perceptual, and operational functions to human brain analogs, and highlighted advancements in self-enhancement and adaptive learning.
Read the paper (264 pages): https://arxiv.org/abs/2504.01990
My literature review of LLMs + brain spans five years: https://lifearchitect.ai/brain/
Large language models pass the Turing test (again) (31/Mar/2025)
I reckon we’ve squashed the informal ‘Turing test’ many times over since GPT-3 in 2020. In a new study out of UC San Diego, GPT-4.5 was found to pass a three-party Turing test by being judged as human 73% of the time by participants, far surpassing actual human participants.
Read the paper: https://arxiv.org/abs/2503.23674
The Interesting Stuff
Dream 7B (7/Apr/2025)
Dream 7B (a diffusion reasoning model), developed by the HKU NLP Group in collaboration with Huawei Noah’s Ark Lab, is a cutting-edge open diffusion large language model. It demonstrates strong planning and inference flexibility, outperforming comparable autoregressive models on tasks requiring complex reasoning and contextual understanding. Its diffusion-based architecture enables bidirectional contextual modeling and flexible generation.
I love diffusion models because they’re so elegantly mind-bending. How can these models possibly ‘de-noise’ (by filling in the blanks) without knowing the full response before they even start?
Watch it working (animated gif):
See it on the Models Table.
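The de-noising idea can be made concrete with a toy sketch. This is a deliberately simplified simulation of the scheme (not Dream 7B’s actual architecture): start with a fully masked sequence and repeatedly fill in the positions the model is most confident about, where confidence here is faked as “more filled-in neighbors means more bidirectional context.” The stand-in model peeks at the target purely so the demo runs end to end.

```python
# Toy illustration of iterative de-noising in a diffusion language model
# (hypothetical simplified scheme, NOT the actual Dream 7B architecture).

MASK = "_"

def toy_model(sequence, target):
    """Stand-in for a neural denoiser: proposes (position, token, confidence)
    for every masked slot. Confidence is simulated as higher when more
    neighbors are already filled in (i.e. more bidirectional context).
    A real model would predict tokens; this toy peeks at `target`."""
    proposals = []
    for i, tok in enumerate(sequence):
        if tok == MASK:
            filled_neighbors = sum(
                1 for j in (i - 1, i + 1)
                if 0 <= j < len(sequence) and sequence[j] != MASK
            )
            proposals.append((i, target[i], 0.5 + 0.25 * filled_neighbors))
    return proposals

def denoise(target, fills_per_round=2):
    """Iteratively unmask the highest-confidence positions until none remain."""
    seq = [MASK] * len(target)
    while MASK in seq:
        proposals = toy_model(seq, target)
        proposals.sort(key=lambda p: -p[2])        # most confident first
        for i, tok, _conf in proposals[:fills_per_round]:
            seq[i] = tok                           # fill a few blanks per pass
    return seq

print(denoise(list("hello")))                      # prints ['h', 'e', 'l', 'l', 'o']
```

The key point the sketch shows: unlike an autoregressive model, each pass conditions on context in both directions, so later tokens can be committed before earlier ones.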
The Memo features in recent AI papers by Microsoft and Apple, has been discussed on Joe Rogan’s podcast, and a trusted source says it is used by top brass at the White House. Across over 100 editions, The Memo continues to be the #1 AI advisory, informing 10,000+ full subscribers including RAND, Google, and Meta AI. Full subscribers have complete access to the entire 4,000 words of this edition!
OpenAI will release an ‘open weight’ AI model this [US] summer (31/Mar/2025)