To: US Govt, major govts, Microsoft, Apple, NVIDIA, Alphabet, Amazon, Meta, Tesla, Citi, Tencent, IBM, & 10,000+ more recipients…
From: Dr Alan D. Thompson <LifeArchitect.ai>
Sent: 1/May/2024
Subject: The Memo - AI that matters, as it happens, in plain English
AGI: 72%
Jonathan Ross, Groq CEO (Apr/2024):
‘Think back to Galileo—someone who got in a lot of trouble. The reason he got in trouble was he invented the telescope, popularized it, and made some claims that we were much smaller than everyone wanted to believe. We were supposed to be the center of the universe, and it turns out we weren’t. And the better the telescope got, the more obvious it became that we were small. Large language models are the telescope for the mind. It’s become clear that intelligence is larger than we are, and it makes us feel really, really small, and it’s scary. But what happened over time was as we realized the universe was larger than we thought and we got used to that, we started to realize how beautiful it was, and our place in the universe. I think that’s what’s going to happen. We’re going to realize intelligence is more vast than we ever imagined, and we’re going to understand our place in it, and we’re not going to be afraid of it.’
I have a bunch of public livestreams scheduled for May/2024, the first starting just a few hours after this edition goes out. Come and join in: click ‘notify me’ on the first four scheduled streams. Here’s the link to the first stream on Tuesday at 4PM LA time:
Contents
The BIG Stuff (assassinations, GPT-4.5, SenseNova 5.0, Phi-3…)
The Interesting Stuff (Llama 3 metrics, 60 Minutes, Moderna, Elon…)
Policy (Big new safety team…)
Toys to Play With (Poe alternative, Unity, buying stuff, new Leta avatar platform…)
Flashback (GM, WEF…)
Next (GPT-5, invitation link to next roundtable…)
The BIG Stuff
Exclusive: AI inventors at risk of assassination (2024)
OpenAI CEO: “I think some things are gonna go theatrically wrong with AI. I don't know what the percent chance is that I eventually get shot, but it’s not zero.” (19/Mar/2024, 1h12m47s)
Elon Musk lawsuit, comments about DeepMind CEO: “It has been reported that following a meeting with Mr. Hassabis and investors in DeepMind, one of the investors remarked that the best thing he could have done for the human race was shoot Mr. Hassabis then and there.” (29/Feb/2024, p9)
Being Australian, I don’t claim to know who Tucker Carlson is (lucky me, it seems), but he recently proposed a nuclear solution:
If [AI is] bad for people, then we should strangle it in its crib right now. And one is blow up the datacenters. Why is that hard? If it's actually going to become what you describe, which is a threat to people/humanity/life, then we have a moral obligation to murder it immediately. (21/Apr/2024)
I don’t really have any further comment on this (actually, I feel like I shouldn’t have said anything, and especially not put this in writing), but I find it particularly interesting at this juncture of humanity’s evolution. The general human condition—for all of our progress—still sometimes defaults back to caveman days. Kurzweil summed it up in a quote for which there doesn’t seem to be a reliable source:
The antitechnology Luddite movement will grow increasingly vocal and possibly resort to violence as these people become enraged over the emergence of new technologies that threaten traditional attitudes regarding the nature of human life (radical life extension, genetic engineering, cybernetics) and the supremacy of humankind (artificial intelligence). Though the Luddites might, at best, succeed in delaying the Singularity, the march of technology is irresistible and they will inevitably fail in keeping the world frozen at a fixed level of development. (old wiki dump)
For mind bleach, watch my Jul/2023 video on evolution and AI (link):
And read the related paper: https://lifearchitect.ai/endgame/
China overtakes GPT-4 with SenseTime SenseNova 5.0 600B (25/Apr/2024)
We’ve been tracking China in The Memo for several years now. As a former permanent resident of the country, I am particularly interested in how they are applying the brain power of 1.42 billion people to large language models and AI. This model has 600B parameters trained on 10T tokens (a tokens-to-parameters ratio of about 17:1), and it outperforms GPT-4 on a few metrics: MMLU=84.78, GPQA=42.93.
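As a quick sanity check on that ratio, here’s a minimal sketch in Python; the parameter and token counts are simply the figures reported above:

# Tokens-to-parameters ratio for SenseNova 5.0, using the reported figures.
parameters = 600e9        # 600B parameters
training_tokens = 10e12   # 10T training tokens
ratio = training_tokens / parameters
print(f"{ratio:.1f} tokens per parameter")  # ~16.7, rounded to 17:1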
Read an analysis by FutuBull.
The model launch even triggered a pause in trading of SenseTime’s stock, via TechInAsia.
See it on the Models Table.
LLMs + GPQA + IQ (1/May/2024)
I’m releasing a new visual analysis of current large language model highlights, using the high-ceiling GPQA benchmark (in place of MMLU), with model scores mapped against those of PhD graduates.
GPQA (graduate-level, Google-proof Q&A) was designed in 2023 by domain experts, led by a team from NYU, Cohere, and Anthropic. It has 448 multiple-choice questions written by PhDs in biology, physics, and chemistry.
Take a look: https://lifearchitect.ai/iq-testing-ai/
Exclusive: GPT-4.5 (Apr/2024)
It’s the moment we’ve been waiting for since Aug/2022. I love bringing exclusives to The Memo, and this is a really big one. Right now, you can use and test what might be the GPT-4.5 model (or something even better) yourself.