The Memo - 3/Apr/2023
OpenAI's investment in 1X NEO robots, Gemini, BloombergGPT 50B, GPT4All-LoRa 7B, and much more!
FOR IMMEDIATE RELEASE: 3/Apr/2023
Welcome back to The Memo.
What a massive month we had for AI in March, publishing a record eight editions of The Memo! Don’t expect that to happen too often, as we aim for a monthly release cadence. Though, as usual, April is off to a noisy start. There is a fair amount of information triage in these editions now; otherwise, this would be an encyclopedia.
Our first casual ‘roundtable’ group video call (on 1/Apr/2023) was a lot of fun! We had a small group of informed like minds, covering AI topics including Apple, agency and goals, and universal AI guidelines. Thanks to those who joined in, and if you’d like to be in the next roundtable, paid members can keep an eye out here in the coming months.
The winner of the Who Moved My Cheese? AI Awards! for April is Italy.
In the Toys to play with section, we look at one of my favorite post-2020 AI apps, now available for iPhone, a simple Siri replacement, and the funniest emulation of ChatGPT I’ve seen so far!
The BIG Stuff
OpenAI’s investment in 1X NEO (31/Mar/2023)
This is really big stuff. Embodiment of large language models has been on the waiting list for a little while, and is the next big milestone on my conservative countdown to AGI. Several players have started work on integrating robots with language models already, including Google and Microsoft.
Now, OpenAI have invested around US$25M in a company called 1X (formerly Halodi Robotics), enabling them to embody GPT-4 and GPT-n.
The NEO robot is beautiful, the images and videos are real—not CGI—and it is due for release sometime in the US summer (Jun-Aug/2023). The previous version of the bot, EVE, used wheels to get around.
Read the press release by 1X quoting OpenAI.
Browse the 1X website: https://1x.tech/
This video of EVE is 3 years old, but gives a feel for the design. Note that the new body uses a soft fabric (see the photo of NEO above), which is new to me!
A quick note on AI progress coverage (Apr/2023)
Press outlets seem to be providing next to zero coverage of NEO (above), or of many of the latest developments in other AI areas like open-source releases, physical embodiment, brain-computer interfaces, AGI, and more.
The media will always focus on the past—and the drama—but this isn’t where the real action is. In the same way that I’ve revealed major releases sometimes months (Leta GPT-3 Episode 0 video from Apr/2021) before they become ‘mainstream,’ I’ll continue to provide visibility for the stuff that matters.
Right now, the stuff that matters is:
The discovery of GPT-4’s capabilities. Like GPT-3, this process will take years.
GPT-5 training, happening right now through to Dec/2023.
Embodiment of AI models, and general robot design.
Connection of AI models to our biological brains via Stentrode and other options.
An intergovernmental and universal constitution for AI, covering both development and alignment.
Project Gemini: Google and DeepMind play catchup (Mar/2023)
Google is the reason that we’re here right now. In AI, they’ve given us:
Transformer, BERT, LaMDA 137B, PaLM 540B, and many of the latest optimizations.
On the other side of the pond, DeepMind in London have worked diligently on:
Gopher 280B, Chinchilla 70B, Flamingo 80B, Gato 1.2B, Sparrow 70B, and more.
The two companies are coming together with secret project ‘Gemini’ to combine expertise and compute. The required hardware will run into the hundreds of millions of dollars, with rumours that they are using ‘tens of thousands’ of TPUs to train a trillion-parameter model. While Google has already given us the 1.6T-parameter Switch Transformer (Jan/2021, paper), the Gemini project should achieve a much more thorough and optimized multimodal result.
3rd anniversary of ‘The New Irrelevance of Intelligence’ (1/Apr/2023)
Well before GPT-3 and the explosion of post-2020 AI, this was an unnerving article to write, release, and then present. The draft was sent to my editor on 1/Apr/2020. You may enjoy reading the official ‘camera ready’ article published in the Journal of Australian Mensa the following month.
The Interesting Stuff
‘The AI Pause’ distraction (30/Mar/2023)
Further to my comments in The Memo edition 29/Mar/2023 that the AI pause petition is ‘laughable’, I now see the proposal as even more disheartening. It seems to be a power and publicity grab by people feeling left out, and a very human misunderstanding of post-2020 AI.
As I note in my new executive summary, ‘no-one here is smart enough. Including the rocket scientist’. AI will continue to progress, and its superintelligence will help us guide it.
OpenAI provided a key insight to this several years ago:
…an AGI will be a system capable of mastering a field of study to the world-expert level, and mastering more fields than any one human—like a tool which combines the skills of Curie, Turing, and Bach.
An AGI working on a problem would be able to see connections across disciplines that no human could. We want AGI to work with people to solve currently intractable multi-disciplinary problems, including global challenges such as climate change, affordable and high-quality healthcare, and personalized education. We think its impact should be to give everyone economic freedom to pursue what they find most fulfilling, creating new opportunities for all of our lives that are unimaginable today. (via OpenAI, 22/Jul/2019)
I also found a disappointing response issued by Drs Timnit Gebru and Emily Bender, authors of the infamous ‘stochastic parrots’ paper (PDF), which upset Leta AI all the way back in Episode 16, Aug/2021 (video timecode).
Read more: https://www.dair-institute.org/blog/letter-statement-March2023
LAION’s alternative petition (29/Mar/2023)
I did find a sliver of light in LAION’s alternative petition, ‘Securing Our Digital Future: Calling for CERN like international organization to transparently coordinate and progress on large-scale AI research and its safety.’
The recent proposition of decelerating AI research as a means to ensure safety and progress presents an understandable but untenable approach that will be detrimental to both objectives. Corporate or state actors will make advancements in the dark while simultaneously curtailing the public research community's ability to scrutinize the safety aspects of advanced AI systems thoroughly. Rather than impeding the momentum of public AI development, a more judicious and efficacious approach would be to foster a better-organized, transparent, safety-aware, and collaborative research environment.
To this end, they want a supercomputer with ‘at least’ 100,000 GPUs(!), which would make it 4x bigger than the high-ranking Microsoft/OpenAI supercomputer being used to train GPT-5.
Read more: https://laion.ai/blog/petition/
Judge asks GPT to decide bail in murder trial (29/Mar/2023)
You may be experiencing déjà vu, as a judge also used ChatGPT for a ruling in Colombia back in Jan/2023. This one is from India.
Prompt (human): What is the jurisprudence on bail when the assailant assaulted with cruelty?