The Memo - 29/Mar/2023
Figure 01, Cerebras-GPT, Neuralink seeking human trials, ChatGPT retrieval, and much more!
FOR IMMEDIATE RELEASE: 29/Mar/2023
Welcome back to The Memo.
The Policy section makes a comeback to analyze the Goldman Sachs report on AI and the economy.
In the Toys to play with section, we look at setting GPT-4 loose in a browser to do whatever it wants, Synthesia’s hidden AI avatar channel with millions of views, and demonstrating the incredible skills of ChatGPT using retrieval on UN documents.
The BIG Stuff
GPT-4 has 1T parameters, according to eight sources (26/Mar/2023)
I have significantly expanded my GPT-4 summary page, including Semafor’s recent disclosure (via eight sources) that GPT-4 has 1T (1,000B) parameters. It is still unclear whether this is a dense model like most other models, or a sparse MoE model.
This means that GPT-4 1T would be 6x larger than GPT-3 175B, and 14x larger than Chinchilla 70B.
Chinchilla scaling would call for training GPT-4 1T on 20T tokens, meaning GPT-4 1T would have been trained on roughly 66x more data than GPT-3 175B (300B tokens), and 14x more data than Chinchilla (1.4T tokens).
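The scaling arithmetic above can be checked in a few lines of Python; the 20-tokens-per-parameter figure is the Chinchilla paper's compute-optimal rule of thumb:

```python
# Chinchilla rule of thumb: compute-optimal training uses roughly
# 20 tokens per model parameter (Hoffmann et al., 2022).
CHINCHILLA_TOKENS_PER_PARAM = 20

def chinchilla_tokens(params: float) -> float:
    """Compute-optimal training tokens for a model with `params` parameters."""
    return params * CHINCHILLA_TOKENS_PER_PARAM

gpt4_params = 1e12              # 1T parameters (per Semafor's sources)
gpt3_tokens = 300e9             # GPT-3 175B: 300B training tokens
chinchilla_70b_tokens = 1.4e12  # Chinchilla 70B: 1.4T training tokens

gpt4_tokens = chinchilla_tokens(gpt4_params)  # 20T tokens
print(f"GPT-4 1T optimal tokens: {gpt4_tokens / 1e12:.0f}T")
print(f"vs GPT-3 175B:   {gpt4_tokens / gpt3_tokens:.1f}x more data")
print(f"vs Chinchilla:   {gpt4_tokens / chinchilla_70b_tokens:.1f}x more data")
```

(The 66x figure in the text rounds down from 66.7x.)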
Read more: https://lifearchitect.ai/gpt-4/
Neuralink seeking human trials (27/Mar/2023)
Elon Musk's brain implant company Neuralink has approached one of the biggest U.S. neurosurgery centers as a potential clinical trials partner as it prepares to test its devices on humans once regulators allow for it, according to six people familiar with the matter.
I’ve spoken a lot about how AI models (like GPT-3 or GPT-4) will be directly integrated with the brain, and the enormous benefits to doing so, from pain relief to restoring sight to thought transfer.
Read more: https://lifearchitect.ai/bmi/
Watch my video from a year ago—Apr/2022—simulating how a brain-computer interface might use a large language model (timecode):
We recently played around with a short live-stream using GPT-4 in the same way!
Watch the video:
Future of Life Institute calling for a ‘pause’ on AI development (29/Mar/2023)
‘We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.’
This is laughable. Surprising signatories include:
Connor Leahy (first to re-create GPT-2)
Steve Wozniak (co-founded Apple)
Elon Musk (co-founded OpenAI)
Andrew Yang (UBI advocate)
1,000+ more…
EDIT: Confirmed by Reuters.
I will not be signing.
As Ray Kurzweil noted a few years ago:
You can’t stop the river of advances. These ethical debates are like stones in a stream. The water runs around them. You haven’t seen any of these… technologies held up for one week by any of these debates. — Dr Ray Kurzweil (January 2020, and further in his prediction here.)
Read the petition: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
The Interesting Stuff
Cerebras-GPT 13B released (29/Mar/2023)
‘All Cerebras-GPT models are available on Hugging Face… All models in the Cerebras-GPT family have been trained in accordance with Chinchilla scaling laws (20 tokens per model parameter) which is compute-optimal. These models were trained on the Andromeda AI supercomputer comprised of 16 CS-2 wafer scale systems.’
Cerebras-GPT 13B underperforms similar models, and severely underperforms LLaMA 13B on benchmarks that matter (thanks to binarymax at HN):
Try it: https://huggingface.co/cerebras
Figure.ai is a contender for humanoid embodied AI (4/Mar/2023)
In The Memo 24/Mar/2023 edition, we mentioned Agility Robotics’ Digit v4 (video). There are several other contenders, and I have now listed some of these at my conservative countdown to AGI.
Figure 01 was unveiled at the beginning of Mar/2023, at 60kg and 170cm or 5’6”. It has five hours of electric battery life, can carry 20kg, and walks at around 4.3km/h (1.2m/s). The company says: ‘Figure is the first-of-its-kind AI robotics company bringing a general purpose humanoid to life… The world’s first commercially-viable autonomous humanoid robot.’
With $100M in funding, this could supply the missing piece linking post-2020 AI models (language, visual, multimodal models) to physical embodiment. If I were looking to help steer a country (or world!), I would go ‘all in’ on this. Artificial general intelligence (AGI) is nearly here, and needs to be inside a physical form factor like a humanoid body. Once that is done, we will move fully into this AI revolution, changing the way all of humanity operates, removing the need for any and all ‘work,’ and opening up a whole new world.
‘The team is ex-Boston Dynamics, Tesla, Apple SPG, IHMC, Cruise [and Alphabet X]. Collectively we align on building a better future for humanity through the intersection of AI and robotics.’
Read exclusive interview with Figure via TechCrunch.
Watch the Figure 01 marketing video (boring).
Read more from the company: https://www.figure.ai/
ChatGPT in surgical science (26/Mar/2023)
Read the paper: https://academic.oup.com/bjsopen/article/7/2/zrad032/7085520
AI in movies: narrative compass (27/Mar/2023)
The influence of movies on our collective psyche is huge. When I sit for media interviews, they often start off with flawed comparisons between post-2020 AI and what they saw in a movie once.
Here’s a great viz about good/evil and optimistic/cautionary views of AI.

Zoom IQ with OpenAI (28/Mar/2023)
‘…AI to leverage our proprietary AI models, those from leading AI companies such as OpenAI, and select customers’ own models…
Say a team member joins their Zoom meeting late, they can ask Zoom IQ to summarize what they’ve missed in real time and ask further questions. If they need to create a whiteboard session for their meeting, Zoom IQ can generate it based on text prompts. Once the session ends, Zoom IQ will summarize the meeting and post that recap to Zoom Team Chat, even suggesting actions for owners to take on…
You need to hop on a meeting to talk out a few items, so you set something up using an agenda Zoom IQ suggests, which pulls context from the chat.
Read more: https://blog.zoom.us/zoom-iq-smart-companion/
Policy
Goldman Sachs report on AI and economics (27/Mar/2023)
While the report is written by a very large financial institution, the findings are overly conservative. It could even be said that Goldman is being maliciously dishonest in its underestimates of AI’s impact on the economy.
Report title: The Potentially Large Effects of Artificial Intelligence on Economic Growth
Authors: Joseph Briggs, Devesh Kodnani
Affiliation: Goldman Sachs
Date: 27/Mar/2023
Summary:
Generative AI could raise annual US labor productivity growth by just under 1½ percentage points over a 10-year period following widespread business adoption [way too conservative -Alan].
Generative AI could eventually increase annual global GDP by 7 percent, equal to an almost $7 trillion increase in annual global GDP over a 10-year period [way too conservative -Alan].
Read more: The report is internal to Goldman. It was very difficult to obtain a copy of this, but readers of The Memo can download it here (PDF).
Also, read a summary analysis by James Pethokoukis, and the summary by Financial Times.
I’ve compiled a list of recent ‘AI + economics’ papers for interest. It includes findings by the White House, NBER, OpenAI, and this Goldman report.
Read more: https://lifearchitect.ai/economics/
Toys to Play With
Run Wild (27/Mar/2023)
It may be irresponsible for me to print this in this edition, but here goes!
This project builds on Nat Friedman’s similar concept for GPT-3.
‘At its core, run-wild bridges GPT-4 and a headless Chromium browser, automating actions as self-directed by its goal.’
In plain English, this allows you to set GPT-4 loose inside a browser to do whatever it wants. Consider that it could have access to any site (financial/banking, email, social media, calendar/scheduling, and more) and be able to perform any possible action, including clicking and typing. Being ‘self-directed,’ its movements would be very interesting to follow.
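The core loop of an agent like this is simple: show the model the current page, let it emit an action, execute that action in the browser, repeat. The sketch below is hypothetical, not run-wild’s actual protocol; the command names (NAVIGATE/CLICK/TYPE/DONE) and the `ask_model`/`browser` interfaces are illustrative stand-ins:

```python
# Hypothetical sketch of a GPT-4-in-a-browser agent loop.
# The action vocabulary and interfaces here are illustrative, not run-wild's.
from dataclasses import dataclass

VERBS = {"NAVIGATE", "CLICK", "TYPE", "DONE"}

@dataclass
class Action:
    verb: str    # NAVIGATE, CLICK, TYPE, or DONE
    target: str  # URL, CSS selector, or text to type

def parse_action(model_output: str) -> Action:
    """Parse a single-line model response like 'CLICK #submit'."""
    verb, _, target = model_output.strip().partition(" ")
    verb = verb.upper()
    if verb not in VERBS:
        raise ValueError(f"Unknown action: {model_output!r}")
    return Action(verb, target)

def run_agent(goal: str, ask_model, browser, max_steps: int = 20) -> None:
    """Loop: show the model the page, execute its chosen action."""
    for _ in range(max_steps):
        action = parse_action(ask_model(goal, browser.page_text()))
        if action.verb == "DONE":
            break
        browser.execute(action)  # e.g. a Playwright goto/click/fill wrapper
```

The `max_steps` cap is the only brake on a self-directed loop like this, which is part of why letting it near real accounts is risky.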
Take a closer look: https://github.com/refcell/run-wild
Synthesia’s ‘AI Explains AI’ channel
I’ve used Synthesia to power the Leta AI avatar (and many other avatars) since early 2021. Check out their hidden TikTok channel ‘AI Explains AI’ with a few million likes!
ChatGPT retrieval plugin (26/Mar/2023)
If you learn anything from today’s edition, make it this part…
OpenAI’s ChatGPT retrieval plugin will run the world for a little while, and its mechanism will advance AI substantially.
In plain English, the plugin allows you to give ChatGPT your folders full of PDFs and documents; essentially letting it ‘see’ or ‘read’ from your knowledge base.
OpenAI’s President noted that ‘Retrieval is probably going to be the most ubiquitous language model plugin for the near future, since it allows any organization to make their data searchable (with full control over permissions etc)’ (27/Mar/2023).
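The mechanism behind retrieval is straightforward: embed each document chunk as a vector, embed the query, and hand the closest chunks to the model as context. The real plugin uses OpenAI’s text-embedding models and a vector database; the bag-of-words ‘embedding’ below is a stand-in so this toy sketch runs offline:

```python
# Toy illustration of the retrieval idea: rank document chunks by
# similarity to a query. A word-count vector stands in for a real
# learned embedding so the example is self-contained.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Stand-in embedding: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:top_k]

docs = [
    "The UN General Assembly adopted the resolution in 2015.",
    "Synthesia powers AI avatars for video generation.",
    "ChatGPT plugins extend the model with external tools.",
]
print(retrieve("Which resolution did the UN adopt?", docs, top_k=1))
```

Swap `embed` for an embedding model and `docs` for your PDF chunks, and this is the shape of the whole plugin: the model never sees your full knowledge base, only the retrieved chunks.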
You will have to get your hands dirty here to try it, but it is worth it!
The announcement tweet was written by GPT-4 (post here by OpenAI):

Watch an example video retrieving from UN documents.
Try it (needs coding): https://github.com/openai/chatgpt-retrieval-plugin
Try a ‘related but not the same’ product (free, no login) using ChatGPT across any webpage or document: https://klavier.ai/
(For Klavier.ai, try giving it the Leta Transcripts (82,000 words!); it’s a lot of fun! https://lifearchitect.ai/leta-transcripts/)
Next
April promises to be loud. Several colleagues have significantly altered their estimates for AGI, with some saying it will fully arrive within 12-18 months, by the end of next year (2024).
All my very best,
Alan
LifeArchitect.ai
Archives | Unsubscribe new account | Unsubscribe old account (before Aug/2022)