The Memo - 19/Mar/2023 (20+ announcements)
ERNIE Bot, Stanford Alpaca 7B, Midjourney v5 with hands, ChatGLM-6B, and much more!
FOR IMMEDIATE RELEASE: 19/Mar/2023
Welcome back to The Memo.
There are over 20 major announcements in this edition.
If you’ve been following my work on post-2020 AI, you will have noticed that I tend towards optimism. In my recent livestream about GPT-4 (watch the replay), I commented—for perhaps the first time—that the GPT-4 model and its implications are ‘scary’. I’ve generally avoided using that word, and even chastised the media for using it, preferring the word ‘exhilarating’ and sometimes ‘confronting’ to describe post-2020 AI.
A few hours after my livestream, OpenAI’s CEO also went live, admitting that he feels the same way. On 17/Mar/2023, he told ABC America:
We've got to be careful here… we are a little bit scared of this.
The reasons for my fear around this particular model are many, and I address each of them in the livestream (replay). They include:
OpenAI cronyism and preferential treatment. Some ‘friends’ of OpenAI got access to the GPT-4 model eight months ago, in August 2022. This included OpenAI’s President giving his former company Stripe early access to the model. I find this especially egregious given that OpenAI had planned on ‘delaying deployment of GPT-4 by a further six months [to Sep/2023]’ (paper) before making the model more generally available.
OpenAI trade secrets. OpenAI hid all technical details about the model, including token and parameter counts, architecture, and training dataset. We don’t know what’s in it. OpenAI’s Chief Scientist went on record to confirm that they were ‘wrong’ to ever publish details about models (16/Mar/2023).
GPT-4 capabilities. The performance of GPT-4 has been understated. GPT-4 scores in the 90th percentile of human test-takers on many metrics, including one particularly difficult competitive Olympiad (99.5th percentile), and now vastly outperforms the human average in fields ranging from medicine to law to wine tasting theory (LifeArchitect.ai/GPT-4#capabilities).
GPT-4 power-seeking. As discussed in The Memo edition 12/Feb/2023, AI safety is about more than just alignment with humanity. The GPT-4 model was tested for ‘power-seeking’: it was set loose (in a sandbox) and given money and VMs to see whether it could replicate itself and hoard resources. Additionally, GPT-4 was allowed to socially engineer (deceive) a real human worker on TaskRabbit, successfully convincing them to solve a CAPTCHA for it. (I hope you can see exactly why I’m a little concerned here!)
Economic impacts without a mitigation strategy. UBI—universal basic income—is not ready, and workers are already being displaced. As previously reported in The Memo edition 2/Mar/2023, 48% of surveyed companies admitted that they have already replaced workers with GPT-4’s predecessor (25/Feb/2023).
I’m using a temporary new format for this very long edition, due to the sheer depth and breadth of AI releases in the last few days. The format is:
Organization name: AI announcement or tool (date)
In the Toys to play with section, we look at the first book written with GPT-4, available for free download.
An eagle-eyed reader has pointed out that The Memo sits very high on the leaderboard of paid technology newsletters worldwide!
I’d also like to invite you to test out a little pilot roundtable for paid subscribers. This is an informal video call: an opportunity for you to ask me any questions you have, and to meet other paid readers. The first roundtable will be:
Life Architect - The Memo - Roundtable #1
Saturday 1/Apr/2023 at 5PM Los Angeles
Saturday 1/Apr/2023 at 8PM New York
Sunday 2/Apr/2023 at 8AM Perth
or check your timezone.
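If you’d like to double-check the start time for your own location, here is a minimal Python sketch using the standard-library zoneinfo module (Python 3.9+). The IANA timezone names are my own choices for the three cities listed above, not anything official from The Memo:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

# Roundtable #1 start time: Saturday 1/Apr/2023 at 5PM in Los Angeles
start = datetime(2023, 4, 1, 17, 0, tzinfo=ZoneInfo("America/Los_Angeles"))

# Convert the start time into the other listed timezones
for tz in ("America/New_York", "Australia/Perth"):
    local = start.astimezone(ZoneInfo(tz))
    print(f"{tz}: {local:%A %d/%b/%Y at %I%p}")

# America/New_York: Saturday 01/Apr/2023 at 08PM
# Australia/Perth: Sunday 02/Apr/2023 at 08AM
```

Swap in your own IANA timezone name (e.g. "Europe/London") to get your local start time.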
You don’t need to do anything for this; there’s no registration and no forms to fill in; I don’t want your email; you don’t even need to turn on your camera or give your real name!