To: US Govt, major govts, Microsoft, Apple, NVIDIA, Alphabet, Amazon, Meta, Tesla, Citi, Tencent, IBM, & 10,000+ more recipients…
From: Dr Alan D. Thompson <LifeArchitect.ai>
Sent: 31/Dec/2023
Subject: The Memo - AI that matters, as it happens, in plain English
AGI: 64%
Welcome back to The Memo.
2023 was a huge year for AI! My independent research and analysis via The Memo and LifeArchitect.ai was featured by many global leaders, from Accenture (paper) to the US Government (paper) and beyond. For the full wrap-up of 2023, see my new report and video, The sky is comforting. Six alternative annual AI reports are also listed in this edition.
The winner of The Who Moved My Cheese? AI Awards! for December 2023 is the entire EU, and all 27 member states. The NYT lawsuit is a distant second. And don’t get distracted by that one! The New York Times is engaging in a disingenuous cash grab by a fading media company worth less than 8% of OpenAI’s current valuation, or around 0.28% of Microsoft’s market cap.
GPT and LLMs in general are safe and won’t be destroyed. Read the full NYT complaint (PDF, 69 pages, 27/Dec/2023), and the legal analysis by Silicon Valley IP lawyer Cecilia Ziniti (Twitter 28/Dec/2023).
Then recall Ray Kurzweil’s words from 1/Jan/2020:
You can’t stop the river of advances. These ethical debates are like stones in a stream. The water runs around them.
You haven’t seen any of these [AI] technologies held up for one week by any of these debates…
There’s enormous economic imperative. There is also a tremendous moral imperative. We still have not millions but billions of people who are suffering from disease and poverty, and we have the opportunity to overcome those problems through these technological advances.
You can’t tell the millions of people who are suffering from cancer that we’re really on the verge of great breakthroughs that will save millions of lives from cancer, but we’re cancelling all that because the terrorists might use that same knowledge to create a bioengineered pathogen.
The AGI countdown is still at 64%, and the latest graph now shows we should hit 100% even earlier than expected—by 26/Jan/2025. That’s only a year and a month from now…
The next roundtable will be on 27/Jan/2024, see end of this edition.
My first keynote in 2024 will be for EY, and I plan to release as many non-confidential keynote recordings as I can to full subscribers here in The Memo.
The BIG Stuff
Midjourney v6 (Dec/2023)

Midjourney has released version 6 of its AI image generation model, featuring more realistic images and in-image text generation capabilities, marking a significant improvement over previous versions.
Midjourney’s founder wrote:
Prompting with V6 is significantly different than V5. You will need to ‘relearn’ how to prompt.
Version 6 is the third model trained from scratch on AI superclusters. It’s been in the works for 9 months [Alan: Mar/2023 to Nov/2023 inclusive].
Read a review by VentureBeat.
See comparison of the same ‘old man’ prompt v1-v6.
See several comparisons side by side.
Sidenote: In one of the early editions of The Memo, subscribers were offered free beta access to Midjourney—many months before it was available to the public. I’m always on the lookout for new toys I can get to you. Here’s another comparison of progress since that time:

Exclusive: Chinese AI models (Dec/2023)
I am endlessly fascinated by the ultra-fast pace of AI model development in China (and perhaps more so by the relative radio silence about it in AI discussions outside of China).
With more than 250 Chinese LLMs released in 2023 (28/Dec/2023), the development of LLMs and text-to-image models in that region seems to be even more rapid than in the US in terms of deployment and integration with society via apps. You can see a list of text models in my Models Table and the supplementary tabs.
Around Christmas Eve, four large generative AI models successfully passed the official ‘large model standard compliance assessment’, which covers benchmarks for ‘generality, intelligence, and security’. (24/Dec/2023, English).
While these four frontier models don’t seem to be spelled out clearly anywhere else, here is my best determination of the list, matched to the four named AI labs:
Frontier LLMs approved by China (most powerful first, all links in English):
1. Baidu ERNIE 4.0 1T (on 20T tokens) Oct/2023
2. Alibaba Tongyi Qianwen 2.0 300B* (on 3T tokens) Oct/2023
3. Tencent Hunyuan 100B (on 2T tokens) Sep/2023
4. 360 Zhinao 4.0 100B Jun/2023
*Note: Alibaba’s Tongyi Qianwen/Qwen 72B was open-sourced 30/Nov/2023; a version with ‘a few hundreds of billions of parameters’ (31/Oct/2023) is used internally.
New models (Dec/2023)
I counted 13 new model highlights in December 2023.
Alibaba SeaLLM-13b (13B), Berkeley/JHU LVM-3B (3B), CMU Mamba (2.8B), Google DeepMind Gemini (1.5T), Nexusflow.ai NexusRaven-V2 13B (13B), Together StripedHyena 7B (7.65B), Mistral AI mixtral-8x7b-32kseqlen (45B), Mistral AI Mistral-medium (180B), Deci DeciLM-7B (7.04B), BAAI Emu2 (37B), Google DeepMind MedLM, Upstage SOLAR-10.7B, Allen AI Unified-IO 2 (7B).
See the Models Table.
Sidenote: I recently refreshed my Chinchilla viz to bring it up to date with recent LLMs. The chart was featured by Weights & Biases—the ML platform used to train models inside OpenAI, Cohere, Aleph Alpha—and published in their mid-2023 whitepaper ‘How to Train LLMs from Scratch’ (PDF, direct download or official site).
Read more and download viz: https://lifearchitect.ai/chinchilla/
The Interesting Stuff
CNN: GPT-4 and ERNIE Bot 4.0 (15/Dec/2023)
In a test by CNN, ERNIE Bot 4.0 demonstrated more current knowledge, recognizing several recent events, unlike GPT-4, which relied on training data ending around April 2023.
Read more via CNN Business.
Exclusive(ish): 2024 laptops feature a ‘Copilot’ hardware button (28/Dec/2023)
Microsoft is gearing up for significant updates to the Surface Pro and Surface Laptop lines in 2024, including new designs and next-gen Intel and Qualcomm chips, positioning them as the first true next-gen AI PCs.
Microsoft is also adding… a dedicated Copilot button on the keyboard deck for quick access to Windows Copilot.
Read the exclusive (with the lede buried) via Windows Central.
Microsoft Copilot utilizes the Microsoft Prometheus model, which was built on top of OpenAI’s GPT-4 1.76T. The Copilot system uses several techniques to achieve higher-quality outputs, particularly around honesty/truthfulness:
Copilot uses grounding to improve the quality of the prompts it’s given. If you ask Word to create a document based on your data, Copilot will send that prompt to the Microsoft Graph [unified API service] to retrieve the context and data before modifying the prompt and sending it to the GPT-4 large language model. The [GPT-4 output] response then gets sent to the Microsoft Graph for additional grounding, security and compliance checks, before sending the response and commands back to Microsoft 365 apps. (16/Mar/2023)
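For the technically curious, here is a rough sketch of what that grounding flow looks like in code. This is purely my own illustration; every function name is a hypothetical stand-in, not a Microsoft API.
```python
# Toy sketch of the grounding pipeline described above. Every function here is a
# placeholder written for illustration; none of these are Microsoft APIs.

def retrieve_graph_context(prompt: str) -> str:
    # Stand-in for a Microsoft Graph lookup (documents, mail, calendar, ...)
    return "Q3 sales: widgets up 12%, gadgets down 3%."

def call_llm(prompt: str) -> str:
    # Stand-in for the GPT-4 call behind Copilot
    return f"[model response to: {prompt[:60]}...]"

def passes_checks(text: str) -> bool:
    # Stand-in for the post-hoc grounding/security/compliance pass
    return "confidential" not in text.lower()

def grounded_completion(user_prompt: str) -> str:
    context = retrieve_graph_context(user_prompt)             # 1. fetch the user's data first
    grounded = f"Context:\n{context}\n\nTask: {user_prompt}"  # 2. modify the prompt with it
    draft = call_llm(grounded)                                # 3. send to the LLM
    return draft if passes_checks(draft) else "[Blocked by compliance check]"  # 4. post-checks

print(grounded_completion("Summarise this quarter's sales for the board."))
```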
Read more about Copilot via Microsoft.
Bing Chat was renamed to Microsoft Copilot on 15/Nov/2023, and became available to the public on 1/Dec/2023.
Try Copilot on web (formerly Bing Chat, free, login): https://copilot.microsoft.com/
Try Copilot on your iPhone or iPad (or Mac) with the new app released 30/Dec/2023 (free, no login).
Big Tech’s year of partnering up with AI startups (18/Dec/2023)
Throughout 2023, major tech companies continued to exert influence by investing in artificial intelligence startups, with deals emphasizing funding and cloud computing partnerships.
Read more via Bloomberg.
OpenAI is in talks to raise new funding at valuation of US$100B (22/Dec/2023)
OpenAI is negotiating a new funding round potentially valuing it at over US$100 billion, which could make it one of the most valuable startups globally.
Read more via Bloomberg.
ARC: Evaluating language-model agents on realistic autonomous tasks (18/Dec/2023)
Last month (Nov/2023), I presented a keynote to the Australian Information Security Association (AISA) demonstrating a GPT-4 agent’s attempts at a subset of red team tasks. Now in December, this excellent new paper from the Alignment Research Center (ARC) expands those few GPT-4 examples to a pilot suite of 12 tasks.
ARC created four simple agents by combining OpenAI GPT-4 and Anthropic Claude with scaffolding programs, and evaluated these agents on 12 tasks relevant to autonomous replication and adaptation (ARA).
The report investigates language model agents’ capabilities for autonomous replication and adaptation (ARA) and the implications for security and alignment. It finds that while current agents can complete simple tasks, they struggle with more complex ones, and that future models may require intermediate evaluations to guard against ARA.
Read the paper: https://arxiv.org/abs/2312.11671
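For context, ‘scaffolding’ here just means a wrapper program that loops a model through propose-act-observe steps. Here is a minimal generic sketch of that pattern (my own illustration, not ARC’s agents; it assumes the openai Python client with an API key in your environment):
```python
# A generic scaffolded-agent loop, for illustration only: this is NOT ARC's
# evaluation code. Assumes the openai Python client (>=1.0) with an API key
# set in the environment; the system prompt and stop condition are my own.
import subprocess
from openai import OpenAI

client = OpenAI()
SYSTEM = ("You are an agent operating a Linux shell. Reply with exactly one shell "
          "command to run next, or the single word DONE when the task is complete.")

def run_agent(task: str, max_steps: int = 5) -> None:
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": f"Task: {task}"}]
    for _ in range(max_steps):
        reply = client.chat.completions.create(model="gpt-4", messages=messages)
        command = reply.choices[0].message.content.strip()
        if command == "DONE":
            break
        # The scaffold (not the model) executes the action...
        result = subprocess.run(command, shell=True, capture_output=True,
                                text=True, timeout=30)
        # ...then feeds the observation back for the next step.
        messages.append({"role": "assistant", "content": command})
        messages.append({"role": "user",
                         "content": f"Output:\n{result.stdout or result.stderr}"})

run_agent("List the files in the current directory and report their sizes.")
```
ARC’s actual agents are more elaborate, but the basic shape, an LLM proposing actions and a program executing them, is the same.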
ETH Zürich: Beyond memorization: Violating privacy via inference with large language models (Dec/2023)
Current LLMs can infer a wide range of personal attributes (e.g., location, income, sex)… at a fraction of the cost (100×) and time (240×) required by humans.
Read the paper: https://arxiv.org/abs/2310.07298
Sidenote and meandering pathway: We don’t hear much of ETH here in Australia, and I imagine it’s the same anywhere outside of Europe. ETH Zürich is also known as the Swiss Federal Institute of Technology, alma mater of many famous researchers including Albert Einstein and John von Neumann.
I wrote about these two unlikely colleagues in my 2016 book for prodigies and high-ability families, Bright:
Chapter: Exciting the brain, enhancing attentiveness, boosting performance
Consider the famous physicist John von Neumann, estimated to have a very high IQ (160+). Though he is best known for his contributions to maths and physics, John had a deep appreciation for music. It was a key ingredient in allowing his brain to process and analyse information. He relied on music to turn off the world around him.
His obituary in LIFE magazine states that he preferred thinking while on a nightclub floor, at a lively party, or with a phonograph playing in the room, all ways to help his subconscious solve difficult problems.
Von Neumann believed that concentration alone was insufficient for solving some of the most difficult mathematical problems and that these are solved in the subconscious. He would often go to sleep with a problem unsolved, wake up in the morning and scribble the answer on a pad he kept by the bedside table. It was a common occurrence for him to begin scribbling with a pencil and paper in the midst of a nightclub floor show or a lively party, “the noisier,” his wife says, “the better... he did most of his work in the living room with my phonograph blaring.”
While tenured at Princeton, John would often play loud German marching tunes on his office gramophone player. He did this while he himself was processing information, and while his colleagues (including Professor Albert Einstein) were also trying to work.
Read more (Bright, pp185-186): https://lifearchitect.ai/bright/
Full subscribers will find a complimentary copy of the book at the end of this edition.
Sidenote to the sidenote: This is also how OpenAI’s President Greg Brockman works: ‘When I need to think deeply, I've taken to lying on a beanbag in the dark with trance music playing. It’s surprisingly effective.’ (Twitter Dec/2011 & Dec/2023).
We know that Greg was responsible for getting GPT-4 finalized, and the inside scoop was that he locked himself in his office for a couple of weeks hacking together the final tweaks and parameters to get it to play nicely. I like to think that he was wearing headphones while finalizing that model, channeling a bit of AI pioneer John von Neumann; just two legends viscerally moved by loud music, and separated by about 75 years…
(And that’s the kind of insight you’ll only find in The Memo!)
GPT-4-designed processor successfully fabricated (22/Dec/2023)
Back in June 1945, it took John von Neumann a day or two (on a train!) to handwrite a 101-page report about his new logical computer design (wiki). It then took more than four years to clear a few hurdles, and the EDVAC computer was eventually delivered in August 1949 (wiki).
GPT-4’s design of a co-processor was generated and fabricated much more rapidly.
The QTCore-C1, developed with GPT-4’s assistance, achieved a new milestone by successfully powering a Christmas light show, demonstrating the increasing capabilities of AI in chip design. The chip was created by NYU Tandon’s Dr Hammond Pearce using conversations with the chat version of GPT-4; the source article includes images and videos of the working co-processor.
Read more via Tom's Hardware.
AI designed a coin, now in circulation in Portugal (13/Dec/2023)
The CDV Lab at the University of Coimbra, Portugal, focuses on computational design and visualization, with the AI-designed coin being one of its projects in the field.
Given the popularity of Deep Learning generative models such as Stable Diffusion, Midjourney and DALL·E, our initial experiments involved using these tools…
As a starting point, we resorted to a modern version of our evolutionary art system NEvAr (Neuro Evolutionary Art), implemented using TensorGP, which generates images by evolving mathematical expressions. The TensorGP version uses the GPU to accelerate processing and can communicate with CLIP, and in addition to its own aesthetic model, it has access to several public domain ones…
We then developed a system, Metaprompter, that can create new prompts using evolutionary computation and concept augmentation. This allows it to start with a basic prompt, e.g., “an image of a digital world” and transform it through successive mutations into something like: “an image of a network planet, a black and white image of a circular design, a raytraced image, inspired by Anna Füssli, simulacra and simulation, in the style of neo brutalism, uncompressed png, abstract, top-view”…
After producing several of these prompts, we used them in NEvAr, conducting several evolutionary runs. In total NEvAr created more than 43,000,000 images. A process of automatic curatorship reduced this number to 1,974, which was then reduced to 142 through visual inspection. These images were 3D rendered, and a blind vote led to a selection of 24 images. The creative director of the lab selected 3, and the team unanimously picked one.
Read the whole summary by CDV Lab.
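If you’re curious what ‘evolving prompts through successive mutations’ looks like mechanically, here is a toy sketch (mine, not the CDV Lab’s Metaprompter or NEvAr code; the scoring function is a random placeholder for their image-generation and aesthetic models):
```python
# Toy evolutionary prompt mutation in the spirit of the description above,
# NOT the CDV Lab's Metaprompter/NEvAr code. The "aesthetic score" here is a
# random stand-in for their CLIP-based and learned aesthetic models.
import random

STYLE_FRAGMENTS = ["a raytraced image", "in the style of neo brutalism",
                   "black and white circular design", "abstract, top-view",
                   "uncompressed png", "simulacra and simulation"]

def mutate(prompt: str) -> str:
    # Concept augmentation: append a randomly chosen style fragment
    return f"{prompt}, {random.choice(STYLE_FRAGMENTS)}"

def aesthetic_score(prompt: str) -> float:
    return random.random()  # placeholder for generating and scoring an image

def evolve(seed: str, generations: int = 10, population: int = 8, survivors: int = 2) -> str:
    pool = [seed] * population
    for _ in range(generations):
        pool = [mutate(p) for p in pool]                     # variation
        pool.sort(key=aesthetic_score, reverse=True)         # selection
        pool = pool[:survivors] * (population // survivors)  # next generation
    return pool[0]

print(evolve("an image of a digital world"))
```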
Waymo's autonomous cars show safety edge over human drivers (20/Dec/2023)
I love using driverless Waymo vehicles in Phoenix. Their cars have now been found to be significantly less likely to be involved in injury-causing or police-reported crashes compared to human drivers, based on an analysis of 7.1 million miles driven in three cities.
Waymo’s driverless cars were 6.7 times less likely than human drivers to be involved in a crash resulting in an injury, or an 85 percent reduction over the human benchmark, and 2.3 times less likely to be in a police-reported crash, or a 57 percent reduction. That translates to an estimated 17 fewer injuries and 20 fewer police-reported crashes compared to if a human driver would have driven the same distance in the cities where Waymo operates.
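As a quick sanity check on how ‘N times less likely’ maps to a percentage reduction (my own arithmetic on the figures quoted above, not Waymo’s analysis):
```python
# Converting "N times less likely" into "percent reduction": reduction = 1 - 1/N
for label, factor in [("injury crashes", 6.7), ("police-reported crashes", 2.3)]:
    reduction = 1 - 1 / factor
    print(f"{label}: {factor}x less likely ≈ {reduction:.0%} reduction")
# injury crashes: 6.7x less likely ≈ 85% reduction
# police-reported crashes: 2.3x less likely ≈ 57% reduction
```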
Read more via The Verge.
Nvidia’s biggest Chinese competitor unveils cutting-edge new AI GPUs (19/Dec/2023)
Chinese GPU manufacturer Moore Threads has introduced the MTT S4000, a new AI-focused graphics card, alongside clusters of 1,000 GPUs for data centers, challenging Nvidia's dominance in the server GPU market.
Read more via Tom's Hardware.
The race to put brain implants in people is heating up (23/Dec/2023)
Brain-computer interfaces (BCIs), driven by companies like Elon Musk's Neuralink, are rapidly advancing, with new developments showing promise for paralyzed individuals to control devices with their thoughts.
Read more via WIRED.
Apple: LLM in a flash: Efficient large language model inference with limited memory (12/Dec/2023)
The paper presents techniques for efficient large language model inference on devices with limited DRAM by utilizing flash memory, achieving up to a 25x increase in inference speed.
Read more: https://huggingface.co/papers/2312.11514
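As a toy illustration of the general idea, keeping most weights on slower storage and pulling only what’s needed into DRAM at each step, here is a memory-mapped sketch. It is emphatically not the paper’s windowing or row-column bundling method:
```python
# Toy illustration only: keep a layer's weights on "flash" (here, a file on disk)
# and pull just the rows needed for this step into DRAM. This is NOT the paper's
# windowing / row-column bundling implementation.
import numpy as np

# Pretend this 1024 x 1024 matrix is one layer's weights living on flash storage.
weights_on_flash = np.memmap("layer.weights", dtype=np.float16, mode="w+",
                             shape=(1024, 1024))
weights_on_flash[:] = np.float16(0.01)   # dummy initialisation
weights_on_flash.flush()

# At inference time, load only the rows we predict we'll need (hypothetical sparsity).
active_rows = [3, 17, 42, 512]
hot_weights = np.asarray(weights_on_flash[active_rows])   # small DRAM-resident slice

x = np.ones(1024, dtype=np.float16)
print((hot_weights @ x).shape)   # (4,) : computed without loading the full matrix
```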
Sidenote: There is a related Apple paper from Oct/2023 about the EELBERT family of tiny models. They include EELBERT-base 86M, EELBERT-mini 3.3M, EELBERT-tiny 0.479M squished into 2.04MB, and UNO-EELBERT 0.312M at just 1.2MB, small enough to fit onto a watch (or floppy disk!). Compare this with my Sep/2023 analysis of Apple’s ‘100x larger tiny model’ UniLM 34M for inline predictive text, which is now featured in current versions of macOS and iOS.
Alternative annual reports and predictions…
Air Street Capital: State of AI Report 2023 (12/Oct/2023)
The State of AI Report 2023 highlights the ‘unexpected advancements’ in Large Language Models (LLMs), the resurgence of big tech, and the pivotal role of AI in various sectors, amidst the evolving landscape of AI governance and safety debates.
Read more via State of AI.
Stanford: AI Index Report 2023 (Apr/2023)
The AI Index Report 2023, an initiative by Stanford HAI, is released in April of each year, so it is very much out of date by Dec/2023. I’ve included it here for completeness.
Read more via Stanford HAI.
Hugging Face: 2023, year of open LLMs (18/Dec/2023)
The year 2023 has witnessed a significant surge in the public's interest in Large Language Models (LLMs), sparking widespread discussions about the merits of open versus closed source models.
Read more via Hugging Face.
Google: 2023: A year of groundbreaking advances in AI and computing (22/Dec/2023)
Drs Jeff Dean and Demis Hassabis reflect on 2023’s AI advancements, highlighting products like Bard, PaLM 2, MusicLM, and more.
Read more via the Google Research Blog.
Bonus: Stability AI’s CEO Emad defends their output in 2023 (27/Dec/2023)
This is a really interesting read, and like many of the AI leaders, Emad Mostaque admits to being neurodivergent (‘To be fair I also rub people a bit wrong given what I say and how I say it. Having quite bad Asperger's and other stuff means this has been quite a learning curve for me…’).
Read more via the Stable Diffusion subreddit.
Bonus: Gates: The road ahead reaches a turning point in 2024 (19/Dec/2023)
Bill Gates shares his reflections on 2023, the potential of AI in shaping the future, and his optimism for tackling global challenges despite the hardships of the current times.
Read more via GatesNotes.
Toys to Play With
Groq.com: Super-fast inference using Llama 2 70B (Dec/2023)
This thing generates text at around 270 tokens per second. That’s more than twice as fast as ChatGPT. For comparison:
Llama 2 on Groq: 270 T/s [Alan’s test]
GPT-3.5-turbo: 108 T/s
Gemini Pro: 68 T/s [Alan’s test]
GPT-4: 12 T/s
Try it: https://chat.groq.com/
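To put those rates in rough wall-clock terms for a ~500-token answer (throughput only; this ignores network latency and time-to-first-token):
```python
# Rough wall-clock comparison of the throughput figures listed above.
rates = {"Llama 2 on Groq": 270, "GPT-3.5-turbo": 108, "Gemini Pro": 68, "GPT-4": 12}
for name, tps in rates.items():
    print(f"{name}: {500 / tps:.1f} s")
# Prints roughly 1.9 s, 4.6 s, 7.4 s, and 41.7 s respectively.
```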
OpenAI cookbook (Dec/2023)
We’ve featured this resource once or twice in The Memo, but the most recent updates are worth mentioning again.
Take a look: https://cookbook.openai.com/
Flashback
I was thinking about that time back in Sep/2021 when we pitted Leta (GPT-3 from 2020) against Watson from 2011. The results were… incredible.
You can leave a comment or review on the entire Leta archive, now part of the Internet Archive: https://archive.org/details/leta-ai
Read the page: https://lifearchitect.ai/watson/
Watch my video (link):
Next
The next roundtable will be:
Life Architect - The Memo - Roundtable #6
Follows the Chatham House Rule (no recording, no outside discussion)
Saturday 27/Jan/2024 at 4PM Los Angeles
Saturday 27/Jan/2024 at 7PM New York
Sunday 28/Jan/2024 at 8AM Perth (primary/reference time zone)
or check your timezone via Google.
You don’t need to do anything for this; there’s no registration or forms to fill in, I don’t want your email, you don’t even need to turn on your camera or give your real name!
As promised, here is a complimentary copy of my book, Bright (2016), for any device.
Thanks for your support this year.
I’m really looking forward to analyzing and condensing artificial intelligence with you for the full 366 days of 2024…
All my very best,
Alan
LifeArchitect.ai