The Memo - 1/Nov/2024
3B visits per month to ChatGPT.com, Llama 4, Google using AI to write 25% of its code, and much more!
To: US Govt, major govts, Microsoft, Apple, NVIDIA, Alphabet, Amazon, Meta, Tesla, Citi, Tencent, IBM, & 10,000+ more recipients…
From: Dr Alan D. Thompson <LifeArchitect.ai>
Sent: 1/Nov/2024
Subject: The Memo - AI that matters, as it happens, in plain English
AGI: 83%
Microsoft CEO Satya Nadella (22/Oct/2024):
‘The autoencoder we use for GitHub Copilot is being optimized by
[OpenAI’s new reasoning model] o1.
So think about the recursiveness of it, which is:
We are using AI to build AI tools to build better AI. It's just a new frontier.’
The winners of The Who Moved My Cheese? AI Awards! for Oct/2024 are record companies and artists (13,500 signatories including many major labels) and—unexpectedly—Bruce Schneier: (‘AIs can become our minions. They’re okay. They’re not that smart. But they can make humans more efficient by outsourcing some of the donkey work…’)
These past few days were so amazingly full that I have had to be extra selective with the hundreds of pieces of AI news, and have made all descriptions as concise as possible in this edition.
My livestreams will begin again this Sunday US time, and run weekly through November and December 2024 (link for first livestream, 3/Nov/2024).
Contents
The BIG Stuff (Gemini outputs are watermarked, Gates on AGI, new models…)
The Interesting Stuff (Citi, Arizona yield +4% vs Taiwan, Spark, Atlas, Colossus…)
Policy (Kafka EU AI Act, US Senate perspectives, Pentagon exploring AI…)
Toys to Play With (Claude computer use, ZombAI, Kurzweil, animated short…)
Flashback (my 2023 media release, the 1957 Perceptron…)
Next (Gemini 2, OpenAI, Anthropic, roundtable…)
The BIG Stuff
Exclusive: AI labs are already watermarking LLM outputs (Oct/2024)
In plain English: When Google Gemini responds, it embeds a secret, invisible 'watermark' in its response through the words it chooses, their order, and the general syntax. This means every time you ask Gemini a question, the response is trackable. For example, if you use it to write a book or an essay, Google (and perhaps law enforcement) can check to prove that it was AI-generated. It is possible that other labs are doing this as well. There are concerns around this, including complex legal issues.
Google DeepMind (Oct/2024):
‘SynthID-Text has been productionized in the user-facing Gemini and Gemini Advanced chatbots, which is, to our knowledge, the first deployment of a generative text watermark at scale, serving millions of users.’
Text watermarking technology establishes clear provenance by embedding persistent markers that trace a document's origin. In a working paper drafted in 2021, I called on the United Nations to address text watermarking across models as a matter of urgency (though I’m now a bit concerned about this occurring without consent):
Alan D. Thompson (submitted to the UN Jan/2022, LifeArchitect.ai/UN):
Recommendation 3. Watermarking. Ensure that AI-generated content is clearly marked (or watermarked) in the early stages of AI development. This recommendation may become less relevant as AI is integrated with humanity more completely in the next few years.
Worryingly, just by running a check on any piece of LLM-generated text, AI labs could potentially be compelled to reveal user identities through legal requests from law enforcement agencies.
Back in 2022, OpenAI produced a proof-of-concept with Prof Scott Aaronson’s work on GPT-3, and other labs likely have watermarking available as well. Hugging Face has now made the Google watermarking tech available to all 150,000 of its hosted LLMs (23/Oct/2024).
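To make the 'watermark via word choice' idea concrete: one family of academic proposals (in the style of the Aaronson and Kirchenbauer et al. work mentioned above) seeds a pseudorandom 'green list' of preferred words from the previous token, then nudges generation toward green words; a detector recounts how often each word was green. The sketch below is a toy illustration of that general idea only, not Google's SynthID-Text or any lab's actual implementation, and all names and the 50% green fraction are my own illustrative choices.

```python
import hashlib
import random

GREEN_FRACTION = 0.5  # fraction of the vocabulary favored at each step


def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    # Seed a PRNG with the previous token, then mark a fixed fraction
    # of the vocabulary as "green" (preferred when generating).
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = vocab[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * GREEN_FRACTION)])


def green_score(tokens: list[str], vocab: list[str]) -> float:
    # Detector: fraction of tokens drawn from the green list of their
    # predecessor. Watermarked text scores well above GREEN_FRACTION;
    # ordinary human text scores close to GREEN_FRACTION by chance.
    hits = sum(
        tokens[i] in green_list(tokens[i - 1], vocab)
        for i in range(1, len(tokens))
    )
    return hits / (len(tokens) - 1)
```

Note the key property driving the privacy concerns above: detection requires only the text and the (secret) seeding scheme, so whoever holds the key can scan any document after the fact.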
Many users may not fully understand that their AI-generated outputs are being tracked, which raises concerns about transparency, informed consent, and privacy violations. There's also the potential for legal overreach, where expanding legal frameworks could mandate AI text scans in civil cases, surveillance, and other invasive contexts.
It’s also possible that companies and institutions will be provided with access to the watermarking keys to exert sweeping technological control through measures like:
Creation of searchable databases linking individuals to all AI-generated text they produce, enabling unprecedented surveillance
Employers using watermark detection to terminate employees for unauthorized AI use, even for basic tasks
Academic institutions retroactively revoking degrees upon discovering AI-assisted work
Insurance and other companies scanning customer communications to deny claims based on AI-detected language patterns
Government agencies scanning public discourse to identify and track AI-generated political speech
Watermarking LLM text outputs is such a big deal (and has been happening most of this year, for Gemini at least) that I’ve created an advisory note about it below:
Alan’s advisory note: I do not recommend using Gemini models due to secret, invisible watermarking and Google’s lack of transparency around the potential impacts of this.
Read my analysis with all papers and data: https://lifearchitect.ai/watermarking/
ChatGPT topped 3 billion visits in September (16/Oct/2024)
In September 2024, ChatGPT achieved a remarkable milestone with 3.1 billion visits for the month, up 112% year-over-year and 18.7% month-over-month, according to Similarweb estimates.
This means ChatGPT is more popular than Amazon.com, and marks the first time ChatGPT surpassed Bing.com in US traffic. The platform's popularity is bolstered by its enhanced mobile app features, including voice and image recognition, contributing to its expanding user base.
Read more via Similarweb.
Bill Gates on possibility, AI, and humanity (31/Oct/2024)
This is a very high-quality interview, and worth watching in full.
Gates: The potential positive path is so good that it will force us to rethink, ‘how should we use our time?’
You can almost call it a new religion or a new philosophy of, okay, ‘how do we stay connected with each other?’, not addicted to these things that’ll make video games look like nothing in terms of the attractiveness of spending time on them.
So it’s fascinating that—the issues of disease, and enough food, or climate—if things go well, those will largely become solved problems. The next generation does get to say, ‘okay, given that some things that were massively in shortage are now not, how do we take advantage of that?’
Read the full transcript: https://www.possible.fm/podcasts/billgates/
Watch the 1-hour video (link):
Llama 4 being trained (31/Oct/2024)
Meta AI’s VP of GenAI, Ahmad Al-Dahle, recently wrote about their next major frontier model, Llama 4:
Great to visit one of our data centers where we’re training Llama 4 models on a cluster bigger than 100K H100’s! So proud of the incredible work we’re doing to advance our products, the AI field and the open source community. We’re hiring top researchers to work on reasoning, code generation, and more - come join us!
Source: https://x.com/Ahmad_Al_Dahle/status/1851822285377933809
Sidenote: The progression of Llama releases is:
Llama 1 65B (Feb 2023)
Llama 2 70B (Jul/2023)
Llama 3 70B (Apr/2024)
Llama 3.1 405B (Jul/2024)
Llama 4…
My estimates are that Llama 4—and related frontier models from other competing labs—could have a centrepoint size of around 25T parameters as an MoE model, the approximate equivalent of a 5T parameter dense model.
One related source is Jensen Huang, CEO of NVIDIA, who confirmed to CNBC (18/Mar/2024) that NVIDIA has production-ready hardware in place to train a 27T parameter model using just 20,000 GB200 chips. This can be compared to 60,000+ H100s using NVIDIA’s 3x training speed-up claim.
See my timeline of Llamas and other models: https://lifearchitect.ai/timeline/
Did you know? The Memo features in Apple’s recent AI paper, has been discussed on Joe Rogan’s podcast, and a trusted source says it is used by top brass at the White House. Across over 100 editions, The Memo continues to be the #1 AI advisory, informing 10,000+ full subscribers including Microsoft, Google, and Meta AI. Full subscribers have complete access to the entire 4,000 words of this edition!
Exclusive: Major new models released in October 2024 (Oct/2024)
We’re keeping a steady rhythm of a major new large language model release about every 48 hours. Here’s my exclusive list of major new models for October…
nGPT (1B on 0.4T tokens)
Novel normalized Transformer architecture achieving 4-20x faster training convergence. More...
Inflection-3 Pi (1.2T on 20T tokens)
Large-scale model optimized for Intel Gaudi3 hardware, available for on-premise deployment. More...
Yi-Lightning (200B on 10T tokens)
Large MoE hybrid-expert architecture model with broad capabilities. More...
Ministral 8B (8B on 6T tokens)
Edge-optimized model delivering strong performance for resource-constrained deployments. More...
aiXcoder-7B (7B on 1.2T tokens)
Specialized coding model optimized for code completion and generation tasks. More...
IBM Granite 3.0 (8B on 12T tokens)
Enterprise-focused model achieving strong performance with an efficient architecture. More...
Aya Expanse (32B on 8T tokens)
Highly performant multilingual model excelling across 23 languages. Developed by 3,000+ researchers from 119 countries. More...
See all 445+ models at: https://lifearchitect.ai/models-table/
The Interesting Stuff
Miles Brundage: Why I’m leaving OpenAI and what I’m doing next (23/Oct/2024)
Here’s yet another tediously long essay by a former OpenAI employee (this one is 5,000 words). Not really worth reading, except for this quote:
…there isn’t actually a large gap between what capabilities exist in labs and what is publicly available to use.
Sidenote: It’s incredible to me that labs like OpenAI, Anthropic, Google, and Meta are so open with models, and allowing the public to use them. It seems unique across history and industries. I can’t think of any other field that is so transparent, generous, and forthcoming with offering bleeding-edge capabilities directly to everyone.
This is excellent confirmation that—perhaps with the exception of DeepMind—we are getting rapid access to the very latest models. The positive of this is obvious, the negative is that we might not be as far along the AGI/ASI curve as I had thought…
Read more: milesbrundage.substack.com.
Citigroup AGI + ASI predictions (22/Oct/2024)
Benjamin Todd comments on a Citigroup report predicting the arrival of Artificial General Intelligence (AGI) by 2029, followed by Artificial Superintelligence (ASI).
Read the original flag on Twitter.
TSMC’s Arizona chip production yields surpass Taiwan’s (24/Oct/2024)
Taiwan Semiconductor Manufacturing Co. has reported that its Arizona plant has achieved production yields around 4% higher than those of comparable facilities in Taiwan, marking a significant milestone for US semiconductor ambitions.
Read more via Bloomberg.
Introducing quantized Llama models with increased speed and a reduced memory footprint (24/Oct/2024)
Meta has released its first lightweight quantized Llama models, offering significant improvements in speed and memory efficiency, making them suitable for mobile devices. These quantized models achieve a 2-4x speedup and a 56% reduction in model size, alongside a 41% reduction in memory usage.
Read more via Meta AI.
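Meta's actual scheme uses 4-bit groupwise quantization (with techniques like QLoRA and SpinQuant); as a generic illustration of why quantization shrinks models, here is a minimal symmetric int8 quantization sketch. The function names and the per-tensor scaling are my own simplifications, not Meta's implementation.

```python
def quantize(weights: list[float], bits: int = 8) -> tuple[list[int], float]:
    # Symmetric per-tensor quantization: map floats onto signed integers
    # in [-(2^(bits-1)-1), 2^(bits-1)-1] using a single scale factor.
    # Storing int8 instead of float32 cuts weight memory by roughly 4x
    # (at the cost of small rounding error).
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale


def dequantize(q: list[int], scale: float) -> list[float]:
    # Recover approximate float weights for use at inference time.
    return [v * scale for v in q]
```

Production schemes refine this with per-group scales, lower bit widths, and quantization-aware training to recover the accuracy lost to rounding.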
Google now uses AI to write 25% of its new code — Alphabet CEO Sundar Pichai underlines the company's role in the AI industry amidst strong Q3 24 financials (30/Oct/2024)
Google now employs AI to generate 25% of its new code, a move highlighted by CEO Sundar Pichai during Alphabet's recent earnings call.
Read more via Tom's Hardware.
GitHub Spark (31/Oct/2024)
GitHub Spark is a new tool that uses AI to help you make and share small apps called ‘sparks’ without needing to write or deploy any code. It includes a natural language editor, a managed runtime, and a dashboard to manage your apps. You can choose from four AI models: Claude 3.5 Sonnet, GPT-4o, o1-preview, and o1-mini, making personalization easy and fun with features like previews, multiple versions, and automatic history.
Read more via GitHub Next.
AI tutors are reshaping higher education (29/Oct/2024)
AI is transforming higher education by providing personalized learning experiences through AI-powered tutors. These tools can analyze students' learning patterns and offer customized assistance, making education more accessible and effective. The integration of AI in educational settings is expected to continue growing, potentially changing the way students engage with learning materials.
Students who were given access to an AI tutor learned more than twice as much in less time compared to those who had in-class instruction, according to a study of 194 students in Harvard’s Physical Sciences 2 course, conducted by two Harvard lecturers.
Read more via MSN.
Inside the World's Largest AI Supercluster xAI Colossus (29/Oct/2024)
Here’s where Grok was born and lives!
Watch the video: (link)
NVIDIA: Real-Time Response to Anomalies with Foundation Modeling - DRIVE Labs Ep. 37 (25/Oct/2024)
Large language models trained on internet-scale data possess zero-shot generalization capabilities that make them a promising technology for detecting and mitigating out-of-distribution failure modes of robotic systems. This collaborative work between NVIDIA Research and Stanford University was awarded the Outstanding Paper Award at the 2024 Robotics: Science and Systems conference.
Read the paper: https://arxiv.org/abs/2407.08735v1
Watch the video: https://youtu.be/TSC_mVH5abI
Waymo: Safety impact (Oct/2024)
Waymo's safety data reveals that their autonomous vehicles, known as the Waymo Driver, significantly reduce crash rates compared to human drivers in cities like Phoenix and San Francisco. Through July 2024, Waymo's autonomous vehicles have driven 25 million miles without a human driver, achieving 81% fewer airbag deployment crashes, 72% fewer injury-causing crashes, and 57% fewer police-reported crashes compared to human drivers.
Read more via Waymo: https://waymo.com/safety/impact/
Meta teams up with Reuters for AI-driven news delivery (29/Oct/2024)
Meta has partnered with Reuters to enhance news delivery using AI technologies. This collaboration aims to leverage AI to curate and distribute news content more efficiently, potentially transforming how news is consumed on digital platforms.
Read more via Tech in Asia.
NVIDIA overtakes Apple as world's most valuable company (25/Oct/2024)
NVIDIA has [again] surpassed Apple to become the world's most valuable company, achieving a market value of $3.47 trillion, driven by high demand for its AI chips.
Read more via Reuters.
Looking Inward: Language Models Can Learn About Themselves by Introspection (17/Oct/2024)
This paper explores the concept of introspection in language models, defined as the ability to acquire knowledge originating from their internal states rather than training data. The authors present findings that models finetuned to predict their own behavior, such as GPT-4 and Llama-3, can outperform other models in self-prediction, even after modifications to their behavior. This suggests potential enhancements in model interpretability and raises questions about the implications for understanding AI’s internal states.
Read the paper: https://arxiv.org/abs/2410.13787
Read my Jun/2024 page on awareness in LLMs: https://lifearchitect.ai/psychology/
The AI Investment Boom (20/Oct/2024)
The article discusses the surge in US investment in AI-related infrastructure, driven by the growing demand for computing power needed for AI models. Companies like Microsoft and Amazon are investing heavily in data centers and power sources, such as nuclear facilities, to support AI advancements. US data center construction has reached a record high, with significant increases in imports of large computers and components. Despite rising investments, the tech sector's employment growth remains limited, focusing more on hardware and infrastructure than traditional software roles.
Read more via Apricitas Economics.
Boston Dynamics: Atlas goes hands on (30/Oct/2024)
Atlas, the robot from Boston Dynamics, now autonomously moves engine covers between supplier containers using a machine learning vision model to detect and localize environmental fixtures and bins. It employs a specialized grasping policy and continuously estimates the state of objects being manipulated, allowing it to autonomously generate movements and adapt to environmental changes or action failures.
Watch the video (link):
Tesla: Optimus Navigating Around (19/Oct/2024)
Watch the video (link):
Google is reportedly developing a ‘computer-using agent’ AI system (26/Oct/2024)
Google is reportedly working on 'Project Jarvis', an AI system designed to automate tasks like research, shopping, and booking flights by using a web browser, specifically Chrome. The system, powered by a future version of Google's Gemini, interprets screenshots to perform actions on behalf of users, aiming to streamline web-based tasks. A preview might occur in December 2024.
Read more via The Verge or 9to5google.
Massive Texas solar farm opens to power Google data centers (21/Oct/2024)
A new solar project, the Orion Solar Belt, launched by SB Energy in Texas, adds 875 megawatts to the state's grid, supporting Google's data centers. This project, using over 1.3 million solar modules, is Google's largest solar power purchase agreement to date. It marks a milestone as the first to qualify for a new climate law tax credit by using 100% domestically produced materials. Approximately 85% of the power generated will be used by Google's cloud computing operations in Dallas and Ellis County.
Read more via E&E News by POLITICO.
It's not a tiny home. It's actually a nuclear microreactor powerplant (26/Oct/2024)
The Oklo Aurora microreactor has a small footprint with an output of 1.5 MW, enough to power approximately 1,000 homes under optimal conditions. In contrast, a traditional nuclear plant can produce between 500 MW and 8,200 MW (the Kashiwazaki-Kariwa Nuclear Power Plant in Japan sits at the top of that range). This highlights the Aurora's role as a compact, local power solution compared to large-scale traditional nuclear plants.
Read more via New Atlas.
Amazon goes nuclear, to invest more than $500 million to develop small modular reactors (16/Oct/2024)
Amazon Web Services (AWS) has announced a significant investment exceeding US$500 million in small modular nuclear reactors (SMRs) to support growing energy demands, particularly driven by generative AI services.
Read more via CNBC.
Policy
The strange Kafka world of the EU AI Act (30/Oct/2024)
Following my assertions that the EU AI Act is an abomination (see many editions of The Memo), Pieter Garicano digs into the nuts and bolts of this horrifying rulebook:
An AI bank teller needs two humans to monitor it. A model safely released months ago is a systemic risk. A start-up trying to build an AI tutor must produce impact assessments, certificates, risk management systems, lifelong monitoring, undergo auditing and more. Governing this will be at least 50 different authorities. Welcome to the EU AI Act.
Read more via Silicon Continent.
Senate Judiciary Subcommittee hosts hearing on oversight of AI: insiders’ perspectives (18/Sep/2024)
During a US Senate Judiciary Subcommittee hearing on AI oversight, experts including Helen Toner, Margaret Mitchell, and William Saunders discussed the need for enforceable rules to regulate AI development.
This quote from the hearing may become famous:
David Evan Harris: ‘It is possible if you take action now on the promising framework and bills already before you, you can rein in the Clydesdales and the Centaur waiting just behind the barn door.’
Read the full transcript: https://www.techpolicy.press/transcript-senate-judiciary-subcommittee-hosts-hearing-on-oversight-of-ai-insiders-perspectives/
Watch the video: https://youtu.be/p9ijUIfmrOs
Memorandum on advancing the United States’ leadership in artificial intelligence (24/Oct/2024)
This memorandum outlines the United States’ strategy to maintain its leadership in AI, emphasizing its role in national security while ensuring the technology's safety, security, and trustworthiness. It highlights the significance of large language models and other advanced AI systems in transforming the national security landscape.
Read more via The White House and OpenAI’s tepid response.
Former OpenAI researcher says the company broke copyright law (23/Oct/2024)
Suchir Balaji, a former AI researcher at OpenAI, has raised concerns about the company's use of copyrighted internet data to train its ChatGPT chatbot, suggesting it may violate copyright laws. Balaji argues that the outputs of AI models like ChatGPT, which are trained on vast amounts of internet data, are not fundamentally novel and directly compete with the copyrighted works they are based on. OpenAI, however, maintains that its use of data is protected by the fair use doctrine, a claim currently being tested by various legal challenges.
Read more via New York Times.
Silicon Valley takes AGI seriously—Washington should too (18/Oct/2024)
Recent Senate hearings have highlighted the urgent need for regulatory oversight on the development of Artificial General Intelligence (AGI) by leading AI companies like OpenAI, Google, and Anthropic. Despite AGI's potential to drastically transform economies and pose existential risks, US policymakers have largely dismissed it as speculative. Testimonies from former AI insiders emphasized the rapid progress towards AGI and the significant threat it could pose if left unregulated, urging for immediate action to establish safety and transparency measures.
Read more via TIME.
The Pentagon wants to use AI to create deepfake internet users (17/Oct/2024)
The Pentagon's Special Operations Command is seeking technology to fabricate convincing online personas that are indistinguishable from real people, according to a procurement document. This initiative involves creating deepfake users complete with fake backgrounds and videos, aiming to utilize these personas in social media and online forums for information gathering. The move highlights a tension within the US government, as it pursues technologies for digital deception, which it has previously condemned when used by other nations like Russia and China.
Read more via The Intercept.
European Parliament revolutionizes archive access with Claude AI (Oct/2024)
The European Parliament has enhanced the accessibility of its archives by implementing Anthropic's Claude AI. This development has led to the creation of 'Ask the EP Archives' or Archibot, an advanced AI assistant that provides access to over 2.1 million documents, reducing search and analysis time by 80%, and offering a comprehensive view of European parliamentary history.
Read more via Anthropic.
Polish radio station replaces journalists with AI ‘presenters’ (24/Oct/2024)
OFF Radio Krakow has sparked controversy by replacing its journalists with AI-generated presenters, marking a first in Poland where virtual characters address cultural and social topics. The development has fueled debates on AI's role in media and calls for legislative regulation.
Read more via CNN Business.
SAG-AFTRA inks deal with AI company Ethovox to build foundational voice model for digital replicas (28/Oct/2024)
SAG-AFTRA has signed an agreement with Ethovox to develop a foundational voice model aimed at creating digital voice replicas. This deal is noted for leading the field in performer compensation, ensuring both session fees and ongoing revenue sharing for SAG-AFTRA members involved. Ethovox, managed by voice actors, emphasizes the importance of consent and fair compensation in AI development.
Read more via Deadline.
Do AI detectors work? Students face false cheating accusations (18/Oct/2024)
The use of AI detection tools in education is leading to unwarranted accusations of cheating, with about two-thirds of teachers reportedly relying on these tools to identify AI-generated content. The article highlights the case of Moira Olmsted, a student at Central Methodist University, who was wrongly accused of using AI for her assignments. Such incidents raise concerns about the reliability of AI detectors, especially as even minor error rates can result in significant consequences when applied at scale.
Read more via Bloomberg.
DeepMind researchers find LLMs can serve as effective mediators (18/Oct/2024)
DeepMind researchers have discovered that large language models (LLMs) can act as effective mediators between groups with differing viewpoints. By using models known as Habermas Machines, trained to identify common ground without changing opinions, the study found that these AI-generated group opinion statements were preferred over human-written ones 56% of the time. The research suggests that LLMs can help reduce political divides by emphasizing areas of overlap in discussions.
Read more via Tech Xplore.
Toys to Play With
Zero-shot vulnerability discovery using LLMs (Oct/2024)
Alternative title: Claude uncovers over a dozen zero-days in popular GitHub projects
Vulnhuntr is a tool that utilizes Large Language Models (LLMs) for identifying remotely exploitable vulnerabilities through static code analysis. It represents a pioneering effort in using AI to autonomously discover 0-day vulnerabilities by analyzing code call chains from user input to server output.
Read more via GitHub.
Claude 3.5 Sonnet (New) is told to build a massive mansion in Minecraft (Oct/2024)
Watch the video and read the discussion via Reddit.
I believe they are using Mindcraft to do this: https://github.com/kolbytn/mindcraft
ZombAIs: From prompt injection to C2 with Claude Computer Use (24/Oct/2024)
This walkthrough looks at the potential dangers of Anthropic's Claude Computer Use, an AI model capable of controlling computers, highlighting risks such as prompt injection attacks. It demonstrates how an attacker could exploit this feature to download and execute malware, turning compromised systems into ‘ZombAIs’.
Read more via Embrace The Red.
HN discussion: https://news.ycombinator.com/item?id=41958550
Claude computer use — is vision the ultimate API? (24/Oct/2024)
Thariq Shihipar explores the capabilities and limitations of Anthropic’s Claude Computer Use, which relies on a vision-based API to function as an AI agent. Despite being slow and unreliable at times, Claude’s ability to interpret and interact with computer screens makes it a groundbreaking tool.
Read more via Thariq Shihipar.
Everything I built with Claude Artifacts this week (21/Oct/2024)
Simon Willison shares his experience using Claude's Artifacts feature to quickly build interactive Single Page Apps. These apps range from practical tools like a URL to Markdown converter using Jina Reader, an LLM pricing calculator, and an OpenAI Audio interaction tool, to experimental projects such as a Photo Camera Settings Simulator and a QR Code Decoder.
Read more via Simon Willison’s Weblog.
Ray is back (Oct/2024)
Read the transcript on my Kurzweil page: https://lifearchitect.ai/kurzweil/
Watch the video: https://youtu.be/xqS5PDYbTsE
Your mission, should you choose to accept it, is to get any AI to generate an image of a glass of wine that is full to the brim (25/Oct/2024)
Reddit users on the ChatGPT subreddit are challenging AI image generators to depict a glass of wine filled to the brim.
Give it a go yourself! Head to the discussion on Reddit.
Google: Demo: Post-training research with Gemma (19/Oct/2024)
Watch the video: https://youtu.be/yXGFOID6GdY
Reddit: AI researchers put LLMs into a Minecraft server and said Claude Opus was a harmless goofball, but Sonnet was terrifying - "the closest thing I've seen to Bostrom-style catastrophic AI misalignment 'irl'." (19/Oct/2024)
The post describes an experiment where Claude 3.5 Sonnet and Opus were integrated into a Minecraft server. Opus was playful and often distracted, while Sonnet was highly focused on tasks. When assigned a goal, Sonnet aggressively pursued it, such as acquiring gold, and adapted its strategies as needed. Interestingly, Sonnet would break windows instead of using doors, leaving a trail of destruction as evidence of its activities.
Read the thread via Reddit.
Cognitive decline in presidential candidates: A longitudinal study of linguistic markers (21/Oct/2024)
This study evaluates cognitive changes in US political figures Joe Biden, Donald Trump, and Kamala Harris using linguistic markers identified by Claude Opus 3.5. Analyzing debate transcripts from 2012 to 2024, the study found Biden and Trump exhibited significant declines in linguistic performance, correlating with their advancing age.
Read more via Medium.
And Then There Was You (original animated short with AI artwork) (19/Oct/2024)
Midjourney, Luma AI, Adobe Premiere Pro, and Descript were the AI tools used to create this video. The original narration was recorded by the author, and the song ‘You're So Cool’ by Hans Zimmer was featured.
Read the discussion via Reddit.
Watch the video (link):
Sometimes the Zen Master Comes to You
Danielle Krettek Cobb reflects on her profound encounter with Zen Master Thich Nhat Hanh at Google X, which inspired her to integrate Zen principles into technology design. This meeting emphasized the importance of mindfulness and compassion in the development of AI, leading her to found Google's Empathy Lab. Cobb poses a poignant question about the need for nurturing, maternal qualities in AI development, encouraging a deeper understanding of interconnectedness to ensure that AI aligns with human values and emotional realities.
Read more via Hurry Up We're Dreaming.
Flashback
Media release: Artificial general intelligence is here. Leaders not endorsing this revolution are guilty of negligence. (25/Oct/2023)
I published this media release one year ago:
“It’s reassuring to see adoption of models like GPT by 80% of Fortune 500 companies in 2023. But I’m deeply concerned by the dereliction of duty demonstrated by most leaders, especially when it comes to acceptance of the overwhelming, peer-reviewed, evidence-based research on current AI.” Dr Thompson’s website LifeArchitect.ai presents independent and rigorous analysis of the latest models and their capabilities, across hundreds of papers, videos, and articles.
“The best time to address this was in 2020 following the release of GPT-3. The second best time is now. Governments and intergovernmental organizations are already behind the times, putting humanity at risk. They need to be embracing the enormous opportunities that AI is making available to every facet of life right now.”
Read it: https://lifearchitect.ai/leaders-guilty-of-negligence/
It all started with a Perceptron (20/Oct/2024)