The Memo - 5/Sep/2024
DOOM by AI, teacherless classrooms, second Neuralink implant, and much more!
To: US Govt, major govts, Microsoft, Apple, NVIDIA, Alphabet, Amazon, Meta, Tesla, Citi, Tencent, IBM, & 10,000+ more recipients…
From: Dr Alan D. Thompson <LifeArchitect.ai>
Sent: 5/Sep/2024
Subject: The Memo - AI that matters, as it happens, in plain English
AGI: 76%
OpenAI CEO at Stanford (24/Apr/2024):
‘Whether we burn $500 million a year, or $5 billion, or $50 billion a year,
I don’t care. I genuinely don’t. As long as we can stay on a trajectory where eventually we create way more value for society than that,
and as long as we can figure out a way to pay the bills.
We’re making AGI. It’s going to be expensive. It’s totally worth it.’
Contents
The BIG Stuff (DOOM, behind-the-scenes with 1X NEO, Colossus, Neuralink…)
The Interesting Stuff (Teacherless classrooms, OpenAI’s profit margin…)
Policy (US Govt testing OpenAI + Anthropic models, Copilot issues, newsreaders…)
Toys to Play With (Artifacts, Cerebras Voice…)
Flashback (This is it + It’s happening…)
Next (Roundtable…)
The BIG Stuff
DOOM by AI: Diffusion Models Are Real-Time Game Engines (27/Aug/2024)
Google has introduced ‘GameNGen’, the first game engine fully powered by a neural model, enabling real-time interaction with complex environments. It is showcased by simulating the game DOOM at over 20 frames per second on a single TPU, with human raters finding it difficult to distinguish short clips of the simulation from the real game. GameNGen is trained in two phases: a reinforcement-learning agent learns to play the game, then a diffusion model learns to predict the next frame from past frames and actions.
Today, video games are programmed by humans. GameNGen is a proof-of-concept for one part of a new paradigm where games are weights of a neural model, not lines of code. GameNGen… can effectively run a complex game (DOOM) interactively on existing hardware…
A small part of this vision, namely creating modifications or novel behaviors for existing games, might be achievable in the shorter term. For example, we might be able to convert a set of frames into a new playable level or create a new character just based on example images, without having to author code…
Hopefully this small step will someday contribute to a meaningful improvement in people’s experience with video games, or maybe even more generally, in day-to-day interactions with interactive software systems.
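To make the two-phase recipe concrete, here is a minimal runnable sketch of the idea in Python. Everything here (the toy env_step function, frame sizes, the random ‘policy’, and the placeholder frame predictor) is an illustrative stand-in, not the paper’s actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, CONTEXT = 64, 64, 4           # toy frame size and conditioning window

def env_step(action):
    """Stand-in for the real game engine: returns the next rendered frame."""
    return rng.random((H, W, 3))

# Phase 1: an agent plays the game; frames and actions are recorded.
frames, actions = [env_step(0)], []
for t in range(100):
    a = int(rng.integers(0, 8))     # stand-in for the RL agent's policy
    actions.append(a)
    frames.append(env_step(a))

# Each training example: (past frames, past actions) -> next frame.
examples = [
    (frames[t:t + CONTEXT], actions[t:t + CONTEXT], frames[t + CONTEXT])
    for t in range(len(frames) - CONTEXT)
]

# Phase 2: train a generative model on those examples. In the paper this is
# a diffusion model; at inference it replaces env_step(), so the 'game'
# runs entirely inside the model's weights.
for past_f, past_a, target in examples:
    pred = np.mean(past_f, axis=0)  # placeholder 'model'
    loss = float(np.mean((pred - target) ** 2))
print(f"final placeholder loss: {loss:.4f}")
```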
Read the paper: https://arxiv.org/abs/2408.14837
View the repo: https://gamengen.github.io/
Watch the video, fully generated by AI (link):
Introducing NEO Beta | A Humanoid Robot for the Home (1/Sep/2024)
1X, with investment from OpenAI, has finally revealed the NEO humanoid robot. It’s incredible. Watch for the blurred/censored wrist joints, which must be a closely-guarded trade secret.
See the short launch video by 1X: https://youtu.be/bUrLuUxv9gE
Watch the longer behind-the-scenes video by S3 (link):
Elon Musk announces Colossus AI training system (3/Sep/2024)
Elon Musk revealed that the xAI team brought the Colossus 100k-H100 training cluster online in just 122 days. Touted as the most powerful AI training system in the world, Colossus is set to double to 200k GPUs, including 50k H200s, in the coming months. Musk commended the excellent work of his team, NVIDIA, and their partners and suppliers.
Given how secretive the AI supercomputer field is, it is not publicly known just how large competitors’ supercomputers are, but here are some older and fuzzier numbers:
Read the original Tweet: https://x.com/elonmusk/status/1830650370336473253
Neuralink PRIME study progress update — second participant (21/Aug/2024)
Neuralink's second participant in the PRIME Study, Alex, successfully received his Link implant, which allows him to control digital devices with his mind. The surgery went smoothly, and Alex has already surpassed previous records for brain-computer interface cursor control. He has used the Link to enhance his gaming experience and has begun learning computer-aided design, designing a custom mount for his charger. The study aims to demonstrate the safety and utility of the Link in daily life, with ongoing efforts to expand its capabilities.
Read more via Neuralink.
Watch Alex play Counter-Strike via thought (link, no audio):
Exclusive: AI papers published per day (Sep/2024)
Remember this chart from Sep/2022? It shows the number of AI papers published per month since the 1990s. It was designed by a group of researchers from Max Planck and elsewhere. The original Sep/2022 paper used these arXiv paper categories: cs.AI, cs.LG, cs.NE, and stat.ML. To provide an update, I had a dig around arXiv—their Tableau data lake seems to be wildly inaccurate due to duplication, so I now use the cs.AI page which includes cross-posted entries from other fields (for example, a paper published to the stats category might be cross-posted to the AI category).
There were 20,759 AI papers published in 2024 up to Sep/2024. That’s:
1 new AI paper published every 16 minutes and 52 seconds.
If we wanted to read all papers on a given day—let’s choose Tuesday 27/Aug/2024—we’d have to read 135 papers.
If we read for 8 hours a day, we’d have to read one AI paper every 3 minutes and 34 seconds. Good luck!
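Here’s a quick back-of-the-envelope check of those figures, assuming the 20,759 papers span 1/Jan/2024 through 31/Aug/2024 (244 days in a leap year); expect small rounding differences from the numbers above:

```python
from datetime import timedelta

papers, days = 20_759, 244
print(timedelta(days=days) / papers)   # ~0:16:55 -> one paper every ~17 minutes

# Reading Tuesday 27/Aug/2024's 135 papers in an 8-hour day:
print(timedelta(hours=8) / 135)        # ~0:03:33 per paper
```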
Additionally, I tracked 13 new large language models announced last month (Aug/2024), and we already have two major new models in the first few days of September 2024: AI2 OLMoE-1B-7B and 01-ai Yi-Coder 9B.
See them on the Models Table: https://lifearchitect.ai/models-table/
The Interesting Stuff
Anthropic launches Claude Enterprise plan to compete with OpenAI (4/Sep/2024)
Anthropic has introduced a new subscription plan for its AI chatbot, Claude, aimed at enterprise customers seeking enhanced administrative controls and security. The Claude Enterprise plan competes with OpenAI's ChatGPT Enterprise by allowing businesses to upload proprietary knowledge, enabling Claude to act as a company-specific AI assistant.
Notably, Claude Enterprise features a larger context window of 500,000 tokens, surpassing its competitors, and includes integrations like GitHub for seamless use in engineering projects. Anthropic assures customers that their data will not be used to train AI models, addressing privacy concerns.
Read the announce: https://www.anthropic.com/enterprise
Read more via TechCrunch.
This adds to the very limited list of big enterprise AI software environments (not just the raw models). Here’s my (exclusive!) ranking of these enterprise AI platforms, in order of features:
Microsoft Copilot for Microsoft 365: https://www.microsoft.com/microsoft-365/microsoft-copilot
=== (there is a large gap between this platform and the rest in terms of capabilities)
Gemini for Google Workspace: https://workspace.google.com/solutions/ai
ChatGPT Enterprise: https://openai.com/chatgpt/enterprise
Palantir AIP: https://www.palantir.com/platforms/aip/
Claude Enterprise: https://www.anthropic.com/enterprise
Amazon Q: https://aws.amazon.com/q
OpenAI co-founder Sutskever's new safety-focused AI startup SSI raises US$1 billion (4/Sep/2024)
Safe Superintelligence (SSI), co-founded by former OpenAI chief scientist Ilya Sutskever, has raised US$1 billion to develop AI systems with a focus on safety. The newly established company, valued at US$5 billion, aims to surpass human capabilities while preventing potential AI-related harm. Prominent investors include Andreessen Horowitz (a16z) and Sequoia Capital.
Sutskever said his new venture made sense because he "identified a mountain that's a bit different from what I was working on."…
Sutskever said he will approach scaling in a different way than his former employer, without sharing details.
"Everyone just says scaling hypothesis. Everyone neglects to ask, what are we scaling?" he said.
"Some people can work really long hours and they'll just go down the same path faster. It's not so much our style. But if you do something different, then it becomes possible for you to do something special."
Read more via Reuters.
See Ilya’s website: https://ssi.inc/
UK’s first ‘teacherless’ AI classroom set to open in London (31/Aug/2024)
A private school in London, David Game College, is opening the UK’s first classroom taught by artificial intelligence in place of human teachers. The course uses AI platforms, including ChatGPT, to adapt lesson plans to each student’s strengths and weaknesses, aiming for a more precise and personalized learning experience.
Chris McGovern, ‘a retired head teacher and a former advisor to the policy unit at 10 Downing Street’, won The Who Moved My Cheese? AI Awards! for Sep/2024 for this ridiculous quote:
I understand why [schools] may push AI. For one thing, it's cheaper... The problem with AI and the computer screen is that it is a machine and it's inert, so you're straight away dehumanising the process of learning, taking away those interpersonal skills and the interaction between pupils and teacher. It's a soulless, bleak future if it's going to be along the AI path only.
Read more via Sky News.
Read more via BI.
There is also a related video of the school: https://youtu.be/MHFCVbUcwIE
Models Can Be “Regretful” After Making Mistakes (Sep/2024)
…During their solution generation process, after writing “Define [param] as” for a wrong [param], they often “realize” such a mistake, showing a regretful pattern in their internal states.
To see this, one can apply their probing technique to extract information from the model’s last hidden layer... it often knows it has made a mistake, right after stating the parameter name in full.
The statistical difference between the two cases signifies that the model’s internal states do exhibit a “regretful” pattern, which can be detected via probing... In other words, error detection is easy and is a skill almost already embedded within the model’s internal states, even when pretrained on correct math problems only.
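The probing technique itself is simple: fit a linear classifier on a model’s hidden states and check whether ‘correct’ and ‘mistake’ states are separable. Here’s a toy sketch using synthetic stand-in vectors (the hidden size, the shift between the two distributions, and all data are assumptions for illustration; the paper probes real model states):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n = 256, 1000                          # hidden size and samples (toy values)

# Stand-ins for last-hidden-layer states captured right after a parameter
# name is written: one distribution for correct steps, a shifted one for
# mistakes (the 'regretful' signature).
correct = rng.normal(0.0, 1.0, (n, d))
mistake = rng.normal(0.3, 1.0, (n, d))
X = np.vstack([correct, mistake])
y = np.array([0] * n + [1] * n)

probe = LogisticRegression(max_iter=1000).fit(X, y)
print(f"probe accuracy: {probe.score(X, y):.2f}")  # well above 0.5 -> separable
```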
Read the paper: https://arxiv.org/html/2408.16293v1
I wrote about the psychology of modern LLMs and their views of self and the world recently: https://lifearchitect.ai/psychology/
Watch the video (timecode): https://youtu.be/yBL7J0kgldU?t=5488
New Yorker: Was linguistic AI created by accident? (23/Aug/2024)
This is a tremendous read, with interviews of the original Google Transformer team. The invention of the transformer by Google researchers seven years ago unexpectedly revolutionized artificial intelligence, particularly in the field of language processing. Initially aimed at improving machine translation, the transformer revealed its surprising capability to generate coherent and imaginative text, reshaping our understanding of ‘writing’ and ‘thinking’. Despite its success, the creators of the transformer admit to a limited understanding of why it works so well, likening their discovery to a modern-day alchemy that continues to generate both intelligence and mystery.
“This is going to be a huge deal,” Vaswani said.
“It’s just machine translation,” Gomez said, referring to the subfield of AI-driven translation software, at which their paper was aimed. “Isn’t this just what research is?”
“No, this is bigger,” Vaswani replied…
The true power of the transformer became clearer in the next few years, as transformer-based networks were trained on huge quantities of data from the Internet. In the spring of 2018, Shazeer gave a talk titled “Bigger Is Better,” arguing that scaling transformers led to dramatic improvements and that the process did not appear to plateau; the more you trained the models, the better they got, with no end in sight. At Google, Shazeer was instrumental in developing the LaMDA chatbot, which holds the dubious distinction of being perhaps the first large language model that some poor soul believed to be sentient. At OpenAI, the ultimate result of scaling up was ChatGPT…
The production of AI seems to carry a powerful side effect: as the machines generate intelligence, they also generate mystery. Human misunderstanding endures, possibly a permanent condition.
NaNoWriMo is in disarray after organizers defend AI writing tools (3/Sep/2024)
The organization behind National Novel Writing Month (NaNoWriMo, begins 1/Nov every year) is facing backlash after stating that opposing AI writing tools is ‘classist and ableist’. NaNoWriMo argues that AI can assist those with different cognitive abilities and reduce the cost of hiring human assistants. However, many writers, including those with disabilities, criticized the stance, stating that generative AI could exploit and devalue human creativity. The controversy has led to resignations from the NaNoWriMo Writers Board and calls for more transparency.
Read more via The Verge.
NVIDIA admits Blackwell defect, but says it's fine now (29/Aug/2024)
NVIDIA has acknowledged a design defect in its Blackwell generation GPUs, which affected production yields. CFO Colette Kress stated that a change to the GPU mask has been executed to improve yields, and shipments are still planned for Q4. NVIDIA anticipates that despite the delay, production will ramp up and continue into fiscal year 2026, with significant revenue expected. Gartner analyst Gaurav Gupta noted that while design changes can be costly, improved yields could offset any losses.
Read more via The Register.
OpenAI’s first in-house chip will be developed by TSMC on its A16 Angstrom process for its Sora video applications (2/Sep/2024)
OpenAI is developing its first custom chip using TSMC's A16 Angstrom process to enhance the video-generation capabilities of its Sora applications. This venture could also boost Apple device sales, as the feature will be integrated into Apple's generative AI suite. Although initial plans for a dedicated foundry were scrapped, the collaboration highlights the ongoing AI race among tech companies to develop advanced custom solutions.
Read more via Wccftech.
OpenAI’s profit margin: Unit economics of LLM APIs (27/Aug/2024)
This analysis of the unit economics of OpenAI's API estimates a robust gross margin of around 75% as of June 2024, excluding model training costs and salaries. As usage transitions to the newer GPT-4o, margins are expected to remain healthy at around 55%. The study outlines two scenarios: increased competition leading to price drops, or a single superior model maintaining high prices and margins. The findings suggest significant price reductions for GPT-4-class models may be imminent, contrary to the belief that these services run at cost or at a loss.
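For anyone new to unit economics, the headline number is just gross margin per token served: (price − cost) ÷ price. A toy calculation with purely hypothetical prices and costs (not OpenAI’s actual figures):

```python
# Hypothetical numbers for illustration only.
price_per_m_tokens = 10.00    # revenue per 1M tokens served (USD)
compute_per_m_tokens = 2.50   # GPU cost to serve those tokens (USD)

gross_margin = (price_per_m_tokens - compute_per_m_tokens) / price_per_m_tokens
print(f"gross margin: {gross_margin:.0%}")   # 75% under these assumptions
```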
Read more via LessWrong.
Philippines' call centers navigate AI impact on jobs (27/Aug/2024)
The Philippines, known as the world's call center capital, is experiencing the effects of artificial intelligence on its outsourcing industry, which is expected to surpass US$38B in revenue this year. As major players in the industry integrate AI tools to remain competitive, the shift towards automation is beginning to take over some duties traditionally performed by human workers, highlighting the global challenge of balancing cost-cutting measures with job security.
Read more via Bloomberg.
OpenAI says ChatGPT's weekly users have grown to 200 million (29/Aug/2024)
OpenAI reports that ChatGPT now has 200 million weekly active users, double the 100 million reported in November 2023. 92% of Fortune 500 companies use OpenAI products, and API usage has doubled since GPT-4o mini's launch in July 2024.
Read more via Reuters.
AI agents are coming. Do we know what they will do? (24/Aug/2024)
The WSJ explores the capabilities of emerging AI agents that can autonomously perform tasks typically handled by humans, such as booking travel, making reservations, and even managing personal finances. As these bots become integrated into various industries, they promise to enhance efficiency and convenience.
While reading a transcript of a Loman bot’s call, Wiens saw that a Google bot had called it to check a restaurant’s business listing, and the two bots conversed about the availability of high chairs and a kids menu.
Read more via the WSJ.
Policy
US Govt to get early access to OpenAI, Anthropic AI to test for doomsday scenarios (29/Aug/2024)
OpenAI and Anthropic have signed groundbreaking deals with the US Government, granting early access for safety testing of their new AI models before public release. This collaboration aims to enhance AI safety by involving the US AI Safety Institute in assessing potential risks and improvements. While some support regulatory efforts, such as California’s proposed AI safety bill, others worry about stifling innovation. The partnerships underscore the significance of government involvement in AI development to ensure responsible deployment.
Read more via Ars Technica.
Top companies ground Microsoft Copilot over data governance concerns (21/Aug/2024)
Due to security and corporate governance concerns, many large enterprises are cautious about integrating Microsoft Copilot into their systems. As Jack Berkowitz from Securiti explains, businesses face challenges with Copilot accessing data they shouldn't, such as sensitive salary information. Around half of the chief data officers surveyed have paused their Copilot implementations. Berkowitz suggests that to make AI copilots effective, organizations need to ensure clean data and robust security measures are in place, similar to past challenges with enterprise search security.
Read more via The Register.
Mapping the misuse of generative AI - Google DeepMind (2/Aug/2024)
Google DeepMind's latest research, in collaboration with Jigsaw and Google.org, analyzes the misuse of generative AI technologies to enhance safety measures. By reviewing nearly 200 media reports from January 2023 to March 2024, the study identifies prevalent misuse tactics, such as impersonation and scams, and encourages the development of comprehensive safety evaluations. The research highlights the importance of proactive strategies to combat misuse and emphasizes initiatives like AI literacy campaigns and transparency in digital content.
Read more via Google DeepMind.
Venezuela's newest news agency employs AI anchors for reporter safety (2/Sep/2024)
A new Venezuelan news agency has started using AI-generated news anchors as a strategy to safeguard their journalists amid a government crackdown. This innovative approach aims to reduce the risks faced by human reporters operating in a politically sensitive environment. The initiative underscores the increasing adoption of AI in journalism, particularly in regions where press freedom is under threat.
Read more via US News & World Report (Reuters).
EFF: NO FAKES – A dream for lawyers, a nightmare for everyone else (19/Aug/2024)
The NO FAKES Act is criticized for potentially enabling censorship and complicating free speech. It allows individuals to sue over unauthorized digital replicas of their image or likeness, without Section 230 protections, leading to a ‘heckler’s veto’. The Act's broad definitions and lack of safeguards for lawful speech could result in private censorship, with platforms at risk for hosting content and facing hefty penalties. Critics argue the bill is an overreach, lacking proportionate measures to address AI-generated replica concerns.
Read more via Electronic Frontier Foundation.
AI runs for office in Wyoming, USA (22/Aug/2024)
In Cheyenne, Wyoming, Victor Miller has introduced an AI entity named VIC (Virtual Integrated Citizen) as a candidate for mayor, arguing that AI could potentially manage city functions more efficiently than humans. Despite a modest voter turnout for VIC, the initiative highlights emerging discussions about AI's role in governance. Concerns around AI's involvement in elections include ethical issues, potential misinformation, and privacy risks, as AI tools are often trained on internet data without explicit consent from content creators.
Read more via The US Sun.
Police officers are starting to use AI chatbots to write crime reports. Will they hold up in court? (27/Aug/2024)
A handful of police departments, including Oklahoma City, have begun experimenting with AI chatbots to draft initial crime reports, utilizing technology similar to ChatGPT. While officers appreciate the time saved, there are concerns from prosecutors and legal scholars about the potential for inaccuracies and biases in these AI-generated documents, which are fundamental to the criminal justice process. The AI tool, named Draft One, is being used selectively for minor incidents to avoid high-stakes legal implications.
Read more via Yahoo News.
Google Gemini will again support AI image generation of people (28/Aug/2024)
Google announced it will reintroduce the capability for its Gemini AI tool to generate images of people, following improvements to their Imagen 3 image generator. This comes after the feature was previously paused due to inaccuracies and controversies. The updated tool will not support photorealistic images of identifiable individuals, depictions of minors, or any excessively violent or sexual content. Google aims to gradually roll out this feature, initially in English, while continuing to refine it based on user feedback.
Read more via CNBC.
How Oprah will screw up the AI story (31/Aug/2024, 12/Sep/2024)
I’ve put this under Policy because loud voices influence the acceptance of AI in society, and Oprah has a uniquely loud voice.
CEO of Fog Creek Software, Anil Dash, critiques Oprah’s upcoming broadcast special on AI, predicting that it will focus narrowly on commercial, large-scale generative AI systems, ignoring the broader history and public aspects of AI development. Dash argues that the program will likely overlook critical issues such as labor rights, consent regarding training data, and the societal impacts of AI, while potentially promoting commercial interests without transparency about financial ties.
Read more via Anil Dash.
The special, titled “AI and the Future of Us: An Oprah Winfrey Special,” is set to air 12/Sep/2024 at 8PM ET, and stream on Hulu the next day.
Sam Altman, CEO of OpenAI, will explain how AI works in layman's terms and discuss the immense personal responsibility that must be borne by the executives of AI companies.
Microsoft Co-Founder and Chair of the Gates Foundation Bill Gates will lay out the AI revolution coming in science, health, and education, and warn of the once-in-a-century impact AI may have on the job market.
Toys to Play With
Claude Artifacts: Is My Blue Your Blue? (Sep/2024)
Dr Patrick Mineault is a neuroscience and AI researcher. He made this side project using Claude 3.5 Sonnet.
It takes about 20 seconds to run through the test, which asks you to distinguish green from blue. My boundary was at hue 176 on my MacBook Pro, and I think lower than that on my phone.
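Under the hood, the test simply compares each colour’s hue to your personal green/blue boundary. A toy version in Python, using my boundary of 176 (the sample colour is illustrative):

```python
import colorsys

BOUNDARY_HUE = 176  # degrees; varies by person and by display

def green_or_blue(r, g, b):
    """r, g, b in 0..255; classify relative to the boundary."""
    h, _, _ = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    return "green" if h * 360 < BOUNDARY_HUE else "blue"

print(green_or_blue(0, 255, 200))  # hue ~167 -> 'green' for my boundary
```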
Try it (free, no login): https://ismy.blue/
How Anthropic built Artifacts (27/Aug/2024)
Anthropic's development of Artifacts—a feature allowing interactive creation of websites, code snippets, and more—was achieved in just three months by a small, distributed team. The team used their own language model, Claude, for rapid prototyping and collaboration, showcasing their ability to leverage AI for software development. The feature, integrated with Claude 3.5 Sonnet, is noted as potentially leading a shift in how generative AI is used collaboratively, making complex engineering projects more efficient.
Read more via The Pragmatic Engineer.
Watch Anthropic’s fast and choppy video, perhaps edited by a squirrel (link):
System Prompts - Anthropic (12/Jul/2024)
Anthropic's latest updates to system prompts for Claude's web interface and mobile apps focus on enhancing information delivery and user interaction. The updated prompts guide Claude in providing accurate, concise, and context-aware responses, while ensuring it doesn't open URLs or links. Claude's behavior is fine-tuned to avoid unnecessary apologies or affirmations, and it uses markdown for coding tasks, with options for users to request code explanations.
Read more via Anthropic.
Cerebras Voice (Sep/2024)
Talk to the world's fastest AI voice assistant, powered by Cerebras.
Try it (free, no login): https://cerebras.vercel.app/
New Comprehensive LLM evaluation framework (Aug/2024)
The BenchmarkAggregator framework provides consistent evaluation of large language models (LLMs) across respected benchmarks such as MMLU-Pro and GPQA-Diamond. It aims to offer a holistic and fair view of model performance while balancing evaluation depth against resource constraints, and it supports easy integration of new models via OpenRouter, making it highly extensible and practical.
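For a sense of how the OpenRouter integration works, here is a minimal sketch of querying any model through OpenRouter’s OpenAI-compatible endpoint (the model slug and prompt are placeholders; this is not BenchmarkAggregator’s actual code):

```python
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "openai/gpt-4o-mini",  # any OpenRouter model slug
        "messages": [{"role": "user", "content": "A sample benchmark question..."}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```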
View the repo: https://github.com/mrconter1/BenchmarkAggregator
View the leaderboard: https://benchmark-aggregator-lvss.vercel.app/
George Hotz announces availability of Tinybox hardware (27/Aug/2024)
George Hotz announced that Tinybox, an AI server running Ubuntu, is now available for purchase. The Tinybox red model includes 6× AMD Radeon 7900XTX 24GB GPUs ($15k), while the Tinybox green model features 6× NVIDIA GeForce RTX 4090 24GB GPUs ($25k). Hotz claims it is the best performance-per-dollar machine learning box in the world, and highlights its full networking capabilities, which he cites as the important metric.
Read more: https://x.com/realgeorgehotz/status/1828197925874463166
See the specs or buy it: https://tinygrad.org/#tinybox
Video: How might LLMs store facts | Chapter 7, Deep Learning (31/Aug/2024)
I enjoyed skimming this video about ‘Unpacking the multilayer perceptrons in a transformer, and how they may store facts’.
Set aside some time to watch (link):
Jack Clark + Dr Andrew Critch + The Unbearable Slowness of Being (3/Aug/2024)
This is too much to write about completely, but you might like to set aside an hour to fully digest a different angle on the speed of AI, and a bit about the safety of AI.
Dr Andrew Critch says:
The attached video is from the perspective of an AI system just 50x faster than us. This is about the rate at which the fastest LLMs I've publicly heard about can produce text — 300-600 tokens/second — compared to the fastest human speech and typing (around 10-20 tokens/second; so we have a 15x-60x speed disadvantage).
Read Dr Andrew’s interesting-but-doomer post: https://x.com/AndrewCritchPhD/status/1802400857818079254
The related paper by Caltech is scathing. Or is that just ‘confronting’… again…
One species that operates at much higher rates is machines. Robots are allowed to play against humans in StarCraft tournaments, using the same sensory and motor interfaces, but only if artificially throttled back to a rate of actions that humans can sustain.
Sidenote: IBM had to apply the same throttling to Watson back in 2011 during the live Jeopardy! recording with humans: ‘After Watson gets the enable signal, the third and most discussed interface comes into play: the physical buzzer. If and only if Watson has computed an answer with a sufficiently good confidence will it send an electronic signal to its hand.’ See Kurzweil’s commentary: https://www.thekurzweillibrary.com/the-buzzer-factor-did-watson-have-an-unfair-advantage
It is clear that machines will excel at any task currently performed by humans, simply because their computing power doubles every two years. So the discussion of whether autonomous cars will achieve human-level performance in traffic already seems quaint: roads, bridges, and intersections are all designed for creatures that process at 10 bits/s. When the last human driver finally retires, we can update the infrastructure for machines with cognition at kilobits/s. By that point, humans will be advised to stay out of those ecological niches, just as snails should avoid the highways.
Read the Caltech paper ‘The Unbearable Slowness of Being’: https://arxiv.org/abs/2408.10234
Read Anthropic advisor Jack Clark’s take here: https://importai.substack.com/p/import-ai-384-accelerationism-human
If you want to go further down the rabbit hole, check out my research on the fastest brains in the world, during my human intelligence career. This is from my 2015 Mensa article, Children with superpowers: The magic of advanced brains.
Parallel processing
At 10 years old, Australian boy Chris Otway had a tested mental age of a 22-year-old, and an IQ of 200. Chris was a child with a high-performing brain. Some of the superpowers that came with this brain included the ability to process tasks in the background—while his mind was resting or doing other things. In a discussion with Miraca Gross, Professor of Gifted Education at the University of NSW, Chris described his ability to “parallel process” problems. One example of this was his natural capacity to work on two complex maths problems at the same time. Miraca reported:

“He seemed to be able to sense the point at which one set of predictions/speculations/calculations (Problem 1) was nearing the point of resolution. At that time he would put that problem on hold and bring Problem 2 to the forefront of his mind, aware that his subconscious mind was simultaneously working in parallel on Problem 1. When Problem 1 had attained resolution it would explode back into his conscious mind as an “aha!” moment which would bring the keenest intellectual and emotional pleasure. The solution would remain with Chris in detail and with complete clarity while he continued to work on Problem 2.”
One of the most striking things about the gifted population is the ongoing research helping with their advancement, and the advancement of average brains as well! Chris’ ability to harness his subconscious—to “slow down” and let background processes do their job—is one that benefits a much larger population than just the top 2%.
I hope it’s obvious why I moved out of the field of human intelligence and back into artificial intelligence a short time later!
A version of the above was published in my 2016 book, Bright, and full subscribers can download a complimentary copy of the book here: