There’s a part of me that is still a 12-year-old boy who wants to live in a Star Trek universe. And so this month I’ve been reading a big stack of books about AI, preparing for the new world ahead.
They fall roughly into three clusters:
Narratives about the early days of AI research, the origins of companies such as OpenAI and DeepMind, and the obsessive personalities driving those teams.
Cultish “sci-fi” scenarios of possible Advanced AI futures, both good and bad.
Skeptical critiques of the current AI giants, questioning their scientific claims, and pointing out the real present-day harms being done by them (environmental damages, labor exploitation, biased data).
HOW WE GOT HERE
For decades AI research had been an academic dead zone, its incremental progress serving to keep it perpetually deprived of funding and top scientists. But in 2010 three young researchers (Demis Hassabis, Mustafa Suleyman, and Shane Legg) founded DeepMind in London, with no money – just the audacious goal of creating a human-level Artificial General Intelligence (AGI). Hassabis’ utopian motto was “Solve intelligence, and use it to solve everything else.”
They made progress using a “neural network” approach, creating a basic AI that taught itself how to win at Atari videogames through reinforcement learning. The game demo impressed Google management, which bought DeepMind in 2014 and supplied the team with far more resources.
This step alarmed Google’s rivals in Silicon Valley, and in 2015 a paranoid Elon Musk agreed to fund Sam Altman’s alternative project, OpenAI, thus triggering an AI arms race. Soon Microsoft and the other tech giants started allocating vast resources to a handful of competing teams, and major technical breakthroughs began to cascade.
Most notably, LLMs (Large Language Models) made dramatic advances in predicting plausible continuations of conversational text, allowing the AI to generate surprisingly useful responses. OpenAI released ChatGPT to the public in November 2022, and it became an immediate sensation. Suddenly humans could talk to an alien mind (or had the illusion of doing so).
THE SCI-FI CULT
Reading about all the personalities involved in these research teams, one is struck by their similarities: cohorts of obsessed young men convinced that they could build an ASI (Artificial SuperIntelligence) that would either transform human civilization into a technological paradise, or drive our species to extinction.
These AI pioneers are not typical business entrepreneurs marketing a shiny new vacuum cleaner. Instead they often sound like religious zealots with messianic visions, espousing cultish sci-fi philosophies about the spread of humanity throughout the galaxy.
For example, here is a range of typical quotes from Sam Altman:
"Most of the world's problems... could be solved by really advanced AGI. We could solve climate change, we could cure all disease, we could have infinite energy.”
"In a decade, the amount of intelligence we have on Earth will increase by a factor of a million.”
"My worst fears are that we cause significant — or even extinction-level — harm to the world. And if this technology goes wrong, it can go quite wrong.”
"I expect AGI to be the most powerful technology humanity has yet invented. I expect it to create a world of abundance, where everyone has access to whatever they need, and the existential risks posed by resource scarcity, including those leading to climate change, are largely mitigated.”
Altman is not a rare outlier – many of the other prime players in the AI field have made similarly bold sci-fi claims, insisting that our species is on the verge of cataclysmic change (for good or ill).
These bizarre perspectives help to explain how we got here: the priests who founded DeepMind & OpenAI needed faith in a transcendent vision if they were going to pursue what seemed quite impossible back in 2010. They were dreaming of an AI God.
SKEPTICAL CRITIQUES
Finally, I read books by folks who challenge many of the scientific and social assumptions of the AI priesthood.
Computer scientist Melanie Mitchell points out that despite recent advances, current AI models are quite brittle, easily confused, and only excel in very narrow domains. She believes we are very far from creating an Artificial General Intelligence that will be competent across a broad spectrum of real-world functions.
Karen Hao’s book, Empire of AI, looks at OpenAI and its rivals through the lens of colonialism. Rather than “saving humanity” as Altman claims, she says that they are just profit-driven corporations exploiting cheap labor, embedding biases in their data, and causing terrible environmental damage.
In order for “deep learning” to occur, AI models have to be trained on vast databases, primarily of text and images scraped from the internet. Much of this trawled material contains the worst of human content: racist, violent, and sexist. AI firms therefore make some attempt to filter the data. Hao interviewed low-paid workers in Kenya, employed by OpenAI to read through this vile material (child-abuse scenarios and the like) for hours each day, work that caused them emotional stress and insomnia.
Hao notes other hidden costs, such as the environmental impact of the gargantuan data centers required:
“AI computing globally could use more energy than all of India.”
These data centers, often located in poor communities, require vast amounts of water to keep the systems cooled, harming the local ecosystems.
When asked about these grim climate damages, Altman and Hassabis both make the same claim: after they’ve built a SuperIntelligent AI, it will solve the climate crisis for us. We should simply trust the priests, and have faith in the New God they are summoning.
THE COMING STORM
Any prediction that I make today (May 31, 2025) is certain to be wrong. I don't have the technical expertise to say whether we'll have an Artificial SuperIntelligence within the next decade, or this century, or ever.
I’ll admit that the 12-year-old boy in me would still love to see a luminous Caretaker AI & its angelic robots guiding humanity into global abundance. But my 60-year-old self has a more jaded perspective, and lower expectations.
Multiple specialized AI agents will become embedded in every system we use daily. Rivalries between corporations and nations (the USA, China) will likely ensure rapid deployment in pursuit of tribal dominance, at the expense of safety measures and ethical concerns.
Whatever their benevolent public marketing claims, AI systems will ultimately reflect the needs of their powerful creators. (For example, Elon Musk eventually feuded with Sam Altman, and is now developing the Grok AI that aligns with his own distinct interests.)
There's a storm coming: a long-term, messy conflict between powerful, wealthy forces (corporate and political), leaving the rest of us to navigate the turbulent weather. There are no external saviors. We'll each have to find an internal anchor, paying attention to the rapid changes and highlighting the hidden costs, to meet the best and worst of what's to come with equanimity and civic responsibility.
(And okay, getting Star Trek's holographic AI doctor would be pretty cool.)
------------------
Thanks for reading this newsletter!
There are links at the bottom of the page: Like, Comment, and Share. Your responses attract new readers, and I'd love to hear your thoughts about the essays.
How useful or damaging AI becomes for the planet and for us humans depends, unfortunately, upon our ability as a species to set aside our worst instincts and act with integrity and altruism. Not holding my breath, but you never know!