Supremacy: AI, ChatGPT, and the Race That Will Change the World – A Riveting Dive into the AI Arms Race
- Mahendra Rathod

Introduction
Parmy Olson’s Supremacy: AI, ChatGPT, and the Race That Will Change the World isn’t just a chronicle of technological milestones — it’s a deep exploration of how ambition, ideology, and capitalism converge in the pursuit of artificial general intelligence. By tracing the rise of OpenAI and DeepMind, Olson reveals how this race is reshaping not just the tech industry, but the very foundations of society — from governance and ethics to privacy and power dynamics. The book offers a sobering reminder that in our rush to automate intelligence, we risk hardcoding inequality unless the direction of this race is deliberately and democratically steered.
With surgical precision, Olson dissects the rivalry between OpenAI and DeepMind, the two most prominent labs in the pursuit of Artificial General Intelligence (AGI). But this isn’t a book about code or models — it’s about philosophy, ambition, power, and ethics. Through Sam Altman and Demis Hassabis — two very different visionaries — we witness a high-stakes competition playing out on a global scale.
If you think AI is just about faster chatbots, Supremacy will disabuse you of that notion — fast.
OpenAI vs DeepMind: Origins and Opposites
This part of the story reads like tech-industry mythos: two labs, one dream, two very different roadmaps.
OpenAI
Born in Silicon Valley in 2015, founded by Sam Altman, Elon Musk, and a constellation of tech idealists.
Original mission: ensure AGI benefits all of humanity. Emphasis on open research, collaboration, and accessibility.
In 2019, the lab shifted to a “capped-profit” model to attract the funding required for AI at scale — a pragmatic pivot.
A multibillion-dollar partnership with Microsoft followed. Cue the launch of ChatGPT in 2022 — the AI that brought LLMs into dinner table conversations.
Sam Altman’s style: move fast, ship, gather real-world feedback, and deal with messiness later. He’s as much a strategist as he is a technologist.
DeepMind
Co-founded in London in 2010 by Demis Hassabis — a neuroscientist and chess prodigy with a passion for the brain — alongside Shane Legg and Mustafa Suleyman.
Acquired by Google in 2014 but operated semi-independently with a strong research-first ethos.
Focused on scientific AI: solving Go with AlphaGo, protein folding with AlphaFold, and working on real-world healthcare challenges.
Hassabis’s philosophy: understand intelligence deeply before scaling its use.
Prefers publishing in Nature over trending on Twitter.
Bottom line? OpenAI is the sprinter with a public stage. DeepMind is the marathoner with a research lab.
Rivalry in Action: Key Turning Points
Parmy Olson doesn’t paint this as a direct feud. Instead, she shows how the existence of one team constantly shaped the decisions of the other.
Major Milestones:
2016 – DeepMind’s AlphaGo defeats a world Go champion, shocking even AI insiders. A “moon landing moment” for AI.
2019 – OpenAI restructures into a “capped-profit” company and announces a major Microsoft partnership to fund bigger models.
2022 – ChatGPT launches, breaking the internet and redefining AI's relationship with the public.
2023 – Google merges DeepMind and Google Brain into Google DeepMind to counter OpenAI’s momentum, with the combined lab delivering the Gemini models.
Notable contrasts:
Altman opened the AI playground to the world; Hassabis preferred to perfect it behind closed doors.
Altman believes wide deployment is a learning mechanism. Hassabis believes premature deployment is risky, maybe reckless.
Altman speaks to regulators and Reddit. Hassabis speaks to academic journals and research councils.
The rivalry never got ugly — but it was always existential. Each new release from one lab forced a strategic recalibration at the other.
Ethics, Power, and the Stakes for Society
This is where Supremacy shines. It’s not just a tech drama. It’s a social mirror.
Olson warns that AI’s danger isn't limited to sci-fi “Skynet” fears. It’s about who builds it, why, and how much of it the public actually controls.
Core Themes:
1. AI Safety vs. AI Ethics
Safety = Preventing future extinction scenarios (e.g., rogue AGI).
Ethics = Making sure today’s AI doesn’t harm users, marginalize groups, or create hidden societal costs.
Many companies focus disproportionately on “future risks” while ignoring today’s biases, transparency problems, and accountability gaps.
2. Surveillance Capitalism 2.0
AI models need massive compute — which means they need massive revenue.
If paid subscriptions and partnerships don’t scale, the fallback may be familiar: data harvesting and hyper-targeted monetization.
Every interaction with a chatbot can become a new data point — and companies may not always be transparent about how that’s used.
3. Concentration of Power
Training large models requires data, compute, and capital — all cornered by a few tech giants.
Microsoft (OpenAI) and Google (DeepMind) now effectively control the frontier of generative AI.
Governments, nonprofits, and even academia can’t compete at the same scale — raising fears of monopolized digital cognition.
4. Lagging Governance
Laws aren’t keeping up. The EU’s AI Act is ahead, but global consensus is missing.
Altman himself called for regulation — but critics question whether such calls are preemptive power grabs or sincere safeguards.
Olson draws parallels to how industries like cars and pharmaceuticals evolved under regulation — suggesting AI needs similar checks without killing innovation.
Author’s Lens & Your Takeaway
Parmy Olson writes with restraint and clarity. She never romanticizes her subjects, but she respects their intelligence and intent.
What comes through:
Admiration for Altman and Hassabis as visionaries.
Deep concern about corporate concentration of power.
Frustration with the idealism-to-pragmatism drift in companies like OpenAI.
A call to not blindly trust benevolent narratives — even when dressed in mission statements and open-source platitudes.
Your takeaway as a reader?
This isn’t just a story of who builds AGI first.
It’s a test of whether we, as a species, can ensure wisdom scales with power.
Olson doesn’t give you easy answers — she hands you a flashlight and says: “Look closer. The race is bigger than the racers.”
Key Takeaways
The book documents the rise of two competing visions of AI — OpenAI (move fast, broad deployment) and DeepMind (slow, scientific rigor).
Altman’s ChatGPT changed public understanding of AI overnight. Hassabis’s AlphaGo and AlphaFold changed science forever.
Ethical questions loom large: Who controls AI? Is it biased? Will it serve public good or private gain?
AI’s development is now tied to tech monopolies — raising new issues of power, access, and regulation.
Olson’s narrative is balanced, fact-rich, and filled with urgency — this isn’t just about software, it’s about the soul of our future.
Further Reading: If You Liked Supremacy, You’ll Love These
AI & Tech Power
The Coming Wave by Mustafa Suleyman. A gripping, insider view on AI, synthetic biology, and how to regulate what’s coming next — from DeepMind’s co-founder.
Tools and Weapons by Brad Smith (Microsoft President). A powerful look at the responsibilities tech giants have as they reshape society.
The Age of AI and Our Human Future by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher. A diplomatic and philosophical view of AI’s impact on geopolitics and civilization.
The Philosophy & Ethics of AI
Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark. What happens when intelligence becomes decoupled from biology? A broad, speculative look.
The Alignment Problem by Brian Christian. Why building AI that reflects human values is so hard — and so necessary.
You Look Like a Thing and I Love You by Janelle Shane. A hilarious and revealing take on the weirdness and limitations of machine learning.
Society, Surveillance, and Ethics
The Age of Surveillance Capitalism by Shoshana Zuboff. The definitive book on how our data is being commodified by Big Tech.
Futureproof: 9 Rules for Humans in the Age of Automation by Kevin Roose. A smart, optimistic guide to staying human in an automated world.
Big Ideas, Big Questions
Homo Deus by Yuval Noah Harari. A look at the future of humanity in a world of superintelligent systems.
The Singularity Is Nearer by Ray Kurzweil. Bold predictions on the future of AI, biotech, and human-machine convergence.
Happy Reading!