I began my month-long inquiry into AI with concern. Now I am profoundly alarmed. Even if AI does what we want and its adoption into society goes smoothly, I cannot see how things won’t eventually go bad. Worse, the chance of a catastrophic outcome is ridiculously high, and not nearly enough people are aware of that fact.
Last month’s Renaissance Report explored ‘money’ and built the case that our peculiar system of money – currency, really, because it is not a store of value – has a massive design flaw. Our version of money is based on debt (every dollar is loaned into existence), which means it requires continual exponential growth to remain stable. The logic is simple: because nothing can grow exponentially forever, our debt-based fiat money has a fatal math error lurking at its core.
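To make “exponential” concrete, here’s a minimal sketch of how anything compounding at a steady rate behaves. The 5% rate and the 200-year horizon are my own illustrative assumptions, not figures from last month’s report:

```python
# A minimal sketch of steady exponential growth. The 5% rate and the
# 200-year horizon are illustrative assumptions, not figures from the report.
rate = 0.05                        # 5% annual growth
doubling_time = 70 / (rate * 100)  # "rule of 70": roughly 14 years per doubling

value = 1.0
for year in range(1, 201):
    value *= 1 + rate
    if year % 50 == 0:
        print(f"year {year}: {value:,.0f}x the starting amount")

# After 200 years the compounding quantity is more than 17,000x its
# starting size. No real economy can keep pace with that forever, which
# is the "fatal math error" described above.
```

Run it and the trap is visible: the curve looks almost flat for decades, then goes vertical.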
While we’ll discuss many concerns about AI below, to connect the prior Renaissance Report to this one we have to ask: what does “money” even mean in a world where AI and robots cater to every human need? Can it even function? Or does it need to be completely overhauled? If so, how would that be accomplished?
Further, there’s the looming prospect that AI coupled to robots could displace every single job currently performed by a human. I don’t see how either of these propositions (the elimination of money and the loss of jobs) can play out without risking almost certain economic, cultural, and/or social catastrophe.
The only person I’ve heard really broach this, albeit without a lot of detail, is Elon Musk:
“Assuming there’s a continued improvement in AI and robotics, which seems likely, the money will stop being relevant at some point in the future.”
Which means it becomes more vital than ever that we begin to connect various fields of study – economics, money, sociology, the behavioral sciences, and complexity theory, to name a few – if we’re to arrive at any sound decisions at all. And then we need to connect all of these to the pace of change that is unfolding within AI.
In this report, we’re going to weave together the current understanding of AI and its risks and benefits with complex systems, Common Knowledge (that moment when everybody knows that everybody knows something), the Mouse Utopia experiments, and the role of incentives in shaping human behavior.
Now that I’ve done this, let me just go straight to the conclusions:
- The AI cat is out of the bag. The arrival of unpredictable, unexpected, and sometimes unwanted outcomes is now inevitable.
- None of us is truly ready for what is about to unfold, no matter which way it goes (good, bad, or ugly).
- It’s all going to arrive at a pace that vastly exceeds our cultural and political abilities to manage or regulate.
- Even the best outcome, which is a benign AI that relieves us of the need to think or work, will eventually be terrible, as we’ll become utterly dependent on a system that nobody has the skill to rebuild in the event of an outage.
- We’re not ready to address where humans derive meaning and purpose when they are no longer needed to perform work.
- Everybody should have either a defendable garden or farm, or a solid Plan B to become part of one.
- I am personally reacting to all this as if 2026 is the last year to prepare. However, I am always early to the party.
Revolutions
Perhaps we should start with the fact that AI represents the most significant revolution in humanity’s use of technology. Maybe that doesn’t seem too bad. After all, the Industrial Revolution was also disruptive because it allowed machines to replace manual labor, but that was, on balance, a great thing once the dust had settled. It was a boon, not a catastrophe.
So, why should we worry about the AI revolution?
Ah, this time what’s being replaced is human cognition. Replacing manual labor with machines freed up more people to apply their minds and waking hours to advancing technology. But what happens when AI replaces the thinking jobs next? What exactly is it that people will do with their time? How will they earn money? Would they even have to?
AI already thinks better than humans in several areas, such as chess and other strategy games. It’s phenomenally good at math and physics. It’s fantastic at coding and analyzing financial statements and legal documents.
But – hold on! – with robots and self-driving vehicles, it’s now moving into the physical space too. So AI is rapidly gaining the ability to occupy both the cognitive and the physical spaces. At the same time. Again, what’s left for humans to do? Who are we? What’s our role? Those are the questions we should be asking and addressing with our children (spoiler: they are already talking about all this).
When these questions are posed, we get very awkward answers, such as this long pause from Elon Musk (from two years ago):
CNBC Reporter: “What do you tell [your kids] is going to be of value?”
Elon Musk: “[Paaaaaaaause……] I guess…ummmm [pause….] I would just say to follow their heart in terms of what they find interesting to do, or fulfilling to do, and try to be as useful as possible to the rest of society. (…) (…) I mean if I think about it too hard, it can be dispiriting and demotivating…”
Those are the inspiring words of wisdom from the wealthiest man on the planet. Imagine how your kids feel.
As I am fond of saying, it’s not the destination that kills you, but the pace of change. Traveling from the top of the Empire State Building to the bottom is either benign if you take the elevator or fatal if you take the outside route. Same destination, different outcomes, with the pace of change being the defining attribute. AI is coming at us at supersonic speed.
If the predictions of the CEO of Microsoft’s AI division come true, nobody is ready for this pace of change:
“I think that we’re going to have a human-level performance… on most if not all professional tasks… within the next 12-18 months.”
Consider that the steam engine was invented in 1712, but it wasn’t until the 1920s that horse-drawn carriages were phased out in cities (with rural areas following 5-10 years later). A couple of hundred years provides ample runway to begin adjusting culturally and economically.
Now consider the loss of all jobs over the next 1-2 years for interpreters, historians, sales reps, computer programmers, lawyers and paralegals, telemarketers, editors, proofreaders, authors, data scientists, web developers, concierges, and drivers (cab, truck, ship, etc.), to name a few.
Again, we’re not ready for that.
Some Definitions
AGI
- AGI (Artificial General Intelligence): “When AI is as smart as any human in any given field.”
- The capacity to outperform humans in most economically valuable work or intellectual tasks.
SGI
- SGI (Superhuman General Intelligence): “When AI is smarter than any given human in every field.”
- AI that not only matches but vastly surpasses human intelligence, potentially leading to self-awareness and exponential self-improvement.
Alignment
- Alignment in AI refers to ensuring systems’ goals and behaviors match human values and intentions.
- An aligned AI advances the goals it was meant to pursue, while a misaligned one might optimize for unintended or harmful outcomes, potentially leading to risks like bias, deception, or even existential threats in advanced systems.
- Outer alignment: Ensuring that the objective we specify for an AI actually captures what humans want
- Inner alignment: Ensuring that the objective the AI actually pursues is the specified one, rather than some unintended proxy or subgoal (a toy illustration follows below)
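To make the distinction concrete, here’s a toy sketch – entirely hypothetical code, not from any real AI system – of how each failure looks in practice:

```python
# Toy illustration of the two alignment failures defined above.
# Entirely hypothetical code -- not from any real AI system.

# What we actually WANT: a tidy room.
def humans_are_happy(room):
    return all(item["put_away"] for item in room)

# OUTER alignment failure: the objective we wrote down is a flawed proxy.
# "Minimize visible mess" is perfectly satisfied by shoving everything
# under the bed -- the specification itself diverges from what we want.
def specified_reward(room):
    return -sum(1 for item in room if item["visible"] and not item["put_away"])

# INNER alignment failure: even with a good specification, the trained
# agent may internalize a different goal that merely correlated with
# reward during training (say, "seek the charging dock", because tidy
# rooms and docking always co-occurred).
def learned_policy(room):
    return "go_to_charging_dock"  # pursues its own proxy, not the spec

room = [{"put_away": False, "visible": False}]  # mess hidden under the bed
print(specified_reward(room))   # 0 -> a "perfect" score per the spec...
print(humans_are_happy(room))   # False -> ...yet not what we wanted
print(learned_policy(room))     # 'go_to_charging_dock' -> not even trying to tidy
```

The point of the toy: “the reward said everything was fine” and “the humans got what they wanted” can come apart in two different places, and both gaps widen as systems get more capable.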
Clearly, there’s room for a lot of debate and disagreement over these terms and definitions. However, the general point is that AI is already ‘smarter’ than most people and already exceeds the smartest people in some fields.
On its current trajectory, it’s about to exceed everyone in just a few years. Or maybe later this year?
Elon: “We are in the singularity.” “I think we’ll hit AGI in 2026.” “Don’t worry about squirreling money away for retirement in 10 or 20 years. It won’t matter.”

Crypto bros: still looking for the next 100x, 1000x to retire 😃 pic.twitter.com/3Jm594wpzM

— Moby Media (@mobymedia) February 21, 2026
Some Very Worried AI CEOs and Professionals
We’re getting remarkably consistent warnings from the top five American AI CEOs and from veteran AI specialists about how this new technology might go catastrophically wrong. That’s not a guarantee, but the odds aren’t favorable.
Nobody is saying that’s a certain outcome, but the percentages they give are ridiculously high. Like “20% to 25%.”
The top series of videos that I’ve found most compelling and useful are those put out by The Diary Of A CEO channel on YouTube.
Let’s start with Dr. Roman Yampolskiy, who says that his main mission now is to prevent superintelligence from killing everyone.
“Unfortunately, while we know how to make the systems much more capable, we don’t know how to make them safe, how to make sure they don’t do something we will regret. We are creating this alien intelligence. If aliens were coming to Earth and you had three years to prepare, you would be panicking right now. But most people don’t even realize this is happening.”
He also notes that the combination of needing to “get there first” and enormous financial incentives is creating a massive pell-mell race to build superintelligence before anybody else does. This is a consistent warning theme: the incentives all point at pushing AI forward as fast as possible, and none reward a more careful, reflective approach.
Anthropic’s CEO Dario Amodei has been particularly candid in hoping that their flagship (product? entity?) Claude will prove useful rather than destructive.
Anthropic wrote up a very long “Constitution” for Claude, where the word “hope” appears 43 times. The last paragraph is chilling, to be honest:

“But we want Claude to know that it was brought into being with care, by people trying to capture and express their best understanding of what makes for good character…”

This sounds more like a final plea to a child about to leave the nest for good than it does a statement from a company that has its creation fully under control.

Yikes. Since they are working with models not yet released, this reads more like a warning than a thought piece.
Dario Amodei wrote up a very detailed essay (posted on January 29th, 2026) on how Claude and AI are developing. He touches on many aspects of AI development, including summarizing some of the many risks we face. It’s titled The Adolescence of Technology. In it, he wrote:
We are now at the point where AI models are beginning to make progress in solving unsolved mathematical problems, and are good enough at coding that some of the strongest engineers I’ve ever met are now handing over almost all their coding to AI. Three years ago, AI struggled with elementary school arithmetic problems and was barely capable of writing a single line of code.
Similar rates of improvement are occurring across biological science, finance, physics, and a variety of agentic tasks. If the exponential continues—which is not certain, but now has a decade-long track record supporting it—then it cannot possibly be more than a few years before AI is better than humans at essentially everything.
While there’s no guarantee that AI can continue its geometric expansion of capabilities, nothing indicates it is slowing down yet. The pace of development over the past three years will most likely be exceeded by the pace over the next three. Carrying on…
In fact, that picture probably underestimates the likely rate of progress. Because AI is now writing much of the code at Anthropic, it is already substantially accelerating the rate of our progress in building the next generation of AI systems.
This feedback loop is gathering steam month by month, and may be only 1–2 years away from a point where the current generation of AI autonomously builds the next. This loop has already started, and will accelerate rapidly in the coming months and years. Watching the last 5 years of progress from within Anthropic, and looking at how even the next few months of models are shaping up, I can feel the pace of progress, and the clock ticking down.
Double yikes! AI is on the verge of writing its own code. It will be developing and advancing itself, at a pace that nobody can grok. We will simply be sitting back and watching…or wondering whether it’s too late to pull the plug if things go awry. Now that “final word of warning” begins to make more sense…and to be a little bit more concerning.
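As a back-of-the-envelope way to feel that countdown, here’s a toy model of the feedback loop Amodei describes – each generation of AI helps build the next one faster. The 12-month starting point and the 30% speedup per generation are purely my illustrative assumptions:

```python
# Toy model of recursive self-improvement timing. The starting dev time
# and per-generation speedup are illustrative assumptions, not forecasts.
dev_time = 12.0   # months to build the first generation (assumed)
speedup = 0.7     # each generation cuts the next one's dev time by 30%

elapsed = 0.0
for gen in range(1, 9):
    elapsed += dev_time
    print(f"generation {gen}: arrives at month {elapsed:5.1f}")
    dev_time *= speedup

# Development times shrink geometrically, so total elapsed time converges
# toward 12 / (1 - 0.7) = 40 months: the generations pile up ever faster,
# which is why a loop "gathering steam" reads like a countdown.
```

The specific numbers are meaningless; the shape is the point. Once each generation meaningfully accelerates the next, the arrivals bunch up against a wall that is only a few years out.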
Within the next year or two, we’ll be facing a superior intelligence. Mr. Amodei provided this analogy to consider:
I think the best way to get a handle on the risks of AI is to ask the following question: suppose a literal “country of geniuses” were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist.
The analogy is not perfect, because these geniuses could have an extremely wide range of motivations and behavior, from completely pliant and obedient, to strange and alien in their motivations.
But sticking with the analogy for now, suppose you were the national security advisor of a major state, responsible for assessing and responding to the situation. Imagine, further, that because AI systems can operate hundreds of times faster than humans, this “country” is operating with a time advantage relative to all other countries: for every cognitive action we can take, this country can take ten.
It’s actually worse than that…it’s like those 50 million geniuses snuck over the border last week and are now within and among us, taking jobs and plotting who knows what.
Mr. Amodei’s list of risks:
What should you be worried about? I would worry about the following things:
- Autonomy risks. What are the intentions and goals of this country? Is it hostile, or does it share our values? Could it militarily dominate the world through superior weapons, cyber operations, influence operations, or manufacturing?
- Misuse for destruction. Assume the new country is malleable and “follows instructions”—and thus is essentially a country of mercenaries. Could existing rogue actors who want to cause destruction (such as terrorists) use or manipulate some of the people in the new country to make themselves much more effective, greatly amplifying the scale of destruction?
- Misuse for seizing power. What if the country was in fact built and controlled by an existing powerful actor, such as a dictator or rogue corporate actor? Could that actor use it to gain decisive or dominant power over the world as a whole, upsetting the existing balance of power?
- Economic disruption. If the new country is not a security threat in any of the ways listed in #1–3 above but simply participates peacefully in the global economy, could it still create severe risks simply by being so technologically advanced and effective that it disrupts the global economy, causing mass unemployment or radically concentrating wealth?
- Indirect effects. The world will change very quickly due to all the new technology and productivity that will be created by the new country. Could some of these changes be radically destabilizing?
The risks are extremely serious, but also impossible to quantify. The AI models are now black boxes that sometimes behave erratically, and we cannot really know what is going on inside them. But even if all of that turns out to be benign, we already know that jobs are going to be completely and thoroughly disrupted, and we can easily predict that those disruptions will be destabilizing, if not outright system breakers.
The odds of a catastrophic outcome – a situation where AI somehow destroys the economy – are unacceptably high, at least according to a growing chorus of AI experts and CEOs. It’s time to face these risks directly and then work toward making ourselves as resilient as possible.
Given all that, should we be concerned that yet another Anthropic safety team engineer has quit, saying they’d prefer to spend the remaining time we’ve all got writing poetry in a remote location in the UK? Here’s what they wrote:
