The AI revolution is here. Will the economy survive the transition?
The man who predicted the 2008 crash, Anthropic’s co-founder, and a leading AI podcaster jump into a Google doc to debate the future of AI—and, possibly, our lives
Michael Burry called the subprime mortgage crisis when everyone else was buying in. Now he’s watching trillions pour into AI infrastructure, and he’s skeptical. Jack Clark is the co-founder of Anthropic, one of the leading AI labs racing to build the future. Dwarkesh Patel has interviewed everyone from Mark Zuckerberg to Tyler Cowen about where this is all headed. We put them in a Google doc with Patrick McKenzie moderating and asked: Is AI the real deal, or are we watching a historic misallocation of capital unfold in real time?
The story of AI
Patrick McKenzie: You’ve been hired as a historian of the past few years. Succinctly narrate what has been built since Attention Is All You Need. What about 2025 would surprise an audience in 2017? What predictions of well-informed people have not been borne out? Tell the story as you would to someone in your domain—research, policy, or markets.
Jack Clark: Back in 2017, most people were betting that the path to a truly general-purpose system would come from training agents from scratch on a curriculum of increasingly hard tasks, and that through this you could create a generally capable system. This was present in the research projects from all the major labs, like DeepMind and OpenAI, which were trying to train superhuman players in games like StarCraft and Dota 2, and in Go with AlphaGo. I think of this as basically a “tabula rasa” bet—start with a blank agent and bake it in some environment(s) until it becomes smart.
Of course, as we all know now, this didn’t actually lead to general intelligences—but it did lead to superhuman agents within the task distribution they were trained on.
At this time, people had started experimenting with a different approach, doing large-scale training on datasets and trying to build models that could predict and generate from these distributions. This ended up working extremely well, and was accelerated by two key things:
the Transformer framework from Attention Is All You Need, which made this type of large-scale pre-training much more efficient, and
the roughly parallel development of “Scaling Laws,” or the basic insight that you could model out the relationship between capabilities of pre-trained models and the underlying resources (data, compute) you pour into them.
By combining Transformers and the Scaling Laws insights, a few people correctly bet that you could get general-purpose systems by massively scaling up the data and compute.
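To make the Scaling Laws idea concrete, here is a minimal numerical sketch of the power-law shape those papers describe; the constants and exponent below are illustrative placeholders, not the published fits.

```python
# Illustrative sketch of a compute scaling law: loss falls as a power law in
# training compute. The constants below are made-up placeholders, not real fits.

def predicted_loss(compute_flops, irreducible=1.7, scale=2.6e8, exponent=0.05):
    """Generic power-law shape: loss = irreducible + (scale / compute)^exponent."""
    return irreducible + (scale / compute_flops) ** exponent

for flops in (1e21, 1e23, 1e25):
    print(f"{flops:.0e} FLOPs -> predicted loss {predicted_loss(flops):.3f}")
# Each 100x jump in compute buys a smaller, but predictable, drop in loss.
```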
Now, in a very funny way, things are coming full circle: people are starting to build agents again, but this time, they’re imbued with all the insights that come from pre-trained models. A really nice example of this is the SIMA 2 paper from DeepMind, where they make a general-purpose agent for exploring 3D environments, and it piggybacks on an underlying pre-trained Gemini model. Another example is Claude Code, which is a coding agent that derives its underlying capabilities from a big pre-trained model.
Patrick: Due to large language models (LLMs) being programmable and widely available, including open source software (OSS) versions that are more limited but still powerful relative to 2017, we’re now at the point where no further development on AI capabilities (or anything else interesting) will ever need to be built on a worse cognitive substrate than what we currently have. This “what you see today is the floor, not the ceiling” is one of the things I think best understood by insiders and worst understood by policymakers and the broader world.
Every future Starcraft AI has already read The Art of War in the original Chinese, unless its designers assess that makes it worse at defending against Zerg rushes.
Jack: Yes, something we say often to policymakers at Anthropic is “This is the worst it will ever be!” and it’s really hard to convey to them just how important that ends up being. The other thing which is unintuitive is how quickly capabilities improve—one current example is how many people are currently playing with Opus 4.5 in Claude Code and saying some variation of “Wow, this stuff is so much better than it was before.” If you last played with LLMs in November, you’re now wildly miscalibrated about the frontier.
Michael Burry: From my perspective, in 2017, AI wasn’t LLMs. AI was artificial general intelligence (AGI). I think people didn’t think of LLMs as being AI back then. I mean, I grew up on science fiction books, and they predict a lot, but none of them pictured “AI” as something like a search-intensive chatbot.
The authors of Attention Is All You Need, which introduced the Transformer model, were all Google engineers using Tensor, and back in the mid-teens, AI was not a foreign concept. Neural networks and machine-learning startups were common, and AI was mentioned a lot in meetings. Google already had the large language model, but it was internal. One of the biggest surprises to me is that Google wasn’t leading this the whole way, given its Search and Android dominance, both with the chips and the software.
Another surprise is that I thought application-specific integrated circuits (ASICs) would be adopted far earlier, and small language models (SLMs) would be adopted far earlier. That Nvidia has continued to be the chip for AI this far into inference is shocking.
The biggest surprise to me is that ChatGPT kicked off the spending boom. The use cases for ChatGPT have generally been limited from the start—search, students cheating, and coding. Now there are better LLMs for coding. But it was a chatbot that kicked off trillions in spending.
Speaking of that spending, I thought one of the best moments of Dwarkesh’s interview with Satya Nadella was the acknowledgement that all the big software companies are hardware companies now, capital-intensive, and I am not sure the analysts following them even know what maintenance capital expenditure is.
Dwarkesh Patel: Great points. It is quite surprising how non-durable leads in AI so far have been. Of course, in 2017, Google was far and away ahead. A couple years ago, OpenAI seemed way ahead of the pack. There is some force (potentially talent poaching, rumor mills, or reverse engineering) which has so far neutralized any runaway advantages a single lab might have had. Instead, the big three keep rotating around the podium every few months. I’m curious whether “recursive superintelligence” would actually be able to change this, or whether we should just have a prior of strong competition forever.
Jack: On recursion, all the frontier labs are speeding up their own developers using AI tools, but it’s not very neat. It seems to have the property of “you’re only as fast as the weakest link in the chain”—for instance, if you can now produce 10x more code but your code review tools have only improved by 2x, you aren’t seeing a massive speedup. A big open question is whether it’ll be possible to fully close this loop, in which case you might see some kind of compounding R&D advantage.
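Jack’s weakest-link point is easy to see with a toy pipeline calculation; the stage hours and speedups below are made-up numbers, purely for illustration.

```python
# Toy illustration of the "weakest link" effect: speeding up one stage of a
# serial pipeline only helps as much as the other stages allow.
# Stage hours and speedups are made up purely for illustration.

def end_to_end_speedup(stage_hours, stage_speedups):
    """Overall speedup when each stage's time shrinks by its own factor."""
    before = sum(stage_hours.values())
    after = sum(hours / stage_speedups[stage] for stage, hours in stage_hours.items())
    return before / after

stage_hours = {"write_code": 6.0, "review": 3.0, "test_and_deploy": 1.0}
stage_speedups = {"write_code": 10.0, "review": 2.0, "test_and_deploy": 1.0}

print(round(end_to_end_speedup(stage_hours, stage_speedups), 2))
# Prints ~3.23: writing code got 10x faster, but the pipeline only ~3x faster.
```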
Do AI tools actually improve productivity?
Dwarkesh: The million-dollar question is whether the METR productivity study (which found that developers working in codebases they understood well were roughly 20% slower at completing and merging pull requests when using coding tools) or human-equivalent time horizons of self-contained coding tasks (which are already in the many-hours range and doubling every four to seven months) is a better measure of how much speedup researchers and engineers at labs are actually getting. I don’t have direct experience here, but I’d guess it’s closer to the former, given that there isn’t a great feedback verification loop and the criteria are open-ended (maintainability, taste, etc.).
Jack: Agreed, this is a crucial question—and the data is conflicting and sparse. For example, we did a survey of developers at Anthropic and saw a self-reported 50% productivity boost from the 60% of those surveyed who used Claude in their work. But then things like the METR study would seem to contradict that. We need better data and, specifically, instrumentation for developers inside and outside the AI labs to see what is going on. To zoom out a bit, the massive and unprecedented uptake of coding tools does suggest people are seeing some major subjective benefit from using them—it would be very unintuitive if an increasing percentage of developers were enthusiastically making themselves less productive.
Dwarkesh: Not to rabbit hole on this, but the self-reported productivity being way higher than—and potentially even in the opposite direction of—true productivity is predicted by the METR study.
Jack: Yes, agreed. Without disclosing too much, we’re thinking specifically about instrumentation and figuring out what is “true” here, because what people self-report may end up being different from reality. Hopefully we’ll have some research outputs on this in 2026!
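Dwarkesh’s “doubling every four to seven months” figure is worth translating into hours. A quick back-of-the-envelope, assuming a hypothetical starting horizon of four hours:

```python
# Back-of-the-envelope for "task horizons doubling every four to seven months."
# The starting horizon of 4 hours is an assumption, not a measured value.

def horizon_after(months_elapsed, start_hours, doubling_months):
    """Exponential growth: the horizon doubles once per doubling period."""
    return start_hours * 2 ** (months_elapsed / doubling_months)

start_hours = 4.0
for doubling_months in (4, 7):
    print(doubling_months, round(horizon_after(24, start_hours, doubling_months), 1))
# After two years: ~256 hours at a 4-month doubling, ~43 hours at a 7-month doubling.
```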
Which company is winning?
Michael: Do you think the podium will keep rotating? From what I’m hearing, Google is winning developers over from both AWS and Microsoft. And it seems the “search inertia” has been purged at the company.
Dwarkesh: Interesting. Seems more competitive than ever to me. The Twitter vibes are great for both Opus 4.5 and Gemini 3 Pro. No opinion on which company will win, but it definitely doesn’t seem settled.
Jack: Seems more competitive than ever to me, also!
Dwarkesh: Curious on people’s take on this: how many failed training runs/duds of models could Anthropic or OpenAI or Google survive, given the constant need to fundraise (side question: for what, exactly?) on the back of revenue and vibes?
Michael: The secret to Google search was always how cheap it was, so that informational searches that were not monetizable (and make up 80% or more) did not pile up as losses for the company. I think this is the fundamental problem with generative AI and LLMs today—they are so expensive. It is hard to understand what the profit model is, or what any one model’s competitive advantage will be—will it be able to charge more, or run cheaper?
Perhaps Google will be the one that can run cheapest in the end, and will win the commodity economy that this becomes.
Dwarkesh: Great point. Especially if you think many/most of the gains over the last year have been the result of inference scaling, which requires an exponential increase in variable cost to sustain.
Ultimately, the price of something is upper-bounded by the cost to replace it. So foundation model companies can only charge high margins (which they currently seem to be) if progress continues to be fast and, to Jack’s point, becomes eventually self-compounding.
Why hasn’t AI stolen all our jobs?
Dwarkesh: It’s really surprising how much is involved in automating jobs and doing what people do. We’ve just marched through so many common-sense definitions of AGI—the Turing test is not even worth commenting on anymore; we have models that can reason and solve difficult, open-ended coding and math problems. If you showed me Gemini 3 or Claude Opus 4.5 in 2017, I would have thought it would put half of white-collar workers out of their jobs. And yet the labor market impact of AI requires spreadsheet microscopes to see, if there is indeed any.
I would have also found the scale and speed of private investment in AI surprising. Even as of a couple years ago, people were talking about how AGI would have to be a government, Manhattan Project-style effort, because that’s the only way you can turn the economy into a compute and data engine. And so far, it seems like good ol’-fashioned markets can totally sustain investment in AI amounting to multiple percentage points of GDP.
Michael: Good point, Dwarkesh, re: the Turing test—that was definitely the discussion for a good while. In the past, for instance, during the Industrial Revolution and the Services Revolution, the impacts on labor were so great that mandatory schooling was instituted and expanded to keep young people out of the labor pool for longer. We certainly have not seen anything like that.
Jack: Yes, Dwarkesh and Michael, a truism for the AI community is that it keeps building supposedly hard tasks meant to measure true intelligence, then AI systems blow past those benchmarks, and you find yourself with something that is superficially very capable but still makes errors any human would recognize as bizarre or unintuitive. One recent example: LLMs scored as “superhuman” on a range of supposedly hard cognitive tasks, according to benchmarks, but were incapable of self-correcting when they made errors. This is now improving, but it’s an illustration of how unintuitive the weaknesses of AI models can be. And you often discover them alongside massive improvements.
Dwarkesh: I wonder if the inverse is also true—humans reliably make classes of errors that an LLM would recognize as bizarre or unintuitive, lol. Are LLMs actually more jagged than people, or just jagged in a different way?
Patrick: Stealing an observation from Dwarkesh’s book, a mundane way in which LLMs are superhuman is that they speak more languages than any human—by a degree that confounds the imagination—and with greater facility than almost all polyglots ever achieve. Incredibly, this happens by accident, even without labs specifically training for it. One of the most dumbfounding demos I’ve ever seen was an LLM trained on a corpus intended to include only English documents yet able to translate a CNN news article to Japanese at roughly the standard of a professional translator. From that perspective, an LLM that hadn’t had politeness trained into it might say, “Humans are bizarre and spiky; look how many of them don’t speak Japanese despite living in a world with books.”
Why many workers aren’t using AI (yet)
Patrick: Coding seems to be the leading edge for widespread industrial adoption of AI, with meteoric revenue growth for companies like Cursor, technologists with taste taking to tools like Claude Code and OpenAI Codex, and the vibes around “vibe coding.” This causes a pronounced asymmetry of enthusiasm for AI, since most people are not coders. What sector changes next? What change would make this visible in earnings, employment, or prices rather than demos?
Jack: Coding has a nice property of being relatively “closed loop”—you use an LLM to generate or tweak code, which you then validate and push into production. It took the arrival of a broader set of tools for LLMs to take on this “closed loop” property in other domains—for instance, web search capabilities and Model Context Protocol (MCP) connectivity have allowed LLMs to massively expand their utility beyond coding.
As an example, I’ve been doing research on the cost curves of various things recently (e.g. the dollar cost of putting mass into orbit, or dollars per watt from solar), and it’s the kind of thing you could research with LLMs prior to these tools, but it had immense amounts of friction and forced you to go back and forth between the LLM and everything else. Now that friction has been taken away, you’re seeing greater uptake. Therefore, I expect we’re about to see what happened to coders happen to knowledge workers more broadly—and this feels like it should show up in a diffuse but broad way across areas like science research, the law, academia, consultancy, and other domains.
Michael: At the end of the day, AI has to be purchased by someone. Someone out there pays for a good or service. That is GDP. And that spending grows at GDP rates, 2% to 4%—with perhaps some uplift for companies with pricing power, which doesn’t seem likely in the future of AI.
Economies don’t have magically expanding pies. They have arithmetically constrained pies. Nothing fancy. The entire software pie—SaaS software running all kinds of corporate and creative functions—is less than $1 trillion. This is why I keep coming back to the infrastructure-to-application ratio—Nvidia selling $400 billion of chips for less than $100 billion in end-user AI product revenue.
AI has to grow productivity and create new categories of spending that don’t cannibalize other categories. This is all very hard to do. Will AI grow productivity enough? That is debatable. The capital expenditure spending cycle is faith-based and FOMO-based. No one is pointing to numbers that work. Yet.
There is a much simpler narrative out there that AI will make everything so much better that spending will explode. It is more likely to pull spending down. If AI replaces a $500 seat license with a $50 one, that is great for productivity but is deflationary for productivity spend. And the productivity gained is likely to be shared by all competitors.
Dwarkesh: Michael, isn’t this the “lump of labor” fallacy? That there’s a fixed amount of software to be written, and that we can upper bound the impact of AI on software by that?
Michael: New markets do emerge, but they develop slower than acutely incentivized futurists believe. This has always been true. Demographics and total addressable market (TAM) are too often marketing gimmicks not grounded in reality. China’s population is shrinking. Europe’s is shrinking. The U.S. is the only major Western country growing, and that is because of immigration, but that has been politicized as well. FOMO is a hell of a drug. You look at some comments from Apple or Microsoft, and it seems they realize that.
Dwarkesh: As a sidenote, it’s funny that AI comes around just when we needed it to save us from the demographic sinkhole our economies would have otherwise been collapsing into over the next few decades.
Michael: Yes, Dwarkesh. In medicine, where there are real shortages, there is no hope for human doctors to be numerous enough in the future. Good medical care has to become cheaper, and technology is needed to extend the reach and coverage of real medical expertise.
Are engineers going to be out of work?
Patrick: AppAmaGooFaceSoft [Apple, Amazon, Google, Facebook, Microsoft] presently employ on the order of 500,000 engineers. Put a number on that for 2035 and explain your thinking—or argue that headcount is the wrong variable, and name the balance-sheet or productivity metric you’d track instead.
Michael: From 2000, Microsoft added 18,000 employees as the stock went nowhere for 14 years. In fact, headcount barely moved at Cisco, Dell, and Intel, despite big stock crashes. So I think it is the wrong variable. It tells us nothing about value creation, especially for cash-rich companies and companies in monopoly, duopoly, or oligopoly situations. I think it will be lower, or not much higher, because I think we are headed for a very long downturn. The hyperscalers laid off employees in 2022 when their stocks fell, and hired most of them back when their stocks rose, all within a couple of years.
I would track stock-based compensation’s (SBC) all-in cost before saying productivity is making a record run. At Nvidia, I calculated that roughly half of its profit is eliminated by stock-linked compensation that transferred value to those employees. Well, if half the employees are now worth $25 million, then what is the productivity gain on those employees? Not to mention, margins with accurate SBC costs would be much lower.
The measure to beat all measures is return on invested capital (ROIC), and ROIC was very high at these software companies. Now that they are becoming capital-intensive hardware companies, ROIC is sure to fall, and this will pressure shares in the long run. Nothing predicts long-term trends in the markets like the direction of ROIC—up or down, and at what speed. ROIC is heading down really fast at these companies now, and that will be true through 2035.
In his interview with Dwarkesh, Satya Nadella said that he’s looking for software to maintain ROIC through a heavy capital expenditure cycle. I cannot see it, and even coming from Nadella, it sounds like only a hope.
Dwarkesh: Naive question, but why is ROIC more important than absolute returns? I’d rather own a big business that can keep growing and growing (albeit as a smaller fraction of investment) than a small business that basically prints cash but is upper-bounded in size.
So many of the big tech companies have lower ROIC, but their addressable market over the next two decades has increased from ads ($400 billion in revenue a year) to labor (tens of trillions in revenue a year).
Michael: Return on invested capital—and, more importantly, its trend—is a measure of how much opportunity is left in the company. From my perspective, I have seen many roll-ups where companies got bigger primarily through buying other companies with debt. This brings ROIC into cold focus. If the return on those purchases ends up being less than the cost of debt, the company fails in a manner akin to WorldCom.
At some point, this spending on the AI buildout has to have a return on investment higher than the cost of that investment, or there is just no economic value added. If a company is bigger because it borrowed a lot more or spent all its cash flow on something low-return, that is not an attractive quality to an investor, and the multiple will fall. There are many non-tech companies printing cash with no real prospects for growth beyond buying it, and they trade at about 8x earnings.
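For readers who want the ROIC logic as arithmetic, here is a minimal sketch; every figure in it is hypothetical, chosen only to show how a capital-heavy buildout can push economic value added negative even when profits hold steady.

```python
# A minimal sketch of the ROIC-versus-cost-of-capital logic described above.
# All figures are hypothetical; the point is the comparison, not the numbers.

def roic_and_eva(nopat, invested_capital, cost_of_capital):
    """Return on invested capital, and the value added above the cost of that capital."""
    roic = nopat / invested_capital
    eva = (roic - cost_of_capital) * invested_capital
    return roic, eva

# Asset-light software business: $20B of capital earning $10B after tax.
print(roic_and_eva(nopat=10e9, invested_capital=20e9, cost_of_capital=0.09))
# The same $10B of profit after a hypothetical $200B buildout at a 9% cost of capital.
print(roic_and_eva(nopat=10e9, invested_capital=220e9, cost_of_capital=0.09))
# ROIC falls from 50% to about 4.5%, and economic value added flips deeply negative.
```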
Where is the money going?
Patrick: From a capital-cycle perspective, where do you think we are in the AI build-out—early over-investment, mid-cycle shakeout, or something structurally different from past tech booms? What would change your mind?
Michael: I do see it as different from prior booms, in that the capital spending is remarkably short-lived. Chips cycle every year now; the data centers of today won’t handle the chips of a few years from now. One could almost argue that a lot of this should be expensed, not capitalized. Or depreciated over two, three, or four years.
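The useful-life question translates directly into reported earnings. A toy straight-line depreciation comparison, using a hypothetical round number for annual chip purchases:

```python
# Toy straight-line depreciation comparison for the useful-life question.
# The $100B of annual chip purchases is a hypothetical round number.

def annual_depreciation(capex, useful_life_years):
    """Straight-line: a shorter assumed life means a bigger annual hit to earnings."""
    return capex / useful_life_years

capex = 100e9
for life_years in (6, 4, 2):
    print(life_years, f"${annual_depreciation(capex, life_years) / 1e9:.1f}B per year")
# A 6-year life charges ~$16.7B a year against income; a 2-year life charges $50B.
```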
Another big difference is that private credit is financing this boom as much as or more than public capital markets. This private credit is a murky area, but the duration mismatch stands out—much of this is being securitized as if the assets last two decades, while giving the hyperscaler outs every four to five years. This is just asking for trouble. Stranded assets.
Of course, the spenders are the richest companies on earth, but whether from cash or capital markets, big spending is big spending, and the planned spending overwhelms the balance sheets and cash flow of even today’s massive hyperscalers.
Also, construction in progress (CIP) is an accounting trick that I believe is already being used. Capital equipment not yet “placed into service” does not start depreciating or counting against income. And it can sit there forever. I imagine a lot of stranded assets will be hidden in CIP to protect income, and I think we are already seeing the potential for that.
In Dwarkesh’s interview, Nadella said he backed off some projects and slowed down the buildout because he did not want to get stuck with four or five years of depreciation on one generation of chips. That is a bit of a smoking-gun statement.
We are mid-cycle now—past the point where stocks will reward investors for further buildout, and getting into the period where the true costs and the lack of revenue will start to show themselves.
In past cycles, stocks and capital markets peaked about halfway through, and the rest of the capital expenditure occurred as a progressively pessimistic, or realistic, view descended on the assets of concern.
Dwarkesh: I think this is so downstream of whether AI continues to improve at a rapid clip. If you could actually run the most productive human minds on a B200 (Nvidia’s B200 GPU), then we’re obviously massively underinvesting. I think the revenues from the application layer so far are less informative than raw predictions about progress in AI capabilities themselves.
Jack: Agreed on this—the amount of progress in capabilities in recent years has been deeply surprising and has led to massive growth in utilization of AI. In the future, there could be further step-change increases in model capabilities, and these could have extremely significant effects on the economy.
What the market gets wrong
Patrick: Where does value accrue in the AI supply chain? How is this different from recent or historical technological advances? Who do you think the market is most wrong about right now?
Michael: Well, value accrues, historically, in all industries, to those with a durable competitive advantage manifesting as either pricing power or an untouchable cost or distribution advantage.
It is not clear that the spending here will lead to that.
Warren Buffett owned a department store in the late 1960s. When the department store across the street put an escalator in, he had to, too. In the end, neither benefited from that expensive project. No durable margin improvement or cost improvement, and both were in the same exact spot. That is how most AI implementation will play out.
This is why trillions of dollars of spending with no clear path to utilization by the real economy is so concerning. Most will not benefit, because their competitors will benefit to the same extent, and neither will have a competitive advantage because of it.
I think the market is most wrong about the two poster children for AI: Nvidia and Palantir. These are two of the luckiest companies. They adapted well, but they are lucky because when this all started, neither had designed a product for AI. Yet their products are being used as such.
Nvidia’s advantage is not durable. SLMs and ASICs are the future for most use cases in AI. They will be backward-compatible with CUDA [Nvidia’s parallel computing platform and programming model] if at all necessary. Nvidia is the power-hungry, dirty solution holding the fort until the competition comes in with a completely different approach.
Palantir’s CEO compared me to [bad actors] because of an imagined billion-dollar bet against his company. That is not a confident CEO. He’s marketing as hard as he can to keep this going, but it will slip. There are virtually no earnings after stock-based compensation.
Dwarkesh: It remains to be seen whether AI labs can achieve a durable competitive advantage from recursive self-improvement-type effects. But if Jack is right and AI developers should already be seeing huge productivity gains, then why are things more competitive now than ever? Either this kind of internal “dogfooding” cannot sustain a competitive advantage or the productivity gains from AI are smaller than they appear.
If it does turn out to be the case that (1) nobody across the AI stack can make crazy profits and (2) AI still turns out to be a big deal, then obviously the value accrues to the customer. Which, to my ears, sounds great.
Michael: In the escalator example, the only value accrued to the customer. This is how it always goes if no monopoly rents can be charged by the producers or providers.
What would change their minds
Patrick: What 2026 headline—technological or financial—would surprise you and cause you to recalibrate your overall views on AI progress or valuation? Retrospectively, what was the biggest surprise or recalibration to date?
Michael: The biggest surprise that would cause me to recalibrate would be autonomous AI agents displacing millions of jobs at the biggest companies. This would shock me but would not necessarily help me understand where the durable advantage is. That Buffett escalator example again.
Another would be application-layer revenue hitting $500 billion or more because of a proliferation of killer apps.
Right now, we will see one of two things: either Nvidia’s chips last five to six years and people therefore need fewer of them, or they last two to three years and the hyperscalers’ earnings will collapse and private credit will get destroyed.
Retrospectively, the biggest surprises to date are:
Google wasn’t leading the whole way—the eight authors of Attention Is All You Need were all Google employees; they had Search, Gmail, Android, and even the LLM and the chips, but they fumbled it and gave an opening to competitors with far less going for them. Google playing catch-up to a startup in AI: that is mind-blowing.
ChatGPT—a chatbot kicked off a multi-trillion-dollar infrastructure race. It’s like someone built a prototype robot and every business in the world started investing for a robot future.
Nvidia has maintained dominance this far into the inference era. I expected ASICs and SLMs to be dominant by now, and that we would have moved well beyond prompt engineering. Perhaps the Nvidia infatuation actually held players back. Or anticompetitive behavior at Nvidia did.
Dwarkesh: Biggest surprises to me would be:
2026 cumulative AI lab revenues come in below $40 billion or above $200 billion. Either would imply that things have significantly sped up or slowed down compared to what I would have expected.
Continual learning is solved. Not in the way that GPT-3 “solved” in-context learning, but in the way that GPT-5.2 is actually almost human-like in its ability to understand from context. If working with a model is like working with a skilled employee who has been with you for six months, rather than getting their labor in the first hour of their job, I think that constitutes a huge unlock in AI capabilities.
Since 2020, my timelines to AGI have narrowed considerably—the scaling results have made it hard to place much probability on “completely wrong track, wait until end of century.” At the same time, there is a core of human-like learning missing that a true AGI must have. I now expect something like 5 to 15 years. If progress deviates significantly from that trend line in either direction, I’ll update.
Jack: If “scaling hits a wall,” that would be truly surprising and would have very significant implications for both the underlying research paradigm as well as the broader AI economy. Obviously, the infrastructure buildout, including the immense investments in facilities for training future AI models, suggests that people are betting otherwise.
One other thing I’d find surprising is if there was a combination of a technological breakthrough that improved the efficiency of distributed training, and some set of actors that put together enough computers to train a very powerful system. If this happened, it would suggest you can not only have open-weight models but also a form of open model development where it doesn’t take a vast singular entity (e.g. a company) to train a frontier model. This would alter the political economy of AI and have extremely non-trivial policy implications, especially around the proliferation of frontier capabilities. Epoch has a nice analysis of distributed training that people may want to refer to.
How they actually use LLMs
Patrick: What was your last professionally significant interaction with an LLM? File off the serial numbers, if need be. How did you relate to the LLM in that interaction?
Michael: I use Claude to produce all my charts and tables now. I will find the source material, but I spend no time on creating or designing a professional table, chart, or visual. I still don’t trust the numbers and need to check them, but that creative aspect is in the past for me. Relatedly, I will use Claude in particular to find source material, as so much source material these days is not simply at the SEC or in a mainstream report.
Patrick: I think people outside of finance do not understand how many billions of dollars have been spent employing some of the best-paid, best-educated people in the world as Microsoft PowerPoint and Excel specialists. There is still value in that, for the time being, and perhaps the shibboleth value of pivot tables and VLOOKUP() will outlast their actual use, but my presentation at the Bank of England also used LLMs for all the charts. It feels almost bizarre that we once asked humans to spend hours carefully adjusting them.
Dwarkesh: They are now my personal one-on-one tutors. I’ve actually tried to hire human tutors for different subjects I’m trying to prep for, and I’ve found the latency and speed of LLMs to just make for a qualitatively much better experience. I’m getting the digital equivalent of people being willing to pay huge premiums for Waymo over Uber. It inclines me to think that the human premium for many jobs will not only not be high, but in fact be negative.
Michael: On that point, many point to trade careers as an AI-proof choice. Given how much I can now do in electrical work and other areas around the house just with Claude at my side, I am not so sure. If I’m middle class and am facing an $800 plumber or electrician call, I might just use Claude. I love that I can take a picture and figure out everything I need to do to fix it.
Risk, power, and how to shape the future
Patrick: The spectrum of views on AI risk among relatively informed people runs the gamut from “it could cause some unpleasantness on social media” to “it would be a shame if China beat the U.S. on a very useful emerging technology with potential military applications” to “downside risks include the literal end of everything dear to humanity.” What most keeps you up at night? Separately, if you had five minutes with senior policymakers, what new allocation of attention and resources would you suggest?
Jack: The main thing I worry about is whether people succeed at “building AI that builds AI”—fully closing the loop on AI R&D (sometimes called recursively self-improving AI). To be clear, I assign essentially zero likelihood to there being recursively self-improving AI systems on the planet in January 2026, but we do see extremely early signs of AI getting better at doing components of AI research, ranging from kernel development to autonomously fine-tuning open-weight models.
If this stuff keeps getting better and you end up building an AI system that can build itself, then AI development would speed up very dramatically and probably become harder for people to understand. This would pose a range of significant policy issues and would also likely lead to an unprecedented step change in the economic activity of the world, attributable to AI systems.
Put another way, if I had five minutes with a policymaker, I’d basically say to them, “Self-improving AI sounds like science fiction, but there’s nothing in the technology that says it’s impossible, and if it happened it’d be a huge deal and you should pay attention to it. You should demand transparency from AI companies about exactly what they’re seeing here, and make sure you have third parties you trust who can test out AI systems for these properties.”
Michael: Jack, I imagine you have policymakers’ ears, and I hope they listen.
AI as it stands right now does not worry me much at all, as far as risks to humanity. I think chatbots have the potential to make people dumber—doctors who use them too much start to forget their own medical knowledge. That is not good, but not catastrophic.
The catastrophic worries involving AGI or artificial superintelligence (ASI) are not too worrying to me. I grew up in the Cold War, and the world could blow up at any minute. We had school drills for that. I played soccer with helicopters dropping Malathion over all of us. And I saw Terminator over 30 years ago. Red Dawn seemed possible. I figure humans will adapt.
If I had the ear of senior policymakers, I would ask them to take a trillion dollars (since trillions just get thrown around like millions now) and bypass all the protests and regulations and dot the whole country with small nuclear reactors, while also building a brand-new, state-of-the-art grid for everyone. Do this as soon as possible and secure it all from attack with the latest physical and cybersecurity; maybe even create a special Nuclear Defense Force that protects each facility, funded federally.
This is the only hope of getting enough power to keep up with China, and it is the only hope we have as a country to grow enough to ultimately pay off our debt and guarantee long-term security, by not letting power be a limiting factor on our innovation.
Jack: Strongly agree on the energy part (though we might have a different subjective worry level about the other stuff!). AI will play a meaningful role in the economy, and it fundamentally depends on underlying infrastructure to deliver it efficiently and cheaply to businesses and consumers—analogous to how, in the past, countries have decided to do large-scale electrification, road building, sewer building, etc. (massive capital expenditure projects!). We need to urgently do the same for energy.
I also think large-scale AI data centers are very useful test customers for novel energy technologies, and am particularly excited to see the fusion (pun intended!) of AI energy demand and nuclear technologies in the future. More broadly, I think “economic security is national security,” so making sure we have the infrastructure in place to build out the AI economy will have knock-on positive effects on our industrial base and overall robustness.
More on the participants:
Michael Burry is a former hedge fund manager and writer who publishes investment analysis and market commentary on his Substack, Cassandra Unchained. He is best known for predicting the subprime mortgage crisis, as depicted in The Big Short, and has more recently voiced skepticism about AI-driven market exuberance.
Jack Clark is the co-founder and head of policy at Anthropic, where he works on AI safety, governance, and the societal implications of frontier models. He also writes Import AI, a long-running newsletter analyzing advances in artificial intelligence, state power, and technological risk.
Dwarkesh Patel is the founder and host of the Dwarkesh Podcast, where he interviews leading thinkers on AI, economics, and scientific progress. He also publishes essays and interviews on his Substack, focusing on long-term technological trajectories, AI alignment, and civilizational risk.
Patrick McKenzie is a writer and software entrepreneur best known for his newsletter Bits About Money, where he explains finance, markets, and institutions. He also hosts the Complex Systems podcast and has previously worked in tech and payments, including at Stripe.