Opinion
The Palantir Debate
Is Michael Burry Right That Anthropic Is Eating Its Lunch?
A single deleted post on X wiped $23 billion from Palantir's market cap. Michael Burry, the investor who famously shorted the 2008 housing market, argued that Anthropic is capturing enterprise AI spending at a pace that makes Palantir's business model obsolete. Wedbush analyst Dan Ives fired back, calling it a "fictional narrative." One of them is materially wrong about the data. Here is what the numbers actually show.
April 14, 2026 · NASDAQ: PLTR
The Setup
On April 9, 2026, Michael Burry posted on X that Anthropic is "eating Palantir's lunch." He cited spending data from Ramp, contrasted the two companies' growth trajectories, and reiterated his long-standing bearish thesis on Palantir Technologies (NASDAQ: PLTR). The post was deleted within hours, but the damage was done. Palantir fell 7.3% in a single session, erasing roughly $23 billion in market value.
Within 24 hours, Wedbush Securities analyst Dan Ives fired back, calling Burry's take "the wrong take and fictional narrative." Ives reaffirmed his Outperform rating with a $230 price target and declared Palantir "at the epicenter of leaders in the AI Revolution." Futurum CEO Daniel Newman piled on, noting that Anthropic is currently blacklisted by the Pentagon, and asking pointedly: "How exactly does it plan to eat $PLTR lunch?"
The debate touched a nerve because it forced two questions that Palantir bulls rarely have to answer simultaneously. First: does Palantir have a durable competitive moat in a world where AI model providers are going directly to enterprise customers? And second: does any moat, however deep, justify a stock that trades at 186 times trailing earnings?
This article is not a trade recommendation on Palantir. It is an attempt to evaluate who actually has the data on their side in this debate, where both sides overstate their case, and what the numbers suggest about the competitive landscape between AI platforms and the companies that deploy them.
What Burry Actually Said
Burry's argument was not a vague gesture at competition. It was specific and data-driven, built on three pillars.
First, he cited Anthropic's growth trajectory, noting that the company's annualized revenue surged from $9 billion at the end of 2025 to $30 billion by early April 2026. He contrasted this with Palantir's own journey: "It took $PLTR 20 years to get to $5 billion."
Second, he cited data from corporate spending tracker Ramp, arguing that Anthropic is capturing 73% of all new enterprise AI spending when companies make their first direct choice between AI providers. The implication: businesses are bypassing middleware platforms like Palantir and going directly to model providers.
Third, and most structurally, Burry argued that Palantir is not a software company in the way the market prices it. He pointed out that Palantir embeds its own engineers inside client organizations for months at a time, and that the company's own SEC filings categorize much of this work under professional services. In Burry's framing, this makes Palantir a consulting business collecting a software multiple.
He summed up his view bluntly: "PLTR can have government, which is low margin and small."
This is not a new position for Burry. His fund, Scion Asset Management, disclosed put options on approximately 5 million Palantir shares in its Q3 2025 13F filing, with a notional value of $912 million. Burry later clarified that the actual premium paid was $9.2 million, not the headline notional figure. He has since rolled those positions into longer-dated contracts, including December 2026 $100 puts and June 2027 $50 puts, according to reports. This is a high-conviction, multi-year bet.
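The gap between the $912 million headline and the $9.2 million actually at risk is simple contract arithmetic. A minimal sketch using the figures reported above (standard convention: one equity option contract covers 100 shares; all other numbers come from the 13F discussion):

```python
# Notional vs. actual premium on the reported Scion put position.
# Figures are from the Q3 2025 13F discussion; one option contract
# covers 100 shares of the underlying.
shares_underlying = 5_000_000
notional = 912_000_000      # headline notional value, $
premium_paid = 9_200_000    # clarified cost of the position, $

contracts = shares_underlying // 100
print(f"Contracts: {contracts:,}")
print(f"Implied share price at filing: ${notional / shares_underlying:.2f}")
print(f"Premium per underlying share: ${premium_paid / shares_underlying:.2f}")
```

The point of the arithmetic: the notional figure measures exposure to the underlying shares, not capital committed, which is why the headline number overstated the size of the bet by roughly 100x.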
The Anthropic Growth Machine
Whether or not Anthropic is a direct competitor to Palantir (more on that below), the revenue trajectory Burry cited is real. And it is, by any historical standard, unprecedented.
Anthropic Revenue Timeline
Jan 2025: ~$1 billion annualized run rate
Aug 2025: ~$5 billion
Dec 2025: ~$9 billion
Feb 2026: ~$14 billion (confirmed by Anthropic)
Mar 2026: ~$19 billion (Bloomberg, confirmed by CEO Dario Amodei at Morgan Stanley TMT conference)
Apr 2026: ~$30 billion (multiple sources including TechCrunch, Bloomberg)
That is $1 billion to $30 billion in approximately 15 months. SaaStr's Alex Clayton reviewed the IPO trajectories of over 200 public software companies and said a growth rate like this has never happened before. For context, Salesforce took roughly 20 years to reach $30 billion in annual revenue.
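For readers who want to sanity-check that trajectory, the implied compound growth rate falls out of the two endpoints. A back-of-envelope sketch using the article's own figures (the 15-month window is read off the dates in the timeline above):

```python
# Implied compound monthly growth from the reported run-rate figures:
# ~$1B annualized in Jan 2025 to ~$30B annualized in Apr 2026.
start_rr, end_rr = 1.0, 30.0   # annualized revenue run rate, $B
months = 15                    # elapsed window per the timeline above

monthly_growth = (end_rr / start_rr) ** (1 / months) - 1
print(f"Implied compound monthly growth: {monthly_growth:.1%}")
```

That works out to roughly 25% compounded per month, which is a doubling time of about three months, sustained for over a year.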
The composition matters as much as the headline number. Approximately 80% of Anthropic's revenue comes from business customers, not consumers. Over 1,000 companies spend more than $1 million per year with Anthropic, a figure that doubled from 500 in less than two months following the company's February 2026 Series G funding round. Claude Code, the company's AI coding tool launched in May 2025, reached $2.5 billion in annualized revenue by February 2026.
The Ramp data that Burry cited tells the adoption story from the buyer's side. According to Ramp's April 2026 AI Index, 30.6% of businesses on their platform now pay for Anthropic, up from just 4% a year ago. The gap between Anthropic and OpenAI (at 35.2%) has narrowed to 4.6 percentage points, down from 11 points just two months earlier. Among VC-backed firms, Anthropic already leads OpenAI at 66% to 59%.
Perhaps most striking: when companies make their first direct choice between Anthropic and OpenAI, Anthropic wins roughly seven out of ten of those contests. Just ten weeks before that data was published, the split was 50/50.
None of this proves that Anthropic is a direct threat to Palantir specifically. But it establishes the factual backdrop that Burry is working from: enterprise AI spending is consolidating around model providers, and Anthropic is winning that consolidation at a pace the market has never seen before.
The Forward Deployed Problem
The strongest part of Burry's argument is not about Anthropic at all. It is about how Palantir actually delivers its product.
Palantir's model is built around what the company calls Forward Deployed Engineers (FDEs), or "Deltas." These are software engineers who embed directly inside client organizations, sometimes for months at a time, to configure Palantir's platforms for specific use cases. A separate role, Deployment Strategists, bridges the gap between technology and operational priorities. Palantir's own SEC filings acknowledge that much of this work falls under professional services revenue.
Palantir's defenders push back on the "consulting company" label. They argue that FDEs are a product development mechanism, not a service delivery arm. The field work generates product insights that become platform features. Over time, as the company's own literature describes it, the revenue mix tilts toward software subscription.
That distinction is intellectually interesting but economically secondary. The question investors should ask is not whether FDEs are philosophically different from consultants. It is whether the business can scale revenue without proportionally scaling headcount. A pure software company can add customers with near-zero marginal cost. A company that embeds engineers on-site for months cannot. Revenue that requires human labor at the point of delivery is structurally different from revenue that does not, regardless of what you call the humans.
Anthropic's model illustrates the contrast. Its API can be integrated into existing enterprise workflows without any on-site staffing or prolonged implementation. A company signs up, gets an API key, and starts running production workloads. The marginal cost of serving an additional enterprise customer is a fraction of what it costs Palantir to deploy a team of engineers into a new account.
This does not mean Palantir's model is broken today. The company's gross margin sits at 82%, which is legitimately high. But it does mean Burry's structural critique deserves more engagement than Ives gave it. Calling the argument "fictional" without addressing the actual business model is not a rebuttal. It is a press release.
The Pentagon Paradox
One of the most interesting subplots in this debate is the Pentagon's standoff with Anthropic, and what it reveals about Palantir's own position.
In late February 2026, Defense Secretary Pete Hegseth designated Anthropic a "supply chain risk to national security" after the company declined to grant the Pentagon unrestricted use of its Claude AI model. Anthropic CEO Dario Amodei said the company could not "in good conscience" allow Claude to be used for autonomous weapons without human oversight or for mass surveillance of American citizens. President Trump directed all federal agencies to stop using Anthropic's technology within six months.
The designation is historically unprecedented. It is a label normally reserved for foreign adversaries, not American companies. A federal judge in California, Judge Rita Lin, issued a preliminary injunction calling the government's actions "classic illegal First Amendment retaliation." But a D.C. Circuit appeals court subsequently denied Anthropic's emergency stay, siding with the Pentagon on the grounds that wartime AI procurement decisions outweigh financial harm to a private company. The result is a split: Anthropic is excluded from Defense Department contracts but can continue working with other federal agencies while litigation plays out.
Bulls like Daniel Newman cite this as proof that Anthropic cannot threaten Palantir: "Anthropic is great, but it is also currently blacklisted by the Pentagon. U.S. Govt is largest customer of Palantir. How exactly does it plan to eat $PLTR lunch?"
That argument has surface appeal but ignores the more important data point. Claude was the first major AI model to be embedded in classified military operations. It was integrated into Palantir's Maven Smart Systems, the digital assistant for military commanders. When the Pentagon blacklisted Anthropic, Palantir was ordered to remove Claude from Maven and rebuild affected sections of the platform.
Think about what that means. Palantir, the company whose entire value proposition is being the secure AI deployment layer for government, was dependent on a third-party model provider. When access was revoked, Palantir did not seamlessly switch to its own proprietary AI. It had to scramble to rebuild. This is the dependency Burry is pointing at. Palantir is the integration layer, not the intelligence layer. And in a world where the intelligence layer is where value is concentrating, that distinction matters.
The Bull Case for Palantir
The bear case is not the whole story. Palantir's operational performance has been genuinely strong, and the bull thesis has real substance that Burry's framing understates.
PLTR Snapshot (as of April 2026)
Q4 2025 Revenue: $1.4 billion (+70% YoY)
U.S. Commercial Revenue: $507 million (+137% YoY)
FY2026 Revenue Guidance: ~$7.2 billion
Gross Margin: 82.4%
Operating Margin: 31.6%
Free Cash Flow (TTM): $2.1 billion
Cash on Hand: $7.2 billion (net cash $6.95 billion)
Revenue growth of 70% at Palantir's scale is not trivial. The U.S. commercial business, which is the segment most relevant to the Anthropic competition question, grew 137% year-over-year. This is a company with real momentum in enterprise sales, not just government contracts.
The AIP (Artificial Intelligence Platform) product, launched in 2023, has been the primary growth driver. AIP allows organizations to deploy large language models within Palantir's operational framework, with built-in access controls, data governance, and audit trails. For highly regulated industries (healthcare, financial services, defense) where data residency and compliance are non-negotiable, Palantir's security-first architecture provides something that a raw API from Anthropic or OpenAI does not: the governance layer that sits between the model and the mission.
The government business, while lower-margin than pure software, is also deeply sticky. Multi-year defense contracts with significant switching costs create a revenue base that is more predictable than most enterprise SaaS. And the sheer complexity of deploying AI in classified environments, with air-gapped networks, security clearances, and operational requirements that commercial AI providers are not equipped to handle, creates a genuine moat that Burry dismisses too quickly with "low margin and small."
The counter-argument to Burry is that Palantir and Anthropic occupy different layers of the stack. Anthropic provides the reasoning engine. Palantir provides the operational deployment infrastructure that governs how that engine is used inside complex organizations. These are complementary, not competitive, and the Ramp spending data does not measure the same category of purchase.
The Valuation Question
Here is where the bull case starts to fracture.
Palantir trades at approximately 186 times trailing earnings. Its forward P/E is roughly 104 times expected earnings for 2026. It is the third-highest earnings multiple in the entire S&P 500. The stock's enterprise value to EBITDA ratio sits near 208x.
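Those two multiples, taken together, already encode the market's growth expectation. A minimal sketch using only the figures quoted above:

```python
# Earnings growth implied by the gap between the trailing and forward
# multiples quoted above (same price in both ratios, so it cancels).
trailing_pe = 186.0   # price / trailing-twelve-month EPS
forward_pe = 104.0    # price / expected 2026 EPS

implied_eps_growth = trailing_pe / forward_pe - 1
print(f"Earnings growth baked into the forward multiple: {implied_eps_growth:.0%}")
```

In other words, roughly 79% earnings growth in 2026 is already assumed before the stock even "grows into" a 104x forward multiple.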
At 186x trailing earnings, Palantir is not priced as an enterprise software company. It is not priced as a defense contractor. It is not priced as a consulting firm, or as a data analytics platform, or as any category of business that generates $2.1 billion in free cash flow. It is priced as the singular, indispensable operating system for the AI era, with no credible competition, no margin for execution error, and decades of compound growth ahead.
That is the standard the valuation demands. Every incremental data point that suggests competition is real, or that the moat is narrower than assumed, or that revenue growth might eventually moderate, threatens the multiple. A stock at 186x earnings is not a stock with room for ambiguity.
Dan Ives has a $230 price target on Palantir. To reach that target at anything resembling a reasonable forward multiple, earnings would need to roughly double from current levels. That is possible, but it is the most optimistic scenario, and it is being treated as the base case. When Ives calls Burry's critique "fictional," he is asking investors to pay 186 times earnings for a company that embeds human engineers at client sites and was recently forced to rip out its AI model provider on short notice. That is a lot of faith for a price tag that leaves no room for error.
What Both Sides Get Wrong
Burry's framing has a significant flaw. Anthropic and Palantir are not direct competitors in the way he implies. They occupy different layers of the enterprise AI stack. Anthropic sells reasoning capabilities via API. Palantir sells the data integration, governance, and deployment infrastructure that wraps around those capabilities. A company buying Anthropic's API is not necessarily replacing Palantir. In many cases, it is buying a component that Palantir itself might orchestrate.
The Ramp spending data, while real, measures something different from what Burry implies. Ramp tracks corporate card and invoice payments. It captures businesses buying AI subscriptions and API access. It does not capture the kind of large, multi-year platform contracts that Palantir signs. Saying Anthropic captures 70% of "new enterprise AI spending" is accurate within Ramp's dataset, but it is not the same as saying Anthropic is capturing 70% of the total enterprise AI market. The categories are different.
Ives, on the other hand, is engaging in sell-side theater. His response contained no data, no engagement with the structural arguments about Palantir's delivery model, and no acknowledgment that 186x earnings leaves the stock extraordinarily vulnerable to competitive narratives, whether or not those narratives are precisely calibrated. Calling Palantir a "Core AI winner" at the "epicenter of leaders in the AI Revolution" is marketing language, not analysis. A $230 price target is a conclusion. It is not a rebuttal.
Newman's point about the Pentagon blacklist is the strongest argument from the bull side, but it proves less than he thinks. The blacklist is a political and legal dispute over safety guardrails, not a verdict on Anthropic's technology. A federal judge called the designation "Orwellian" and blocked it as unconstitutional retaliation. The D.C. Circuit allowed it on narrow wartime grounds. The underlying technology dispute could resolve in any number of ways. Building a long-term investment thesis on the assumption that the Pentagon will permanently exclude the most capable AI model provider from its systems is a bet on politics, not fundamentals.
The Bottom Line
Burry is asking the right question at the right valuation. When a stock trades at 186 times earnings, the burden of proof is on the bulls to explain why every competitive threat is fictional, why the business model is infinitely scalable, and why the current multiple is justified. Ives did not meet that burden. He responded with branding language where the moment demanded data.
The structural argument is the one that matters most. As AI model providers get better, cheaper, and easier to integrate via API, the middleware layer that Palantir occupies gets squeezed from both sides. The model providers move closer to enterprise customers. The customers get more sophisticated at deploying models directly. Palantir's value proposition is data integration and operational deployment in complex environments. That is defensible in classified government settings. It is increasingly commoditized in commercial enterprise.
That does not make Palantir worthless. It makes it mispriced. A company with $2.1 billion in free cash flow, 82% gross margins, and 70% revenue growth is a real business generating real value. But a real business at 186x earnings requires a level of certainty about future dominance that the competitive landscape does not support.
Burry's timing could easily be wrong. The stock has enormous momentum, a loyal retail following, and the kind of narrative magnetism that can sustain elevated multiples for longer than any short seller's patience. He was right about housing in 2005 and spent two painful years waiting for the market to agree.
But the directional thesis, that a company trading at nearly 200 times earnings without proprietary AI, with a consulting-heavy delivery model, and with documented dependency on third-party model providers, is vulnerable to the kind of spending shift the Ramp data reveals, is harder to dismiss than Ives would like.
The data does not pick sides. It does not care about reputations, price targets, or deleted posts. It tells you that Anthropic's revenue tripled in four months. It tells you that Palantir trades at the third-highest multiple in the S&P 500. It tells you that Claude was embedded in classified military operations and had to be ripped out on short notice, exposing a dependency the market had not priced. And it tells you that 70% of companies choosing an AI provider for the first time are choosing Anthropic.
At Wealth Engine Pro, the philosophy is straightforward: evaluate companies based on what they are, not what someone hopes they will become. Look at the financial data, the competitive positioning, and the price you are being asked to pay. The data does not prove Burry right or Ives right. But it does suggest that the question Burry is asking, whether 186 times earnings is justified for a company in this competitive position, deserves a better answer than "fictional narrative."
Research Stocks Like PLTR with Wealth Engine Pro
Wealth Engine Pro scores 5,500+ stocks across financial health, trend strength, and intrinsic value. Our Confidence Score, Fair Value estimates, and Market Intelligence tools help you evaluate companies based on what they are, not what someone hopes they will become. Data over narrative.