Opinion

Who Is Winning the AI Race?

The Answer Might Surprise You

The conventional narrative puts OpenAI in the lead, with Google and Meta close behind. But there's a company that has quietly built the single most important advantage in artificial intelligence: a self-reinforcing feedback loop where better coding AI builds better AI, which builds even better coding AI. That company is Anthropic. And the evidence that the loop is working just landed with seismic force.

April 9, 2026

The Feedback Loop That Matters

There is a thesis about AI development that doesn't get enough attention: the company that builds the best coding AI will win the entire race. Not because coding is the only thing AI does, but because coding is how AI gets built. Better coding models write better training infrastructure. Better training infrastructure produces better models across every domain. Those better models, in turn, become even better at coding.

This is a flywheel, and once it starts spinning fast enough, the company operating it begins to pull away from everyone else. Not linearly, but exponentially. The gap compounds with every cycle.

I've believed for some time that when one company created a superior feedback loop on AI development, it would begin to separate from the pack. And I believe Anthropic is doing that right now.

Anthropic's Coding Dominance

The evidence starts with benchmarks. Claude Opus 4.5 was the first model to break 80% on SWE-bench Verified, the industry's most respected test of real-world software engineering, built from actual GitHub issues in production repositories. Claude Opus 4.6, released in February 2026, extended that lead further and currently ranks as the top model on the Artificial Analysis Intelligence Index. On the LogRocket AI dev tool power rankings for March 2026, Opus 4.6 debuted at number one.

But benchmarks only tell part of the story. What matters more is adoption. Are real developers actually choosing Claude for their hardest work? The answer is a resounding yes. Claude powers the two most popular AI coding editors: Cursor and Windsurf. Claude Code, Anthropic's terminal-based coding agent launched in February 2025, has become the tool of choice for professional developers doing complex, multi-file refactoring and autonomous task execution. Claude Sonnet 4.6 is preferred over Opus 4.5 in Claude Code 59% of the time, suggesting that the product engineering around the model matters as much as raw capability.

Anthropic's Coding Position

SWE-bench Verified: Opus 4.6 leads publicly available models at 80.8%. Mythos Preview (restricted) scores 93.9%.

Developer tooling: Powers Cursor, Windsurf, Codeium, and Replit. Claude Code scores 80.9% on SWE-bench, higher than any raw model.

Revenue signal: Claude Code alone hit $2.5B in annualized revenue by February 2026, doubling since the start of the year.

Market share: An estimated 54% of the AI coding market.

The revenue trajectory tells the same story. Anthropic's annualized revenue has gone from roughly $100 million in January 2024 to $1 billion in January 2025 to an estimated $30 billion run rate in March 2026. That's not a typo. Approximately 300x growth in two years. Claude Code alone is generating over $2.5 billion in annualized revenue. Over 500 customers spend more than $1 million annually on Claude, and eight of the Fortune 10 are now customers.

This commercial traction isn't incidental to the AI race. Revenue funds compute. Compute trains better models. Better models generate more revenue. The flywheel, again.

The Mythos Moment: When the Feedback Loop Becomes Visible

On April 7, 2026, Anthropic published something that should have sent a shockwave through the AI industry. Their new model, Claude Mythos Preview, demonstrated cybersecurity capabilities that are qualitatively different from anything that came before. Not incrementally better. A different category entirely.

Mythos Preview autonomously discovered and exploited zero-day vulnerabilities in every major operating system and every major web browser. It found a 27-year-old bug in OpenBSD, an operating system famous for its security focus. It identified a 16-year-old vulnerability in FFmpeg's H.264 codec that had survived every fuzzer and human audit for over a decade. It wrote a complete remote code execution exploit for a 17-year-old FreeBSD vulnerability, fully autonomously, without any human intervention after the initial prompt.

The scale of the jump is staggering. The previous generation model, Opus 4.6, had a near-0% success rate at autonomous exploit development. On a benchmark using Firefox JavaScript engine vulnerabilities, Opus 4.6 developed working exploits 2 times out of several hundred attempts. Mythos Preview succeeded 181 times on the same test. That is not a marginal improvement. That is a phase transition.

The Capability Jump

Firefox exploit development: Opus 4.6: 2 successes. Mythos Preview: 181 successes. Same benchmark.

OSS-Fuzz corpus: Opus 4.6 achieved 1 tier-3+ crash. Mythos Preview achieved 10 full control-flow hijacks (tier 5) plus additional tier 3-4 crashes.

Autonomous exploitation: Chained together 4 vulnerabilities to escape a web browser sandbox. Wrote Linux kernel privilege escalation exploits combining 3-4 separate vulnerabilities. Built a 20-gadget ROP chain across multiple network packets for remote root access.

Here is the critical detail that most coverage has missed: Anthropic did not explicitly train Mythos Preview for cybersecurity. These capabilities emerged as a downstream consequence of general improvements in code, reasoning, and autonomy. In their own words, “the same improvements that make the model substantially more effective at patching vulnerabilities also make it substantially more effective at exploiting them.”

This is the feedback loop made visible. Anthropic built the best coding AI. That coding AI got so good at understanding code that it can now find and exploit bugs that human experts missed for 27 years. The capabilities weren't engineered. They emerged from being the best at code. And being the best at code came from the flywheel of better models building better models.

Anthropic's response was equally telling. Rather than releasing Mythos Preview broadly, they launched Project Glasswing, a coordinated initiative to use the model defensively, working with critical infrastructure partners to patch vulnerabilities before models with similar capabilities become widely available. They are treating their own model's capability as a national security-level event. That level of seriousness, and the capability that warrants it, is not something any competitor has demonstrated.

Why the Others Are Behind

OpenAI has the brand. Google has the compute. Meta has the open-source community. But none of them have Anthropic's combination of coding dominance, developer adoption, and the demonstrated capability leap that Mythos represents.

OpenAI's GPT-5.4 is competitive on coding benchmarks, scoring roughly 75-80% on SWE-bench Verified. But it trails Claude on real-world developer adoption. Claude powers the most popular coding tools, while OpenAI's Codex remains more narrowly focused. And critically, OpenAI has shown nothing comparable to the Mythos capability jump. Their models improve incrementally. Anthropic just demonstrated a discontinuous leap.

Google's Gemini 3.1 Pro is strong on reasoning benchmarks and offers excellent pricing, but has not established the same developer ecosystem foothold. Google's AI efforts also remain split between Google DeepMind and Cloud, an organizational complexity that Anthropic, as a focused AI lab, doesn't carry.

xAI's Grok 4 posts competitive benchmark numbers, but the company is burning cash at an alarming rate, was acquired by SpaceX in February 2026, and has seen most of its co-founders depart. It is hard to sustain a leading AI research lab in the middle of that kind of corporate restructuring.

The Chinese open-source models (DeepSeek, Kimi, Qwen) are closing the gap on public benchmarks, which matters for commoditization pressure over time. But none have demonstrated anything near the emergent capabilities that Mythos Preview showed. The frontier is still being defined in San Francisco.

How to Get Exposure

Anthropic is private, valued at $380 billion after its $30 billion Series G round in February 2026. An IPO is being explored (the company has hired Wilson Sonsini to advise), but no date has been set. So how can investors participate?

The most direct public exposure comes through Anthropic's two largest strategic investors: Amazon and Alphabet.

Amazon (AMZN) has invested approximately $8 billion in Anthropic, giving it roughly 7.8% ownership according to SEC filings. Amazon Web Services is Anthropic's primary cloud and training partner, meaning Amazon benefits on both the equity appreciation and the cloud revenue side. In Q3 2025, Amazon reported a $9.5 billion pretax gain tied to the rising valuation of its Anthropic stake. Claude also powers Amazon's Alexa+, bringing the technology to millions of Prime households.

Alphabet (GOOG) holds approximately 14% of Anthropic for roughly $3 billion invested, making it the better deal per percentage point. In Q3 2025, Alphabet recorded $10.7 billion in net gains on equity securities, with sources confirming a significant portion came from Anthropic's rising valuation. Google also signed a major cloud deal to provide Anthropic with up to one million TPUs starting in 2026, a contract worth tens of billions and bringing over a gigawatt of compute capacity.

Nvidia (NVDA) and Microsoft (MSFT) jointly invested up to $15 billion in Anthropic in November 2025, alongside a commitment from Anthropic to purchase $30 billion in compute from Microsoft Azure running on Nvidia hardware. For both companies, Anthropic exposure is one piece of a much broader AI portfolio.

Other notable Anthropic investors include Salesforce Ventures, Fidelity, Goldman Sachs, BlackRock, JPMorgan Chase, Lightspeed Venture Partners, Sequoia Capital, and Founders Fund. For investors seeking more concentrated pre-IPO exposure, the KraneShares Artificial Intelligence and Technology ETF (AGIX) holds direct positions in both Anthropic and xAI alongside its top holdings of Microsoft, Alphabet, Amazon, and Nvidia. It has outperformed both the S&P 500 and Nasdaq since launch.

Public Market Exposure to Anthropic

Amazon (AMZN): ~7.8% stake. $8B invested. Primary cloud partner. $9.5B pretax gain in Q3 2025.

Alphabet (GOOG): ~14% stake. ~$3B invested. $10.7B equity gains in Q3 2025. Million-TPU cloud deal.

Nvidia + Microsoft: Up to $15B combined. Plus $30B compute purchase commitment from Anthropic.

AGIX ETF: Direct Anthropic equity position plus AMZN, GOOG, NVDA, MSFT holdings.

The Bottom Line

Coverage of the AI race focuses on brand names and fundraising headlines. OpenAI raises $110 billion. Google has the most compute. Meta open-sources everything. These are real advantages, and this race is far from over.

But the structural advantage, the one that compounds, belongs to whoever builds the best AI for building AI. Right now, that's Anthropic. They lead coding benchmarks. They dominate developer tooling. They're generating revenue at a pace that funds the next cycle of the flywheel. And Mythos Preview just demonstrated what happens when the feedback loop starts producing emergent capabilities that nobody, including Anthropic themselves, explicitly designed.

There is an enormous amount of uncertainty in how the AI landscape evolves from here. Competition is fierce, compute is expensive, and no lead is permanent in a field moving this fast. But if you're looking for signal amid the noise, watch the coding benchmarks and the developer adoption data. That is where the real race is being decided.

At Wealth Engine Pro, we follow the data, not the hype, not the brand, not the fundraising press release. And the data right now points to a company that most people still think of as the underdog. The feedback loop is spinning, and Anthropic is pulling away.

Make Data-Driven Decisions with Wealth Engine Pro

At Wealth Engine Pro, we believe in data over narrative. Our platform scores 5,500+ stocks across financial health, trend strength, and valuation, so you can separate signal from noise and make informed investment decisions backed by real numbers.

This article represents the opinions of the author and is not financial advice. The views expressed are based on publicly available information and publicly reported financial data. Anthropic makes the Claude AI that powers portions of the Wealth Engine Pro platform, which the author discloses as a potential conflict of interest. Always do your own research before making investment decisions.