Meta has spent years building its AI identity around one simple idea: open source is the right way forward. That identity is now under pressure — and a delayed, underperforming internal model called ‘Avocado’ is where that pressure is most visible.

Meta’s AI Stack: From Llama to ‘Avocado’

To understand what ‘Avocado’ signals, it helps to trace how Meta got here. Meta AI launched in September 2023 as a generative AI chatbot embedded across WhatsApp, Instagram, Facebook, and Messenger. By April 2025, it had grown into a standalone app with a Discover Feed, voice capabilities, and deeper personalization — unveiled at Meta’s LlamaCon developer conference.

Powering all of it is Llama, Meta’s open-source large language model family. Llama was originally positioned as a tool to democratize AI access for researchers without large infrastructure budgets. It has since expanded to four model generations and a limited Llama API preview for developers.

‘Avocado’ was supposed to be what comes next: a proprietary frontier model built by the newly formed Meta Superintelligence Labs, led by Alexandr Wang, the Scale AI founder Meta brought in alongside its $14.3 billion investment for a 49% stake in the startup. Unlike Llama, Avocado would be closed source: no public weights, no external access, no community iteration.

The Delay and What’s Behind It

Avocado was originally expected to ship in March 2026. That target has slipped. According to sources familiar with the matter who spoke to Reuters, the release has been pushed to May or June because, in internal tests, Avocado is falling short of Google’s Gemini 2.5 and Gemini 3, as well as other leading models, on reasoning, coding, and writing benchmarks.

That alone would be concerning. What makes it more so is the reported contingency plan: Meta’s leadership is apparently discussing temporarily licensing Gemini from Google to power Avocado and other AI products while it works to close the gap. No final decision has been made, but the fact that the conversation is happening at all marks a significant shift for a company that has long insisted on building its own core AI capabilities.

Avocado isn’t the only thing behind schedule. Llama 4’s flagship model, internally dubbed ‘Behemoth’, was designed as a large teacher model to anchor the next generation of Llama variants. Its release has been repeatedly delayed as engineers struggle to hit capability targets. And Llama 4’s initial rollout received a mixed reception from developers, with some reporting underperformance relative to competing systems and lower adoption than previous generations.

The Strategic Tension Avocado Exposes

The move toward a closed-source model is a direct reversal of the position Mark Zuckerberg championed publicly in 2024, when open source was framed as Meta’s core differentiator — a way to close the AI development gap by enabling external developers to improve and build on Llama.

A year later, Zuckerberg issued a second memo walking that position back, stating that Meta would remain committed to open source in principle but would be more selective about “what we choose to open source” — citing safety concerns. The real catalyst, however, was competitive: DeepSeek’s R1 model, built in part on Llama and Qwen architectures, demonstrated that open-source components could be used to build highly capable rival systems. Meta’s openness had inadvertently handed a significant advantage to a competitor.

A closed-source model also makes financial sense. Meta has committed $600 billion to US AI infrastructure, data centers, energy projects, and workforce programs through 2028. Proprietary models that generate revenue can help offset that scale of investment in a way that freely distributed open-source models cannot.

What This Means for Meta’s Position in AI

Taken together, the delays, the mixed model receptions, and the reported discussions about licensing Gemini paint a picture of a company navigating genuine uncertainty at the frontier — not just executing a planned transition.

The most structurally significant signal is the potential external dependency on Google. If Meta licenses Gemini to support its own AI products, it would shift from building foundational AI capabilities to operating as a distribution layer on top of someone else’s model. That is a fundamentally different role in the AI stack — and a difficult one to reverse.

Meta still has real assets: billions of users across its social platforms, a massive data advantage, and significant infrastructure investment already in motion. But Avocado’s struggles raise a harder question than whether Meta can ship a competitive model. The question is whether it has a consistent, coherent strategy for what it wants that model to actually be.

Without that clarity, infrastructure and scale alone may not be enough to hold a leading position in the AI race.
