TL;DR: Meta’s “Avocado” is its first true Gemini/ChatGPT rival
Meta is working on a new large language model codenamed “Avocado”, built inside its Meta Superintelligence Labs (MSL) and a smaller elite unit called TBD.
Multiple reports based on internal memos and leaks say:
Launch window: Targeted for early 2026 / Q1 2026, after slipping from an original late‑2025 goal.
Positioning: Designed as a direct competitor to frontier systems like Google’s Gemini and OpenAI’s ChatGPT / GPT‑5‑class models.
Closed, not open: Unlike the open‑weight Llama family, Avocado is likely to be a proprietary, closed model, with Meta selling controlled access via APIs and products.
Strategic pivot: It marks a major shift from Meta’s “open source everything” rhetoric toward money‑making, tightly controlled AI at the high end.
Built by a new AI power lab: Avocado sits at the center of Meta’s new superintelligence strategy, led by Alexandr Wang at MSL, with heavy investment in GPUs and custom infrastructure.
Training recipe: The TBD group is reportedly distilling from rival models, including Google’s Gemma, OpenAI’s gpt‑oss, and Alibaba’s Qwen.
Nothing about Avocado is officially announced yet, and timelines could still move. But taken together, the leaks tell a clear story: Meta is done playing catch‑up with open Llama alone and wants a flagship, closed “super‑model” to sit in the same league as Gemini and ChatGPT.
The backstory: from Llama 4 to “superintelligence”
Llama 4 was powerful — but not enough
Meta’s current big family of models is Llama 4, which introduced a mixture‑of‑experts architecture and multiple variants (like Scout and Maverick) aimed at better efficiency and specialization.
Mark Zuckerberg has repeatedly described Llama 4 as highly steerable, designed to power everything from the Meta AI assistant to business messaging, AI Studio bots, and developer use cases across Meta’s platforms.
But despite strong benchmarks, Llama 4:
Struggled, in public perception, to match the hype and performance of frontier closed models from OpenAI and Google.
Arrived mostly as open‑weight models, meaning competing labs and companies — including Chinese players — could reuse or adapt its architecture and weights.
Became part of a debate over whether open-weight frontier models are sustainable in a world of geopolitical tension and safety concerns.
Reports describe “mixed reception” for Llama 4 and suggest Meta leadership concluded that open models alone would not win the top of the AI race.
Enter Meta Superintelligence Labs (MSL)
In response, Zuckerberg created Meta Superintelligence Labs (MSL) — a consolidated, top‑priority AI unit bringing together foundation‑model research, training, and production under one banner.
Key moves include:
Reorganization into four pillars: research, training, products, and infrastructure, with leaders reporting into MSL head Alexandr Wang.
Folding in efforts from FAIR, Meta’s long‑running AI research group, to feed directly into large‑scale training runs.
Dissolving or reshuffling prior AGI teams, including an AGI Foundations unit, to streamline around the superintelligence push.
Aggressive hiring from OpenAI, Google, Anthropic and others to create what Zuckerberg has called “the lab with the highest talent density in the industry.”
In leaked internal messaging, leadership explicitly framed the mission as moving faster toward “superintelligence” — models that can reach or exceed human‑level capability across many domains.
Avocado is the first flagship model emerging from this retooled machine.
Project Avocado: what’s actually known so far
Codename, teams and ownership
Across multiple outlets citing CNBC and Bloomberg sources, Avocado is described as:
A next‑generation large language model, internally codenamed “Avocado”.
Positioned as the successor line to Llama, but not just “Llama 5” — rather, a more radical new flagship tier.
Built within Meta Superintelligence Labs, specifically by a compact advanced‑model team called “TBD”, which reports into MSL.
TBD is characterized as a small, elite group focused on training the very largest models and exploring “omni”‑style directions — models that can reason across modalities and tasks in a more unified way.
Release window: from late 2025 to early 2026
Initial internal hopes had Avocado landing before the end of 2025.
Reality:
Training and performance‑testing challenges reportedly pushed the schedule to early 2026, specifically Q1 2026 / “next spring”.
Meta spokespeople have insisted to reporters that training is “progressing as planned”, even as external sources describe the slip as a delay.
Most consistent reading of the leaks: expect a launch window around Q1 2026, with a non‑zero chance of further slippage if benchmarks or safety reviews fall short.
Closed-source by design
This is the most dramatic shift.
Unlike Llama 2–4, which were released as open‑weight models, Avocado is widely expected to be:
Closed-source / closed‑weight, with no public release of its full weights or architecture details.
Accessed via APIs, Meta’s own products, and potentially paid enterprise offerings, rather than direct downloads.
Engadget, Bloomberg‑sourced coverage, and regional outlets all converge on the same picture: Avocado is being treated as a money‑making, tightly controlled model, closer to Gemini and ChatGPT than to the Llama open‑weight series.
Zuckerberg, who once published a memo titled “Open Source AI is the Path Forward” and publicly swore off closed platforms, has more recently warned that Meta cannot “open source everything that we do,” especially at the superintelligence frontier.
Training cocktail: distilled from rival models
Perhaps the most eyebrow‑raising detail: Meta is reportedly using competitors’ models in Avocado’s training process.
According to a Moneycontrol/Bloomberg report based on people familiar with the project, the TBD group is:
Using several third‑party models as part of Avocado’s training pipeline.
Specifically distilling knowledge from Google’s Gemma, OpenAI’s gpt‑oss, and Alibaba’s Qwen.
In practice, that likely means:
Running these models on curated prompts.
Training Avocado (or intermediate models) to match or improve on their outputs (distillation).
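Mechanically, that kind of knowledge distillation is well understood even if Meta's exact pipeline is not public. A minimal sketch, assuming the standard temperature-softened KL objective (Hinton-style) and toy logits standing in for real teacher/student models:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher T yields softer distributions.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened next-token distributions,
    # the standard knowledge-distillation objective.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1)
    # The T^2 factor keeps gradient scale comparable across temperatures.
    return float(np.mean(kl) * temperature ** 2)

# Toy example: a "teacher" model's logits for two prompts over a 4-token
# vocabulary, and an untrained uniform "student".
teacher_logits = np.array([[4.0, 1.0, 0.5, 0.1],
                           [0.2, 3.5, 0.3, 0.4]])
student_logits = np.ones_like(teacher_logits)

loss = distillation_loss(student_logits, teacher_logits)
perfect = distillation_loss(teacher_logits, teacher_logits)
# The loss shrinks toward zero as the student matches the teacher.
```

In a real pipeline the teacher outputs would come from running Gemma, gpt‑oss, or Qwen over curated prompt sets, and the loss would drive gradient updates on the student; this sketch only shows the objective itself.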
This approach is common at a technical level but politically sensitive. It raises obvious questions about:
Licensing terms and whether such use strictly adheres to provider policies.
The ethics of frontier labs using each other’s outputs to bootstrap closed commercial models.
Nothing here is confirmed publicly by Meta — but it’s a consistent element in several serious reports.
Ambition: a Gemini/ChatGPT-class “super‑model”
Analyses and leaks paint Avocado as aiming squarely at the frontier tier:
Targeting advanced reasoning, planning, and multimodal understanding, not just chat or code.
Meant to “directly challenge” Gemini and ChatGPT on high‑end benchmarks and real‑world tasks.
Backed by aggressive investment in GPU superclusters and data center upgrades to support much larger model scales and faster training cycles.
Sitting at the core of Meta’s expectation to spend hundreds of billions of dollars — up to 600 billion USD over several years — on AI efforts.
In short: this is not “Llama 4.5.” It is Meta’s planned Gemini/ChatGPT‑class flagship.
Why Meta is pivoting away from open models at the top end
Open Llama was powerful — but it empowered rivals too
Several factors appear to be pushing Meta toward a closed Avocado:
Competitive leakage: Chinese labs such as DeepSeek incorporated elements of Llama’s design into their own R1 model, and a Reuters analysis found Chinese military‑linked institutions using Llama technology to build military AI systems.
Ecosystem free‑riding: While open Llama improved Meta’s reputation among developers, many third parties captured value on top without necessarily strengthening Meta’s own products or revenue.
Disappointing market response: Llama 4, despite strong engineering, did not decisively shift perception that OpenAI and Google still lead the frontier.
Regional coverage bluntly frames this as a reality check: open weights alone did not win the race, and may even have advantaged some competitors.
Monetization and control
Meta’s internal focus has now clearly turned to making AI pay:
Moneycontrol/Bloomberg describe Avocado as a “money‑making AI model” that Meta can tightly control and sell access to, as opposed to Llama’s more permissive release.
Closed models make it easier to enforce usage policies, safety constraints, and tiered pricing, and to bundle capabilities into Meta’s own consumer and enterprise offerings.
Zuckerberg has already outlined four big AI monetization pillars — ads, engagement, business messaging, and AI‑native products like Meta AI — and a flagship model like Avocado slots neatly into all of them.
Safety, regulation and geopolitics
Finally, there is the regulatory and geopolitical backdrop:
Governments are moving toward heavier scrutiny of frontier models, especially around dual‑use risks, misinformation, and military applications.
Meta’s open‑weight releases have already been caught up in debate about export controls and proliferation when used by foreign military‑linked actors.
A closed Avocado gives Meta more tools to:
Restrict sensitive use cases.
Region‑gate access.
Respond to regulator demands about logging, auditing, and kill‑switch style controls.
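None of Meta's actual enforcement machinery is public, but the kind of gatekeeping an API-only release enables is easy to illustrate. A hypothetical sketch (all region codes, policy fields, and the `authorize` helper are illustrative, not Meta's real API):

```python
# Hypothetical policy gate of the kind an API-only model allows;
# every field and value here is illustrative, not Meta's actual schema.
BLOCKED_REGIONS = {"XX"}                      # illustrative region codes
SENSITIVE_USES = {"weapons", "surveillance"}  # illustrative use-case labels

def authorize(request):
    """Return (allowed, reason); each decision can also be logged for audits."""
    if request.get("region") in BLOCKED_REGIONS:
        return False, "region_gated"
    if request.get("declared_use") in SENSITIVE_USES:
        return False, "restricted_use_case"
    if not request.get("api_key"):
        return False, "missing_credentials"
    return True, "ok"

ok, reason = authorize({"region": "DE", "declared_use": "support_bot",
                        "api_key": "k-123"})
blocked, why = authorize({"region": "XX", "declared_use": "support_bot",
                          "api_key": "k-123"})
```

With open weights, no equivalent chokepoint exists: once the weights are downloaded, region gates, use-case restrictions, and audit logs are unenforceable.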
The cost: less openness for researchers and independent developers.
Avocado vs Gemini vs ChatGPT: how do they compare (on paper)?
Nothing about Avocado’s final specs is public, but the leaks let us sketch a provisional comparison with today’s frontier players.
| Aspect | Meta Avocado | Google Gemini (Gemini 3‑era) | OpenAI ChatGPT (GPT‑5‑era) |
|---|---|---|---|
| Launch window | Targeting Q1 2026 / early 2026, slipped from late 2025. | Gemini 3‑class updates expected around 2025, with ongoing iterations. | Successor to GPT‑4 (GPT‑5‑class) expected in 2025, with new models already in testing. |
| Openness | Likely closed‑source, API/product‑only access. | Fully closed‑source; API and Google products only. | Closed‑source; API and ChatGPT product only. |
| Lineage | Successor to Llama but breaking from open‑weight tradition; built by MSL + TBD. | Successor to the PaLM → Gemini families, integrated across the Google ecosystem. | Successor to GPT‑4, integrated into ChatGPT and enterprise stacks. |
| Training recipe | Reportedly includes distillation from rival models (Gemma, gpt‑oss, Qwen) plus Meta's own data. | Proprietary mix of web, code, and multimodal data; details undisclosed. | Proprietary mix of web, code, expert data, and reinforcement learning; details undisclosed. |
| Primary goal | Catch up to or surpass rivals in reasoning, planning, and multimodal understanding, tightly integrated into Meta apps. | Maintain frontier lead in multimodal, search‑integrated AI across Google. | Maintain frontier lead in general intelligence plus a tools ecosystem (agents, memory, APIs). |
The key point: Avocado is designed to sit in the same closed, frontier tier as Gemini and ChatGPT — not as yet another open alternative.
What Avocado could mean for Meta’s products and users
A new brain for Meta AI, WhatsApp, Instagram and Messenger
Zuckerberg has already hinted at an AI future where:
Every business on Meta’s platforms has an AI agent for customer support, sales, and marketing.
Meta AI assistants operate across Facebook, Instagram, WhatsApp, and Messenger, driving engagement and ad opportunities.
Avocado is the obvious candidate to:
Become the core engine behind Meta AI, improving reasoning, long‑context, and multimodal answers.
Power smarter business messaging agents on WhatsApp and Messenger — a huge monetization lever.
Enhance content creation tools (reels, posts, ads, AR filters) inside Meta’s apps, making them stickier for creators and brands.
If Avocado performs as hoped, ordinary users may first encounter it as:
A noticeably smarter Meta AI chatbot.
More capable in‑app assistants (e.g., “draft my campaign,” “summarize my group chat,” “design an ad creative”).
Richer tools for small businesses operating entirely inside Meta’s ecosystem.
For developers: less freedom, more polish?
For developers, the trade‑offs are sharper:
Less raw access: No open weights means no fine‑tuning on local hardware, no deep inspection of internals.
More productized capabilities: In exchange, Meta is likely to offer higher‑level APIs and tools, embedded into its social and messaging products, tuned for business workflows and scale.
Open‑weight Llama models will probably continue to exist, but Avocado looks set to be the top‑shelf option that never leaves Meta’s walled garden.
The risks, controversies and big open questions
1. Legal and ethical questions around using rival models
If Avocado’s training pipeline really does distill from Gemma, gpt‑oss and Qwen, the industry will be watching how Meta navigates licensing and compliance.
Key questions:
Do the terms of use for these models clearly permit this kind of distillation for a closed commercial model?
Even if legally allowed, will it escalate tensions among top AI labs already locked in a high‑stakes race?
In a world where every lab learns from the others’ outputs, competitive and legal norms are far from settled.
2. The end of open frontier models from big tech?
Meta was the loudest champion of open‑weight large models, and its apparent pivot to a closed flagship sends a strong signal:
Other tech giants already sit firmly in the closed camp at the frontier.
If Meta also locks down its top model, the most capable systems may increasingly live behind APIs and paywalls, with open innovation pushed to a lower tier.
The result could be:
A two‑tier AI ecosystem: frontier closed models from a few giants, and a long tail of smaller or slightly older open models.
Tighter control by a small set of companies over the most powerful general‑purpose reasoning engines.
3. Regulatory blowback
Avocado will almost certainly face heavy scrutiny from:
US and EU regulators, who are drafting or enforcing rules around frontier AI risk management, transparency, and safety.
Governments worried about dual‑use and military applications, particularly given Llama’s prior use in defense‑linked projects abroad.
Meta’s shift to a closed model may help with access control, but it will also make calls for auditing and evaluation access louder.
4. Can Meta actually catch up?
Finally, there is the execution risk:
OpenAI and Google are not standing still; both are rolling out next‑gen models and agents on a rapid cadence.
Meta’s own AI teams have experienced reorgs, layoffs (e.g., within FAIR), and leadership changes, including the departure of long‑time chief scientist Yann LeCun.
Even with billions invested and top‑tier talent, catching up at the very frontier is hard. Avocado will need to convincingly hit frontier‑level benchmarks and user experience to be seen as more than a late follower.
Realistic outlook: what to expect in early 2026
Putting all of this together, the most grounded expectations for Avocado are:
Timeline: A major announcement and/or limited rollout in Q1 2026, but with timelines that could slip if Meta prioritizes performance and safety over dates.
Form factor: Launch inside Meta AI and key Meta products first, with API access to follow for select partners and enterprises.
Openness: A firmly closed‑weight frontier model, sitting above a still‑important, but now secondary, Llama open‑weight line.
Impact: If successful, Avocado could move Meta from “fast follower” to tier‑one contender alongside Google and OpenAI at the top of the AI stack.
For users, the shift will show up as smarter assistants and tools inside the Meta ecosystem.
For developers, it may feel like more power, but on Meta’s terms.
For the AI industry, Avocado is another sign that the frontier is consolidating into a small club of closed, massively‑resourced models.