Meta and Groq Collaborate to Deliver Fast Inference for the Official Llama API

Written by:
Groq

Introducing the fastest way to run the world’s most trusted openly available models with no tradeoffs

MOUNTAIN VIEW, Calif., April 29, 2025 – Groq, a leader in AI inference, announced today its partnership with Meta to deliver fast inference for the official Llama API – giving developers the fastest, most cost-effective way to run the latest Llama models. 

Coming soon in preview, Llama 4 models accelerated by Groq will be available through the Llama API, running on the Groq LPU, the world’s most efficient inference chip. That means developers can run Llama models with no tradeoffs: low cost, fast responses, predictable low latency, and reliable scaling for production workloads.

“Teaming up with Meta for the official Llama API raises the bar for model performance,” said Jonathan Ross, CEO and Founder of Groq. “Groq delivers the speed, consistency, and cost efficiency that production AI demands, while giving developers the flexibility and control they need to build fast.”

Unlike general-purpose GPU stacks, Groq is vertically integrated for one job: inference. Builders are increasingly switching to Groq because every layer, from custom silicon to cloud delivery, is engineered to deliver consistent speed and cost efficiency without compromise.

The Llama API is the first-party access point for Meta’s openly available models, optimized for production use. 

With Groq infrastructure, developers get:

  • Throughput of up to 625 tokens per second
  • Minimal lift to get started – just three lines of code to migrate from OpenAI (see the sketch after this list)
  • Consistent low latency, even at scale
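
For illustration, here is a minimal migration sketch in Python, assuming an OpenAI-compatible endpoint. The base URL and model name below follow Groq’s documented OpenAI-compatible API and stand in for whatever the Llama API preview specifies.

    import os

    from openai import OpenAI

    # The only changes from a stock OpenAI setup are the three
    # lines marked below: the API key, the base URL, and the model.
    client = OpenAI(
        api_key=os.environ["GROQ_API_KEY"],         # 1. swap in a Groq API key
        base_url="https://api.groq.com/openai/v1",  # 2. point at the OpenAI-compatible endpoint
    )

    response = client.chat.completions.create(
        model="llama-3.3-70b-versatile",            # 3. choose a Llama model (illustrative name)
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(response.choices[0].message.content)

Everything else in existing OpenAI-based code stays the same, which is what keeps the migration to three lines.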

Fortune 500 companies and more than 1.4 million developers already use Groq to build real-time AI applications with speed, reliability, and scale.

The Llama API is available in preview to select developers, with a broader rollout planned in the coming weeks.

Learn more about the Llama API x Groq partnership here.

About Groq
Groq is the AI inference platform redefining price and performance. Its custom-built LPU and cloud run powerful models instantly, reliably, and at the lowest cost per token—without compromise. Over a million developers use Groq to build fast and scale smarter.

Groq Media Contact: [email protected]
