
Inception Raises $50M to Power Diffusion LLMs, Increasing LLM Speed and Efficiency by up to 10X and Unlocking Real-Time, Accessible AI Applications

  • New funding will scale the development of faster, more efficient AI models for text, voice, and code
  • Inception dLLMs have already demonstrated 10x speed and efficiency gains over traditional LLMs

PALO ALTO, Calif.--(BUSINESS WIRE)--Inception, the company pioneering diffusion large language models (dLLMs), today announced it has raised $50 million in funding. The round was led by Menlo Ventures, with participation from Mayfield, Innovation Endeavors, NVentures (NVIDIA’s venture capital arm), M12 (Microsoft’s venture capital fund), Snowflake Ventures, and Databricks Investment.

Today’s LLMs are painfully slow and expensive. They use a technique called autoregression to generate words sequentially. One. At. A. Time. This structural bottleneck prevents enterprises from deploying scaled AI solutions and forces users into query-and-wait interactions.

Inception applies a fundamentally different approach. Its dLLMs leverage the technology behind image and video breakthroughs like DALL·E, Midjourney, and Sora to generate answers in parallel. This shift enables text generation that is 10x faster and more efficient while delivering best-in-class quality.

Mercury, Inception’s first model and the only commercially available dLLM, is 5-10x faster than speed-optimized models from providers including OpenAI, Anthropic, and Google, while matching their accuracy. These gains make Inception’s models ideal for latency-sensitive applications like interactive voice agents, live code generation, and dynamic user interfaces. They also reduce the GPU footprint, allowing organizations to run larger models at the same latency and cost, or serve more users with the same infrastructure.

“The team at Inception has demonstrated that dLLMs aren’t just a research breakthrough; they’re a foundation for building scalable, high-performance language models that enterprises can deploy today,” said Tim Tully, Partner at Menlo Ventures. “With a track record of pioneering breakthroughs in diffusion models, Inception’s best-in-class founding team is turning deep technical insight into real-world speed, efficiency, and enterprise-ready AI.”

“Training and deploying large-scale AI models is becoming faster than ever, but as adoption scales, inefficient inference is becoming the primary barrier and cost driver to deployment,” said Inception CEO and co-founder Stefano Ermon. “We believe diffusion is the path forward for making frontier model performance practical at scale.”

The funds raised will enable Inception to accelerate product development, grow its research and engineering teams, and deepen work on diffusion systems that deliver real-time performance across text, voice, and coding applications.

Beyond speed and efficiency, diffusion models enable several other breakthroughs that Inception is building toward:

  • Built-in error correction to reduce hallucinations and improve response reliability
  • Unified multimodal processing to support seamless language, image, and code interactions
  • Precise output structuring for applications like function calling and structured data generation

The company was founded by professors from Stanford, UCLA, and Cornell, who led the development of core AI technologies, including diffusion, flash attention, decision transformers, and direct preference optimization. CEO Stefano Ermon is a co-inventor of the diffusion methods that underlie systems like Midjourney and OpenAI’s Sora. The engineering team brings experience from DeepMind, Microsoft, Meta, OpenAI, and HashiCorp.

Inception’s models are available via the Inception API, Amazon Bedrock, OpenRouter, and Poe – and serve as drop-in replacements for traditional autoregressive (AR) models. Early customers are already exploring use cases in real-time voice, natural language web interfaces, and code generation.

For more information, visit www.inceptionlabs.ai.

About Inception

Inception creates the world’s fastest, most efficient AI models. Today’s autoregressive LLMs generate tokens sequentially, which makes them painfully slow and expensive. Inception’s diffusion-based LLMs (dLLMs) generate answers in parallel. They are 10x faster and more efficient, making it possible for any business to create instant, in-the-flow AI solutions. Inception’s founders helped invent diffusion technology, which is the industry standard for image and video AI, and the company is the first to apply it to language. Based in Palo Alto, CA, Inception is backed by A-list venture capitalists, including Menlo Ventures, Mayfield, M12 (Microsoft’s venture fund), Snowflake Ventures, Databricks Investment, and Innovation Endeavors.

For more information, visit www.inceptionlabs.ai.

Contacts

Press Contact:
Natalie Bartels
VSC, on behalf of Inception
inception@vsc.co

