Palo Alto, CA, September 24, 2025 — Modular, the AI infrastructure startup founded by ex-Apple and Google engineers Chris Lattner and Tim Davis, announced today that it has raised $250 million in its latest funding round. The new capital brings Modular’s total funding to $380 million and values the company at $1.6 billion — nearly triple its valuation from its previous financing.
A Hardware-Agnostic Vision: “AI’s Unified Compute Layer”
Modular’s mission is ambitious: build a software stack that lets developers run AI workloads across CPUs, GPUs, ASICs, and custom silicon without rewriting code. Its flagship tools include:
- Mojo — a programming language that combines Python's ease of use with high-performance, systems-level execution, enabling faster inference and reduced vendor lock-in.
- MAX / Modular Platform — a runtime and deployment layer that abstracts away hardware differences and enables portability across vendors.
In Modular’s own blog, the team describes their stack as a kind of “hypervisor for AI” — a neutral compute fabric that can deploy to diverse underlying hardware.
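To make the "write once, deploy anywhere" idea concrete, here is a minimal, purely illustrative Python sketch of the abstraction pattern the article describes. None of these class or method names come from Modular's actual APIs; they are hypothetical, and simply show how a neutral runtime layer can dispatch the same user code to interchangeable hardware backends.

```python
# Illustrative sketch only: a toy "unified compute layer".
# All names here are hypothetical, not Modular's real APIs.

from abc import ABC, abstractmethod

class Backend(ABC):
    """A hardware backend the runtime can target (CPU, GPU, ASIC, ...)."""
    @abstractmethod
    def matmul(self, a, b): ...

class CPUBackend(Backend):
    def matmul(self, a, b):
        # Naive pure-Python matrix multiply, standing in for a tuned kernel.
        rows, inner, cols = len(a), len(b), len(b[0])
        return [[sum(a[i][k] * b[k][j] for k in range(inner))
                 for j in range(cols)] for i in range(rows)]

class Runtime:
    """Picks a backend at deploy time; the model code never changes."""
    def __init__(self, backend: Backend):
        self.backend = backend

    def matmul(self, a, b):
        return self.backend.matmul(a, b)

# The same call runs unchanged whether the runtime wraps a CPU backend,
# a GPU backend, or a custom-accelerator backend.
rt = Runtime(CPUBackend())
print(rt.matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Swapping the backend object is the only change needed to retarget hardware, which is the essence of the vendor-neutral layer Modular is building at far greater scale and performance.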
Chris Lattner, speaking to Reuters, emphasized that the goal is not to “crush Nvidia,” but to enable a level playing field in AI infrastructure.

Betting Against Lock-In: Challenge to CUDA Dominance
For years, Nvidia’s dominance in AI has rested not just on its powerful GPUs, but on the ecosystem lock-in created by its CUDA software stack — used by millions of developers.
Modular’s approach is a direct response to that: if developers can write once and deploy anywhere, the reliance on a single vendor’s stack weakens. Analysts have likened Modular’s software strategy to the role VMware played in abstracting away server hardware.
Still, it’s a tall order: Modular’s software must match or exceed native performance across all architectures, handle edge cases, and win developer trust.
Market Traction & Strategic Positioning
Despite the challenge, Modular is already working with heavyweight customers and partners. Its clients include Oracle, Amazon, Nvidia, and AMD, giving it credibility in both the cloud and chip markets.
In its blog announcement, Modular claimed:
- Downloads in the “tens of thousands per month,”
- More than 24,000 GitHub stars,
- Trillions of tokens served in production,
- A developer base across over 100 countries,
- Up to 70% latency reduction and 80% cost savings for some workloads vs. existing stacks.
On the staffing front, the company now has about 130 employees and maintains offices or teams in North America, the UK, and Europe.
What the $250M Will Power
Modular plans to use the new capital to:
- Accelerate hiring across engineering, sales, and go-to-market teams
- Expand efforts beyond inference into AI training, where the compute demands (and hence the margin opportunity) are higher
- Grow globally and scale infrastructure to meet increasing demand
Thomas Tull’s US Innovative Technology Fund led the round, with DFJ Growth, GV (Google Ventures), General Catalyst, and Greylock participating.
Challenges & Risks Ahead
While Modular’s vision is compelling, it faces several hurdles:
- Entrenched ecosystems: CUDA isn’t just a runtime — it’s deeply embedded in tooling, libraries, and developer habits. Overcoming inertia will take time.
- Performance parity: For developers to trust Modular, its cross-platform stack must deliver competitive performance across a wide variety of workloads and hardware.
- Evolving hardware landscape: AI computing is fast-moving (new chips, architectures, accelerators). The software layer must adapt sustainably.
- Business model and monetization: Modular needs to find effective ways to capture value — e.g. licensing, partnerships, revenue share with cloud providers.
- Partnership tensions: Some of its partners are also competitors (e.g. Nvidia, AMD) — balancing openness with commercial strategy is delicate.
Why This Matters
Modular’s raise and strategic positioning signal a maturing phase in AI infrastructure. The industry is moving beyond “more hardware = more power” and toward software-defined flexibility.
By abstracting away hardware constraints, Modular could help accelerate:
- Innovation (e.g. hardware startups don’t need to build entire stacks)
- Competition (vendors compete on performance & cost, rather than ecosystem lock-in)
- Efficiency (better utilization of heterogeneous compute)
- Portability (models and workloads that scale across devices)
If successful, Modular could redefine how AI systems are built and deployed — much like how virtualization redefined compute decades ago.
