CoreWeave Inc., the Livingston-based, AI-focused cloud provider, said it plans to add NVIDIA's Rubin technology to its cloud platform, expanding options for customers building and deploying agentic AI, reasoning and large-scale inference workloads.
The company, which trades on Nasdaq as CRWV, said it expects to be among the first cloud providers to deploy the NVIDIA Rubin platform in the second half of 2026. CoreWeave said the addition is intended to give enterprises, AI labs and startups more flexibility as AI systems scale and as compute requirements evolve.
CoreWeave said its cloud platform is designed to run large-scale AI across multiple generations of technology, allowing customers to match systems to specific workloads. The company positioned Rubin as the next extension of that approach, citing expected gains in performance, efficiency and scale for production AI workloads.
“The NVIDIA Rubin platform represents an important advancement as AI evolves toward more sophisticated reasoning and agentic use cases,” Michael Intrator, CoreWeave co-founder, chairman and CEO, said in a statement. “Enterprises come to CoreWeave for real choice and the ability to run complex workloads reliably at production scale.”
NVIDIA CEO Jensen Huang said CoreWeave will be among the first to deploy Rubin and framed the collaboration around moving advanced AI systems into production. “Together, we’re not just deploying infrastructure — we’re building the AI factories of the future,” Huang said.
CoreWeave said Rubin is designed to support demanding workloads including agentic AI, drug discovery, genomic research, climate simulation and fusion energy modeling. The company said Rubin enables large-scale mixture-of-experts models that require sustained compute, and that its cloud platform will support customers that need to train, serve and scale those workloads with consistent performance.
The company also pointed to its recent track record of bringing NVIDIA-based infrastructure to market, saying it was the first cloud provider to offer general availability of NVIDIA GB200 NVL72 instances and the NVIDIA Grace Blackwell Ultra NVL72 platform.
CoreWeave said it will deploy Rubin using its Mission Control operating standard for training, inference and agentic AI workloads, integrating it with NVIDIA's Reliability, Availability and Serviceability (RAS) Engine. The company said the approach provides real-time diagnostics and observability across fleet, rack and cabinet levels.
To handle power delivery, liquid cooling and network integration, CoreWeave said it developed a Rack Lifecycle Controller, described as a Kubernetes-native orchestrator that treats an NVIDIA Vera Rubin NVL72 rack as a single programmable entity.
Dan O’Brien, president and COO of The Futurum Group, said reliable, large-scale operations are critical for advanced workloads. “The NVIDIA Rubin platform expands what is possible, and platforms like CoreWeave are what make those capabilities available in practice,” he said.