New offering empowers NVIDIA Cloud Partners and GPU Cloud Providers to rapidly launch high-margin AI services on Rafay-powered infrastructure—accelerating time-to-market and maximizing ROI
Sunnyvale, CA – April 30, 2025 – Today, Rafay Systems, a leader in cloud-native and AI infrastructure orchestration and management, announced the general availability of its Serverless Inference offering, a token-metered API for running open-source and privately trained or fine-tuned LLMs.
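A token-metered inference API of this kind is typically consumed through a simple HTTP request, with usage billed per token processed. The sketch below illustrates the general pattern; the endpoint URL, model name, request shape (assumed to follow the common OpenAI-compatible chat-completions convention), and per-token rate are all illustrative assumptions, not Rafay's published interface:

```python
# Hypothetical sketch of consuming a token-metered inference endpoint.
# The URL, model name, and billing rate are illustrative placeholders.
import json
import urllib.request

ENDPOINT = "https://inference.example-gpucloud.com/v1/chat/completions"  # hypothetical

def build_request(model: str, prompt: str, max_tokens: int = 256) -> urllib.request.Request:
    """Build a chat-completion request in the common OpenAI-compatible shape."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer <API_KEY>",  # placeholder credential
        },
    )

def cost_usd(prompt_tokens: int, completion_tokens: int,
             rate_per_million: float = 0.50) -> float:
    """Token-metered billing: total tokens consumed times an illustrative rate."""
    return (prompt_tokens + completion_tokens) * rate_per_million / 1_000_000
```

In this model, the provider meters the `usage` counts returned with each response and bills on-demand consumption accordingly, which is the pattern the offering's "token-metered" description implies.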
Many NVIDIA Cloud Providers (NCPs) and GPU Clouds already leverage the Rafay Platform to deliver a multi-tenant, Platform-as-a-Service experience to their customers, complete with self-service consumption of compute and AI applications. These NCPs and GPU Clouds can now deliver Serverless Inference as a turnkey service at no additional cost, enabling their customers to build and scale AI applications quickly, without the cost and complexity of building automation, governance, and controls for GPU-based infrastructure.
The global AI inference market is expected to grow to $106 billion in 2025 and to $254 billion by 2030. Rafay's Serverless Inference empowers GPU Cloud Providers (GPU Clouds) and NCPs to tap into the booming GenAI market by eliminating key adoption barriers: it automates the provisioning and segmentation of complex infrastructure, enables developer self-service, supports rapidly launching new GenAI models as a service, generates billing data for on-demand usage, and more.
“Having spent the last year experimenting with GenAI, many enterprises are now focused on building agentic AI applications that augment and enhance their business offerings. The ability to rapidly consume GenAI models through inference endpoints is key to faster development of GenAI capabilities. This is where Rafay’s NCP and GPU Cloud partners have a material advantage,” said Haseeb Budhani, CEO and co-founder of Rafay Systems.
“With our new Serverless Inference offering, available for free to NCPs and GPU Clouds, our customers and partners can now deliver an Amazon Bedrock-like service to their customers, enabling access to the latest GenAI models in a scalable, secure, and cost-effective manner. Developers and enterprises can now integrate GenAI workflows into their applications in minutes, not months, without the pain of infrastructure management. This offering advances our company’s vision to help NCPs and GPU Clouds evolve from operating GPU-as-a-Service businesses to AI-as-a-Service businesses.”
By offering Serverless Inference as an on-demand capability to downstream customers, Rafay helps NCPs and GPU Clouds address a key gap in the market. Rafay’s Serverless Inference offering provides the following key capabilities to NCPs and GPU Clouds:
Rafay's Serverless Inference offering is available today to all customers and partners using the Rafay Platform to deliver multi-tenant, GPU- and CPU-based infrastructure. The company also plans to roll out fine-tuning capabilities shortly. These new additions are designed to help NCPs and GPU Clouds rapidly deliver high-margin, production-ready AI services without the attendant complexity. To read more about the technical aspects of these capabilities, visit the blog. To learn more about Rafay, visit www.rafay.co and follow Rafay on X and LinkedIn.
Rafay builds infrastructure orchestration and workflow automation software that powers self-service compute consumption for sovereign AI clouds, cloud service providers, and large enterprises. Customers leverage the Rafay Platform to orchestrate multi-tenant consumption of AI infrastructure along with AI platforms and applications such as AI-Models-as-a-Service, Accenture's AI Refinery, and other third-party applications. The Rafay Platform provides the automation and governance capabilities that platform teams need to standardize Kubernetes toolsets and workflows. With Rafay, platform teams at MoneyGram, GuardantHealth, Verizon, and many other companies operate Kubernetes environments across data centers, public clouds, and edge environments with centralized visibility and access control, environment standardization, and guardrail enforcement. As a result, platform teams are able to deliver self-service and automation capabilities that delight developer and operations teams. For more information, please visit www.rafay.co.