As cloud spend zooms past a $200B revenue run rate, the debate on the benefits of the cloud has largely dissipated. From startups to large enterprises, everyone is moving over. The reasons are simple and definitive: lower TCO, greater flexibility, and a shift from capex to opex.
But if you dig into the details with a cloud-native company (or an enterprise that has moved to the cloud), cracks in that iron-clad value prop begin to appear. Expanding cloud provider margins are eating into TCO. Flexibility is great in theory, but most organizations lack the infrastructure for true cloud elasticity. And the specialized DevOps and SRE talent required to scale in the cloud, while not technically capex, certainly doesn’t feel flexible to CFOs.
Ultimately what the cloud did was move the infrastructure you need to manage to someone else’s data center. But at the end of the day, the infrastructure still needs to be managed. Sure, you don’t need to rack and stack servers, but scaling and availability issues still keep every DevOps team up at night. And you’d be hard-pressed to find a CTO who believes they actually “only pay for what they need.”
We believe the first two decades of the cloud were only Cloud 1.0: the lift-and-shift era. Companies moved to the cloud but brought with them an on-prem philosophy. Teams still provision for peak resource requirements, leaving much of their infrastructure idle. Creating a globally performant app still requires managing multiple regions, sharding your database, and setting up caching. And despite the rise of DevOps, the folks building applications and the folks running them largely live in separate worlds.
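To make the idle-capacity point concrete, here is a toy back-of-the-envelope model (all numbers are invented for illustration, not real cloud prices): a team that provisions for peak pays for peak capacity around the clock, while usage-based pricing tracks actual load.

```python
# Toy model: peak-provisioned capacity vs. usage-based pricing.
# All numbers are hypothetical, chosen only to illustrate the shape of the gap.

HOURS_PER_MONTH = 730
PRICE_PER_UNIT_HOUR = 0.10  # hypothetical $/unit-hour

# Hypothetical daily load profile: 2 units of capacity needed for 20 hours,
# spiking to 50 units for a 4-hour peak.
hourly_load = [2] * 20 + [50] * 4

def provisioned_cost():
    # On-prem-style provisioning: pay for peak capacity 24/7, used or not.
    peak = max(hourly_load)
    return peak * HOURS_PER_MONTH * PRICE_PER_UNIT_HOUR

def usage_based_cost():
    # Serverless-style pricing: pay only for units actually consumed.
    daily_units = sum(hourly_load)
    days = HOURS_PER_MONTH / 24
    return daily_units * days * PRICE_PER_UNIT_HOUR

print(f"provisioned: ${provisioned_cost():,.2f}/mo")
print(f"usage-based: ${usage_based_cost():,.2f}/mo")
print(f"idle waste:  {1 - usage_based_cost() / provisioned_cost():.0%}")
```

With this invented profile, peak provisioning spends five times more than usage-based pricing; the real ratio depends entirely on how spiky the workload is.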
Cloud 1.0 abstracted infrastructure away to someone else’s data center. Cloud 2.0 is abstracting infrastructure up to the service it’s providing developers.
Companies like Vercel*, Replit, MongoDB, Fly.io, and Neon* are allowing developers to focus on building differentiated apps without having to think about infrastructure—simply deploy and get the best infrastructure setup by default. In effect, they’re shifting teams from infrastructure-centric primitives to app-centric primitives, and moving the burden of infrastructure management to the service provider that can achieve much greater efficiencies across a collection of customers than any one customer could achieve alone.
The paradox of the name “serverless” (yes, there are still servers; users just don’t need to think about them) and the desire for developer tools to brand themselves as the hot new thing create plenty of confusion as to what fits the serverless definition. We’re particularly fond of Momento’s core tenets of serverless products: instant starts, instant elasticity, and usage-based pricing that scales to zero.
In practice, this means that serverless companies create a simpler developer experience centered on the goal the developer is seeking to accomplish, letting them ship instantly and scalably. What’s nice about this framework is that it shows the extensibility of the serverless approach: front-end hosting (Vercel), databases (Neon, PlanetScale), workflow systems (Inngest), and even full development environments (Replit) can all be delivered this way.
What’s striking about each of these examples is how they give more autonomy to developers. The users of these products are no longer reliant on DevOps teams—it’s click and go. Armon Dadgar, co-founder and CTO of HashiCorp*, explains it well: “As organizations try to get the value of cloud without the complexity, serverless offerings increasingly provide the right balance of flexibility and simplicity.”
Building each of these products, however, requires an exceptionally rare combination of skills. We’ve seen consistently that serverless products win because of better developer experience. Snowflake and Vercel were step-change improvements in the workflows of data analysts and front-end developers, respectively. They generated a cult following that helped the companies grow extremely quickly.
But the simplest way to deliver instant starts, instant elasticity, and usage-based pricing that scales to zero is to eat a whole lot of cost on the vendor side. The reason these products hadn’t been built in the past is not because they were bad ideas—they were simply not profitable. So a successful serverless company must couple their obsessive focus on DX with systems-level innovation to effectively use and share resources across their customer base.
In the early days of a company’s life, this can lead to low gross margins. At $100M revenue, Snowflake was running at 46% gross margins. As the company scaled, the benefits of their separation of storage and compute and the leverage they could achieve on a larger cloud footprint kicked in.
As serverless companies grow, they have the opportunity to not only define the best DX for their users, but also to become the world’s experts in running those workloads efficiently. A company fully dedicated to running Postgres can afford to invest in low-level innovations to improve running Postgres in the cloud, as Neon did with their separation of storage and compute.
DX and infrastructure innovation are in fact two sides of the same coin. Great serverless companies can drive an amazing DX with improvements in their infrastructure. The benefit of this tight integration only compounds with time.
Outsourcing infrastructure management to companies fully dedicated to that craft has massive benefits for customers. In fact, we’re already seeing a new generation of companies build intentionally on serverless products, dramatically simplifying their stacks.
Building on a tightly integrated serverless front-end platform (e.g. Vercel), serverless database (e.g. Neon, PlanetScale), and serverless workflow system (e.g. Inngest) allows teams to move off of a spaghetti system connecting 20-plus disparate AWS services. The benefits are massive: a radically simpler stack, faster shipping, and far less operational overhead.
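As a sketch of what app-centric primitives look like in practice, consider the code a serverless-native team actually owns. Every class below is an invented stand-in, not a real SDK; the point is the shape of the application code, with no instances, regions, shards, or queue plumbing in sight.

```python
# Hypothetical sketch of a serverless-native stack. ServerlessDB and
# WorkflowBus are invented stand-ins (think of a Neon/PlanetScale client and
# an Inngest-style event system), not real library APIs.

class ServerlessDB:
    """Stand-in for a serverless database client: just insert and query."""
    def __init__(self):
        self._rows = []

    def insert(self, row):
        self._rows.append(row)

    def query(self, predicate):
        return [r for r in self._rows if predicate(r)]

class WorkflowBus:
    """Stand-in for a serverless workflow system: handlers are registered
    against event names and invoked when an event is sent."""
    def __init__(self):
        self._handlers = {}

    def on(self, event):
        def register(fn):
            self._handlers.setdefault(event, []).append(fn)
            return fn
        return register

    def send(self, event, payload):
        for fn in self._handlers.get(event, []):
            fn(payload)

db = ServerlessDB()
bus = WorkflowBus()

@bus.on("user.signed_up")
def send_welcome(payload):
    # Background work hangs off an event, not a hand-rolled queue.
    db.insert({"kind": "welcome_email", "to": payload["email"]})

def signup_handler(email):
    # The entire "backend" the app team writes: store a row, emit an event.
    db.insert({"kind": "user", "email": email})
    bus.send("user.signed_up", {"email": email})

signup_handler("ada@example.com")
print(db.query(lambda r: r["kind"] == "welcome_email"))
```

In a real stack the two stand-ins would be managed services reached through their SDKs; the contrast is with wiring the same flow through a load balancer, a provisioned database, a queue, and a worker fleet.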
For most companies, the technology is not quite there yet, but over the next few years we expect to see more “serverless-native” companies fully committing to the benefits of these new platforms.
Much like how cloud-native companies had an advantage in the Cloud 1.0 era, serverless-native companies have the opportunity to leapfrog competitors by shipping faster and focusing on their core value prop.
This is particularly important in the fast-moving AI era we’re entering. As Guillermo Rauch, founder and CEO of Vercel* puts it, “as AI pushes companies to iterate quickly, serverless becomes the standard because iteration velocity is so much greater. It unlocks innovation by making tedious infrastructure work dissipate.”
Nikita Shamgunov, co-founder and CEO of Neon*, also believes AI will be a major accelerating force in the adoption of Cloud 2.0. He shared: “When you build for human developers, you’re also building for AI developers or AI agents. It’s a lot more natural for AI to generate code against clean well-thought-out APIs versus a very large disparate set of services on AWS that let hallucinations shoot yourself in the foot.”
Any way you cut it, the lion’s share of the profits generated by the cloud migration has gone to the cloud providers. They’ve owned the core compute and data-driven workloads, leaving room at the margins for new startups. Serverless creates an opportunity to change that dynamic.
Serverless companies excel where the cloud providers are weakest—on developer experience. This allows them to go after the workloads that were previously difficult for startups to access: app hosting, databases, queues, etc. This doesn’t cut the cloud providers out completely, but it does push them to earn more proportionally to the utility they’re providing. Most of today’s serverless platforms still build on cloud primitives, though they may not necessarily have to in the future.
Owning the developer experience gives serverless providers leverage over their infrastructure providers. This is similar to how cloud providers have demonstrated leverage over hardware providers. For the developer, the underlying infrastructure is an implementation detail. Ultimately, the power lies with whoever owns the compute.
We believe this makes serverless a $200B+ opportunity for exceptional founders to go out and capture. We’ve been investing heavily behind that opportunity, backing the teams at Vercel*, Neon*, and Inngest*. As the Cloud 2.0 era unfolds, we’re excited to double down.
So many components of building best-in-class infrastructure in the cloud remain complex to the point of being out of reach for the average developer. Training and inference for LLMs is one timely example. Caching (especially globally) is often just as hard. We believe each of these categories can support a multibillion-dollar serverless platform. If you’re building those platforms, we’d love to hear from you.
* Represents a company in GGV Capital U.S.’s portfolio
Thanks to Guillermo Rauch, Nikita Shamgunov, Armon Dadgar, Tony Holdstock-Brown, Dan Farrelly, and Oren Yunger for your input and feedback on this article.