I am a big advocate of “utility compute”, a service provisioning model where computing resources are made available on demand and charged based on usage (not a flat rate). Platform as a Service is an example of utility compute, as it abstracts the lower levels of the technology stack (making them a commodity), allowing the developer to focus on the application and data.

I believe Heroku is the purest example of utility compute for Platform as a Service, thanks to the use of public-cloud infrastructure (Amazon Web Services) and the focus on the developer experience.

However, over recent years a new trend has started to emerge known as “Serverless Computing”.

What is Serverless Computing?

Serverless Computing (AKA Functions as a Service) aims to take utility compute to the next level, where the actual application components, alongside the traditional technology stack, are made a commodity.

Serverless Computing has a number of similarities to Platform as a Service, but also some key differences. For example, both paradigms require the developer to think differently about how they write and maintain an application. With Platform as a Service, this involved the shift to the “Twelve-Factor App” methodology and (generally speaking) the evolution towards a Microservices architecture.

Twelve-Factor App and Microservices are still relevant concepts for Serverless Computing. The real difference is in the operation of the application.

In Platform as a Service, the system continually runs at least one server process and the developer must manage the scaling either manually or via an auto-scaler. Either way, the developer must be aware of scaling, which can actually be very challenging when working with applications that don’t have a consistent traffic profile.
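
As a concrete illustration, scaling a Heroku application is an explicit decision: the developer tells the platform how many processes to run (the process names and dyno counts below are purely illustrative):

    # Explicitly scale a Heroku app's processes; the dyno counts are chosen by the developer
    heroku ps:scale web=3 worker=1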

With Serverless Computing, scaling (from zero) is completely transparent, with an architecture designed to enable functions to start within milliseconds. Therefore, Serverless Computing is generally more efficient than Platform as a Service, making it more cost-effective as you are only charged for the execution time of the function (bringing us back to utility compute).
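
To make the billing model concrete, here is a rough, illustrative calculation in Python; the traffic figures are assumptions, and the unit prices are approximately AWS Lambda’s published rates at the time of writing:

    # Illustrative cost sketch for a pay-per-execution model (all figures are assumptions).
    invocations = 1_000_000   # requests per month
    duration_s = 0.2          # average execution time per request (seconds)
    memory_gb = 0.128         # memory allocated to the function (GB)

    gb_seconds = invocations * duration_s * memory_gb      # 25,600 GB-seconds
    compute_cost = gb_seconds * 0.00001667                 # ~price per GB-second
    request_cost = (invocations / 1_000_000) * 0.20        # ~price per million requests

    print(f"~${compute_cost + request_cost:.2f} per month")  # roughly $0.63

The key point is that an idle function costs nothing, whereas a Platform as a Service application is billed for the servers it keeps running, busy or not.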

It is important to note that Serverless Computing is very opinionated, requiring a specific type of application architecture. For example, applications are event-driven and stateless at the function level. This is key, as it is what enables effectively unlimited horizontal scale and higher efficiency compared with traditional compute instances (e.g. AWS EC2) or containers (e.g. Docker).
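
To illustrate what “event-driven and stateless” means in practice, here is a minimal sketch of a function in the AWS Lambda Python style; the event shape and field names are hypothetical:

    import json

    # A minimal, stateless, event-driven function (hypothetical event shape).
    # Everything needed to handle the request arrives in the event payload and
    # the result is returned to the caller; nothing is held in memory between
    # invocations, which is what allows the platform to run zero, one or
    # thousands of copies transparently.
    def handler(event, context):
        order = event.get("order", {})
        total = sum(item["price"] * item["qty"] for item in order.get("items", []))
        return {
            "statusCode": 200,
            "body": json.dumps({"total": total}),
        }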

Getting started with Serverless Computing?

The biggest name in Serverless Computing is AWS Lambda, which…

Lets you run code without provisioning or managing servers. You pay only for the compute time you consume - there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration.

AWS Lambda is creating waves across the industry, as it unlocks the power of Amazon Web Services for developers, at an aggressively low price point. For example, using AWS Lambda, you can write event-driven functions that seamlessly connect with other AWS services, such as API Gateway, S3, Kinesis, EC2, Redshift, etc.
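
For example, a function can be triggered every time an object lands in an S3 bucket. The sketch below follows the standard S3 event notification structure; the processing itself is a placeholder:

    # Sketch of a Lambda function triggered by S3 "object created" events.
    # The Records structure is the standard S3 event notification format.
    def handler(event, context):
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            size = record["s3"]["object"].get("size", 0)
            print(f"New object s3://{bucket}/{key} ({size} bytes)")  # replace with real processing
        return {"processed": len(event.get("Records", []))}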

Another Serverless Computing player is Auth0 Webtask, which is not as comprehensive as AWS Lambda, but puts a real focus on the developer experience (making it easier to get started).

Is Serverless Computing the future? What about Platform as a Service?

Serverless Computing is still maturing and right now is not a good fit for every use case. Its maturity is being helped by initiatives such as the Serverless Framework (formerly known as JAWS), but it still has a long way to go before the tools, patterns and examples match those of Platform as a Service.
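
For instance, the Serverless Framework lets you declare a function and the event that triggers it in a single configuration file. The snippet below is a minimal sketch; the service name, runtime and handler path are placeholders, and the exact syntax depends on the framework version:

    # serverless.yml - minimal Serverless Framework service definition (illustrative)
    service: hello-service
    provider:
      name: aws
      runtime: python3.9        # placeholder runtime
    functions:
      hello:
        handler: handler.hello  # module.function exposing the Lambda handler
        events:
          - http:
              path: hello
              method: get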

Services like AWS Lambda are also highly opinionated (even more so than Platform as a Service), meaning traditional development teams will need to learn new skills and migrate to a new workflow. This will likely result in additional development time and cost, as well as increased risk for new projects.

Finally, one of the real selling points of Serverless Computing is the transparent scaling, which delivers the efficiencies and cost savings described above. However, Platform as a Service continues to mature and optimize (e.g. Kubernetes Horizontal Pod Autoscaling), so it is possible that over time this advantage will shrink.
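
As an example of that maturation, Kubernetes can already scale a deployment automatically based on observed CPU usage; the command below is an illustrative sketch (the deployment name and thresholds are hypothetical):

    # Autoscale a deployment between 1 and 10 replicas, targeting ~50% CPU utilization
    kubectl autoscale deployment web --cpu-percent=50 --min=1 --max=10

Note that, unlike Serverless Computing, this still keeps at least one replica running, and billing remains per instance rather than per execution.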

In conclusion, regardless of what the future of Serverless Computing looks like, I’m confident the core idea is solid and will continue to drive the world towards true utility compute.