Is Quarkus the magic bullet for Java and AWS Lambda?

When should I care about cold starts?

There are situations where cold starts are a problem. In this article I’ll bring up two. First, if you really need your backend to always respond in under a second or two, cold starts will be a problem for you. Second, if you are using Spring, even though cold starts are infrequent, the amount of time it takes for a Spring application to initialize, and therefore be ready to serve requests, is so long that it’s probably unacceptable if your backend serves a user interface.

Java already has a performance disadvantage (only in regard to cold starts) compared to other languages you can use with Lambda. In addition to the cold start overhead of the JVM itself, an app written with Spring also has to initialize the Spring context with all of the classes your application will use at runtime. In the past, when Spring was becoming popular, this was no concern, because a calling client would never need to wait for a Spring application to start up. Startup was something you did in the background while the previous version of your app was still up and serving client requests as they came in. Only when your new deployment was fully booted and ready to respond to traffic would traffic actually be directed to it.

Enter Quarkus

Quarkus, in contrast to Spring, is engineered to boot quickly and use less memory. It is therefore well suited to use with Lambda. Even more important is the fact that it is designed to work seamlessly with GraalVM, so that if desired, you can build your app as a native executable. Here is a blurb from the official GraalVM page:

  • Moves a lot of runtime work to build time
  • No JVM needed to run the software artifact (./application instead of application.jar)

Cold starts, Quarkus, custom runtimes, GraalVM: what does it all mean?

We’ve established that in some situations cold starts are a problem. By building our application as a native image we will still face cold starts, but they will be significantly faster. Lambda supports running a native image because the service now allows us to supply our own custom runtime. By applying Quarkus and GraalVM to the cold start problem, we are still authoring our source code in Java, but we are not executing our Java classes inside of a JVM. Instead, much of the heavy lifting happens at application build time, so at runtime things go faster.
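As a rough sketch of what that build-and-deploy flow can look like (the commands follow the Quarkus AWS Lambda guide of this era; flag names vary between Quarkus versions, and the function name, account ID, and role ARN below are placeholders):

```shell
# Build a Linux native executable inside a container, so no local
# GraalVM install is needed. -Pnative activates the native profile.
./mvnw package -Pnative -Dquarkus.native.container-build=true

# The Lambda extension packages target/function.zip with a "bootstrap"
# entry point, which is what the provided (custom) runtime executes.
aws lambda create-function \
  --function-name my-native-fn \
  --runtime provided.al2 \
  --handler not.used.in.provided.runtime \
  --zip-file fileb://target/function.zip \
  --role arn:aws:iam::123456789012:role/my-lambda-role
```

With a custom runtime the handler string is ignored; the zip's bootstrap file is what Lambda actually runs.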

Comparing 4 “flavors”

I’m going to compare 4 different variations of a completely bare bones “hello world” backend micro-service. Each deployment will have one RESTful route exposed to the internet that a client can call. Each will return a simple string response without doing any extra processing or calling out to any external systems like another API or a database. I’ve deployed one Node.js Lambda with the Express framework so we have something to compare Java against.

The four different “flavors” I will compare:
  1. Node.js with Express backend framework
  2. Java 11 (Amazon Corretto) with Spring Boot backend framework
  3. Java 11 (Amazon Corretto) with Quarkus backend framework
  4. Code authored in Java with Quarkus, then built with GraalVM and deployed as a native binary executable.
For each flavor, I will measure:
  • How long it takes for a fresh Lambda execution environment to start (cold start)
  • How much memory is consumed on a cold start
  • Cold start vs. warm start (already initialized) Lambda execution time
[Image: average cold starts in milliseconds across runtimes, with 256 MB of RAM configured]

What’s going on with the “init” phase?

The init phase includes everything that has to happen before your actual Lambda code can run. The only time we will see an init phase is during a cold start. Remember that Lambda itself is a service offered by AWS. Any time we invoke a Lambda we are asking AWS to run our code in one of their billions of ephemeral sandboxes. In the case of a cold start, that sandbox doesn’t exist yet and needs to be built. You can read more about the different lifecycle phases of a Lambda function here.
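The init duration shows up in the REPORT line that Lambda writes to CloudWatch Logs, and only on cold starts, which is how the cold starts in these samples can be identified. As an illustrative sketch (the class and method names here are mine, not part of any AWS SDK), you could pull it out of an exported log line like this:

```java
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extract "Init Duration" from a CloudWatch Logs REPORT line.
// Lambda only emits Init Duration on cold starts, so its presence
// is a handy way to count cold starts when analyzing logs.
public class InitDurationParser {
    private static final Pattern INIT =
        Pattern.compile("Init Duration: ([0-9.]+) ms");

    static Optional<Double> initDurationMs(String reportLine) {
        Matcher m = INIT.matcher(reportLine);
        return m.find() ? Optional.of(Double.parseDouble(m.group(1)))
                        : Optional.empty();
    }

    public static void main(String[] args) {
        String cold = "REPORT RequestId: abc Duration: 5.1 ms "
            + "Billed Duration: 6 ms Memory Size: 256 MB "
            + "Max Memory Used: 90 MB Init Duration: 412.33 ms";
        String warm = "REPORT RequestId: def Duration: 4.8 ms "
            + "Billed Duration: 5 ms Memory Size: 256 MB "
            + "Max Memory Used: 90 MB";
        System.out.println(initDurationMs(cold).orElse(-1.0)); // 412.33
        System.out.println(initDurationMs(warm).isPresent());  // false
    }
}
```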

The JVM flavored Lambdas

[Image: the same table data depicted in the previous image, summed up in a chart]
[Image: number of cold starts compared to warm starts over a few days]

Response time for warm invocations

Up to this point I’ve only discussed cold starts. As I said, cold starts account for only roughly 0.1% of our function invocations in a best-case scenario, and possibly up to 1% in the worst case. The vast majority of the time, our Lambda response times will look like this:

[Image: averages across the same time span as the samples above (duration is in milliseconds)]

What about price?

One compelling reason we could have for making execution times as short as possible is price. With Lambda we are billed for every millisecond of compute we use, and no more. Note also that we are not billed for function initialization time, only for execution time, so although Spring Boot takes a long time to initialize, we won’t be billed for that time. In the examples above we configured 256 MB of RAM. We could actually up the RAM to 512 MB and still pay the same price. On the Lambda pricing page you can see the increments used for billing.
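As a back-of-the-envelope sketch of that math (the $0.0000166667 per GB-second figure is the published x86 rate at the time of writing; check the Lambda pricing page for current numbers):

```java
// Back-of-the-envelope Lambda compute cost, billed per millisecond.
// Cost = (memory in GB) x (billed seconds) x (price per GB-second).
public class LambdaCost {
    static final double PRICE_PER_GB_SECOND = 0.0000166667;

    static double costUsd(int memoryMb, long billedMs, long invocations) {
        double gbSeconds = (memoryMb / 1024.0) * (billedMs / 1000.0) * invocations;
        return gbSeconds * PRICE_PER_GB_SECOND;
    }

    public static void main(String[] args) {
        // 1 million invocations at 256 MB, 20 ms billed each: ~$0.08
        System.out.printf("%.4f%n", costUsd(256, 20, 1_000_000));
        // Doubling RAM to 512 MB while halving duration costs the same,
        // which is why upping the memory can be "free" if it speeds you up.
        System.out.printf("%.4f%n", costUsd(512, 10, 1_000_000));
    }
}
```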

Why you wouldn’t use native-image

After reading this article you might think to yourself, “Why would I ever choose the regular old JVM when I can have a native image that runs faster, uses less memory, and is therefore cheaper than the alternative?” As you’ve probably guessed, using the Quarkus native image feature is not without its tradeoffs. The GraalVM team has put together a nice explanation of the limitations and tradeoffs involved with this technology. You can read about them here:

Conclusions: when I think you should and shouldn’t use Quarkus with Lambda

If you have an existing Spring Boot application that you want to move to Lambda, Quarkus with native image may be a good option. This will allow you to overcome the problem of occasionally having very slow response times due to cold starts. When switching from Spring Boot to Quarkus you will have to do a bit of refactoring, but Quarkus offers a number of extensions that make it easy to keep a lot of your Spring code as is. Here is a good article about it.
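For example, pulling in the Quarkus Spring compatibility extensions lets a lot of @RestController / @Autowired style code compile largely unchanged (a sketch; the exact artifact list depends on which Spring APIs your app uses):

```xml
<!-- Quarkus Spring API compatibility extensions (io.quarkus group) -->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-spring-web</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-spring-di</artifactId>
</dependency>
```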

Benefits of this approach:
  • Each Lambda function is very simple, making it easy to understand and debug
  • A bug or vulnerability in one part of your app won’t affect the rest
  • Your cold start times and memory use will improve drastically without having to do any hacks to the AWS Lambda golden path (by golden path I mean the standard, recommended AWS way)

Notes on what I deployed

Here are the guides I used to deploy each different Lambda runtime flavor in case you want to try yourself.



Brian McCann

I take things and make them serverless. Living in Greenpoint Brooklyn, employed as Developer @CedrusDigital. Opinions are my own.