Java Developers, It’s Time To Give AWS Lambda a Try

cartoon by Kate Nielsen

This is the first post in a series, with more to come.

Everyone has their own definition of what serverless is; here is mine:

Serverless is just the logical next step in the evolution of computers and software, following the long trend of adding more and more abstraction on top of the low-level processes that actually make hardware and software run. For example, it used to be common for programmers to allocate and manage memory in their programs; now most popular languages do it for us automatically. Today it’s common for people to allocate and set up server environments or Kubernetes clusters to run their programs. Serverless just means this happens automatically, without programmers or operations people having to think about it.

AWS Lambda is the cornerstone of AWS’s serverless product offerings. It offers us functions as a service, or in other words, it allows us to run arbitrary bits of code on demand. Initially it had limited use cases and limited capability, and people used it for simple background tasks like encoding images or sending notifications. These days people use it for all sorts of compute tasks. We’re getting close to the point where Lambda can do everything our old mental model of “server-full” computing is capable of. On top of that, it often scales up and down better and costs us less. Now that it’s possible to integrate with Elastic File System, people are even using fleets of Lambdas as a distributed supercomputer for machine learning.

Spring Boot is a common way to build microservices in Java, and our end goal in this series will be to get an existing RESTful backend web service implemented in Spring Boot running on AWS Lambda. If you’re starting a greenfield project I wouldn’t necessarily recommend a framework like Spring Boot, because it isn’t needed in a serverless environment and actually creates some difficulty. However, I think it’s a good exercise for this series, since it will teach you about some of the limitations of Lambda and about how the way Lambda executes code is fundamentally different from how our code would run as a traditional long-running server process. Once you understand these limitations and differences, you’ll be well equipped to make the decision yourself. This post is geared toward Java programmers who are interested in serverless computing but may be new to AWS Lambda.

I aim to show that migrating an existing Spring Boot application to a serverless environment may not be as tough as you think, and can offer benefits over the standard “on a server” way. Here are some criteria with which we can evaluate success and decide if it’s worth doing:

This is a lot to talk about, and I tend to be pretty detailed, so we’d better break it up into a few posts. This series is the result of my experimenting with Java, AWS Lambda, Spring Boot, and Quarkus (which I’ll cover in a different post). This post will be all theory; I won’t get into the mechanics of actually writing code. I plan to dive deep and walk you through the steps in later posts. Here is what you can expect in this post and the posts to come in this series:

Part 1 (this post)

Later Posts

Spring Boot on Lambda, is it even a good idea?

There was a time when I would have answered no to this question. I also used to cringe at the idea of people putting a Node/Express.js application on Lambda. In a nutshell, if you’re adding a framework like Spring Boot, you’re kind of missing the point of Lambda and not taking full advantage of working in a serverless environment. I’ve since changed my tune: it’s fine for people to use tools they already know where possible, since overhauling an entire tech stack at once is almost never practical. On top of that, both advances to the Lambda service and cool solutions built by clever developers have made lifting and shifting a Java or Node microservice to Lambda a perfectly fine idea. Both Spring Boot and Express are backend frameworks that were designed to work this way:

AWS Lambda is not a physical or virtual server the way EC2 is. Like EC2, the Lambda service falls under the category of on-demand compute, but it gives us an even higher level of abstraction than EC2 does. Another important distinction is that we don’t tell a Lambda function to “start running” when we deploy it. It can be really tough to pin down exactly what “serverless” means, but one key differentiator is this: when we deploy traditional workloads that respond to requests, the last step of the deployment process is to start our application running. Serverless doesn’t work like that. AWS Lambda is kind of like a car in that we wouldn’t buy a car, turn it on, and leave it running for years. We start it when we need to go somewhere and turn it off when we arrive (after we’ve invoked it 🤓). Actually, a more accurate analogy would be to compare a Lambda function to a modern fuel-efficient car that implicitly turns itself off and on. I’ve always owned old crappy cars, but I’ve driven newer ones that turn off at stop lights and back on again when you put your foot on the gas. AWS Lambda works much the same way: it will shut down completely and start again many times over the course of an hour or a day, based on what you ask it to do.

As developers, when we build for Lambda, our process looks like this:

Some example criteria we could provide to the Lambda service, telling it when we want our code to run:

As bulleted above, a fun fact you may not know is that most Alexa skills are powered by AWS Lambda. When you talk to an Alexa device, the compute environment that processes your request is either a Lambda environment that has only been around a few minutes or one that was brought into being by your words 🤯. The technology that makes AWS Lambda work was engineered to help Amazon operate at the scale and speed that they do. Amazon/AWS innovates the technological solutions it needs to do business, then makes money again by providing that technology as a service that implementors like us can use.

AWS lets Alexa skill builders host their skill’s code right on the AWS platform as Lambda functions. There are roughly 100,000 Alexa skills. AWS isn’t going to pay its dev-ops team to keep 100,000 servers running, or even 100,000 containers. Instead, AWS came up with the genius solution of Lambda, which lets users run their application logic in a temporary runtime environment. When someone wants to use an Alexa skill, the runtime is provisioned in milliseconds and decommissioned quickly after. You can think of the Lambda service as an effectively infinite pool of lightweight micro servers that each run one application, with millions starting up and shutting down every second. I should note that AWS was not the first to come up with this concept; they were just able to engineer it in a way that is much faster, more user friendly, and cheaper than its predecessors. This was no small engineering feat; if you’re curious to hear about it, check out this podcast at the 7:20 mark.

I got sidetracked there, but what I meant to get across is that Lambda is completely unconcerned with, and decoupled from, the job of listening for HTTP requests and responding to them, or even using the HTTP protocol at all. For the nitpickers out there: yes, there is HTTP involved, but as a user of the service you don’t worry about any of its details. I could run this JavaScript code on my computer to invoke my Lambda:

example of how you can call a Lambda in the cloud using JavaScript code

I can invoke a Lambda from my computer’s terminal too.

aws lambda invoke \
  --function-name my-function \
  --payload '{ "name": "Bob" }' \
  response.json

I can also invoke my Lambda via a REST API call if I connect it to an API Gateway.

So what’s wrong with using Express.js or Spring Boot?

There is nothing wrong with it; it’s just a bit redundant to use one of these frameworks with Lambda. Lambda already provides many nice interfaces we can use to interact with it, so it isn’t necessary to add a framework whose core purpose is to help you expose a REST interface in front of backend code. The most common and probably simplest way to build a RESTful backend with Lambda is to use AWS API Gateway. There is nothing stopping you from hooking a Lambda function up to another gateway product like Apigee, IBM API Connect, or your own homespun gateway. In most scenarios AWS API Gateway takes on the responsibility of staying turned on and “always listening”. The client who wants to use our REST API calls API Gateway, and API Gateway triggers our Lambda function.

Here is another analogy that may be helpful. Small towns have volunteer fire departments. A firefighter might be home sleeping and get woken up if he or she is on call to respond to fires. If I live in the town and my house catches fire, I don’t call each firefighter individually, I call 911. The 911 line is manned by a “dispatcher” who is awake and waiting at the phone and who also has the capability to alert all the volunteer firefighters at once.

We can think of an API gateway as this dispatcher. The dispatcher decouples the request to put a fire out from the actual resources that do the firefighting, and may choose not to send the entire volunteer department, depending on the severity of the fire. Similarly, API Gateway can “wake up” as many concurrent Lambda functions as necessary to handle the amount of traffic.

Just like the sleeping volunteer firefighters, our Lambda code is dormant: it’s only some code sitting in an S3 bucket somewhere. When API Gateway needs it to process a request, then and only then will the Lambda service spin up a container, create whatever runtime environment our code needs (Node, JVM, Python, etc.), and execute the code we’ve given it. The output of the Lambda function is passed back to API Gateway, and API Gateway passes it back to the client. When it’s all over the firefighters go home and go back to sleep, and our Lambda execution environment ceases to exist. Sometime in the future it may be cloned and resurrected, but it will have no memory of its past life. For the cloning and resurrection part, the firefighter analogy admittedly breaks down :).
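To make the “no memory of its past life” point concrete, here is a toy Java sketch. The class and method names are made up for illustration (they are not the real aws-lambda-java-core interfaces): static initialization happens once per execution environment (the cold start), and handler state survives between invocations only as long as that environment lives.

```java
// A sketch of Lambda's container-reuse behavior. GreetingHandler and
// handleRequest are illustrative names, not real Lambda runtime interfaces.
public class GreetingHandler {
    // Static initialization runs once per execution environment ("cold start").
    private static final long COLD_START_TIME = System.currentTimeMillis();

    // Instance state survives between invocations in the SAME environment,
    // but a freshly cloned environment starts again from zero.
    private int invocationCount = 0;

    public String handleRequest(String name) {
        invocationCount++;
        return "Hello " + name + " (invocation " + invocationCount
            + " in this environment)";
    }

    public static void main(String[] args) {
        // Two invocations land in the same environment, so the counter carries over.
        GreetingHandler handler = new GreetingHandler();
        System.out.println(handler.handleRequest("Bob"));
        System.out.println(handler.handleRequest("Alice"));
    }
}
```

Expensive work you do in static initializers or constructors (loading config, opening connections) is paid once per environment, which is exactly why heavyweight framework startup matters so much on Lambda.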

The Serverless Framework vs. alternatives

I am accustomed to using the Serverless Framework, which is a popular option for building REST APIs in a serverless way. The Serverless Framework is an open source tool that helps you go beyond just publishing serverless functions to build complete applications. It works with multiple cloud providers as well as open source serverless solutions such as knative and fn. There are many other ways to work with Lambda, but this is what I like to use, and all of this content is relevant regardless of what framework (if any) you choose for working with AWS Lambda. You can build REST APIs on Lambda without any framework, just like you can build a REST API with plain Java and no Spring Boot. The Serverless Framework just makes it faster and more straightforward to continuously deploy code and to connect Lambda functions to specific REST endpoints in API Gateway (among other things outside the scope of this post). We can declare an API endpoint, spin up a gateway service, and point it at function code with a few lines of YAML.

example snippet of a serverless.yml file
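A minimal serverless.yml along these lines ties each REST endpoint to its own function. The service, function, and handler names here are illustrative, not from the original snippet:

```yaml
service: bookstore-api

provider:
  name: aws
  runtime: java11

functions:
  createBook:
    handler: com.example.CreateBookHandler   # illustrative handler class
    events:
      - http:
          path: book
          method: post
  getBook:
    handler: com.example.GetBookHandler
    events:
      - http:
          path: book
          method: get
```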

As you can see above, we can tie each REST endpoint to a different Lambda function. If we want to get really granular with our Lambdas, we can delegate a different Lambda function to POST, GET, and DELETE respectively, all under one resource path. Since we have this ability, there isn’t a need for a Controller class with @PostMapping, @GetMapping, and the other annotations we would traditionally use in Spring. If we were using Express.js, we would do the request routing in one or more router files and register them all with a main router. This request routing logic, which exists in both Node and Spring, is not needed when using the Serverless Framework + Lambda.

we don’t have to do this routing logic when working with AWS Lambda
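For a rough idea of what that routing amounts to, here is a toy, standard-library-only Java sketch of the method-plus-path dispatch a framework like Spring or Express performs for us (paths and handler names are made up):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// A toy version of the request routing a web framework performs:
// map "METHOD path" keys to handler functions.
public class Router {
    private final Map<String, Function<String, String>> routes = new HashMap<>();

    public void register(String method, String path, Function<String, String> handler) {
        routes.put(method + " " + path, handler);
    }

    public String dispatch(String method, String path, String body) {
        Function<String, String> handler = routes.get(method + " " + path);
        if (handler == null) {
            return "404 Not Found";
        }
        return handler.apply(body);
    }

    public static void main(String[] args) {
        Router router = new Router();
        router.register("POST", "/book", body -> "created book: " + body);
        router.register("GET", "/book", body -> "all books");
        System.out.println(router.dispatch("POST", "/book", "Moby Dick"));
        System.out.println(router.dispatch("GET", "/book", ""));
    }
}
```

With the Serverless Framework + Lambda, API Gateway does this lookup for us: the METHOD-plus-path pair is declared in YAML, and each pair triggers its own function.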

If we do want to use a backend framework like Express or Spring Boot, then we won’t use a different Lambda handler for different routes and REST operations. Instead, we route all HTTP requests to one single Lambda function that handles both routing requests to /thispath vs. /thatpath and the business logic. If we set up our Lambda this way, we do need to include the standard request routing logic (pictured above) that we are used to, since all requests are sent to one runtime instead of many. In this scenario Spring is responsible for examining the URL and HTTP method the client used and routing the request to the corresponding controller method. This pattern is basically the same as if we packaged our Spring Boot microservice as a Docker image and ran it on Kubernetes.

Non-monolith way (a separate Lambda function per route)

POST /book request => goes to createBook Lambda function

GET /book request => goes to getBook Lambda function

Lambda monolith way (entire app lives in one Lambda function)

POST /book request => goes to bookStoreApp Lambda function

GET /book request => goes to the same bookStoreApp Lambda function

how we can put the whole application in one Lambda function with Serverless Framework
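A sketch of what that configuration might look like: a single catch-all http event sends every path and method to one function, and the framework inside that function takes over the routing. The handler name is illustrative:

```yaml
functions:
  bookStoreApp:
    # one handler receives every request; Spring routes it internally
    handler: com.example.StreamLambdaHandler
    events:
      - http:
          path: /{proxy+}   # catch-all for any sub-path
          method: any
      - http:
          path: /
          method: any
```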

At this point this is probably all seeming a bit abstract. In my next post I’ll get into the mechanics of how we actually author, build, and deploy Lambda functions. To sum up, we talked about how the “always listening / always running” model we’re used to differs from how Lambda operates. We also discussed what frameworks like Spring Boot and Express accomplish and questioned whether they are really necessary when building a backend on AWS Lambda. In the next post I’ll show the basics of working with the Serverless Framework, both locally and in the cloud, from a Java developer’s perspective.

I’m a Developer @CedrusDigital who enjoys writing and learning about serverless among other things. Find me on Linkedin and twitter: @fun_with_lazers



Brian McCann

I take things and make them serverless. Living in Greenpoint Brooklyn, employed as Developer @CedrusDigital. Opinions are my own.