Lambda vs. Fargate
A common question I get asked is: when should I use a Lambda function, and when should I use a container (running on AWS Fargate)? Both are serverless. Both are very different. I hope to provide some general guidance in this post.
What is AWS Lambda and AWS Fargate?
- AWS Lambda = serverless, event-driven compute service. It runs code in response to events with no infrastructure to provision or manage. Sometimes referred to as Function as a Service.
- AWS Fargate = serverless compute for containers. Allows you to run ECS or EKS containers without provisioning servers to run them on.
Both technologies are serverless as there is no underlying infrastructure to manage, patch or maintain. Both use AWS’s open source software called Firecracker under the hood to provide lightweight microVMs on bare-metal hardware. Lambda runs each execution environment (more on that in another post!) in a separate microVM. Fargate runs each task/pod in a separate microVM.
Pricing models
This is a fundamental difference.
- AWS Lambda = pay per request, billed on execution duration (per ms) and allocated memory
- AWS Fargate = pay for as long as the task runs, billed on allocated vCPU and memory (priced per hour, metered per second)
The key difference is that even after Fargate has finished processing a request, the container continues to run and you continue to be charged. With Lambda, once the code has finished and a result is returned, you won't be charged again until another request comes in.
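To make the pricing difference concrete, here is a rough back-of-the-envelope comparison. All the prices in this sketch are assumed example figures, not current AWS rates, so check the pricing pages for your region before relying on the numbers.

```python
# Rough cost sketch comparing the two pricing models.
# All prices below are ASSUMED example figures, not current AWS rates.

# Assumed workload: 1 million requests/month, each taking 200 ms.
requests_per_month = 1_000_000
duration_seconds = 0.2

# --- Lambda: pay per request plus per GB-second of execution time ---
lambda_memory_gb = 0.5                 # 512 MB allocated
price_per_request = 0.20 / 1_000_000   # assumed $0.20 per 1M requests
price_per_gb_second = 0.0000166667     # assumed price per GB-second

lambda_cost = (
    requests_per_month * price_per_request
    + requests_per_month * duration_seconds * lambda_memory_gb * price_per_gb_second
)

# --- Fargate: pay for allocated vCPU and memory for every hour the task runs ---
vcpu = 0.25
memory_gb = 0.5
hours_per_month = 730                  # the task runs 24/7, busy or idle
price_per_vcpu_hour = 0.04             # assumed
price_per_gb_hour = 0.004              # assumed

fargate_cost = hours_per_month * (vcpu * price_per_vcpu_hour + memory_gb * price_per_gb_hour)

print(f"Lambda  ~ ${lambda_cost:.2f}/month (charged only while handling requests)")
print(f"Fargate ~ ${fargate_cost:.2f}/month (charged continuously while the task runs)")
```

With a spiky or low-volume workload the Lambda bill tracks usage while the Fargate bill stays flat; as traffic becomes steady and heavy, the comparison can tip the other way.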
Serving requests
Below are simplified diagrams of how Lambda compares to Fargate when serving simple web requests.
Lambda execution life cycle
Fargate container life cycle
For a Fargate container to serve a request, it must already be in the running state. This has a performance benefit over Lambda in that there is no added latency from warming the environment when a request comes in. It does, however, mean that you will pay for idle resources.
With Lambda, when there are no requests being served, there is essentially nothing running behind the scenes, so costs drop to zero. When a request comes in, an execution environment is created on demand (if there isn't one already - more on this in another article!). This leads to an increase in latency known as a cold start.
Scaling
- AWS Lambda = each execution environment can serve a single request at a time, so 100 requests/sec means 100 concurrent Lambda execution environments (assuming 1 second duration; see the sketch after this list). The Lambda service will automatically scale up and down for you, and it does this more rapidly than Fargate.
- AWS Fargate = each running container can serve multiple requests at once. Once vCPU and memory resources become stressed you will need to increase the number of containers behind a load balancer, setting an auto scaling policy to increase/decrease the number of running tasks/pods based on your chosen metric. Fargate cannot scale down to zero, unlike Lambda.
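The Lambda bullet above is just Little's Law: required concurrency is roughly the request rate multiplied by the average duration. A minimal sketch using the same assumed numbers:

```python
# Back-of-the-envelope Lambda concurrency estimate (Little's Law):
#   required concurrency = request rate x average duration.
# The numbers below are assumptions for illustration only.

requests_per_second = 100
average_duration_seconds = 1.0   # each execution environment handles one request at a time

required_concurrency = requests_per_second * average_duration_seconds
print(f"~{required_concurrency:.0f} concurrent execution environments needed")

# Halve the duration (e.g. 500 ms) and the required concurrency halves too:
print(f"~{requests_per_second * 0.5:.0f} environments needed at 500 ms per request")
```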
When can’t I use Lambda?
- If your code takes longer than 15 minutes to run. Lambda has a hard 15-minute timeout. [Use Fargate]
- If your code requires the Windows operating system. Lambda only supports the Linux kernel. Runtimes use the Amazon Linux operating system (with Lambda container images you can choose your own Linux distribution). [Use Fargate]
- If your code requires more than 10 GB of memory. Lambda functions support a max of 10,240 MB of memory. [Use Fargate]
- If your request or response payload is larger than 6 MB (the synchronous invocation payload limit). [Use Fargate]
- If your code requires a GPU or hardware accelerator. [Use EC2]
- If your code needs to serve a raw TCP/UDP connection. Lambda functions are only invoked in response to events; they cannot listen on a network socket. [Use Fargate]
When is Lambda a good use case?
- When you want to use one of the AWS built-in event source mappings to automatically process messages from an SQS queue, Kinesis stream, Apache Kafka topic or DynamoDB stream (see the handler sketch after this list).
- For scheduled infrastructure automation tasks triggered by an EventBridge cron expression.
- When your application frequently needs to scale down to zero.
- When you have rapid scaling requirements.
- When your code supports one of the AWS managed runtimes to take away the burden of managing and patching the runtime. Supported managed runtimes include Node.js, Python, Ruby, Java, .NET and Go.
- When integrating with other AWS cloud-native services such as SNS, SQS, EventBridge, Step Functions, API Gateway, etc.
- When you have a low number of requests you may fall under the 1 million requests/month free tier and not be charged. There is no free tier for Fargate.
- When you don’t want to think about architecting across Availability Zones for resilience. Lambda does this automatically. Fargate does not.
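As an illustration of the event source mapping case mentioned in the first bullet, here is a minimal sketch of an SQS-triggered handler. The Lambda service polls the queue for you and invokes the function with a batch of records; the 'order_id' field in the message body is an assumption made up for this example.

```python
import json

def handler(event, context):
    """Sketch of a handler invoked by an SQS event source mapping."""
    # The event source mapping delivers a batch of queue messages in event["Records"].
    for record in event["Records"]:
        body = json.loads(record["body"])
        # 'order_id' is a made-up field for this example.
        print(f"Processing message {record['messageId']}: {body.get('order_id')}")
    # Returning normally marks the batch as processed; raising an exception
    # makes the messages visible on the queue again for retry.
```

No polling code, web server or scaling logic is needed in the function itself; the Lambda service handles all of that.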
When can’t I use Fargate?
- If your container requires more than 30 GB of memory. [Use EC2]
- If your container requires more than 4 vCPUs. [Use EC2]
- If your container requires a GPU or hardware accelerator. [Use EC2]
- If your container requires less than 512 MB of memory. The Fargate minimum is 512 MB, so if you are not going to use it all you will be paying for wasted memory. [Use Lambda or EC2]
When is Fargate a good use case?
- If you have a fairly steady request pattern it can be cheaper to use Fargate than Lambda.
- If you need to provide a TCP/UDP port for clients to connect to.
- If you need a faster migration path from an existing VM-based application. Containerizing the application is faster than refactoring to fit the Lambda event-driven model.
- When you need to avoid the Lambda cold start latency (Lambda provisioned concurrency can help with this to some extent). Fargate will provide a consistently low-latency response, as the application is already warm when the request arrives.
- When you want to keep existing web frameworks such as Django, Java Spring Boot or Flask.
Which is better for getting started?
Typically you can have a Lambda function up and running much faster than a container running on Fargate. For Lambda the typical getting started process is:
- Create a new Lambda function in the AWS console, selecting a supported runtime.
- Write your code (a minimal example follows this list).
- Trigger the function.
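For example, a minimal Python handler might look like the sketch below. The response shape assumes the function sits behind API Gateway or a function URL; other triggers pass different event structures.

```python
import json

def handler(event, context):
    """Minimal Lambda handler sketch (assumes an API Gateway / function URL trigger)."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```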
For Fargate, it’s slightly more complex:
- Create a new ECS cluster in the AWS console with desired VPC settings.
- Build a Docker container image with your chosen OS, dependencies and code (a minimal example application follows this list).
- Upload the image to a container registry, e.g. Docker Hub or ECR.
- Create a task definition inside the ECS console setting the vCPU, memory and container image URI.
- Run a new task using the task definition.
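As a sketch of what the containerized code might look like, here is a minimal Flask app. Flask and port 8080 are assumptions for illustration; any web framework that listens on a port works the same way.

```python
# Minimal web app sketch for a Fargate task. Flask and port 8080 are
# assumptions; any framework that listens on a port works the same way.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    return jsonify(message="Hello from a container on Fargate!")

@app.route("/health")
def health():
    # Load balancer target groups typically poll a health check path like this.
    return "ok", 200

if __name__ == "__main__":
    # Bind to 0.0.0.0 so ECS can map the container port.
    app.run(host="0.0.0.0", port=8080)
```

The container image would package this app together with its dependencies, and the task definition's port mapping would expose port 8080 to the load balancer.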
Can’t I just run a container in Lambda?
In short, no. Lambda now supports container images as a packaging format, but the execution model is the same. Lambda does not run containers; it just provides a nice way to package your chosen OS, runtime and its dependencies.
Conclusion
When you take the time to understand the differences between Lambda and Fargate, choosing between them for your application becomes much easier.
In general, Lambda will give you a shorter time to delivery for new applications, and it is great for when request patterns are spiky or unpredictable. Automatic scaling to zero can help you save money when your apps aren't doing anything, and scaling up to multiple concurrent execution environments happens in milliseconds. If your chosen language is one of the supported managed runtimes then there is very little for you to take care of other than your application code and a few function settings.
Fargate is great if you are coming from an on-premises VM-based or existing container application, as the time to production is generally much shorter than refactoring for Lambda. Fargate will provide consistently low-latency responses at very high transactions per second when scaled correctly. Just be aware of the extra effort required to build, patch and maintain your container images, and the work required to correctly set and monitor your auto scaling policies.
Don’t be afraid of making the wrong decision, or you may never make a decision at all! Start with a small proof of concept and test, test, test!