How the improved VPC networking reduces Lambda cold starts

AWS announced this improvement in September. I’ve been checking for updates every day since, and it has finally arrived in the Sydney region!

The cold start issue has been the №1 blocker for many companies going serverless. At Inquisitive we paused our move to full serverless, knowing AWS would do something about it, and here it is! After benchmarking, we have seen a significant improvement in Lambda function start times!

Go and check out the original post from AWS for a better understanding of the cold start issue, but in short the improvement is this: instead of creating one ENI for each Lambda execution environment, which is super slow, the new VPC networking creates a shared ENI in your VPC when your Lambda function is created or its VPC settings are updated. All subsequent Lambda invocations share that pre-created ENI, hence a significant drop in start time.
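To make the difference concrete, here is a toy Python model of the two behaviours. The timing constants are illustrative assumptions, not AWS’s real numbers:

```python
# Toy model of Lambda-in-VPC cold starts. The constants below are
# illustrative assumptions, not measured AWS figures.
ENI_CREATE_MS = 7000    # assumed cost of creating and attaching an ENI
RUNTIME_INIT_MS = 500   # assumed cost of initialising the runtime

def cold_starts_before(environments):
    """Before: every new execution environment creates its own ENI."""
    return [ENI_CREATE_MS + RUNTIME_INIT_MS for _ in range(environments)]

def cold_starts_after(environments):
    """After: the shared ENI already exists (created when the function
    was created or its VPC settings were updated), so a cold start only
    pays the runtime initialisation cost."""
    return [RUNTIME_INIT_MS for _ in range(environments)]

print(cold_starts_before(3))  # every environment pays the ENI cost
print(cold_starts_after(3))   # none of them do
```

The point of the model: the expensive step moves from invocation time to function-creation time, so it stops showing up in request latency.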

Before the improvement
After the improvement

Both tests were run after a complete Lambda rebuild, to guarantee a fully cold state before any traffic arrived. As you can see, the cold start time dropped significantly, from 12011ms to 5581ms!
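That works out to roughly a 54% reduction:

```python
# Cold start times from the benchmark above, in milliseconds.
before_ms, after_ms = 12011, 5581
reduction_pct = (before_ms - after_ms) / before_ms * 100
print(f"{reduction_pct:.1f}% faster")
```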

XRay before the improvement
XRay after the improvement

Before the improvement, X-Ray shows that the first batch of requests took more than 10 seconds each; the following requests were much faster because the Lambda function had a few instances in a warm state, ready to serve.

After the improvement, the ENI had already been created before the requests arrived, so the response times are a lot shorter.


At Inquisitive, we have a number of APIs, each of them an isolated Lambda function (which means a separate Lambda execution environment, according to the post). Before the improvement, every API would experience one or more cold starts, because each one needed to create its own ENI. Now, with the shared ENI, all of our APIs get much faster cold starts, UNLESS a Lambda function is invoked while the shared ENI is still being created.
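Per the AWS post, the new model needs one Hyperplane ENI per unique subnet and security-group combination, not one per execution environment. A back-of-the-envelope comparison (the function counts and concurrency figures are hypothetical):

```python
def enis_before(num_functions, peak_concurrency_each):
    # Before: one ENI per concurrent execution environment.
    return num_functions * peak_concurrency_each

def enis_after(unique_subnet_sg_combos):
    # After: one shared Hyperplane ENI per unique
    # subnet + security-group combination.
    return unique_subnet_sg_combos

# e.g. 20 API functions, each peaking at 10 concurrent executions,
# all sharing the same subnet and security group:
print(enis_before(20, 10))  # 200 ENIs to create
print(enis_after(1))        # 1 ENI, created ahead of time
```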

Last but not least, it gets even more exciting. Previously:

Every network interface created for your function is associated with and consumes an IP address in your VPC subnets. It counts towards your account level maximum limit of network interfaces.

This means that your serverless environment’s scalability was limited by the available IP addresses in the subnet. With the shared ENI:

Your function scaling is no longer directly tied to the number of network interfaces and Hyperplane ENIs can scale to support large numbers of concurrent function executions
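To see why the old limit mattered: AWS reserves five addresses in every subnet, so under the one-ENI-per-environment model even a /24 subnet capped you at 251 concurrent executions. A simplified sketch:

```python
AWS_RESERVED_PER_SUBNET = 5  # AWS reserves the first four and the last address

def usable_ips(prefix_length):
    return 2 ** (32 - prefix_length) - AWS_RESERVED_PER_SUBNET

def max_concurrency_before(prefix_length):
    # Simplified: before the improvement, each concurrent execution
    # environment consumed one subnet IP through its own ENI.
    return usable_ips(prefix_length)

print(max_concurrency_before(24))  # a /24 subnet capped you at 251
print(max_concurrency_before(28))  # a /28 at just 11
```

With the shared Hyperplane ENI that ceiling is gone, since concurrency no longer consumes an IP per execution.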

The improvement is definitely a supercharge for AWS Lambda! Happy serverless.

