Create PDFs with Chromium (Puppeteer) in Serverless AWS Lambda using layers

Crespo Wang
4 min readJan 7, 2020

It’s been a while since I last published this solution. Tech evolves fast, so it’s time for an overhaul.

serverless.yml

As always, TypeScript is the language of choice. We need to transpile TS to JS so that Lambda can understand it, and this is all handled by Webpack. In my last post I used babel-loader as the transpiler; this time let’s use ts-loader.

ts-loader vs babel-loader

webpack.config.js

ts-loader transpiles TypeScript to JavaScript (ES6). babel-loader, with help from @babel/preset-typescript, can do the same job; it can also convert JavaScript from ES6 to ES5 and add polyfills to keep different browsers happy. But in this use case we are not targeting browsers; as long as the Lambda Node.js runtime can understand the output, ts-loader is enough.

If you need a faster deployment process or quicker local development feedback, you can add transpileOnly: true to the ts-loader options. This makes ts-loader skip the type check and only do the transpile job.
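The config described above might look like this minimal sketch (entry handling, which serverless-webpack normally supplies, is omitted for brevity; transpileOnly is shown as the optional speed-up):

```javascript
// webpack.config.js — a minimal sketch for the ts-loader setup described above
const config = {
  target: 'node', // we bundle for Lambda's Node.js runtime, not browsers
  mode: 'production',
  resolve: {
    extensions: ['.ts', '.js'],
  },
  module: {
    rules: [
      {
        test: /\.ts$/,
        exclude: /node_modules/,
        use: {
          loader: 'ts-loader',
          options: {
            // Optional: skip type-checking for faster builds, and rely on
            // your editor/CI to catch type errors instead.
            transpileOnly: true,
          },
        },
      },
    ],
  },
};

module.exports = config;
```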

chrome-aws-lambda

serverless.yml

chrome-aws-lambda, which ships the Chromium binary for the Lambda environment, now supports up to nodejs12.x, so we will set the runtime in serverless.yml to nodejs12.x.
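In serverless.yml that amounts to a fragment like this (the function name and handler path are illustrative, not from the original gist):

```yaml
provider:
  name: aws
  runtime: nodejs12.x

functions:
  pdf:
    handler: src/handler.pdf
    events:
      - http:
          path: pdf
          method: get
```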

local development

For local development, chrome-aws-lambda won’t give you a Chromium binary, so make sure you install the full puppeteer package as a devDependency. In your code you need to check whether the runtime is offline, i.e. local development, and set executablePath accordingly.
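A sketch of that runtime check (the helper name is mine, and the usage comment follows chrome-aws-lambda’s README style rather than the article’s original gist):

```typescript
// Offline: return undefined so the full `puppeteer` devDependency resolves
// its own bundled Chromium. On Lambda: use chrome-aws-lambda's binary.
export function pickExecutablePath(
  isOffline: boolean,
  lambdaPath: string | null
): string | undefined {
  return isOffline ? undefined : lambdaPath ?? undefined;
}

// Assumed usage in the handler:
//
//   import chromium from 'chrome-aws-lambda';
//   const browser = await chromium.puppeteer.launch({
//     args: chromium.args,
//     executablePath: pickExecutablePath(
//       process.env.IS_OFFLINE === 'true',
//       await chromium.executablePath
//     ),
//     headless: chromium.headless,
//   });
```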

serverless.yml

As serverless-offline@v6.0.0 has removed isOffline from the event, the only way to detect offline mode is to inject an environment variable via the serverless-offline CLI, and add IS_OFFLINE to serverless.yml to pick it up.
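One way to wire that up (a sketch, assuming you start the offline server with something like `IS_OFFLINE=true yarn sls offline`):

```yaml
provider:
  environment:
    # Reflects the env var injected on the serverless-offline CLI;
    # empty (falsy) when deployed to AWS.
    IS_OFFLINE: ${env:IS_OFFLINE, ''}
```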

Now if you run yarn serverless it will start the local server. To deploy, simply run yarn sls deploy. Remember that the GET request must include the header Accept: application/pdf; this tells API Gateway that the request expects a PDF.

Go Further with the cool kids

> Lambda Layer

If you have a look at the deployment package, 42 MB is pretty big given it’s such a small project. Be aware that 250 MB (unzipped, including layers) is the AWS Lambda deployment package limit, which can be a real problem for a bigger project.

We can shake off some weight by moving chrome-aws-lambda to a Lambda layer.

First, we need to make the layer zip (see the how-to) and copy the zip file to your project directory.

Then include it in serverless.yml, and make sure the layer is attached to the function.
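A sketch of that serverless.yml wiring (layer and function names are illustrative; note that the Serverless Framework names the CloudFormation resource after the layer, so a layer called `chromium` is referenced as `ChromiumLambdaLayer`):

```yaml
layers:
  chromium:
    package:
      artifact: chrome_aws_lambda.zip # the zip built in the previous step

functions:
  pdf:
    handler: src/handler.pdf
    layers:
      - { Ref: ChromiumLambdaLayer }
```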

Lastly, remove chrome-aws-lambda from the deployment package:

forceExclude:
  - chrome-aws-lambda
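In context, this option lives under serverless-webpack’s includeModules section (a sketch):

```yaml
custom:
  webpack:
    includeModules:
      forceExclude:
        - chrome-aws-lambda # provided by the layer instead
```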

By moving it to a layer, the deployment package shrank from 41 MB to 769 KB, which means a much faster Lambda startup time!

> Provisioned concurrency

The AWS Lambda cold start issue has been a real headache for many people, but recently AWS released a couple of key improvements. No. 1 is the VPC networking improvement, see my previous post. No. 2 is provisioned concurrency, which means there will always be X number of Lambda instances up and running.

Enabling it is quite easy: add provisionedConcurrency to the function definition. You will see a significant drop in cold-start response time, from ~4s to ~400ms. But do remember that it doesn’t come for free; you will pay for the provisioned Lambda instances.
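For example (function name and handler path are illustrative):

```yaml
functions:
  pdf:
    handler: src/handler.pdf
    provisionedConcurrency: 1 # keep one instance warm; billed while provisioned
```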
