AWS Compute Blog

New Deployment Options for AWS Lambda

Tim Wagner, AWS Lambda General Manager

Emma Zhao, AWS Lambda Software Developer


This post introduces two new ways to deploy AWS Lambda function code…and as a bonus, we’ll build a “Lambda auto-deploy” service as well!

Deploying AWS Lambda code from Amazon S3 buckets

Many developers use Amazon S3, the AWS object storage system, as an easy-to-use repository for storing build and deployment artifacts. AWS Lambda now has support for uploading code directly from S3, without requiring you to first download it to a client. Using it is simple: In a call to CreateFunction or UpdateFunctionCode, you can now provide the S3 bucket, key (object name), and optional version as an alternative to supplying the code directly, and Lambda will simply load your code directly from S3. (If the bucket owner and the user making these calls aren’t the same, make sure the latter has permission to read the file.)

The CreateFunction parameters now look like this; the three “S3*” fields are new:

{
    "Code": {
        "S3Bucket": "string",
        "S3Key": "string",
        "S3ObjectVersion": "string",
        "ZipFile": blob
    },
    "Description": "string",
    "FunctionName": "string",
    "Handler": "string",
    "MemorySize": number,
    "Role": "string",
    "Runtime": "string",
    "Timeout": number
}
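To make that concrete, here’s a sketch of a filled-in CreateFunction request using the new fields in place of an inline ZipFile. The function name, bucket, key, and role ARN are placeholders, not values from this post:

```javascript
// Hypothetical CreateFunction parameters: the code is pulled from S3
// instead of being supplied inline. All names below are placeholders.
var params = {
    FunctionName: 'ProcessOrders',
    Runtime: 'nodejs',
    Handler: 'index.handler',
    Role: 'arn:aws:iam::123456789012:role/lambda-exec',  // execution role ARN
    MemorySize: 128,
    Timeout: 30,
    Description: 'Created from an S3 build artifact',
    Code: {
        S3Bucket: 'my-build-artifacts',        // bucket holding the zip
        S3Key: 'builds/process-orders.zip'     // object name of the zip
        // S3ObjectVersion: "..."              // optional, for versioned buckets
    }
};
// With the aws-sdk client, you'd then call:
//   new AWS.Lambda().createFunction(params, callback);
```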

Here’s what the feature looks like in the AWS Lambda console:
[Screenshot: deploying a Lambda function from an S3 bucket in the console]

AWS CloudFormation support for Lambda Functions

Building on the new Lambda feature, AWS CloudFormation now also supports AWS Lambda functions in templates.

Here’s the syntax for the new AWS::Lambda::Function resource type:

{
  "Type" : "AWS::Lambda::Function",
  "Properties" : {
    "Code" : Code,
    "Description" : String,
    "Handler" : String,
    "MemorySize" : Integer,
    "Role" : String,
    "Runtime" : String,
    "Timeout" : Integer
  }
}

and the “Code” property looks like this:

{
  "S3Bucket" : String,
  "S3Key" : String,
  "S3ObjectVersion" : String
}

Unsurprisingly, it looks a lot like the CreateFunction call in Lambda that it’s making on your behalf. With this new feature in CloudFormation, you can now stand up stacks of resources that include Lambda functions. For example, if you create an S3 bucket in your stack and you have a Lambda function that you use to process notification events when objects are created in that bucket, you can now deploy them together using CloudFormation, name them using stack parameters, and enjoy all the other CloudFormation goodness.
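For instance, a minimal template fragment declaring a function whose code lives in S3 might look like the following. The logical names, bucket, and key are placeholders; the execution role ARN is supplied as a stack parameter to illustrate parameterized naming:

```json
{
  "Parameters": {
    "ExecutionRoleArn": { "Type": "String" }
  },
  "Resources": {
    "ProcessorFunction": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Handler": "index.handler",
        "Runtime": "nodejs",
        "MemorySize": 128,
        "Timeout": 30,
        "Role": { "Ref": "ExecutionRoleArn" },
        "Code": {
          "S3Bucket": "my-build-artifacts",
          "S3Key": "builds/processor.zip"
        }
      }
    }
  }
}
```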

CloudFormation also supports using Lambda functions to execute custom resources in a stack, making it easy to add custom processing to a stack rollout without needing any infrastructure to execute the code.
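A custom resource function has one extra obligation: it must report its result back to CloudFormation by PUT-ing a JSON document to the pre-signed URL in the event’s ResponseURL field. Here’s a sketch of a helper that builds that document (the choice of logStreamName as the physical resource ID is just one common convention):

```javascript
// Sketch: build the response body a custom resource handler sends back to
// CloudFormation. The handler would issue an HTTPS PUT of this body to
// event.ResponseURL after doing its work.
function buildCfnResponse(event, context, status, data) {
    return JSON.stringify({
        Status: status,                             // "SUCCESS" or "FAILED"
        Reason: 'See CloudWatch log stream: ' + context.logStreamName,
        PhysicalResourceId: context.logStreamName,  // any stable identifier
        StackId: event.StackId,
        RequestId: event.RequestId,
        LogicalResourceId: event.LogicalResourceId,
        Data: data                                  // values readable via Fn::GetAtt
    });
}
```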

Bonus Section: Lambda Auto-Deployer

Wouldn’t it be nice if there were a microservice that would watch for code zips being uploaded to S3 and then automatically deploy them to Lambda for you? Let’s build it – with the new S3 upload capability in Lambda and the existing S3 bucket notifications that can call Lambda functions, it’s really easy:

  • Create an S3 bucket or pick an existing one to hold your code zips.
  • Optional: Turn on versioning and retention (cleanup) policies on that bucket. Not required, but S3 offers them and they’re nice to have.
  • Create the initial version of your Lambda function. It doesn’t even have to be real code yet; just make a placeholder so you can set the configuration (memory, duration, execution role) as you like.
  • Create a “LambdaDeployment” function using the code below, and configure it to receive events from your S3 bucket. (Don’t forget to change YOUR_BUCKET_NAME, YOUR_CODE, and YOUR_FUNCTION_NAME to match your actual circumstances.)
console.log('Loading function');
var AWS = require('aws-sdk');
var lambda = new AWS.Lambda();
exports.handler = function(event, context) {
    var record = event.Records[0].s3;
    // S3 event keys are URL-encoded (spaces arrive as '+'); decode first.
    var key = decodeURIComponent(record.object.key.replace(/\+/g, ' '));
    var bucket = record.bucket.name;
    var version = record.object.versionId;
    if (bucket === "YOUR_BUCKET_NAME" && key === "YOUR_CODE.zip" && version) {
        var functionName = "YOUR_FUNCTION_NAME";
        console.log("uploading " + key + " to Lambda function: " + functionName);
        var params = {
            FunctionName: functionName,
            S3Bucket: bucket,
            S3Key: key,
            S3ObjectVersion: version
        };
        lambda.updateFunctionCode(params, function(err, data) {
            if (err) {
                console.log(err, err.stack);
                context.fail(err);
            } else {
                console.log(data);
                context.succeed(data);
            }
        });
    } else {
        context.succeed("skipping zip " + key + " in bucket " + bucket + " with version " + version);
    }
};

If you’re not using versions, skip the version check. Remember that your S3 bucket and Lambda function must be in the same region. That’s all there is to it: a durable, fault-tolerant, versioned code deployment service in about 30 lines of code!

It’s also easy to extend this simple example with other bells and whistles:

  • If you want to process multiple functions, you can skip the key check and name the function using the key (the name of the zip file, minus the “.zip” suffix) or any other method you like to determine the function name based on the bucket and key.
  • If you don’t want to create the function manually the first time, you can check whether the function exists (for example, by calling GetFunction) and, if not, use CreateFunction instead of UpdateFunctionCode.
  • You can override the configuration with UpdateFunctionConfiguration, and you can retrieve the existing configuration with GetFunction if you want to leave some portions of the configuration unchanged while updating others.
  • You can stash the S3 event’s versionId field in the function’s description field as a reminder of which version you’re running, or include it in the function name to keep each version distinct and separately available.
  • To enable rollbacks, you can modify the function (and your S3 upload procedure) to use a layer of indirection: store the version you want to be “current” as another object in your S3 bucket, change your code to watch that file (ignoring the ZIPs themselves), and update it whenever you want to change the version. Your code will need to fetch the content of the pointer file when it changes instead of simply using the metadata in the event to make the Lambda UpdateFunctionCode call.
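As a sketch of the first extension, here’s one way to derive the target function name from the uploaded zip’s key instead of hard-coding it (the key layout, a prefix plus “name.zip”, is an assumption):

```javascript
// Hypothetical helper: map an S3 key like "builds/image-resizer.zip"
// to a Lambda function name like "image-resizer".
function functionNameFromKey(key) {
    // Strip any prefix directories, then the ".zip" suffix.
    return key.split('/').pop().replace(/\.zip$/, '');
}
// In the deployer above, replace the hard-coded name with:
//   var functionName = functionNameFromKey(key);
// For the create-or-update variant, call lambda.getFunction({FunctionName:
// functionName}) first and fall back to createFunction when it reports that
// the function doesn't exist.
```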


Happy Lambda coding!

Tim and Emma