AWS Lambda and Terraform: A fully automated deployment pipeline

Anton Poznyakovskiy
Dec 7, 2020 · 6 min read

I’ve been working with both AWS Lambda and Terraform for some time, and although they are formidable tools on their own, they are somewhat difficult to put to work together.

Lambda works with a packaged source, not with a git repository, and only pulls the code from an S3 bucket once. This means that on every change to the function code, you need to put the code into the same S3 bucket and tell Lambda to load the code again. A lot of manual steps!
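In CLI terms, every code change would otherwise require a manual round trip roughly like this (a sketch; the bucket name is a placeholder, the function name matches the example used below):

zip -r lambda.zip .
aws s3 cp lambda.zip s3://my-artifact-bucket/hello-world/lambda.zip
aws lambda update-function-code \
  --function-name hello-world \
  --s3-bucket my-artifact-bucket \
  --s3-key hello-world/lambda.zip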

Below I’ll demonstrate a way to avoid them using three key elements, assuming both Lambda and Terraform code are hosted on AWS CodeCommit:

  • Using the source hash of the Lambda function to detect changes to the code
  • Calling terraform apply from the buildspec of CodeBuild to update the Lambda after the build
  • Employing CodePipeline to trigger CodeBuild when the function code changes

Preparing the environment

We’ll work with a simple Lambda function hello-world.js:

exports.handler = (event, context, callback) => {
  // A minimal handler: it only tells Lambda not to wait for an empty event loop
  context.callbackWaitsForEmptyEventLoop = false;
};

We’ll put this file into a repository in AWS CodeCommit called medium-lambda. We will also need a repository for the Terraform code in CodeCommit; let’s name it medium-terraform.

The base Terraform configs look like this:

provider "aws" {
region = var.aws_region
}
terraform {
required_version = "= 0.14.0"
backend "s3" {}
}
data "aws_caller_identity" "current" {}locals {
lambda_bucket_name = "poznyakovskiy-medium-lambda"
lambda_s3_key = "${var.codebuild_project_name}/${var.lambda_zipfile}"
}

The two locals defined here are the name of the bucket that stores the Lambda artifacts and the key (path) of the zipfile within that bucket. For simplicity, I will encrypt neither the bucket nor the Lambda environment variables; see the closing remarks on this.
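The config also references a handful of input variables whose declarations are not shown in the article. A minimal variables.tf sketch of what they might look like (the defaults here are assumptions, except for the region, which matches the CodeCommit URLs used below):

variable "aws_region" {
  type    = string
  default = "eu-west-2"
}

variable "codebuild_project_name" {
  type = string
}

variable "lambda_zipfile" {
  type    = string
  default = "lambda.zip"
}

variable "backend_bucket" {
  type = string
}

variable "backend_key" {
  type = string
}

# Set to true on the very first apply, before any build has uploaded artifacts (see the Lambda section below)
variable "is_first_run" {
  type    = bool
  default = false
}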

S3 bucket, IAM role and Lambda

We need to create the S3 bucket mentioned before and a role for the Lambda function. As the Lambda function does not interact with any other resources, its role will be similarly simple.

resource "aws_s3_bucket" "lambda_artifacts_bucket" {
bucket = local.lambda_bucket_name
}
resource "aws_s3_bucket_public_access_block" "lambda_artifacts_bucket" {
bucket = aws_s3_bucket.lambda_artifacts_bucket.id
block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}
restrict_public_buckets = true}resource "aws_iam_role" "lambda" {
name = "lambda-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}]
}
EOF
}

Let’s now create the Lambda function proper:

data "aws_s3_bucket_object" "source_hash" {
count = var.is_first_run ? 0 : 1
bucket = local.lambda_bucket_name
key = "${local.lambda_s3_key}.sha256"
}
resource "aws_lambda_function" "this" {
function_name = "hello-world"
s3_bucket = local.lambda_bucket_name
s3_key = local.lambda_s3_key
handler = "hello-world.handler"
role = aws_iam_role.lambda.arn
runtime = "nodejs12.x"
source_code_hash = join("", data.aws_s3_bucket_object.source_hash.*.body)
}

Terraform provides the attribute source_code_hash to detect changes to the Lambda function code and update the function. It expects a base64-encoded SHA256 hash of the zipfile containing the code, which we read via the aws_s3_bucket_object data source. This data source could point directly at the zipfile and return its checksum, but there is a catch: the checksum it exposes (the etag) is an MD5 hash, and since Lambda reports a SHA256 hash of the package, passing the MD5 value as the code hash would show a diff and trigger an update on every run. So we compute the SHA256 hash on the command line and upload it alongside the zipfile as a file named lambda.zip.sha256. More on that in the next section.
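Note the count on the data source above: on the very first apply the .sha256 object does not exist yet, so the lookup is skipped when is_first_run is set. A rough sketch of how the variables might be passed on such an initial local run (the backend bucket and key are placeholders; the flags mirror the ones used later in the buildspec):

terraform init \
  -backend-config="region=eu-west-2" \
  -backend-config="bucket=my-terraform-state-bucket" \
  -backend-config="key=medium-lambda.tfstate"
terraform apply \
  -var="aws_region=eu-west-2" \
  -var="backend_bucket=my-terraform-state-bucket" \
  -var="backend_key=medium-lambda" \
  -var="is_first_run=true"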

The CodeBuild project

We will need a CodeBuild project and a corresponding IAM role with the necessary permissions. The project's buildspec installs Terraform and, after the build, applies it to the Lambda function resource only. Here is what the resources look like:

resource "aws_iam_role" "codebuild_role" {
name = "codebuild-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "codebuild.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
data "aws_iam_policy_document" "codebuild_policy" {
statement {
effect = "Allow"
actions = [
"lambda:*",
"s3:PutObject",
"s3:GetObject",
"s3:GetObjectVersion",
"s3:GetBucketTagging",
"s3:GetObjectTagging",
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents",
"codecommit:GetRepository",
"codecommit:GetBranch",
"codecommit:GitPull",
"iam:GetRole"
]
resources = [
"*",
]
}
}
resource "aws_iam_policy" "codebuild_policy" {
name = "codebuild-policy"
description = "Policy used in trust relationship with CodeBuild"
policy = data.aws_iam_policy_document.codebuild_policy.json
}
resource "aws_iam_policy_attachment" "codebuild_policy_attachment" {
name = "codebuild-policy-attachment"
policy_arn = aws_iam_policy.codebuild_policy.arn
roles = [aws_iam_role.codebuild_role.id]
}
resource "aws_codebuild_project" "lambda" {
name = var.codebuild_project_name
description = "Lambda build project"
build_timeout = "10"
service_role = aws_iam_role.codebuild_role.arn
artifacts {
type = "S3"
location = local.lambda_bucket_name
}
environment {
compute_type = "BUILD_GENERAL1_SMALL"
image = "aws/codebuild/standard:2.0"
type = "LINUX_CONTAINER"
}
  source {
    type            = "CODECOMMIT"
    location        = "https://git-codecommit.eu-west-2.amazonaws.com/v1/repos/medium-lambda"
    git_clone_depth = 1
    buildspec       = <<EOF
version: 0.2
phases:
  install:
    commands:
      - curl -fsSL https://apt.releases.hashicorp.com/gpg | apt-key add -
      - apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
      - apt-get update && apt-get install -y terraform=0.14.0
  build:
    commands:
      - zip -r -q ${var.lambda_zipfile} *
      - openssl dgst -sha256 -binary ${var.lambda_zipfile} | openssl enc -base64 > ${var.lambda_zipfile}.sha256
      - aws s3 cp ${var.lambda_zipfile} s3://${local.lambda_bucket_name}/${var.codebuild_project_name}/${var.lambda_zipfile}
      - aws s3 cp ${var.lambda_zipfile}.sha256 s3://${local.lambda_bucket_name}/${var.codebuild_project_name}/${var.lambda_zipfile}.sha256 --content-type text/plain
  post_build:
    commands:
      - cd $CODEBUILD_SRC_DIR_terraform
      - terraform init -backend-config="region=${var.aws_region}" -backend-config="bucket=${var.backend_bucket}" -backend-config="key=${var.backend_key}.tfstate" -var="aws_region=${var.aws_region}" -var="backend_bucket=${var.backend_bucket}" -var="backend_key=${var.backend_key}"
      - terraform plan -target=aws_lambda_function.this -var="aws_region=${var.aws_region}" -var="backend_bucket=${var.backend_bucket}" -var="backend_key=${var.backend_key}"
      - terraform apply -auto-approve -target=aws_lambda_function.this -var="aws_region=${var.aws_region}" -var="backend_bucket=${var.backend_bucket}" -var="backend_key=${var.backend_key}"
EOF
  }

  secondary_sources {
    type              = "CODECOMMIT"
    location          = "https://git-codecommit.eu-west-2.amazonaws.com/v1/repos/medium-terraform"
    git_clone_depth   = 1
    source_identifier = "terraform"
  }

  source_version = "refs/heads/master"
}

Things to be noted at this stage:

  • The CodeBuild role needs permission to update AWS Lambda, hence the full access to Lambda. As the Lambda function always depends on its role, Terraform will attempt to refresh the state of the IAM role aws_iam_role.lambda. We don’t expect the role to be updated from within CodeBuild, but Terraform still needs to be able to check its state, hence the iam:GetRole permission.
  • During the install phase, we install exactly the same version of Terraform as specified in required_version earlier (0.14.0 in this example).
  • The build consists of zipping the Lambda code and computing its SHA256 hash.
  • Terraform needs to run after the artifacts are uploaded to the S3 bucket, but CodeBuild has no such step. Therefore we omit the artifacts section of the buildspec and upload the artifacts with the AWS CLI command aws s3 cp in the build phase.
  • We use the Terraform code as a secondary source. Its location inside the CodeBuild container is exposed automatically as the environment variable CODEBUILD_SRC_DIR_ plus the source identifier of the secondary source.
  • To apply Terraform to the Lambda function only, we use -target=aws_lambda_function.this.
  • We pass the region, bucket and key used in the Terraform backend configuration as variables to the Terraform config as well, since the buildspec needs the same values.
  • We use -auto-approve to skip the interactive approval of the changes in terraform apply.

The CodePipeline

To detect changes to the Lambda function code and trigger CodeBuild, we use CodePipeline. Its Terraform code looks like this:

resource "aws_iam_role" "codepipeline_role" {
name = "codepipeline-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "codepipeline.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
data "aws_iam_policy_document" "codepipeline_policy" {
statement {
effect = "Allow"
actions = [
"s3:PutObject",
"s3:GetObject",
"s3:GetObjectVersion",
"codecommit:GetRepository",
"codecommit:GetBranch",
"codecommit:GetCommit",
"codecommit:GitPull",
"codebuild:StartBuild",
"codebuild:BatchGetBuilds"
]
resources = [
"*",
]
}
}
resource "aws_iam_policy" "codepipeline_policy" {
name = "codepipeline-policy"
description = "Policy used in trust relationship with CodePipeline"
policy = data.aws_iam_policy_document.codepipeline_policy.json
}
resource "aws_iam_policy_attachment" "codepipeline_policy_attachment" {
name = "codepipeline-policy-attachment"
policy_arn = aws_iam_policy.codepipeline_policy.arn
roles = [aws_iam_role.codepipeline_role.id]
}
resource "aws_codepipeline" "lambda_build" {
name = "lambda-pipeline"
role_arn = aws_iam_role.codepipeline_role.arn
artifact_store {
location = local.lambda_bucket_name
type = "S3"
}
stage {
name = "Source"
action {
name = "Source"
category = "Source"
owner = "AWS"
provider = "CodeCommit"
version = "1"
output_artifacts = ["lambda"]
configuration = {
RepositoryName = "medium-lambda"
BranchName = "master"
PollForSourceChanges = true
OutputArtifactFormat = "CODEBUILD_CLONE_REF"
}
}
}
stage {
name = "Build"
action {
name = "Build"
category = "Build"
owner = "AWS"
provider = "CodeBuild"
version = "1"
input_artifacts = ["lambda"]
configuration = {
ProjectName = aws_codebuild_project.lambda.name
}
}
}
}

Here we use the simpler change detection method, polling (PollForSourceChanges = true). An alternative would be to set up a CloudWatch Events (EventBridge) rule that starts the pipeline on pushes to the repository.
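For illustration, a sketch of what such an event-based trigger might look like in Terraform. The events_role here is hypothetical (it would need to allow events.amazonaws.com to assume it and to call codepipeline:StartPipelineExecution), and PollForSourceChanges should then be set to false:

resource "aws_cloudwatch_event_rule" "lambda_repo_change" {
  name = "medium-lambda-repo-change"

  event_pattern = jsonencode({
    "source"      = ["aws.codecommit"]
    "detail-type" = ["CodeCommit Repository State Change"]
    "resources"   = ["arn:aws:codecommit:${var.aws_region}:${data.aws_caller_identity.current.account_id}:medium-lambda"]
    "detail" = {
      "referenceType" = ["branch"]
      "referenceName" = ["master"]
    }
  })
}

resource "aws_cloudwatch_event_target" "start_pipeline" {
  rule = aws_cloudwatch_event_rule.lambda_repo_change.name
  arn  = aws_codepipeline.lambda_build.arn
  # Hypothetical role that CloudWatch Events assumes to start the pipeline
  role_arn = aws_iam_role.events_role.arn
}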

Result

If everything goes as expected, you will see the new pipeline in the CodePipeline console, with a Source and a Build stage that run successfully on every push to the Lambda repository.
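You can also inspect a run from the command line, using the pipeline name defined earlier:

aws codepipeline get-pipeline-state --name lambda-pipeline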

Closing Remarks

I omitted encryption for both the S3 bucket and the Lambda environment. If you use encryption with KMS, you will not only need to adjust the aws s3 cp calls accordingly, but also add kms:DescribeKey, kms:GetKeyPolicy, kms:GetKeyRotationStatus and kms:ListResourceTags to the CodeBuild role, since Terraform will need to refresh the state of the Lambda's KMS key.
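Sketched against the policy document from the CodeBuild section, the addition could look roughly like this (scoping resources to the actual key ARN would be preferable to the wildcard):

data "aws_iam_policy_document" "codebuild_policy" {
  statement {
    effect = "Allow"
    actions = [
      # ...all the actions listed in the CodeBuild section above...
      "kms:DescribeKey",
      "kms:GetKeyPolicy",
      "kms:GetKeyRotationStatus",
      "kms:ListResourceTags"
    ]
    resources = ["*"]
  }
}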

If you use GitHub or Bitbucket instead of CodeCommit, you might not need CodePipeline at all, as you can detect code changes with aws_codebuild_webhook.
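For example, with a GitHub source, a webhook along these lines could replace the pipeline (a sketch; it assumes the CodeBuild project's source type is switched to GITHUB):

resource "aws_codebuild_webhook" "lambda" {
  project_name = aws_codebuild_project.lambda.name

  filter_group {
    filter {
      type    = "EVENT"
      pattern = "PUSH"
    }
    filter {
      type    = "HEAD_REF"
      pattern = "refs/heads/master"
    }
  }
}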
