
How I Deploy Serverless Stacks with Terraform

Christian Scott

This has become my go-to method for deploying Lambda functions and serverless stacks with Terraform. There are plenty of other tools for the job, such as CDK, SAM, and the Serverless Framework, but when you're building APIs with Lambda functions, API Gateway, and DynamoDB, Terraform gives you declarative infrastructure that's easy to version, review, and replicate across environments.

Here's how I structure my serverless stacks and the patterns that make them maintainable.

Architecture Overview

Here's the typical serverless stack I deploy:

Clients resolve the domain through Route53 DNS, which points to an API Gateway HTTP API. The API Gateway terminates TLS using an ACM certificate and routes requests to Lambda functions. Lambda functions read and write data to DynamoDB. Everything is defined in Terraform, from the DNS records to the IAM permissions.

1. Use API Gateway HTTP API Instead of REST API

AWS offers two types of API Gateway: REST API and HTTP API. For new projects, I always choose HTTP API. It's simpler, faster, and cheaper. HTTP API supports the same Lambda integrations and custom domains, but without the complexity of API Gateway models, request/response transformations, and API keys.

The HTTP API uses route-based configuration. You define routes like POST /users or GET /users/{id} and map them to Lambda integrations. The integration automatically handles request/response formatting, so your Lambda receives a standard API Gateway event.

resource "aws_apigatewayv2_api" "main" {
  name          = "${local.project_name}-${local.environment}-api"
  protocol_type = "HTTP"
}

resource "aws_apigatewayv2_stage" "main" {
  api_id      = aws_apigatewayv2_api.main.id
  name        = local.environment
  auto_deploy = true
}

resource "aws_apigatewayv2_integration" "create_user" {
  api_id           = aws_apigatewayv2_api.main.id
  integration_type = "AWS_PROXY"
  integration_uri  = aws_lambda_function.create_user.invoke_arn
}

resource "aws_apigatewayv2_route" "create_user_post" {
  api_id    = aws_apigatewayv2_api.main.id
  route_key = "POST /users"
  target    = "integrations/${aws_apigatewayv2_integration.create_user.id}"
}

The auto_deploy flag on the stage means that route changes are deployed immediately when you run terraform apply. For production, you might want to disable this and manage deployments explicitly, but for most projects, automatic deployment is convenient.
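
If you do disable auto_deploy, you create a deployment resource explicitly and point the stage at it instead. Here's a minimal sketch, assuming a redeployment should be triggered whenever the route or integration shown above changes:

resource "aws_apigatewayv2_deployment" "main" {
  api_id = aws_apigatewayv2_api.main.id

  # Force a new deployment only when the route or integration definition changes
  triggers = {
    redeployment = sha1(jsonencode([
      aws_apigatewayv2_route.create_user_post.route_key,
      aws_apigatewayv2_integration.create_user.integration_uri,
    ]))
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_apigatewayv2_stage" "main" {
  api_id        = aws_apigatewayv2_api.main.id
  name          = local.environment
  auto_deploy   = false
  deployment_id = aws_apigatewayv2_deployment.main.id
}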

Here's the request flow from client to database:

The API Gateway receives the HTTPS request, validates the TLS certificate, and invokes the Lambda function. The Lambda function processes the request, queries or updates DynamoDB, and returns a response. API Gateway formats the response and sends it back to the client.
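
One detail the snippet above leaves implicit: API Gateway can only invoke a Lambda function that grants it permission. The full configuration at the end of this post includes a resource-based permission per function; for the create_user function it looks like this:

resource "aws_lambda_permission" "create_user_api_gw" {
  statement_id  = "AllowExecutionFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.create_user.function_name
  principal     = "apigateway.amazonaws.com"

  # Any stage and route on this API may invoke the function
  source_arn = "${aws_apigatewayv2_api.main.execution_arn}/*/*"
}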

2. Use OS-Only Runtimes for Languages Without Managed Support

For languages without a managed runtime (such as Rust, or Go now that the go1.x runtime is deprecated), you run them on Lambda using an OS-only runtime like provided.al2023. This gives you a minimal Amazon Linux 2023 environment with no language runtime, just the base OS. You package a bootstrap executable (often your compiled binary with the runtime client linked in) in a ZIP file, and Lambda runs that binary. The bootstrap program talks to the Lambda Runtime API and invokes your handler code.
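
The Terraform side is the same as any other ZIP-packaged function; only the runtime changes. Here's a trimmed version of the function resource from the full configuration below, assuming the build step produces a ZIP with an executable named bootstrap at its root:

resource "aws_lambda_function" "create_item" {
  function_name = "create-item-${local.environment}"
  role          = aws_iam_role.lambda_role.arn
  handler       = "bootstrap"       # passed through as _HANDLER; the bootstrap binary decides how to use it
  runtime       = "provided.al2023" # OS-only runtime: no language runtime included
  architectures = ["arm64"]

  # Redeploy whenever the packaged binary changes
  filename         = "${path.module}/../../lambda-packages/create-item.zip"
  source_code_hash = filebase64sha256("${path.module}/../../lambda-packages/create-item.zip")
}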

3. Use DynamoDB for Serverless Data Storage

DynamoDB fits naturally with serverless architectures. It scales automatically, charges per request, and integrates directly with Lambda through IAM. I use pay-per-request billing mode for most tables because it's simpler than managing provisioned capacity and works well for variable traffic patterns.

I structure tables around access patterns. The primary key determines how you query items, and global secondary indexes extend those patterns. I also enable TTL on tables where data has a natural expiration, which DynamoDB handles automatically.

resource "aws_dynamodb_table" "main" {
  name         = "${local.project_name}-${local.environment}-data"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"

  attribute {
    name = "id"
    type = "S"
  }

  attribute {
    name = "status"
    type = "S"
  }

  ttl {
    attribute_name = "ttl"
    enabled        = true
  }

  global_secondary_index {
    name            = "status-index"
    hash_key        = "status"
    projection_type = "ALL"
  }
}

resource "aws_iam_role_policy" "lambda_dynamodb" {
  name = "${local.project_name}-${local.environment}-lambda-dynamodb"
  role = aws_iam_role.lambda_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem",
        "dynamodb:Query",
        "dynamodb:Scan"
      ]
      Resource = [
        aws_dynamodb_table.main.arn,
        "${aws_dynamodb_table.main.arn}/index/*"
      ]
    }]
  })
}

The IAM policy grants Lambda functions permission to read and write to the table and its indexes. I scope permissions to the specific table ARN rather than using wildcards, which follows the principle of least privilege.

4. Automate DNS and TLS with Route53 and ACM

I use Route53 for DNS and ACM for TLS certificates, similar to the container deployment setup. The certificate validation works the same way: create validation records in Route53 and wait for ACM to validate. Once configured, certificates renew automatically.

For API Gateway, you create a domain name resource that associates the certificate with your API. Then you create an API mapping that connects the domain to a stage, and a Route53 record that points your domain to the API Gateway domain.

resource "aws_route53_zone" "main" {
  name = var.domain_name
}

resource "aws_acm_certificate" "main" {
  domain_name       = var.domain_name
  validation_method = "DNS"
}

resource "aws_route53_record" "cert_validation" {
  for_each = {
    for dvo in aws_acm_certificate.main.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  zone_id = aws_route53_zone.main.zone_id
  name    = each.value.name
  records = [each.value.record]
  ttl     = 60
  type    = each.value.type
}

resource "aws_acm_certificate_validation" "main" {
  certificate_arn         = aws_acm_certificate.main.arn
  validation_record_fqdns = [for record in aws_route53_record.cert_validation : record.fqdn]
}

resource "aws_apigatewayv2_domain_name" "main" {
  domain_name = var.domain_name

  domain_name_configuration {
    certificate_arn = aws_acm_certificate_validation.main.certificate_arn
    endpoint_type   = "REGIONAL"
    security_policy = "TLS_1_2"
  }
}

resource "aws_apigatewayv2_api_mapping" "main" {
  api_id      = aws_apigatewayv2_api.main.id
  domain_name = aws_apigatewayv2_domain_name.main.id
  stage       = aws_apigatewayv2_stage.main.id
}

resource "aws_route53_record" "api" {
  name    = var.domain_name
  type    = "A"
  zone_id = aws_route53_zone.main.zone_id

  alias {
    name                   = aws_apigatewayv2_domain_name.main.domain_name_configuration[0].target_domain_name
    zone_id                = aws_apigatewayv2_domain_name.main.domain_name_configuration[0].hosted_zone_id
    evaluate_target_health = false
  }
}

5. Deploy Application Code and Data Schema with Terraform

I deploy Lambda function code and DynamoDB table schemas through Terraform, which keeps infrastructure and application changes in sync. The pipeline builds Lambda artifacts in parallel with running Terraform plan, then runs Terraform apply to deploy infrastructure changes, updated function code, and data schema updates together.

The build stage compiles your code and packages it into ZIP files. For custom runtimes, this includes your binary and handler script. These artifacts are stored in a directory that Terraform can access. Meanwhile, Terraform plan runs to validate infrastructure changes, including DynamoDB table definitions, indexes, and Lambda function configurations. Once both complete, Terraform apply deploys everything together; if the function code hash changed, Terraform updates the Lambda function. If table attributes or indexes changed, Terraform updates the DynamoDB schema.

Here's a simplified GitLab CI pipeline that implements this pattern:

stages:
  - build
  - plan
  - deploy

build_lambda:
  stage: build
  script:
    - echo "Building Lambda functions..."
    - ./build.sh
    - mkdir -p lambda-packages
    - cd /your/source/dir/function-one && zip -r "$CI_PROJECT_DIR/lambda-packages/function-one.zip" .
    - cd /your/source/dir/function-two && zip -r "$CI_PROJECT_DIR/lambda-packages/function-two.zip" .
  artifacts:
    paths:
      - lambda-packages/
    expire_in: 1 hour

.terraform_plan:
  stage: plan
  image: hashicorp/terraform:latest
  variables:
    AWS_ACCESS_KEY_ID: $TERRAFORM_AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $TERRAFORM_AWS_SECRET_ACCESS_KEY
    AWS_REGION: "us-east-1"
  before_script:
    - cd infrastructure
    - terraform init -backend-config="bucket=$TF_STATE_BUCKET" -backend-config="key=$TF_STATE_KEY"
  script:
    - terraform plan -out=tfplan
  artifacts:
    paths:
      - infrastructure/tfplan
    expire_in: 1 week

plan_infrastructure:
  extends: .terraform_plan
  needs: []

deploy_infrastructure:
  stage: deploy
  image: hashicorp/terraform:latest
  needs:
    - job: build_lambda
      artifacts: true
    - job: plan_infrastructure
      artifacts: true
  variables:
    AWS_ACCESS_KEY_ID: $TERRAFORM_AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $TERRAFORM_AWS_SECRET_ACCESS_KEY
    AWS_REGION: "us-east-1"
  before_script:
    - cd infrastructure
    - terraform init -backend-config="bucket=$TF_STATE_BUCKET" -backend-config="key=$TF_STATE_KEY"
  script:
    - terraform apply -auto-approve tfplan
  environment:
    name: production
  when: manual

The build job creates Lambda ZIP packages and stores them as artifacts. The plan job runs Terraform plan and saves the plan file, which includes changes to Lambda functions, DynamoDB tables, and other infrastructure. The deploy job requires both artifacts, ensuring the Lambda packages are ready before Terraform tries to read them. When Terraform apply runs, it updates Lambda function code if the source_code_hash has changed, and updates DynamoDB table schemas when attributes or indexes change.

I set the deploy job to when: manual so it requires explicit approval before running. This prevents accidental deployments and gives you a chance to review the Terraform plan before applying it.

6. Use Terraform Modules for Multi-Environment Deployments

I structure my Terraform code as a reusable module that gets called for each environment. This ensures that staging and production use identical infrastructure configurations, with only environment-specific variables changing. The module encapsulates all the resources: API Gateway, Lambda functions, DynamoDB tables, Route53 zones, and IAM roles.

Each environment has its own directory with a main.tf that calls the module with environment-specific variables. This keeps the module code DRY and makes it easy to add new environments or update all environments consistently.

module "infrastructure" {
  source = "../modules/infrastructure"

  project_name = "myapi"
  environment  = "staging"
  domain_name  = "api-staging.example.com"
  
  additional_domains = []
}

output "api_endpoint" {
  value = module.infrastructure.api_endpoint
}

In the CI/CD pipeline, I run Terraform for each environment separately. This allows you to deploy to staging first, verify it works, then deploy to production. The pipeline uses the same module code for both environments, ensuring consistency.

stages:
  - build
  - plan
  - deploy

build_lambda:
  stage: build
  script:
    - echo "Building Lambda functions..."
    - ./build.sh
    - mkdir -p lambda-packages
    - cd /your/source/dir/function-one && zip -r "$CI_PROJECT_DIR/lambda-packages/function-one.zip" .
    - cd /your/source/dir/function-two && zip -r "$CI_PROJECT_DIR/lambda-packages/function-two.zip" .
  artifacts:
    paths:
      - lambda-packages/
    expire_in: 1 hour

.terraform_plan_template:
  stage: plan
  image: hashicorp/terraform:latest
  variables:
    AWS_ACCESS_KEY_ID: $TERRAFORM_AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $TERRAFORM_AWS_SECRET_ACCESS_KEY
    AWS_REGION: "us-east-1"
  before_script:
    - cd infrastructure/environments/$ENVIRONMENT
    - terraform init -backend-config="bucket=$TF_STATE_BUCKET" -backend-config="key=$ENVIRONMENT/terraform.tfstate"
  script:
    - terraform plan -out=tfplan
  artifacts:
    paths:
      - infrastructure/environments/$ENVIRONMENT/tfplan
    expire_in: 1 week

plan_staging:
  extends: .terraform_plan_template
  variables:
    ENVIRONMENT: "staging"
  needs: []

plan_production:
  extends: .terraform_plan_template
  variables:
    ENVIRONMENT: "production"
  needs: []

.deploy_template:
  stage: deploy
  image: hashicorp/terraform:latest
  variables:
    AWS_ACCESS_KEY_ID: $TERRAFORM_AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $TERRAFORM_AWS_SECRET_ACCESS_KEY
    AWS_REGION: "us-east-1"
  before_script:
    - cd infrastructure/environments/$ENVIRONMENT
    - terraform init -backend-config="bucket=$TF_STATE_BUCKET" -backend-config="key=$ENVIRONMENT/terraform.tfstate"
  script:
    - terraform apply -auto-approve tfplan
  when: manual

deploy_staging:
  extends: .deploy_template
  needs:
    - job: build_lambda
      artifacts: true
    - job: plan_staging
      artifacts: true
  variables:
    ENVIRONMENT: "staging"
  environment:
    name: staging

deploy_production:
  extends: .deploy_template
  needs:
    - job: build_lambda
      artifacts: true
    - job: plan_production
      artifacts: true
  variables:
    ENVIRONMENT: "production"
  environment:
    name: production

The pipeline builds Lambda artifacts once, then runs Terraform plan for both staging and production in parallel. Each environment has its own state file, so plans don't interfere with each other. When you're ready to deploy, you can approve staging first, verify it works, then approve production. Both environments use the same module code and Lambda artifacts, ensuring they stay consistent.

This approach makes it easy to maintain consistency between environments. If you update the module code, all environments will use the same changes. If you need environment-specific differences, you can pass different variables to the module without duplicating the infrastructure code.
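
For example, a production environment calls the same module with its own values (the domain name here is illustrative):

module "infrastructure" {
  source = "../modules/infrastructure"

  project_name = "myapi"
  environment  = "production"
  domain_name  = "api.example.com"

  additional_domains = []
}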

Summary

This serverless stack handles the common requirements for building APIs: request routing, TLS termination, data persistence, and automated deployments. It doesn't include features like API versioning, rate limiting, or request validation; you can add those if needed, but for most internal APIs and straightforward web services, this foundation is sufficient.

Because everything is defined in Terraform, I can create identical environments for staging and production by changing a few variables. The full configuration is typically 200-300 lines of HCL, which is small enough to understand completely and maintain over time.

Monthly cost for a small API is around $10-20: a few dollars for API Gateway requests, a few dollars for Lambda invocations, and a few dollars for DynamoDB storage and requests. Route53 and ACM are essentially free for small projects. The main cost driver is traffic volume, which scales linearly with usage.

Need help setting this up? If you'd like assistance deploying your serverless stack on AWS or building out your infrastructure, get in touch. I'm happy to discuss your project and provide a free estimate.

Full Terraform Configuration

Here's the complete Terraform configuration split into two parts: the environment configuration that calls the module, and the module itself that contains all the infrastructure resources.

Environment Configuration (e.g., environments/staging/main.tf):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 6.10.0"
    }
  }
  backend "s3" {
    bucket = "your-terraform-state-bucket"
    key    = "staging/terraform.tfstate"
    region = "us-east-1"
  }
}

provider "aws" {
  region = "us-east-1"
}

variable "project_name" {
  description = "Project name"
  type        = string
  default     = "myapi"
}

variable "environment" {
  description = "Environment name"
  type        = string
}

variable "domain_name" {
  description = "Domain name for the API"
  type        = string
}

variable "lambda_handler" {
  description = "Lambda handler name"
  type        = string
  default     = "handler"
}

module "infrastructure" {
  source = "../../modules/infrastructure"

  project_name   = var.project_name
  environment    = var.environment
  domain_name    = var.domain_name
  lambda_handler = var.lambda_handler
}

output "api_endpoint" {
  value = module.infrastructure.api_endpoint
}

output "domain_name" {
  value = module.infrastructure.domain_name
}

output "dynamodb_table_name" {
  value = module.infrastructure.dynamodb_table_name
}

Infrastructure Module (e.g., modules/infrastructure/main.tf):

variable "project_name" {
  description = "Project name"
  type        = string
}

variable "environment" {
  description = "Environment name"
  type        = string
}

variable "domain_name" {
  description = "Domain name for the API"
  type        = string
}

variable "lambda_handler" {
  description = "Lambda handler name"
  type        = string
}

locals {
  project_name = var.project_name
  environment  = var.environment
  domain_name  = var.domain_name
}

resource "aws_route53_zone" "main" {
  name = var.domain_name
}

resource "aws_acm_certificate" "main" {
  domain_name       = var.domain_name
  validation_method = "DNS"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_route53_record" "cert_validation" {
  for_each = {
    for dvo in aws_acm_certificate.main.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  zone_id         = aws_route53_zone.main.zone_id
}

resource "aws_acm_certificate_validation" "main" {
  certificate_arn         = aws_acm_certificate.main.arn
  validation_record_fqdns = [for record in aws_route53_record.cert_validation : record.fqdn]
}

resource "aws_dynamodb_table" "main" {
  name         = "${local.project_name}-${local.environment}-data"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"

  attribute {
    name = "id"
    type = "S"
  }

  ttl {
    attribute_name = "ttl"
    enabled        = true
  }
}

resource "aws_iam_role" "lambda_role" {
  name = "${local.project_name}-${local.environment}-lambda-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "lambda.amazonaws.com"
      }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "lambda_basic" {
  role       = aws_iam_role.lambda_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

resource "aws_iam_role_policy" "lambda_dynamodb" {
  name = "${local.project_name}-${local.environment}-lambda-dynamodb"
  role = aws_iam_role.lambda_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem",
        "dynamodb:Query",
        "dynamodb:Scan"
      ]
      Resource = [
        aws_dynamodb_table.main.arn,
        "${aws_dynamodb_table.main.arn}/index/*"
      ]
    }]
  })
}

resource "aws_lambda_function" "create_item" {
  function_name = "create-item-${local.environment}"
  role          = aws_iam_role.lambda_role.arn
  handler       = var.lambda_handler
  runtime       = "provided.al2023"
  architectures = ["arm64"]

  filename         = "${path.module}/../../lambda-packages/create-item.zip"
  source_code_hash = filebase64sha256("${path.module}/../../lambda-packages/create-item.zip")

  environment {
    variables = {
      TABLE_NAME = aws_dynamodb_table.main.name
      LOG_LEVEL  = "info"
    }
  }
}

resource "aws_lambda_function" "get_item" {
  function_name = "get-item-${local.environment}"
  role          = aws_iam_role.lambda_role.arn
  handler       = var.lambda_handler
  runtime       = "provided.al2023"
  architectures = ["arm64"]

  filename         = "${path.module}/../../lambda-packages/get-item.zip"
  source_code_hash = filebase64sha256("${path.module}/../../lambda-packages/get-item.zip")

  environment {
    variables = {
      TABLE_NAME = aws_dynamodb_table.main.name
      LOG_LEVEL  = "info"
    }
  }
}

resource "aws_apigatewayv2_api" "main" {
  name          = "${local.project_name}-${local.environment}-api"
  protocol_type = "HTTP"
}

resource "aws_apigatewayv2_stage" "main" {
  api_id      = aws_apigatewayv2_api.main.id
  name        = local.environment
  auto_deploy = true
}

resource "aws_apigatewayv2_integration" "create_item" {
  api_id           = aws_apigatewayv2_api.main.id
  integration_type = "AWS_PROXY"
  integration_uri  = aws_lambda_function.create_item.invoke_arn
}

resource "aws_apigatewayv2_integration" "get_item" {
  api_id           = aws_apigatewayv2_api.main.id
  integration_type = "AWS_PROXY"
  integration_uri  = aws_lambda_function.get_item.invoke_arn
}

resource "aws_apigatewayv2_route" "create_item_post" {
  api_id    = aws_apigatewayv2_api.main.id
  route_key = "POST /items"
  target    = "integrations/${aws_apigatewayv2_integration.create_item.id}"
}

resource "aws_apigatewayv2_route" "get_item_get" {
  api_id    = aws_apigatewayv2_api.main.id
  route_key = "GET /items/{id}"
  target    = "integrations/${aws_apigatewayv2_integration.get_item.id}"
}

resource "aws_lambda_permission" "create_item_api_gw" {
  statement_id  = "AllowExecutionFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.create_item.function_name
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_apigatewayv2_api.main.execution_arn}/*/*"
}

resource "aws_lambda_permission" "get_item_api_gw" {
  statement_id  = "AllowExecutionFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.get_item.function_name
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_apigatewayv2_api.main.execution_arn}/*/*"
}

resource "aws_apigatewayv2_domain_name" "main" {
  domain_name = var.domain_name

  domain_name_configuration {
    certificate_arn = aws_acm_certificate_validation.main.certificate_arn
    endpoint_type   = "REGIONAL"
    security_policy = "TLS_1_2"
  }
}

resource "aws_apigatewayv2_api_mapping" "main" {
  api_id      = aws_apigatewayv2_api.main.id
  domain_name = aws_apigatewayv2_domain_name.main.id
  stage       = aws_apigatewayv2_stage.main.id
}

resource "aws_route53_record" "api" {
  name    = var.domain_name
  type    = "A"
  zone_id = aws_route53_zone.main.zone_id

  alias {
    name                   = aws_apigatewayv2_domain_name.main.domain_name_configuration[0].target_domain_name
    zone_id                = aws_apigatewayv2_domain_name.main.domain_name_configuration[0].hosted_zone_id
    evaluate_target_health = false
  }
}

resource "aws_iam_user" "cicd_user" {
  name = "${local.project_name}-${local.environment}-cicd-user"
}

resource "aws_iam_user_policy_attachment" "cicd_terraform_policy" {
  user       = aws_iam_user.cicd_user.name
  policy_arn = "arn:aws:iam::aws:policy/PowerUserAccess"
}

resource "aws_iam_access_key" "cicd_user" {
  user = aws_iam_user.cicd_user.name
}

output "api_endpoint" {
  description = "API Gateway endpoint URL"
  value       = aws_apigatewayv2_api.main.api_endpoint
}

output "domain_name" {
  description = "Custom domain name"
  value       = aws_apigatewayv2_domain_name.main.domain_name
}

output "cicd_access_key_id" {
  description = "AWS Access Key ID for CI/CD Terraform operations"
  value       = aws_iam_access_key.cicd_user.id
  sensitive   = true
}

output "cicd_secret_access_key" {
  description = "AWS Secret Access Key for CI/CD Terraform operations"
  value       = aws_iam_access_key.cicd_user.secret
  sensitive   = true
}

output "dynamodb_table_name" {
  description = "DynamoDB table name"
  value       = aws_dynamodb_table.main.name
}

output "route53_zone_name_servers" {
  description = "Name servers for the Route53 zone"
  value       = aws_route53_zone.main.name_servers
}