Serverless API

The Lambda API template defines an API using an AWS API Gateway HTTP API. It is designed for rapid development with minimal configuration: API routes are defined using folders and source files, similar to the way routes are defined in Next.js.



The following key properties need to be configured for this template:

  • Lambda Name Prefix: The prefix to be used by Lambda names generated by this template for the defined routes.
  • API Domain: The domain the API should be deployed to. This domain must be configured before the API endpoint can be called.
  • Hosted Zone Domain: A Route 53 hosted zone that allows adding the API Domain as a record. The hosted zone domain must be the API domain itself or one of its parent domains. For more details, please check Hosted Zone Configuration in the Goldstack documentation.
  • CORS Header: An optional CORS header that enables a UI hosted on a different domain to access this API. For instance, for a UI deployed to a different domain than the API, that domain should be supplied as the CORS header. To learn more about CORS, see Cross-Origin Resource Sharing (CORS) in the MDN docs.

Getting Started

1. Project Setup

Before using this template, you need to configure the project. For this, please see the Getting Started Guide on the Goldstack documentation.

2. Setup Infrastructure

To stand up the infrastructure for this module, find the directory for this module in the packages/ folder and navigate to it on the command line. Then identify the name of the deployment you defined in the Goldstack configuration tool. This can be found in the packages/[moduleName]/goldstack.json file: look for the "deployments" property and, within it, the "name" of the first deployment. The name should be either dev or prod.

In order to stand up the infrastructure, run the following command:

yarn infra up [deploymentName]

This will be either yarn infra up dev or yarn infra up prod depending on your choice of deployment. Note that running this command can take a while.

Note that your API will not work yet. It first needs to be deployed as per instructions below.

3. Deploy Application

Once the infrastructure is successfully set up in AWS using yarn infra up, we can deploy the module. For this, simply run the following command:

yarn deploy [deploymentName]

This will either be yarn deploy dev or yarn deploy prod depending on your choice of deployment during project configuration.

You should now be able to access your API. The domain under which the API is deployed is configured in goldstack.json under "deployments[*].apiDomain". You can access this API domain with a browser since the default API provided in the template allows for GET requests to the root.


Extending the API

The source code for the API is defined in the src/ folder. The entry point for defining new routes is src/routes. The easiest way to get started extending the API is to modify or add routes by adding new folders and files. The template will automatically update the infrastructure configuration for the new routes, such as adding routes to the API Gateway or defining new Lambda functions. Simply run yarn infra up [deploymentName] after adding or removing routes.

There are a few things to keep in mind when defining new endpoints:

Defining Handlers

When defining a new endpoint in the src/routes folder by adding a new TypeScript source file, a handler needs to be defined in that file. The simplest handler function returns a JSON object:

import { Handler, APIGatewayProxyEventV2 } from 'aws-lambda';

type ProxyHandler = Handler<APIGatewayProxyEventV2, any>;

// eslint-disable-next-line @typescript-eslint/no-unused-vars
export const handler: ProxyHandler = async (event, context) => {
  const message = event.queryStringParameters?.message || 'no message';

  return {
    message: `${message}`,
  };
};

To customise the HTTP response, we can return an object of the type APIGatewayProxyResultV2 (see api-gateway-proxy.d.ts). There is unfortunately little documentation about building these responses specifically for JavaScript, but AWS provides reasonable general documentation about building responses with Lambdas for the HTTP API.

Here is an example of a handler that customises the HTTP response:

import {
  Handler,
  APIGatewayProxyEventV2,
  APIGatewayProxyResultV2,
} from 'aws-lambda';

type ProxyHandler = Handler<APIGatewayProxyEventV2, APIGatewayProxyResultV2>;

// eslint-disable-next-line @typescript-eslint/no-unused-vars
export const handler: ProxyHandler = async (event, context) => {
  return {
    statusCode: 201,
    body: JSON.stringify({
      message: 'Hello World!',
    }),
  };
};

Information about the HTTP request made can be found in the event object. For reference about the information available, see Working with AWS Lambda proxy integrations for HTTP APIs. Note this template uses version 2.0 of the API.
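For instance, a handler can read the path, HTTP method, and headers from the event. The following is a sketch using a simplified subset of the version 2.0 event shape; in the template the event would be typed as APIGatewayProxyEventV2 from aws-lambda:

```typescript
// Simplified subset of the API Gateway HTTP API version 2.0 event
// (see APIGatewayProxyEventV2 in 'aws-lambda' for the full type).
interface HttpApiEvent {
  rawPath: string;
  headers: Record<string, string | undefined>;
  queryStringParameters?: Record<string, string | undefined>;
  requestContext: { http: { method: string } };
}

// Echoes back basic request information read from the event object.
export const handler = async (event: HttpApiEvent) => {
  return {
    statusCode: 200,
    body: JSON.stringify({
      path: event.rawPath,
      method: event.requestContext.http.method,
      userAgent: event.headers['user-agent'],
    }),
  };
};
```

Note that header names in the version 2.0 payload are lowercased by API Gateway, which is why the example reads 'user-agent'.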

Defining Routes

Routes are defined through the names of folders and source folders within the src/routes folder. There are a few rules to keep in mind:

  • Basic Routing: The names of files are used for the names of resources. For instance, src/routes/resource.ts will be available under /resource.
  • Subfolders: The names of folders are used in the path to resources. For instance, src/routes/group/resource.ts will be available under /group/resource.
  • Indices: For defining a route that matches / within a folder (or the root of the API), a source file with the name $index.ts can be defined. For instance, src/routes/group/$index.ts will be available under /group.
  • Default Fallback: To define a fallback that is called when no route is matched, define a source file with the name $default.ts. There should only be one $default.ts file in the API. This will match all paths that are not covered by other routes.
  • Path Parameters: Parameters in paths are supported using the syntax {name}. For instance, src/routes/user/{name}.ts will make the parameter name available in the endpoint. Parameters are also supported as folder names.
  • Greedy Paths: If a parameter should match multiple resource levels, it can be defined as {greedy+}. For instance, src/routes/group/{greedy+}.ts will match all paths under group/.
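As a sketch of the path-parameter rule above, a handler placed at src/routes/user/{name}.ts could read the parameter like this (types simplified here; in the template the event would be typed as APIGatewayProxyEventV2):

```typescript
// Hypothetical handler for src/routes/user/{name}.ts: the {name}
// segment of the path is exposed through event.pathParameters.
interface RouteEvent {
  pathParameters?: Record<string, string | undefined>;
}

export const handler = async (event: RouteEvent) => {
  const name = event.pathParameters?.name || 'unknown';
  return {
    statusCode: 200,
    body: JSON.stringify({ greeting: `Hello ${name}!` }),
  };
};
```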

Updating Infrastructure

Note that after defining a new route, the infrastructure will need to be updated using yarn infra up [deployment name]. This is because new or changed routes will require changes to the API Gateway and/or the Lambda functions that are defined.

Writing Tests

The Goldstack template for this module contains an example of an integration test for the API. Tests are easy to write and very fast to run by utilising a custom Express.js server. Since it is also very cheap to create instances of the API on AWS infrastructure, more sophisticated setups can run tests directly against an API deployed on AWS.

Here is an example of a local test:

import getPort from 'find-free-port';
import fetch from 'node-fetch';
import {
  startServer,
  StartServerResult,
} from '@goldstack/utils-aws-http-api-local';

describe('Should create API', () => {
  let port: undefined | number = undefined;
  let server: undefined | StartServerResult = undefined;

  beforeAll(async () => {
    port = await new Promise<number>((resolve, reject) => {
      getPort(
        process.env.TEST_SERVER_PORT || '50321',
        (err: any, p1: number) => {
          if (err) {
            reject(err);
            return;
          }
          resolve(p1);
        }
      );
    });
    server = await startServer({
      port: `${port}`,
      routesDir: './src/routes',
    });
  });

  test('Should receive response and support parameters', async () => {
    const res = await fetch(`http://localhost:${port}/echo?message=abc`);
    const response = await res.json();
    expect(response.message).toEqual('abc');
  });

  afterAll(async () => {
    if (server) {
      await server.shutdown();
    }
  });
});
Best Practices

  • Keep Lambdas Lightweight: This template will package Lambdas only with the code that they require. Aim to minimise the number of dependencies that are imported into each handler function. The smaller the Lambda functions are, the less noticeable cold starts will be. Cold starts using Lambdas packaged by this module can be as low as 150 ms (with almost all of that time spent on AWS getting the basic infrastructure for the Lambda up and running).
  • Think RESTful: This module does not limit in any way which kinds of APIs can be created. However, it is often advisable to develop APIs in a RESTful way. For a reference on best practices, see RESTful web API design by Microsoft.


Infrastructure

All infrastructure for this module is defined in Terraform. You can find the Terraform files for this template in the directory [moduleDir]/infra/aws. You can define multiple deployments for this template, for instance for development, staging, and production environments.

If you configured AWS deployment before downloading your project, the deployments and their respective configurations are defined in [moduleDir]/goldstack.json.

The configuration tool will define one deployment. This will be either dev or prod depending on your choice during project configuration. In the example goldstack.json below, a deployment with the name dev is defined.

  "$schema": "./schemas/package.schema.json",
  "name": "...",
  "template": "...",
  "templateVersion": "...",
  "configuration": {},
  "deployments": [
      "name": "dev",
      "awsRegion": "us-west-2",
      "awsUser": "awsUser",
      "configuration": {

Infrastructure Commands

Infrastructure commands for this template can be run using yarn. There are seven commands in total:

  • yarn infra up: For standing up infrastructure.
  • yarn infra init: For initialising Terraform.
  • yarn infra plan: For running Terraform plan.
  • yarn infra apply: For running Terraform apply.
  • yarn infra destroy: For destroying all infrastructure using Terraform destroy.
  • yarn infra upgrade: For upgrading the Terraform versions (supported by the template). To upgrade to an arbitrary version, use yarn infra terraform.
  • yarn infra terraform: For running arbitrary Terraform commands.

For each command, the deployment it should be applied to must be specified.

yarn infra [command] [deploymentName]

For instance, to stand up the infrastructure for the dev deployment, the following command would need to be issued:

yarn infra up dev

Generally you will only need to run yarn infra up. However, if you are familiar with Terraform and want more fine-grained control over the deployment of your infrastructure, you can also use the other commands as required.

Note that for running yarn infra terraform, you will need to specify which command line arguments you want to provide to Terraform. By default, no extra arguments are provided:

yarn infra terraform [deployment] plan

If extra arguments are needed, such as variables, you can use the --inject-variables option, such as for running terraform plan:

yarn infra terraform [deployment] --inject-variables plan

If you want to interact with the remote backend, you can also provide the --inject-backend-config option, such as for running terraform init:

yarn infra terraform [deployment] --inject-backend-config init

Customizing Terraform

Goldstack templates make it very easy to customize infrastructure to your specific needs. The easiest way to do this is to simply edit the *.tf files in the infra/aws folder. You can make the changes you need and then run yarn infra up [deploymentName] to apply the changes.

The infra/aws folder contains a file that defines the variables required for your deployment, for instance the domain name for a website. The values for these variables are set in the module's goldstack.json file in the "configuration" properties. There is one global configuration property that applies to all deployments, and each deployment also has its own configuration property. In order to add a new variable, add it to the Terraform variable definitions and then add it to the configuration for your template or to the configurations for the deployments.

Note that due to JavaScript and Terraform using different conventions for naming variables, Goldstack applies a basic transformation to variable names. Camel-case variable names are converted to valid Terraform variable names by replacing every capital letter with an underscore followed by its lowercase equivalent. For instance:

myVariableName in the Goldstack configuration will translate to the Terraform variable my_variable_name.
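The transformation can be sketched in a few lines of TypeScript. This is an illustration of the naming rule, not the code Goldstack itself uses:

```typescript
// Convert a camel-case Goldstack configuration key into the
// snake_case name expected by Terraform: every capital letter is
// replaced by an underscore followed by its lowercase equivalent.
export const toTerraformName = (camelCase: string): string =>
  camelCase.replace(/[A-Z]/g, (letter) => `_${letter.toLowerCase()}`);
```

For example, toTerraformName('myVariableName') returns 'my_variable_name', while names without capital letters pass through unchanged.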

Terraform State

In order to manage your infrastructure, Terraform maintains a state for each deployment. The state is used to calculate required changes when the infrastructure is updated and to destroy the infrastructure if it is no longer required. By default, Goldstack stores the Terraform state in the infra/aws folder as simple files.

This works well for deploying infrastructure from your local development environment but is not a good choice when building a CI/CD pipeline for the infrastructure definition. In that case, it is better to define Remote State. A popular choice many projects adopt here is to store the state in an S3 bucket. Please see the Terraform documentation for further details.


Deployment

This template can be packaged up and deployed to the deployments specified in goldstack.json. Note that deployment will only work after the infrastructure for the respective deployment has been stood up. To deploy your package, run the following script:

yarn deploy [deploymentName]

Guides and How To

Adding environment variables

Environment variables are defined in the Terraform source code for this template, specifically in the infra/aws folder in the resource "aws_lambda_function" "this". Note that all Lambdas share the same environment variables. By default, a few environment variables are specified:

environment {
  variables = {
    CORS         = var.cors
    NODE_OPTIONS = "--enable-source-maps"
  }
}

Add your environment variables into the variables map:

environment {
  variables = {
    CORS         = var.cors
    NODE_OPTIONS = "--enable-source-maps"
    YOUR_ENV_VAR = "your env var value"
  }
}

Usually environment variables should have different values depending on which environment the server is deployed to. This can be accomplished using Terraform variables. Change your variable declaration to the following:

YOUR_ENV_VAR = var.my_env

Then go into the file in infra/aws/ that defines the Terraform variables and add the following definition:

variable "my_env" {
  description = "My environment variable"
  type = string

And finally add this variable to all deployment configurations in goldstack.json:

      "configuration": {
        "lambdaName": "my-lambda",
        "apiDomain": "",
        "hostedZoneDomain": "",
        "cors": "",
        "myEnv": "Value for deployment"

Note that the Terraform variable my_env translates to myEnv in the JSON definition (remove each underscore and make the character that follows it uppercase).

Lastly, to support local development make sure to define the variable correctly in all scripts in package.json. Specifically, you may want to define them for "test", "test-ci" and "watch".

    "test": "MY_ENV=localvalue jest --passWithNoTests --watch --config=jest.config.js",
    "test-ci": "MY_ENV=localvalue jest --passWithNoTests --config=jest.config.js --detectOpenHandles",
    "watch": "PORT=8731 MY_ENV=localvalue nodemon --config nodemon.json --exec 'yarn node dist/src/local.js'"

Note that for credentials and other values that should not be committed to source code, it may be better to store these in AWS Secrets Manager and retrieve them using the AWS SDK based on the process.env.GOLDSTACK_DEPLOYMENT value provided.

It is also possible to provide the value of Terraform variables through environment variables during build time. For instance, if you have defined the variable my_env, simply provide the environment variable MY_ENV when calling yarn infra.

MY_ENV=value yarn infra up prod

This works very well in combination with secrets for GitHub Actions:

- name: Update API infra
  run: |
    yarn workspace my-api infra up prod
  env:
    MY_ENV: ${{secrets.MY_ENV}}
    AWS_USER_NAME: goldstack-prod
    AWS_DEFAULT_REGION: us-west-2

Changing Esbuild behaviour


Provide an esbuild.config.json in the ./packages/serverless-api folder of your generated project, with an example config like this:

  "platform": "node"

This object will be used in the build process for every serverless function.

If you want to change the esbuild config for a specific function only, an esbuild config can be added for that function.

Priority for the resulting esbuild config, from highest to lowest: the function-specific config, then the project-wide esbuild.config.json, then the template defaults.

Analysing Generated Bundles

It is often useful to analyse the bundles generated by esbuild to optimise their size. For this, the template will always provide esbuild metafiles in the ./distLambda/zips folder. These can be analysed with a tool such as esbuild-visualizer.

Simply install this tool:

npm i -g esbuild-visualizer

And then analyse any of the metafiles generated:

esbuild-visualizer --metadata ./distLambda/zips/[your function name].meta.json

This will yield a stats.html file you can view with any web browser.


Troubleshooting and Frequently Asked Questions

DNS Name for API Cannot be resolved

After running yarn infra up [deployment] and yarn deploy [deployment], it is not possible to call the API at https://[configuration.apiDomain]. An error such as Address cannot be resolved or DNSProbe failed is reported.

This is caused by changes to the DNS hosted zone needing some time to propagate through the DNS network. Wait 10-30 minutes and it should be possible to call the API without problems. To validate your DNS name has been configured correctly, go to the AWS Route 53 console and check that there is a correct record in the hosted zone you have selected. There should be an A record such as the following:

[apiDomain].[hostedZone] A [id]
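If you would rather check propagation from a script, Node's built-in dns module can be used. This is a small sketch; pass your configured API domain as the argument:

```typescript
import { resolve4 } from 'node:dns/promises';

// Returns true once the domain resolves to at least one IPv4 address,
// false while DNS changes are still propagating (or the name is wrong).
export const domainResolves = async (domain: string): Promise<boolean> => {
  try {
    const addresses = await resolve4(domain);
    return addresses.length > 0;
  } catch {
    return false;
  }
};
```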

Concurrent Modification Error when Creating Infrastructure

The following error may sometimes be displayed when running yarn infra up [deploymentName] for the first time. This is due to an error in the way Terraform schedules the creation of the resources. The easy solution to this problem is simply running yarn infra up [deploymentName] again.

Error: error creating API Gateway v2 route: ConflictException: Unable to complete operation due to concurrent modification.
Please try again later.

See Issue #40

Security Hardening

This template requires further security hardening when deployed in critical production applications. Specifically, the Lambdas are given the policy arn:aws:iam::aws:policy/AdministratorAccess, which grants them access to all resources on the AWS account, including the ability to create and destroy infrastructure. It is therefore recommended to grant the Lambdas only the rights to the resources they need, such as read and write permissions for an S3 bucket. This can be modified in infra/aws/ in the resource "aws_iam_role_policy_attachment" "lambda_admin_role_attach".
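As a sketch, the administrator policy attachment could be replaced with a narrowly scoped inline policy. The role reference and bucket name below are illustrative and need to be adapted to the resource names in your infra/aws definitions:

```
resource "aws_iam_role_policy" "lambda_s3_access" {
  name = "lambda-s3-access"
  role = aws_iam_role.lambda.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["s3:GetObject", "s3:PutObject"]
        Resource = "arn:aws:s3:::my-bucket/*"
      }
    ]
  })
}
```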

Note that in this template all Lambdas for the API share the same permissions. This is by design, to simplify setup and management of the infrastructure, in the understanding that the API forms one integrated element of a system. If there are concerns about access to resources being shared by multiple Lambdas, another API can be created.

© 2024 Pureleap Pty. Ltd. and Contributors