Storing Serverless Functions on AWS ECR via GitLab

Sebastien Goasguen

Jul 31, 2019

In a Knative environment, serverless workloads are packaged as containers. Functions are wrapped into execution runtimes, like the Knative Lambda Runtime, during a container build. Since the function artifacts are containers, they can be stored in any container registry.

Recently we had to deploy TriggerMesh in an environment where the container registry was AWS ECR. We usually deploy a private registry, but in this case we had to use ECR.

In addition, the function code was stored in GitLab and the user wanted to build the container images directly via GitLab CI and then push them to AWS ECR.

In this post we show you how we did it, using Kaniko in GitLab CI and configuring an ECR repository to store the function images in.

Create an ECR repository

Creating repositories where you can store your container images on AWS is straightforward. You go to your AWS console, browse to the ECR service and create a repository. The URL of the repository is region-specific and of the form:

<aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/<my_image>
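
If you prefer the command line, the same repository can be created with the AWS CLI. A minimal sketch, assuming a repository named phoenix (the image name we use later in this post) in us-east-1:

aws ecr create-repository --repository-name phoenix --region us-east-1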

Pushing to this repository endpoint can be done with any container client like the Docker client.
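
For example, a manual push from a workstation could look like the following sketch (assuming AWS CLI v2 for get-login-password and a locally built image tagged phoenix:v1; the account ID and region are placeholders):

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com
docker tag phoenix:v1 <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/phoenix:v1
docker push <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/phoenix:v1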

However, to be able to push in an automated and secure manner you should do this push as part of your continuous integration (CI) configuration.

Building Containers with GitLab CI

In GitLab CI, you can build containers and use the GitLab registry thanks to Kaniko. To do this you simply define a build stage similar to:

build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  only:
    - tags

In the snippet above, you can see that the Kaniko executor image is used: the script writes the registry credentials to Kaniko's Docker config, runs the container build, and pushes the resulting image to GitLab's internal container registry.

Pushing to AWS ECR via Kaniko

To push to ECR, we need to use a Docker config.json file with a credHelpers section that looks like this:

{
  "credHelpers": {
    "aws_account_id.dkr.ecr.region.amazonaws.com": "ecr-login"
  }
}

And to authenticate properly to ECR, Kaniko needs to be able to access a file containing AWS credentials. The tiny trick here is to write an AWS credentials file containing API keys, in the same format as ~/.aws/credentials:

[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

Then encode it in base64 with cat credentials | base64 and store the result as an environment variable (we call it CREDENTIALS) in GitLab CI.
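
A minimal sketch of that encoding step (the -w0 flag of GNU coreutils base64 disables line wrapping, which keeps the value on a single line for the CI/CD variable; the variable should be named CREDENTIALS to match the build stage below):

cat credentials | base64 -w0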

Your build stage can then be modified like so:

stages:
  - build

variables:
  REPO_URL: <account_id>.dkr.ecr.us-east-1.amazonaws.com

build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug-v0.10.0
    entrypoint: [""]
  script:
    # Recreate the AWS credentials file from the base64-encoded CI variable
    - /busybox/mkdir -p /root/.aws
    - /busybox/echo $CREDENTIALS | base64 -d > /root/.aws/credentials
    # Tell Kaniko to use the ECR credential helper for our repository
    - /busybox/echo "{\"credHelpers\":{\"$REPO_URL\":\"ecr-login\"}}" > /kaniko/.docker/config.json
    # Build the image and push it to ECR
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $REPO_URL/phoenix:$CI_COMMIT_TAG

From then on, every time you modify your function code, GitLab CI will kick in, build the container, and push it to your AWS ECR repository. From there you will be able to create your Knative service, as sketched below.
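
As a rough sketch (not part of the original pipeline), a Knative Service pointing at the image pushed to ECR could be created with kubectl, using the serving.knative.dev/v1 API of recent Knative releases; the service name, account ID, and tag are placeholders:

kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: phoenix
spec:
  template:
    spec:
      containers:
        - image: <account_id>.dkr.ecr.us-east-1.amazonaws.com/phoenix:<tag>
EOF

Note that the cluster nodes (or an image pull secret attached to the service account) need permission to pull from the ECR repository.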
