Published on: July 31, 2020
8 min read
Build containers with the AWS Fargate Custom Executor for GitLab Runner and AWS CodeBuild
AWS Fargate does not allow containers to run in privileged mode. This means Docker-in-Docker (DinD), which enables building and running container images inside containers, does not work with the AWS Fargate Custom Executor driver for GitLab Runner. The good news is that users don't have to be blocked by this: they can take a cloud-native approach and build containers by integrating AWS CodeBuild seamlessly into the CI/CD pipeline.
We provide in-depth instructions on how to autoscale GitLab CI on AWS Fargate in GitLab Runner's documentation. In this blog post, we explain how to instrument CI containers and source repositories to trigger AWS CodeBuild and use it to build container images.
How distinct CI workloads run on Fargate.
The picture above illustrates distinct GitLab CI workloads running on Fargate. The container identified by ci-coordinator (001) is running a typical CI job that does not build containers, so it requires no additional configuration or dependencies. The second container, ci-coordinator (002), illustrates the scenario addressed in this post: the CI container includes the AWS CLI in order to send content to an Amazon S3 bucket, trigger the AWS CodeBuild job, and fetch logs.
Once the prerequisites from that documentation are configured, you can dive into the six-step process to configure CI containers and source repositories to trigger AWS CodeBuild and use it to build container images.
Step 1: Create an Amazon S3 Bucket
1. Open the Amazon S3 console and click Create bucket.
2. Enter a bucket name (ci-container-build-bucket will be used as example) and select your preferred region, then click Create bucket.
3. Open the newly created bucket and click Create folder.
4. Enter gitlab-runner-builds as the folder name and click Save.
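If you prefer the AWS CLI over the console, the same bucket and folder can be created with commands along these lines (a sketch; replace <REGION> with your region, and note that us-east-1 does not accept a LocationConstraint):

# Create the bucket (the LocationConstraint is required outside us-east-1)
aws s3api create-bucket --bucket ci-container-build-bucket \
    --region <REGION> --create-bucket-configuration LocationConstraint=<REGION>

# S3 "folders" are just key prefixes, so creating an empty object is enough
aws s3api put-object --bucket ci-container-build-bucket --key gitlab-runner-builds/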
Step 2: Create an AWS CodeBuild Project
1. Sign in to the AWS Management Console and select Services in the top menu.
2. Select CodeBuild in the Developer Tools section.
3. Click Create build project.
4. In Project Name, enter ci-container-build-project.
5. In Source provider, select Amazon S3.
6. In Bucket, select the ci-container-build-bucket created in step one.
7. In S3 object key, enter gitlab-runner-builds/build.zip.
8. In Environment image, select Managed image.
9. In Operating system, select your preferred OS from the available options.
10. In Runtime(s), choose Standard.
11. In Image, select aws/codebuild/standard:4.0.
12. In Image version, select Always use the latest image for this runtime version.
13. In Environment type, select Linux.
14. Check the Privileged flag, since the project will build Docker images.
15. In Service role, select New service role and note the suggested Role name.
16. In Build specifications, select Use a buildspec file.
17. Click Create build project.
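To confirm the project was created as expected, you can query it with the AWS CLI (assuming your local credentials are already configured):

# Print the project's source and environment configuration
aws codebuild batch-get-projects --names ci-container-build-project \
    --query 'projects[].{source: source, environment: environment}'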
Step 3: Build the CI container image
As stated in Autoscaling GitLab CI on AWS Fargate, a custom container image is required to run GitLab CI jobs on Fargate. Since the solution relies on communicating with S3 and CodeBuild, you'll need the AWS CLI tool available in the CI container. Install the zip tool as well to make the S3 uploads smoother. For an Ubuntu-based container, add the lines below to the CI container's Dockerfile:
# Install zip/unzip for packaging sources, then the AWS CLI v2
RUN apt-get update -qq -y \
    && apt-get install -qq -y curl unzip zip \
    && curl -Lo awscliv2.zip https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip \
    && unzip awscliv2.zip \
    && ./aws/install
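After rebuilding the CI container image, a quick sanity check confirms the CLI is available (ci-coordinator-image is a hypothetical tag for your CI image):

# Build the CI image locally and verify the AWS CLI installation
docker build -t ci-coordinator-image .
docker run --rm ci-coordinator-image aws --version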
Step 4: Create the buildspec.yml file
By default, CodeBuild looks for a file named buildspec.yml in the build source. This file instructs CodeBuild on how to build (and, optionally, publish) the resulting container image. Create the file with the content below and commit it to the git repository (if you changed the Buildspec name when configuring the CodeBuild project in Step 2, name the file accordingly):
version: 0.2

phases:
  install:
    commands:
      # Start the Docker daemon in the background, then wait until it answers
      - nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://127.0.0.1:2375 --storage-driver=overlay2 &
      - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"
  build:
    commands:
      - echo Build started on `date`
      - docker -v
      - docker build -t <IMAGE-TAG> .
      - echo Build completed on `date`
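The buildspec above only builds the image inside the CodeBuild environment. If you also want to publish it, for example to Amazon ECR, a post_build phase along these lines could be appended (a sketch only: it assumes an ECR repository already exists and that the CodeBuild service role has permission to push to it):

  post_build:
    commands:
      # Authenticate Docker against your ECR registry, then tag and push
      - aws ecr get-login-password --region <REGION> | docker login --username AWS --password-stdin <ACCOUNT-ID>.dkr.ecr.<REGION>.amazonaws.com
      - docker tag <IMAGE-TAG> <ACCOUNT-ID>.dkr.ecr.<REGION>.amazonaws.com/<IMAGE-TAG>
      - docker push <ACCOUNT-ID>.dkr.ecr.<REGION>.amazonaws.com/<IMAGE-TAG>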
Step 5: Set up the GitLab CI job
Now we will set up the GitLab CI job that pulls everything together.
The CI job needs to interact with AWS to start CodeBuild jobs, poll their status, and fetch logs. Commands such as aws codebuild and aws logs tackle this, so let's use them in a script, codebuild.sh:
#!/bin/bash

build_project=ci-container-build-project

# Start the CodeBuild job and capture its build ID
build_id=$(aws codebuild start-build --project-name $build_project --query 'build.id' --output text)

# Poll the build status every ten seconds until it leaves IN_PROGRESS
build_status=$(aws codebuild batch-get-builds --ids $build_id --query 'builds[].buildStatus' --output text)
while [ "$build_status" == "IN_PROGRESS" ]
do
    sleep 10
    build_status=$(aws codebuild batch-get-builds --ids $build_id --query 'builds[].buildStatus' --output text)
done

# Print the build logs from CloudWatch Logs in the CI job output
stream_name=$(aws codebuild batch-get-builds --ids $build_id --query 'builds[].logs.streamName' --output text)
group_name=$(aws codebuild batch-get-builds --ids $build_id --query 'builds[].logs.groupName' --output text)
aws logs get-log-events --log-stream-name $stream_name --log-group-name $group_name --query 'events[].message' --output text

echo CodeBuild completed with status $build_status

# Fail the CI job if the container build did not succeed
[ "$build_status" == "SUCCEEDED" ] || exit 1
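Before wiring the script into the pipeline, you can exercise it locally, assuming the AWS CLI on your machine is configured with credentials that may start the build (see Step 6 below):

# Upload a source bundle, then trigger and watch the CodeBuild job
zip build.zip buildspec.yml Dockerfile
aws s3 cp build.zip s3://ci-container-build-bucket/gitlab-runner-builds/build.zip
bash codebuild.sh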
Once steps one through five are complete, the source repository will be structured as follows:
/sample-repository
├── .gitlab-ci.yml
├── buildspec.yml
├── codebuild.sh
├── Dockerfile
├── <APPLICATION-FILES>
To tie the pieces together, add a dockerbuild job to .gitlab-ci.yml:
dockerbuild:
  stage: deploy
  script:
    - zip build.zip buildspec.yml Dockerfile <APPLICATION-FILES>
    - aws configure set default.region <REGION>
    - aws s3 cp build.zip s3://ci-container-build-bucket/gitlab-runner-builds/build.zip
    - bash codebuild.sh
Below are definitions of the terms used in the job:
- <APPLICATION-FILES> is a placeholder for the files required to successfully build the resulting container image using the Dockerfile, e.g., package.json and app.js in a Node.js application.
- Dockerfile is the file used to build the resulting image. Note: It is not the same file used to build the CI container image in Step 3: Build the CI container image.

Step 6: Set up the AWS credentials
The final step is to set up the AWS credentials. As already mentioned, the CI job interacts with AWS through the AWS CLI to perform a number of operations, and to do that, the AWS CLI needs to authenticate as an IAM user with the permissions listed below. We recommend creating a new user and granting it minimal privileges instead of using your personal AWS user account; for the sake of simplicity, this is the approach we suggest for completing this walk-through guide.
This AWS user only needs programmatic access. Don't forget to make note of its Access key ID and Secret access key – they will be needed later. A simple way to grant only the minimal privileges is to create a customer managed policy, since it can be attached directly to the user. A group could also be used to grant the same privileges to more users, but it is not required for running the sample workflow. The policy needs the following statements:
S3
{
    "Effect": "Allow",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::ci-container-build-bucket/gitlab-runner-builds/*"
}

CodeBuild
{
    "Effect": "Allow",
    "Action": ["codebuild:StartBuild", "codebuild:BatchGetBuilds"],
    "Resource": "arn:aws:codebuild:<REGION>:<ACCOUNT-ID>:project/ci-container-build-project"
}

CloudWatch Logs
{
    "Effect": "Allow",
    "Action": "logs:GetLogEvents",
    "Resource": "arn:aws:logs:<REGION>:<ACCOUNT-ID>:log-group:/aws/codebuild/ci-container-build-project:log-stream:*"
}
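For reference, the user, its access key, and the policy can also be created from the command line (a sketch: gitlab-ci-build and ci-container-build-policy are hypothetical names chosen for this walk-through, and <ACCOUNT-ID> must be replaced with your own):

# Create a dedicated IAM user for the CI job, with programmatic access only
aws iam create-user --user-name gitlab-ci-build

# Prints the AccessKeyId and SecretAccessKey - note them down for later
aws iam create-access-key --user-name gitlab-ci-build

# Wrap the three statements above in policy.json as
# {"Version": "2012-10-17", "Statement": [...]} and create the policy
aws iam create-policy --policy-name ci-container-build-policy \
    --policy-document file://policy.json

# Attach the policy directly to the user
aws iam attach-user-policy --user-name gitlab-ci-build \
    --policy-arn arn:aws:iam::<ACCOUNT-ID>:policy/ci-container-build-policy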
The access credentials can be provided to the AWS CLI through GitLab CI/CD environment variables. Go to your GitLab project's CI/CD settings, click Expand in the Variables section, and add AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with the values you got from the AWS Management Console after creating the IAM user. See the image below for the result you can expect:
Using an IAM Role and Amazon ECS temporary/unique security credentials is also possible, but not covered in this tutorial.
With all configurations in place, commit the changes and trigger a new pipeline to watch the magic happen!
- The dockerbuild job compresses buildspec.yml, the Dockerfile, and the application files into build.zip.
- build.zip is then uploaded to the S3 bucket we created in Step 1: Create an Amazon S3 Bucket.
- codebuild.sh starts a CodeBuild job based on the project created in Step 2: Create an AWS CodeBuild Project (note: that project has an S3 object as its source provider).
- CodeBuild fetches gitlab-runner-builds/build.zip from S3, decompresses it and – following buildspec.yml – builds the resulting container image.

A sample repository demonstrating everything described in this article is available here.
If you want to perform a cleanup after testing the custom executor with AWS Fargate and CodeBuild, you should remove the following objects:
- The RUN command added to the CI container image in Step 3
- The buildspec.yml file created in Step 4
- The codebuild.sh file created in Step 5
- The dockerbuild job added to .gitlab-ci.yml in Step 5

Read more about GitLab and AWS:
- How autoscaling GitLab CI works on AWS Fargate
- GitLab 12.10 released with Requirements Management and Autoscaling CI on AWS Fargate
- Announcing 32/64-bit Arm Runner Support for AWS Graviton2
Cover image by Lucas van Oort on Unsplash