Define a Docker Compose file, transpose it to an Amazon ECS task definition, and deploy the cluster to Amazon EC2 Container Service using the AWS CLI

In this post, I’ll share a simple process to take a Docker Compose application, convert it to an Amazon EC2 Container Service (ECS) task definition, build and push the images to Amazon EC2 Container Registry (ECR), and deploy the cluster to Amazon ECS.

To get started, I created a simple Node.js/Express app. It creates an Elasticsearch client, then ensures the index and a sample document exist. A request to ‘/’ queries Elasticsearch and returns the JSON response. New file: express/app.js:

const express = require('express')
const app = express()

const elasticsearch = require('elasticsearch')
const es_client = new elasticsearch.Client({
  host: process.env.ELASTICSEARCH_HOST + ':9200'
})
const es_index = 'ecs_index'
const es_type = 'ecs_type'
const express_port = 3000

// ensure es index exists
es_client.indices.create({
  index: es_index
}, function(err, resp, status) {
  // ensure es document exists
  es_client.index({
    index: es_index,
    type: es_type,
    id: 1,
    body: {
      foo: 'bar'
    }
  }, function(err, resp, status) {
    if (err) console.log('ERROR:', err)
  })
})

app.get('/', function (req, res) {
  es_client.search({
    index: es_index,
    type: es_type,
    body: {
      query: {
        match_all: {}
      }
    }
  }).then(function(response){
    res.send(response.hits.hits)
  }, function(error) {
    res.status(error.statusCode).send(error.message)
  })
})

app.listen(express_port, function () {
  console.log('App starting on port: ', express_port)
})

I added a Bash script to wait for Elasticsearch before starting the Express app. New file: express/start:

#!/bin/bash

# Block until Elasticsearch responds, then boot the app.
until curl "$ELASTICSEARCH_HOST:9200" &>/dev/null; do
  sleep 1
done

node app.js

Here is the Dockerfile I created for the Express/Node.js container, new file: express/Dockerfile:

FROM node:6.11.1

RUN apt-get update -qq && apt-get install -y netcat

ENV APP_HOME /express
WORKDIR $APP_HOME

# Copy package.json and install dependencies first so the npm install
# layer is cached when only the app code changes.
COPY package.json $APP_HOME
RUN npm install

COPY app.js $APP_HOME
COPY start $APP_HOME

For the Nginx container, I started with the default configuration and added a location block to proxy_pass requests to the Express container.

File: nginx/nginx.conf

user  nginx;
worker_processes  2;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    gzip  on;
    gzip_types text/plain application/json;

    include /etc/nginx/conf.d/*.conf;
}

File: nginx/default.conf

server {
    listen       80;
    server_name _;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://express:3000;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}

I also created a custom Bash start script to wait for Express/Node.js to respond before starting Nginx. File: nginx/start:

#!/bin/bash

# Block until the Express app accepts connections, then run Nginx in the
# foreground so the container stays alive.
until nc -vz express 3000 2>/dev/null; do
  sleep 1
done

nginx -g 'daemon off;'

Here is a basic Nginx Dockerfile. File: nginx/Dockerfile:

FROM nginx:stable

RUN apt-get update -qq && apt-get install -y netcat

COPY default.conf /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/nginx.conf
COPY start /docker/

For the Elasticsearch container, I overwrote the default elasticsearch.yml to disable the X-Pack security and monitoring features and set the transport host. File: elasticsearch/elasticsearch.yml:

cluster.name: "docker-cluster"
network.host: 0.0.0.0
transport.host: 127.0.0.1

# minimum_master_nodes need to be explicitly set when bound on a public IP
# set to 1 to allow single node clusters
# Details: https://github.com/elastic/elasticsearch/pull/17288
discovery.zen.minimum_master_nodes: 1

xpack.security.enabled: false
xpack.monitoring.enabled: false

Here is the Elasticsearch Dockerfile. File: elasticsearch/Dockerfile:

FROM docker.elastic.co/elasticsearch/elasticsearch:5.5.0

COPY elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml

Next, I created a Docker Compose file for the three services: Nginx, Express (Node.js), and Elasticsearch. New file: docker-compose.yml:

version: '2'
services:
  elasticsearch:
    build:
      context: ./elasticsearch
      dockerfile: Dockerfile
    ports:
      - '9200:9200'
      - '9300:9300'
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
  express:
    build:
      context: ./express
      dockerfile: Dockerfile
    command: ./start
    depends_on:
      - elasticsearch
    environment:
      - ELASTICSEARCH_HOST=elasticsearch
    ports:
      - '3000:3000'
  nginx:
    build:
      context: ./nginx
      dockerfile: Dockerfile
    command: /docker/start
    depends_on:
      - express
    ports:
      - '80:80'
volumes:
  elasticsearch: {}

To build and run the cluster locally, run docker-compose build && docker-compose up.
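Once the containers are up, the stack can be smoke-tested locally through Nginx (mapped to port 80 on the host); it should return the sample document:

$ curl localhost 2>/dev/null | python -mjson.tool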

At this point I was ready to shift my focus to the AWS configuration. I transposed the Docker Compose file to an ECS task definition. New file: aws/cloud-formation-task.json.

I added a very basic (and incomplete) Ruby script, ./convert-docker-compose-to-cloudformation.rb (view source on GitHub), to load the Docker Compose YAML, fetch the skeleton structure for a task definition via the AWS CLI, and transpose the configuration to JSON.
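For reference, the skeleton structure that script starts from can be produced directly with the AWS CLI's built-in generator (the output file name here is just an illustration):

#!/bin/bash

# Print an empty register-task-definition input structure to seed
# the generated task definition JSON.
aws ecs register-task-definition --generate-cli-skeleton > aws/task-definition-skeleton.json

The generated aws/cloud-formation-task.json: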

{
  "containerDefinitions": [
    {
      "name": "express",
      "command": [
        "/express/start"
      ],
      "environment": [
        {
          "name": "ELASTICSEARCH_HOST",
          "value": "elasticsearch"
        }
      ],
      "essential": true,
      "image": "############.dkr.ecr.us-east-1.amazonaws.com/eric-test/express:latest",
      "links": [
        "elasticsearch"
      ],
      "memoryReservation": 128,
      "readonlyRootFilesystem": false
    },
    {
      "name": "elasticsearch",
      "essential": true,
      "image": "############.dkr.ecr.us-east-1.amazonaws.com/eric-test/elasticsearch:latest",
      "memoryReservation": 256,
      "mountPoints": [
        {
          "sourceVolume": "volume-0",
          "readOnly": false,
          "containerPath": "/usr/share/elasticsearch/data"
        }
      ],
      "readonlyRootFilesystem": false
    },
    {
      "name": "nginx",
      "command": [
        "/docker/start"
      ],
      "essential": true,
      "image": "############.dkr.ecr.us-east-1.amazonaws.com/eric-test/nginx:latest",
      "links": [
        "express"
      ],
      "memoryReservation": 128,
      "portMappings": [
        {
          "protocol": "tcp",
          "containerPort": 80,
          "hostPort": 80
        }
      ],
      "readonlyRootFilesystem": false
    }
  ],
  "family": "eric-test-family",
  "volumes": [
    {
      "host": {
        "sourcePath": "elasticsearch"
      },
      "name": "volume-0"
    }
  ]
}

Before the Docker images could be referenced in a task definition, I had to create an ECR repository for each:

#!/bin/bash

AWS_PROFILE=default
AWS_REGION=us-east-1
ECR_PREFIX=eric-test

containers=(elasticsearch express nginx)
for container in "${containers[@]}"
do
  aws --profile $AWS_PROFILE --region $AWS_REGION ecr create-repository --repository-name $ECR_PREFIX/$container
done

I wrote a script to build, tag, and push the Docker images to ECR:

#!/bin/bash

AWS_PROFILE=default
AWS_REGION=us-east-1
AWS_ACCOUNT_ID=############
ECR_PREFIX=eric-test

# Authenticate the local Docker client against ECR.
eval $(aws --profile $AWS_PROFILE ecr get-login --no-include-email --region $AWS_REGION)

containers=(elasticsearch express nginx)
for container in "${containers[@]}"
do
  cd $container
  docker build -t $ECR_PREFIX/$container .
  docker tag $ECR_PREFIX/$container:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$ECR_PREFIX/$container:latest
  docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$ECR_PREFIX/$container:latest
  cd ..
done
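To verify the pushes landed, the tags stored in each repository can be listed; a quick sketch using the same variables as above:

#!/bin/bash

AWS_PROFILE=default
AWS_REGION=us-east-1
ECR_PREFIX=eric-test

# List the image tags now stored in each ECR repository.
containers=(elasticsearch express nginx)
for container in "${containers[@]}"
do
  aws --profile $AWS_PROFILE --region $AWS_REGION ecr list-images \
    --repository-name $ECR_PREFIX/$container
done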

I defined a trust policy document that allows ECS tasks to assume a role. File: aws/container-service-task-role.json:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

I created the role.

#!/bin/bash

AWS_PROFILE=default
ROLE_NAME=eric-test-ecs-role

aws --profile $AWS_PROFILE iam create-role \
  --role-name $ROLE_NAME \
  --assume-role-policy-document file://aws/container-service-task-role.json

I created a new ECS cluster.

#!/bin/bash

AWS_PROFILE=default
CLUSTER_NAME=eric-test-cluster

aws --profile $AWS_PROFILE ecs create-cluster --cluster-name $CLUSTER_NAME

To provision an EC2 instance into the ECS cluster, I defined a user data file. File: aws/ecs.config:

#!/bin/bash
echo ECS_CLUSTER=eric-test-cluster >> /etc/ecs/ecs.config

I created an Amazon ECS Container Instance IAM role, “ecsInstanceRole”, as noted in the docs; a sketch of the equivalent CLI steps is below.
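A sketch, assuming a trust policy file that lets ec2.amazonaws.com assume the role (aws/ec2-trust-policy.json is a hypothetical name); the attached policy ARN is the AWS-managed one for ECS container instances:

#!/bin/bash

AWS_PROFILE=default
ROLE_NAME=ecsInstanceRole

# Create the role; the (assumed) trust policy file must allow
# ec2.amazonaws.com to call sts:AssumeRole.
aws --profile $AWS_PROFILE iam create-role \
  --role-name $ROLE_NAME \
  --assume-role-policy-document file://aws/ec2-trust-policy.json

# Attach the AWS-managed policy for ECS container instances.
aws --profile $AWS_PROFILE iam attach-role-policy \
  --role-name $ROLE_NAME \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role

# run-instances takes an instance profile, not a role, so wrap the
# role in an instance profile of the same name.
aws --profile $AWS_PROFILE iam create-instance-profile \
  --instance-profile-name $ROLE_NAME
aws --profile $AWS_PROFILE iam add-role-to-instance-profile \
  --instance-profile-name $ROLE_NAME \
  --role-name $ROLE_NAME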

I then used the AWS CLI to create a new EC2 instance.

#!/bin/bash

AWS_IAM_INSTANCE_PROFILE=ecsInstanceRole
AWS_KEY_NAME=
AWS_PROFILE=default
AWS_SECURITY_GROUP_IDS=sg-########
AWS_SUBNET_ID=subnet-########
EC2_COUNT=1
EC2_INSTANCE_TYPE=m4.large
ECS_AMI_ID=ami-04351e12

aws ec2 run-instances \
  --profile $AWS_PROFILE \
  --image-id $ECS_AMI_ID \
  --count $EC2_COUNT \
  --instance-type $EC2_INSTANCE_TYPE \
  --key-name $AWS_KEY_NAME \
  --security-group-ids $AWS_SECURITY_GROUP_IDS \
  --subnet-id $AWS_SUBNET_ID \
  --iam-instance-profile Name=$AWS_IAM_INSTANCE_PROFILE \
  --user-data file://aws/ecs.config
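Once the instance boots and its ECS agent reads the user data, it should register itself with the cluster; one way to confirm:

#!/bin/bash

AWS_PROFILE=default
CLUSTER_NAME=eric-test-cluster

# Should return one container instance ARN once the agent has registered.
aws --profile $AWS_PROFILE ecs list-container-instances --cluster $CLUSTER_NAME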

I registered the task definition with ECS:

#!/bin/bash

AWS_PROFILE=default
AWS_ACCOUNT_ID=############
ROLE_NAME=eric-test-ecs-role
TASK_ROLE_ARN=arn:aws:iam::$AWS_ACCOUNT_ID:role/$ROLE_NAME
TASK_FAMILY=eric-test-family

aws --profile $AWS_PROFILE ecs register-task-definition \
  --family $TASK_FAMILY \
  --task-role-arn $TASK_ROLE_ARN \
  --cli-input-json file://aws/cloud-formation-task.json
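To confirm the registration, describe-task-definition returns the latest revision of the family:

#!/bin/bash

AWS_PROFILE=default
TASK_FAMILY=eric-test-family

# With no revision suffix, ECS resolves the family to its latest revision.
aws --profile $AWS_PROFILE ecs describe-task-definition --task-definition $TASK_FAMILY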

Last, I created an ECS service to manage the task and ensure there is always one task running.

#!/bin/bash

AWS_ACCOUNT_ID=############
AWS_PROFILE=default
AWS_REGION=us-east-1
CLUSTER_NAME=eric-test-cluster
SERVICE_COUNT=1
SERVICE_NAME=eric-test-service
TASK_FAMILY=eric-test-family
TASK_DEFINITION_ARN=arn:aws:ecs:$AWS_REGION:$AWS_ACCOUNT_ID:task-definition/$TASK_FAMILY

aws --profile $AWS_PROFILE ecs create-service \
  --cluster $CLUSTER_NAME \
  --service-name $SERVICE_NAME \
  --task-definition $TASK_DEFINITION_ARN \
  --desired-count $SERVICE_COUNT

Upon executing the last AWS CLI command, the service starts the task and launches the containers on the EC2 instance. I was able to curl the API via:

$ curl ec2-###-###-###-###.compute-1.amazonaws.com 2>/dev/null | python -mjson.tool
[
    {
        "_id": "1",
        "_index": "ecs_index",
        "_score": 1,
        "_source": {
            "foo": "bar"
        },
        "_type": "ecs_type"
    }
]
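If the curl fails, the service state and its running tasks can be inspected from the CLI (same names as above):

#!/bin/bash

AWS_PROFILE=default
CLUSTER_NAME=eric-test-cluster
SERVICE_NAME=eric-test-service

# Shows desired/running counts and recent service events.
aws --profile $AWS_PROFILE ecs describe-services \
  --cluster $CLUSTER_NAME \
  --services $SERVICE_NAME

# Lists the ARNs of the tasks the service has started.
aws --profile $AWS_PROFILE ecs list-tasks --cluster $CLUSTER_NAME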

Source code on GitHub
