Discover Docker, CoreOS and Xen Server with Maksym Prokopov

Insights, discussion, bugs and how-tos about Docker, CoreOS, orchestration and other interesting things.

Imagine you have several containers with nginx-proxy in front, and the server runs CoreOS. What's the proper startup configuration?

Here is the scheme I use in production on CoreOS. We need to create a separate network for all frontend containers; in my case the network name is 'nginx-proxy'.
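
The nginx-proxy unit below creates this network automatically via ExecStartPre, but if you prefer to create and inspect it by hand first, it is just:

docker network create nginx-proxy
docker network inspect nginx-proxy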

  1. Create the following nginx-proxy.service in /etc/systemd/system
[Unit]
Description=nginx-proxy.service
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill nginx-proxy
ExecStartPre=-/usr/bin/docker rm nginx-proxy
ExecStartPre=-/usr/bin/docker pull jwilder/nginx-proxy
ExecStartPre=-/usr/bin/docker network create nginx-proxy
ExecStart=/usr/bin/docker run -p 80:80 -p 443:443 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro  \
  -v /home/core/certificates:/etc/nginx/certs:ro \
  -v /home/core/vhost.d:/etc/nginx/vhost.d:ro \
  -v /home/core/conf.d/external.conf:/etc/nginx/conf.d/external.conf \
  -v /usr/share/nginx/html \
  --net=nginx-proxy \
  --label com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy \
  --name nginx-proxy \
  jwilder/nginx-proxy
ExecStop=/usr/bin/docker stop nginx-proxy

[Install]
WantedBy=multi-user.target
  2. Create the following certbot.service in /etc/systemd/system to connect the certbot container to the nginx-proxy container
[Unit]
Description=certbot.service
After=nginx-proxy.service
Requires=nginx-proxy.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill certbot
ExecStartPre=-/usr/bin/docker rm certbot
ExecStartPre=/usr/bin/docker pull jrcs/letsencrypt-nginx-proxy-companion
ExecStart=/usr/bin/docker run \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /home/core/certificates:/etc/nginx/certs:rw \
  -v /home/core/vhost.d:/etc/nginx/vhost.d \
  --net=nginx-proxy \
  --volumes-from nginx-proxy \
  --name certbot \
  jrcs/letsencrypt-nginx-proxy-companion

[Install]
WantedBy=multi-user.target
  3. Finally, create /etc/systemd/system/itservice.service for the application itself
[Unit]
Description=itservice
After=nginx-proxy.service
Requires=nginx-proxy.service

[Service]
TimeoutStartSec=0
Type=simple
WorkingDirectory=/home/core/itservice
ExecStart=/opt/bin/docker-compose -f /home/core/itservice/docker-compose.yml -f /home/core/itservice/docker-compose.override.yml up
ExecStop=/opt/bin/docker-compose -f /home/core/itservice/docker-compose.yml -f /home/core/itservice/docker-compose.override.yml stop

[Install]
WantedBy=multi-user.target
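
Once all three units are in /etc/systemd/system, reload systemd and enable them so they start on boot, e.g.:

sudo systemctl daemon-reload
sudo systemctl enable nginx-proxy.service certbot.service itservice.service
sudo systemctl start nginx-proxy.service certbot.service itservice.service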

It's important to use docker-compose.yml file format version 2+ and to connect the frontend network like this:

version: "2"
services: 
  app:
    image: myapp
  networks:
     - frontend
networks: 
  - frontend
     external:
        name: nginx-proxy

This way the container is reachable by nginx-proxy right after you run

docker-compose up
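
To double-check that the app container really joined the proxy network, inspect it:

docker network inspect nginx-proxy

The Containers section of the output should list both nginx-proxy and your app container.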

Lifehack:

docker ps -s

will show all your containers along with how much disk space each one occupies.

Another one: delete all EBS snapshots in the current region, one by one:

aws ec2 describe-snapshots --output text --query 'Snapshots[*].{ID:SnapshotId}' | xargs -n 1 aws ec2 delete-snapshot --snapshot-id
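
If you want to restrict it to snapshots you actually own and dry-run it first, something like this works (the echo turns it into a dry run; drop it to really delete):

aws ec2 describe-snapshots --owner-ids self --output text --query 'Snapshots[*].{ID:SnapshotId}' | xargs -n 1 echo aws ec2 delete-snapshot --snapshot-id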

Well, this issue was a pain in the ass until I fully understood what was going on with the nginx-proxy container and docker-compose v2.

When your docker-compose.yml defines a separate frontend network, you need to connect that network to the nginx-proxy container for things to work. Obviously, there is no frontend network until you bring your containers up for the first time!
When you run

docker-compose up

the network will be created, but nginx-proxy will not be attached to it! So you need to shut down your app containers, run

docker network connect itservice_frontend-tier nginx-proxy

and then fire your containers up again! But wait, you also need to restart the nginx-proxy container so it actually connects to this new network!

As you can see, this process cannot be a single-step deployment on CoreOS, but now you know why and how to fix it.
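
For reference, the whole dance on the host looks roughly like this (itservice_frontend-tier is the network name Compose generated for my project; yours will differ):

cd /home/core/itservice
docker-compose up -d        # first run creates itservice_frontend-tier
docker-compose stop
docker network connect itservice_frontend-tier nginx-proxy
docker-compose up -d
docker restart nginx-proxy  # nginx-proxy must be restarted to see the new network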

timedatectl list-timezones | grep Kiev
Europe/Kiev
sudo timedatectl set-timezone Europe/Kiev
timedatectl status

Local time: Wed 2017-11-29 13:41:12 EET
Universal time: Wed 2017-11-29 11:41:12 UTC
RTC time: Wed 2017-11-29 11:41:12
Time zone: Europe/Kiev (EET, +0200)
Network time on: no
NTP synchronized: yes
RTC in local TZ: no

Here is a [https://hub.docker.com/r/schickling/mailcatcher/ Docker container] you can use for testing mail sending from your lovely application.

Add this to your docker-compose.yml:

mail:
  image: schickling/mailcatcher
  ports:
     - 1080:1080

Do not forget to link this container to your app container (example below), then use these SMTP settings:

SMTP:
  server: mail
  port: 1025

and voila!

Open your browser at http://host:1080 and enjoy the preview of caught emails.
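
For example, assuming your application service is simply called app (a placeholder name), the linked setup could look like:

app:
  image: myapp
  links:
    - mail
mail:
  image: schickling/mailcatcher
  ports:
    - 1080:1080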

I decided to check whether there is a significant difference between dockerized and non-dockerized Postgres. Here is my test environment: a Rails application with a rich test suite of about 628 examples, macOS Sierra 10.12.6, Docker 17.06.0-ce-mac19.

I use the postgres:alpine Docker image (Postgres 9.6), which is 37.7 MB, and raw Postgres 9.6 with a GUI for Mac, which is 379 MB.
Here is my docker-compose config:

db:
  image: postgres:alpine
  ports:
    - 5432:5432
adminer:
  image: adminer
  ports:
    - 8080:8080
  links:
    - db
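
The comparison boils down to running the same suite twice; roughly, and assuming the app connects to localhost:5432 so only one Postgres runs at a time:

docker-compose up -d db
bundle exec rspec      # suite against the dockerized Postgres

docker-compose stop db
# start the local Postgres 9.6 instead, then:
bundle exec rspec      # suite against the local Postgres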

With Docker
Finished in 46.86 seconds (files took 17.77 seconds to load)
628 examples, 0 failures

Without Docker, raw Postgres 9.6
Finished in 31.35 seconds (files took 8.38 seconds to load)
628 examples, 0 failures

And again in Docker
Finished in 41.64 seconds (files took 8.24 seconds to load)
628 examples, 0 failures

And again without Docker
Finished in 31.53 seconds (files took 8.01 seconds to load)
628 examples, 0 failures

And again with Docker
Finished in 41.77 seconds (files took 8.51 seconds to load)
628 examples, 0 failures

So on average it's about 41.5 seconds for the Docker version versus 31.5 seconds without Docker.
That is roughly a 24% difference in this particular Rails RSpec case: quite significant for the comparison itself, but not such a big deal in a testing loop.

At least now you know the price you pay for container portability.

Recently I was working on a [https://github.com/mprokopov/it-service-sse microservice with Server Sent Events and Pedestal] and I thought it would be a good idea to set up an automatic build of the Docker container and deployment to a registry. I already had Gitlab installed, so I started to play around.

Long story short, here is my .gitlab-ci.yml. It took me a couple of days to figure out what an "artifact" is in Gitlab and how it is supposed to survive between the artifact build and the docker build.

My current setup has two stages: a JAR build and then a Docker build. In the first stage we use clojure:lein-2.7.1-alpine, which is quite small, to build the jar file from the sources. Then we assemble the Docker container, reusing the artifact from the previous stage. I was lucky enough to discover that the artifact can be preserved with the help of the "cache" option in the YML file, which keeps the folders listed under "paths" for the next build.

So here is the working Gitlab CI configuration, which builds the JAR as an artifact, uploads it to the pipeline page, then builds the Docker container and publishes it to the internal Gitlab registry.

stages:
  - jar
  - docker
cache:
  paths:
    - target
jar:
  image: clojure:lein-2.7.1-alpine
  stage: jar
  script:
    - lein deps
    - lein uberjar
  artifacts:
    paths:
      - target/it-service-sse-0.0.1-SNAPSHOT-standalone.jar
    expire_in: 1 week

build-master:
  stage: docker
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
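
Once the pipeline has pushed the image, any host with access to the Gitlab registry can pull and run it; with placeholder registry and project paths (substitute your actual $CI_REGISTRY_IMAGE) it looks like:

docker login registry.example.com
docker pull registry.example.com/mygroup/it-service-sse
docker run -d registry.example.com/mygroup/it-service-sse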

I hope this will save you a couple of days of debugging.
Happy Continuous Delivery!

Aug 31, 2017, 15:35

Capstan

A very interesting open source project built around unikernels.

OSv is the open source operating system designed for the cloud. Built from the ground up for effortless deployment and management, with superior performance.
Simplified cloud stack
The language runtime, OS and hypervisor all provide protection and abstraction. OSv minimizes the redundancy in these layers by simplifying the OS.

http://osv.io/

Here are my results after changing the base image from openjdk:latest to clojure:alpine:

registry.it-expert.com.ua/nexus/privat-manager latest 61e0537043a3 24 hours ago 260MB
registry.it-expert.com.ua/nexus/privat-manager 8b8634a576b1 3 days ago 863MB
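
The listing above can be reproduced with plain docker images filtered by repository name:

docker images registry.it-expert.com.ua/nexus/privat-manager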

The difference is 603 fucking megabytes with exactly the same functionality!
