Discover Docker, CoreOS and Xen Server with Maksym Prokopov

Insights, discussion, bugs and how-tos about Docker, CoreOS, orchestration and other interesting things.

Sometimes everything becomes unresponsive; that's probably due to the low memory setting, which defaults to 2 GB for a default installation.

Adjust the memory and CPU settings by running this from the CLI:

minikube start --memory 8096 --cpus 2
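
To make these settings persist across cluster recreations, you can also store them in minikube's config (assuming a minikube build that ships the config subcommand):

minikube config set memory 8096
minikube config set cpus 2

They take effect the next time the cluster is created, e.g. after minikube delete && minikube start.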
For GitLab's embedded Docker registry,

docker exec -it gitlab /opt/gitlab/embedded/bin/registry garbage-collect /var/opt/gitlab/registry/config.yml

will run the garbage-collection (cleanup) procedure.
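
To see how much disk space the registry actually occupies before and after, you can check the storage directory (this path is the Omnibus default; adjust if yours differs):

docker exec gitlab du -sh /var/opt/gitlab/gitlab-rails/shared/registry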

Check your image list with

docker images

and find the SHA of your latest image, in my case 269...
Then use the following command, where your app image is registry.it-expert.com.ua/nexus/it-service:

docker images -f=reference='registry.it-expert.com.ua/nexus/it-service*:*' -f before=269 -q | xargs docker rmi

This will remove all of your previous images that were created before the selected one.

You can also use

docker images -f=dangling=true

to find all stale images.
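
On Docker 1.13+ there is also a built-in helper for this:

docker image prune      # remove dangling images
docker image prune -a   # remove every image not used by at least one container

The -a variant is far more aggressive, so check what is running before you use it.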

Imagine you have several containers with nginx-proxy in front, and the server runs CoreOS. What's the proper configuration for startup?

Here is the scheme I use in production on CoreOS. We need to create a separate network for all frontend containers; in my case the network name is 'nginx-proxy'.

  1. Create the following nginx-proxy.service in /etc/systemd/system:
[Unit]
Description=nginx-proxy.service
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill nginx-proxy
ExecStartPre=-/usr/bin/docker rm nginx-proxy
ExecStartPre=-/usr/bin/docker pull jwilder/nginx-proxy
ExecStartPre=-/usr/bin/docker network create nginx-proxy
ExecStart=/usr/bin/docker run -p 80:80 -p 443:443 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro  \
  -v /home/core/certificates:/etc/nginx/certs:ro \
  -v /home/core/vhost.d:/etc/nginx/vhost.d:ro \
  -v /home/core/conf.d/external.conf:/etc/nginx/conf.d/external.conf \
  -v /usr/share/nginx/html \
  --net=nginx-proxy \
  --label com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy \
  --name nginx-proxy \
  jwilder/nginx-proxy
ExecStop=/usr/bin/docker stop nginx-proxy

[Install]
WantedBy=multi-user.target
  2. Connect the certbot container to the nginx-proxy container with the following certbot.service, also in /etc/systemd/system:
[Unit]
Description=certbot.service
After=nginx-proxy.service
Requires=nginx-proxy.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill certbot
ExecStartPre=-/usr/bin/docker rm certbot
ExecStartPre=/usr/bin/docker pull jrcs/letsencrypt-nginx-proxy-companion
ExecStart=/usr/bin/docker run \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /home/core/certificates:/etc/nginx/certs:rw \
  -v /home/core/vhost.d:/etc/nginx/vhost.d \
  --net=nginx-proxy \
  --volumes-from nginx-proxy \
  --name certbot \
  jrcs/letsencrypt-nginx-proxy-companion

[Install]
WantedBy=multi-user.target
  3. Create /etc/systemd/system/itservice.service for the application itself:
[Unit]
Description=itservice
After=nginx-proxy.service
Requires=nginx-proxy.service

[Service]
TimeoutStartSec=0
Type=simple
WorkingDirectory=/home/core/itservice
ExecStart=/opt/bin/docker-compose -f /home/core/itservice/docker-compose.yml -f /home/core/itservice/docker-compose.override.yml up
ExecStop=/opt/bin/docker-compose -f /home/core/itservice/docker-compose.yml -f /home/core/itservice/docker-compose.override.yml stop

[Install]
WantedBy=multi-user.target
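
Finally, reload systemd and enable all three units so they come up on boot:

sudo systemctl daemon-reload
sudo systemctl enable nginx-proxy.service certbot.service itservice.service
sudo systemctl start nginx-proxy.service certbot.service itservice.service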

It's important to use docker-compose.yml version 2+ and connect the frontend network like this:

version: "2"
services:
  app:
    image: myapp
    networks:
      - frontend
networks:
  frontend:
    external:
      name: nginx-proxy
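
You can verify that the external network exists and see which containers are attached to it with:

docker network inspect nginx-proxy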

This way the container becomes reachable by nginx-proxy as soon as you run

docker-compose up

Lifehack:

docker ps -s

will show your containers together with the disk space each one occupies.
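
Related, on Docker 1.13+:

docker system df

gives an overview of the disk space used by images, containers, and local volumes.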

Another one: delete all of your EC2 snapshots in one go. Be careful, this is destructive; --owner-ids self limits the list to snapshots you own, and xargs -n 1 issues one delete call per snapshot:

aws ec2 describe-snapshots --owner-ids self --output text --query 'Snapshots[*].{ID:SnapshotId}' | xargs -n 1 aws ec2 delete-snapshot --snapshot-id
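
To review what you are about to delete first:

aws ec2 describe-snapshots --owner-ids self --output text --query 'Snapshots[*].[SnapshotId,StartTime,VolumeSize]'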

Well, this issue was a pain in the ass until I fully understood what was going on with the nginx-proxy container and docker-compose v2.

When you have a docker-compose.yml with a separate frontend network, you need to connect that network to the nginx-proxy container for routing to work. Obviously, there is no frontend network until you bring your containers up for the first time!
When you run

docker-compose up

the network will be created, but nginx-proxy will not be attached to it! So you need to shut down your app containers, run

docker network connect itservice_frontend-tier nginx-proxy

and then fire your containers up again. But wait, you also need to restart the nginx-proxy container so it connects to this new network!
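
Putting the whole workaround together (container and network names as in my setup above):

docker-compose up -d        # the first run creates itservice_frontend-tier
docker-compose stop         # shut the app down again
docker network connect itservice_frontend-tier nginx-proxy
docker restart nginx-proxy  # nginx-proxy must restart to see the new network
docker-compose up -d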

As you can see, this process cannot be a single-step deployment on CoreOS, but now you know why, and how to fix it.

Nov 29, 2017, 13:41

Change time zone in CoreOS

timedatectl list-timezones | grep Kiev
Europe/Kiev
sudo timedatectl set-timezone Europe/Kiev
timedatectl status

Local time: Wed 2017-11-29 13:41:12 EET
Universal time: Wed 2017-11-29 11:41:12 UTC
RTC time: Wed 2017-11-29 11:41:12
Time zone: Europe/Kiev (EET, +0200)
Network time on: no
NTP synchronized: yes
RTC in local TZ: no
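
Note that containers do not pick up the host's time zone automatically; a common trick is to mount the host's localtime into the container read-only:

docker run -v /etc/localtime:/etc/localtime:ro alpine date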

Here is a docker container (https://hub.docker.com/r/schickling/mailcatcher/) you can use for testing mail sending from your lovely application.

Add this to your docker-compose.yml:

mail:
  image: schickling/mailcatcher
  ports:
     - 1080:1080

Do not forget to link this container to your app container, then use these SMTP settings:

SMTP:
  server: mail
  port: 1025
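
For example, with v1-style links the app side of the compose file would look like this (myapp as a stand-in for your image):

app:
  image: myapp
  links:
    - mail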

and voilà!

Open your browser at http://host:1080 and enjoy the preview of the caught emails.

I decided to compare whether there is a significant difference between dockerized and non-dockerized Postgres. Here is my test environment: a Rails application with a rich test suite of about 628 examples, macOS Sierra 10.12.6, Docker 17.06.0-ce-mac19.

I use the postgres:alpine Docker image (9.6, 37.7 MB) and the native Postgres 9.6 with a GUI for Mac (379 MB).
Here is my docker-compose config:

db:
  image: postgres:alpine
  ports:
    - 5432:5432
adminer:
  image: adminer
  ports:
    - 8080:8080
  links:
    - db

With Docker
Finished in 46.86 seconds (files took 17.77 seconds to load)
628 examples, 0 failures

Without Docker, raw Postgres 9.6
Finished in 31.35 seconds (files took 8.38 seconds to load)
628 examples, 0 failures

And again in Docker
Finished in 41.64 seconds (files took 8.24 seconds to load)
628 examples, 0 failures

And again without Docker
Finished in 31.53 seconds (files took 8.01 seconds to load)
628 examples, 0 failures

And again with Docker
Finished in 41.77 seconds (files took 8.51 seconds to load)
628 examples, 0 failures

So on average it's about 41.5 seconds with Docker and 31.5 seconds without.
That is roughly a 24% difference ((41.5 - 31.5) / 41.5 ≈ 0.24) in this particular Rails rspec case: quite significant as a raw comparison, but not so significant within a testing loop.

At least, this is the price you pay for container portability.
