Discover Docker, CoreOS and Xen Server with Maksym Prokopov

Insights, discussion, bugs and how-tos about Docker, CoreOS, orchestration and other interesting things.

I decided to check whether there is a significant difference between dockerized and non-dockerized Postgres. Here is my test environment: a Rails application with a rich test suite of about 628 examples, macOS Sierra 10.12.6, Docker 17.06.0-ce-mac19.

I use the postgres:alpine Docker image (Postgres 9.6, 37.7 MB) versus the raw Postgres 9.6 with a GUI for Mac (379 MB).
Here is my docker-compose config:

db:
  image: postgres:alpine
  ports:
    - 5432:5432
adminer:
  image: adminer
  ports:
    - 8080:8080
  links:
    - db
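
For the numbers below, the suite was run against each database in turn. A minimal sketch of how to reproduce the two setups (the rspec invocation and the assumption that the native Postgres listens on the same 5432 port are mine, not part of the measurements):

docker-compose up -d db   # start the containerized Postgres, published on 5432
bundle exec rspec         # run the suite against the dockerized database
docker-compose stop db    # stop the container and free the port
# start the native Postgres 9.6 app on the same port, then:
bundle exec rspec         # run the same suite against the local database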

With Docker
Finished in 46.86 seconds (files took 17.77 seconds to load)
628 examples, 0 failures

Without Docker, raw Postgres 9.6
Finished in 31.35 seconds (files took 8.38 seconds to load)
628 examples, 0 failures

And again in Docker
Finished in 41.64 seconds (files took 8.24 seconds to load)
628 examples, 0 failures

And again without Docker
Finished in 31.53 seconds (files took 8.01 seconds to load)
628 examples, 0 failures

And again with Docker
Finished in 41.77 seconds (files took 8.51 seconds to load)
628 examples, 0 failures

So on average it is roughly 41.5 seconds for the Docker version and 31.5 seconds without Docker.
That is about a 24% difference for this particular Rails RSpec case: quite significant as a head-to-head comparison, but not so significant for the day-to-day testing loop.

This, at least, is the price you pay for container portability.

Recently I was building a microservice with Server-Sent Events and Pedestal (https://github.com/mprokopov/it-service-sse) and thought it would be a good idea to set up an automatic build and push of its container to a Docker registry. I already had GitLab installed, so I started playing around.

Long story short, here is my .gitlab-ci.yml. It took me a couple of days to figure out what an "artifact" is in GitLab and how it is supposed to survive between the jar build and the docker build.

My current setup has two stages: a jar build and then a docker build. In the first stage we use clojure:lein-2.7.1-alpine, which is quite small, to build the jar file from the sources. Then we assemble the Docker image and reuse the artifact from the previous stage. I was lucky enough to discover that the artifact can be carried over with the help of the "cache" option in the YML file, which preserves the folder listed under "paths" for the next build.

So here is the working GitLab CI configuration, which builds the JAR as an artifact, uploads it to the Pipeline page, then builds the Docker image and publishes it to the internal GitLab registry.

stages:
  - jar
  - docker
cache:
  paths:
    - target
jar:
  image: clojure:lein-2.7.1-alpine
  stage: jar
  script:
    - lein deps
    - lein uberjar
  artifacts:
    paths:
      - target/it-service-sse-0.0.1-SNAPSHOT-standalone.jar
    expire_in: 1 week

build-master:
  stage: docker
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
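
If the docker stage misbehaves, it helps to replay roughly the same steps by hand. A sketch with placeholder registry and image names (these are assumptions; in CI they come from the $CI_REGISTRY_* variables):

lein uberjar                                              # what the jar stage produces
docker login registry.example.com                         # log in to your GitLab registry
docker build --pull -t registry.example.com/group/it-service-sse .
docker push registry.example.com/group/it-service-sse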

I hope this saves you a couple of days of debugging.
Happy Continuous Delivery!

Aug 31, 15:35

Capstan

A very interesting open source project built around unikernels.

OSv is the open source operating system designed for the cloud. Built from the ground up for effortless deployment and management, with superior performance.
Simplified cloud stack
The language runtime, OS and hypervisor all provide protection and abstraction. OSv minimizes the redundancy in these layers by simplifying the OS.

http://osv.io/

Here are my results after changing the base image from openjdk:latest to clojure:alpine:

registry.it-expert.com.ua/nexus/privat-manager latest 61e0537043a3 24 hours ago 260MB
registry.it-expert.com.ua/nexus/privat-manager 8b8634a576b1 3 days ago 863MB

The difference is 603 fucking megabytes with exactly the same functionality!
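
For reference, the listing above is just the output of docker images filtered by repository name:

docker images registry.it-expert.com.ua/nexus/privat-manager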

Here is a short script you can use to make backups to a Hetzner Storage Box using CIFS, aka Windows Share.

mount.cifs //uXXXXXX.your-storagebox.de/backup /mnt/hetzner/ -o user=uXXXXXX,pass=YYYYYYYY  # mount the storage box share
cp -r /mnt/HD/HD_a2/backups2/hourly.0/ /mnt/hetzner/dns-320  # copy the newest rsnapshot snapshot
umount /mnt/hetzner  # unmount when done

Dead simple.
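
If you would rather keep the password out of the command line and shell history, mount.cifs can read it from a credentials file instead. A minimal sketch (the file path is my choice, not part of the original script):

# /root/.hetzner-cred, chmod 600, containing two lines:
#   username=uXXXXXX
#   password=YYYYYYYY
mount.cifs //uXXXXXX.your-storagebox.de/backup /mnt/hetzner/ -o credentials=/root/.hetzner-cred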

What I want is to back up everything rsnapshot has produced, so I've added the following to the crontab (backup-hetzner.sh being the script above):

...
30 3 * * *  rsnapshot daily && /mnt/HD/HD_a2/ffp/home/root/backup-hetzner.sh
...

So right after rsnapshot finishes its job, the backup to the remote host starts.

The main idea is to have a fully automated Docker database backup driven from a low-end D-Link DNS-320 NAS.

The solution design is as follows:

  1. My backup box copies the backup.sh script to the remote coreos-03 host.
  2. The remote host then copies backup.sh into the database container.
  3. The backup box executes «docker exec itservice_db_1 backup.sh» on the coreos-03 host, which in turn runs mysqldump. The SQL dump is captured directly from the command output and then gzipped.
  4. rsnapshot saves the folder with the gzipped SQL dump and rotates old backup folders as necessary.

So we will only need:

  • ssh
  • tar
  • rsnapshot

Here is my working implementation:

script backup.sh

#!/bin/bash
## env vars are already in docker container
/usr/bin/mysqldump -u$MYSQL_USER -p$MYSQL_PASSWORD $MYSQL_DATABASE

script backup-coreos-itservice.sh

#!/bin/sh
# copy the dump script to the remote host, then into the database container
/ffp/bin/scp /ffp/home/root/backup.sh core@coreos-03:/home/core/itservice/backup.sh
/usr/sbin/ssh -C core@coreos-03 "docker cp /home/core/itservice/backup.sh itservice_db_1:/usr/local/bin/backup.sh"
# run the dump inside the container and capture its output locally
/usr/sbin/ssh -C core@coreos-03 "docker exec itservice_db_1 /usr/local/bin/backup.sh" > latest.sql
/opt/bin/tar czf itservice-sql-dump.tar.gz latest.sql --remove-files  # gzip the dump, drop the raw file
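
Before wiring it into rsnapshot, you can run the script once by hand and check that the archive looks sane (assuming you run it from the directory where the tarball lands):

/mnt/HD/HD_a2/ffp/home/root/backup-coreos-itservice.sh
/opt/bin/tar tzf itservice-sql-dump.tar.gz   # should list latest.sql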

rsnapshot.conf

...
backup_script	/mnt/HD/HD_a2/ffp/home/root/backup-coreos-itservice.sh	coreos-03/itservice_db_1
...

crontab

0 */4 * * * rsnapshot hourly
30 3 * * *  rsnapshot daily
0  3 * * 1  rsnapshot weekly
30 2 1 * *  rsnapshot monthly

Keep in mind that you will need to generate SSH keys for your backup box and add the public key to authorized_keys on the coreos-03 host, but that is out of scope for this article.

To disable automatic reboots after CoreOS applies updates, just put the following line in /etc/coreos/update.conf:

REBOOT_STRATEGY=off
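
For the change to take effect without rebooting, restart locksmithd (assuming the standard Container Linux setup where it runs under systemd):

sudo systemctl restart locksmithd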
Aug 4, 2017, 17:28

Docker clean stale containers

This will clean up stale Docker containers stuck in the «created» status:

docker ps -a -f "status=created" -q | xargs docker rm
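
On newer Docker versions there is also a built-in command that removes all stopped containers, which is slightly broader than filtering on «created»:

docker container prune -f   # -f skips the confirmation prompt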
Mar 31, 2017, 16:49

Docker tradeoffs

To make better decisions, mind the tradeoffs. For instance, here is my list for Docker.

Advantages

  • mobility: your app becomes more portable and easier to deploy
  • better version control: you can always roll back in no time
  • a single approach to app deployment
  • easy to scale
  • built-in image storage (Docker Hub, Docker Cloud)

Disadvantages

  • longer app deploy time
  • longer app compile time
  • more storage used
  • less responsive code in development
  • every process, from deployment to backup, has to change
  • easier to shoot yourself in the foot with a wrong data storage design
  • a new abstraction layer means more knowledge to acquire
  • harder to recover data in case of loss
  • the testing process includes one more step

Life hack: go to

chrome://net-internals/#hsts

and then use «Delete domain» in Chrome's hidden interface to clear the cached HSTS entry for that domain.
