81 posts tagged

docker


Here are my results after changing the base image from openjdk:latest to clojure:alpine:

registry.it-expert.com.ua/nexus/privat-manager latest 61e0537043a3 24 hours ago 260MB
registry.it-expert.com.ua/nexus/privat-manager 8b8634a576b1 3 days ago 863MB

The difference is 603 fucking megabytes with exactly the same functionality!
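The change itself is a one-line edit of the FROM instruction in the Dockerfile. A minimal sketch, assuming an uberjar at target/app.jar (the jar name and path are hypothetical):

# before: FROM openjdk:latest  (Debian-based, heavy base layers)
# after: Alpine-based Clojure image, dramatically smaller
FROM clojure:alpine
COPY target/app.jar /app.jar
CMD ["java", "-jar", "/app.jar"]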

docker

The main idea is to have a fully automated Docker database backup driven from a low-end D-Link DNS-320 NAS.

The solution design is as follows:

  1. The backup box copies the backup.sh script to the remote coreos-03 host.
  2. Then the remote host copies backup.sh into the database container.
  3. The backup box runs «docker exec itservice_db_1 backup.sh» on the coreos-03 host, which, in turn, runs mysqldump. The SQL dump is captured directly from the command output and then gzipped.
  4. Rsnapshot saves the folder with the gzipped SQL dump and rotates old backup folders as necessary.

So, we will need only

  • ssh
  • tar
  • rsnapshot

Here is my working implementation:

script backup.sh

#!/bin/bash
## env vars are already in docker container
/usr/bin/mysqldump -u$MYSQL_USER -p$MYSQL_PASSWORD $MYSQL_DATABASE

script backup-coreos-itservice.sh

#!/bin/sh
## copy backup.sh to the remote host and into the db container, run it, then gzip the dump
/ffp/bin/scp /ffp/home/root/backup.sh core@coreos-03:/home/core/itservice/backup.sh
/usr/sbin/ssh -C core@coreos-03 "docker cp /home/core/itservice/backup.sh itservice_db_1:/usr/local/bin/backup.sh"
/usr/sbin/ssh -C core@coreos-03 "docker exec itservice_db_1 /usr/local/bin/backup.sh" > latest.sql
/opt/bin/tar czf itservice-sql-dump.tar.gz latest.sql --remove-files

rsnapshot.conf

...
backup_script	/mnt/HD/HD_a2/ffp/home/root/backup-coreos-itservice.sh	coreos-03/itservice_db_1
...

crontab

0 */4 * * * rsnapshot hourly
30 3 * * *  rsnapshot daily
0  3 * * 1  rsnapshot weekly
30 2 1 * *  rsnapshot monthly

Keep in mind that you will need to generate SSH keys for your backup box and add the public key to authorized_keys on the coreos-03 host, but this is out of scope of this article.
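If you need a reminder, on the backup box it boils down to roughly this (a sketch; the exact way to install the public key on CoreOS may differ):

ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
# then add the contents of ~/.ssh/id_rsa.pub to the core user's authorized keys on coreos-03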

To disable automatic reboots after updates on CoreOS, just put the following line in /etc/coreos/update.conf:

REBOOT_STRATEGY=off
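On a machine that is already running, the change is not picked up automatically; restarting locksmithd should apply it (a sketch based on how Container Linux handles the reboot strategy):

sudo systemctl restart locksmithd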
coreos, docker
Aug 4, 2017, 17:28

Docker: clean stale containers

This will clean up stale Docker containers that are stuck in the «created» status:

docker ps -a -f "status=created" -q | xargs docker rm
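The same filter works for other statuses; for example, to also drop containers that have merely exited (the -r flag of GNU xargs skips the run when the list is empty):

docker ps -a -f "status=exited" -q | xargs -r docker rm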
docker
Mar 31, 2017, 16:49

Docker tradeoffs

In order to make better decisions, mind the tradeoffs. For instance, here is my list for Docker.

Advantages

  • mobility: your app becomes more portable and easier to deploy
  • better version control: you can always revert in no time
  • a single approach to app deployment
  • easy to scale
  • built-in image storage (Docker Hub, Docker Cloud)

Disadvantages

  • longer app deploy time
  • longer app build time
  • more storage used
  • slower feedback loop during development
  • every process has to change, from deployment to backup
  • easier to shoot yourself in the foot with a wrong data storage design
  • a new abstraction layer means more knowledge to acquire
  • it's harder to recover data in case of loss
  • the testing process gains one more step
docker

For CoreOS it's enough to run something like this. Note that the nginx-proxy container (started by the second command) must exist before the companion, since it is referenced via --volumes-from:

docker run -d \
  -v /home/core/certificates:/etc/nginx/certs:rw \
  --volumes-from nginx-proxy \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  jrcs/letsencrypt-nginx-proxy-companion

docker run -d -p 80:80 -p 443:443 \
  --name nginx-proxy \
  -v /home/core/certificates:/etc/nginx/certs:ro \
  -v /etc/nginx/vhost.d \
  -v /usr/share/nginx/html \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  -v /home/core/conf.d/external.conf:/etc/nginx/conf.d/external.conf  \
  --restart always \
  jwilder/nginx-proxy

and then don't forget to specify

LETSENCRYPT_HOST=mydomain.com
LETSENCRYPT_EMAIL=my@email.com

in the environment section of your docker-compose service.
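A minimal sketch of such a service, assuming a hypothetical service named web with image myapp (VIRTUAL_HOST is the variable jwilder/nginx-proxy itself routes on):

version: '2'
services:
  web:
    image: myapp
    environment:
      - VIRTUAL_HOST=mydomain.com
      - LETSENCRYPT_HOST=mydomain.com
      - LETSENCRYPT_EMAIL=my@email.com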

More details are here: https://hub.docker.com/r/mickaelperrin/docker-letsencrypt-nginx-proxy-companion/

I struggled with deploying web services via Ansible to a staging CoreOS host, and it looked like hell!

I received one error, then another, even with dead-simple steps like these:

- name: IT-Premium docker-compose deploy
  hosts: coreos
  tasks:
    - name: Install docker-py
      pip: name=docker-py executable=/home/core/bin/pip

    - name: Install PyYAML
      pip: name=PyYAML executable=/home/core/bin/pip

    - name: Install docker-compose
      pip: name=docker-compose executable=/home/core/bin/pip version=1.9.0

    - name: Creates it-premium directory
      file: path=/home/core/it-premium state=directory

    - name: copy docker-compose.yml
      copy: src=./docker-compose.yml dest=/home/core/it-premium/docker-compose.yml
      tags: deploy

    - name: copy sqlite
      copy: src=./sqlite dest=/home/core/it-premium/ mode=0644
      tags: deploy

    - name: docker registry login
      docker_login:
        registry: "registry.it-expert.com.ua"
        username: nexus
        password: "{{gitlab_password}}"

    - name: pull images
      docker_image:
        name: registry.it-expert.com.ua/nexus/it-premium
        state: present

    - name: launch it-premium docker-compose with 2 containers
      tags: step1
      docker_service:
        project_src: it-premium
        state: present
        build: no
      register: output

    - debug:
        var: output

You may notice the docker-compose version pinned to 1.9.0 there. That's because of the error
Error: cannot import name 'IPAMConfig'
thrown by docker_service.

And here is why https://github.com/ansible/ansible/issues/20492

This is due to your docker-compose version.
The docker-py package was renamed to docker in version 2.0 (https://github.com/docker/docker-py/releases/tag/2.0.0), and in that version docker.Client was renamed to docker.APIClient.
Docker-compose 1.10+ now requires docker instead of docker-py, and due to its name the docker package comes before docker-py in the PYTHONPATH, leading to the import error.
A workaround is to downgrade your docker-compose version to 1.9.0 until the Ansible docker_container module updates its dependencies from docker-py to docker.
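Outside of Ansible, the same workaround looks roughly like this on the target host (a sketch, reusing the pip path from the playbook above):

# make sure the renamed docker package is gone, then pin docker-compose to a docker-py based release
/home/core/bin/pip uninstall -y docker
/home/core/bin/pip install docker-py 'docker-compose==1.9.0'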

That's something like «piss on you, dirty user, because we do not care about backward compatibility».

Because when you change something, it is like deleting the old state and introducing a new one instead. And when you delete something, you can break anything that relies on that state.

How to do it instead? Just ADD something new without removing the old. Give it a new namespace, a new function name, and just use it!

ansible, docker
docker, mysql

When your site does not work properly in Docker, do the following:

  1. Check that the nginx container is running
docker ps -a
3fbec7a5431f        jwilder/nginx-proxy                                "/app/docker-entrypoi"   20 hours ago        Up 7 minutes                0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   nginx-proxy
  2. Check if it has a proper configuration
docker exec nginx-proxy nginx -t

You should receive

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
docker exec nginx-proxy nginx -T

This shows the full nginx configuration, so there is no need to bash into the nginx container and cat /etc/nginx/conf.d/default.conf. It's much faster.

  3. Check if your container and the nginx container share a network (see the sketch after this list for a quick way to check).
    If not, connect them with
docker network connect mycontainer_default nginx-proxy
  4. Check the container logs
docker logs nginx-proxy -f --tail 250

That will show the last 250 lines and continue to follow new data.
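For the network check in step 3, listing which containers are attached to the network is usually enough (mycontainer_default is the example network name from above):

docker network inspect mycontainer_default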

docker, nginx

For instance, you have several small dockerized websites on different servers in your internal network, with webserver IPs like 192.168.0.2, 192.168.0.3, 192.168.0.4, and you would like to expose them to the public via a single IP address like 95.67.123.18.

Is that possible? Sure, and you can even use the great jwilder/nginx-proxy image for that.

Just create the file /home/user/conf.d/external.conf on the public webserver and put a config like this into it:

log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';
upstream server.it-premium.com.ua {
  server 192.168.0.2:80;
}
server {
  server_name server.it-premium.com.ua;
  listen 80;
  location / {
    proxy_pass http://server.it-premium.com.ua;
  }
  access_log /var/log/nginx/access.log vhost;
}

Also launch the nginx-proxy container with an extra argument:

-v /home/user/conf.d/external.conf:/etc/nginx/conf.d/external.conf

That will mount the single config file into the nginx config folder, and nginx will read its configuration from there.
Beware: if you edit external.conf manually you'll need to restart the nginx container; Docker mounts the single file by inode, so an editor that replaces the file on save can leave the container looking at the stale copy, and nginx only re-reads its config on restart or reload anyway.
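For example, after editing the file:

docker restart nginx-proxy
docker exec nginx-proxy nginx -t   # confirm the container now sees the updated, valid config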

docker, nginx