7 posts tagged

tips

This oneliner removes all images whose name matches myname:

docker images | grep myname | awk '{print $3}' | xargs docker rmi -f
dockertips

This oneliner queries for the last 5 containers and removes them.

docker ps -a -n 5 -q | xargs docker rm

It is useful when you have dangling containers left over after failed builds.

Afterwards, you can run an image cleanup, since no stopped containers hold references to the images anymore.

docker rmi `docker images -qf dangling=true`
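
Newer Docker versions also ship a built-in command for the same cleanup:

docker image prune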

The truth is, when you copy a file or a directory with ADD, it acts exactly as COPY does. If you use ADD with a tarball archive, it extracts the archive automatically into the directory supplied as the second argument.

Example:

ADD file.tar.gz /code

is exactly the same as

COPY    file.tar.gz /code/
RUN     tar -xvzf /code/file.tar.gz

and will save you one line
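
One caveat from the Dockerfile reference: ADD with a remote URL downloads the file but does not extract it, so for example

ADD https://example.com/file.tar.gz /code/

leaves the archive in /code untouched.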

dockertips

Unlike docker-compose, kubectl sometimes provides better tooling for container access.

Let me share with you my useful findings.

Check a deployment and a pod with a oneliner.

First, label your deployment and pod, say with the label service_group=classic.
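
For example, using the deployment and pod names from the output below (kubectl label tags the existing objects; new pods only inherit labels defined in the pod template of the manifest):

kubectl label deployment dev-kayako-classic service_group=classic
kubectl label pod dev-kayako-classic-7b7d7b6777-lm2n8 service_group=classic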

Then you can do this

kubectl get pod,deployment -l=service_group=classic

and get a response similar to

NAME                                      READY     STATUS              RESTARTS   AGE
pod/dev-kayako-classic-7b7d7b6777-lm2n8   0/1       ContainerCreating   0          18s

NAME                                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/dev-kayako-classic   1         1         1            0           18s

Port forwarding to a pod or deployment

With port-forward, you can easily connect to a pod's service and debug it.

kubectl port-forward pod/podname port

or

kubectl port-forward deployment/mydeployment port

For example

kubectl port-forward sharepass-78d566f866-4dvv5 3000
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
Handling connection for 3000
Handling connection for 3000

This forwards port 3000 to localhost, so you can open http://localhost:3000 and enjoy access to your service.
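
If local port 3000 is already taken, you can map a different local port to the pod's port with the local:remote syntax:

kubectl port-forward sharepass-78d566f866-4dvv5 8080:3000

and open http://localhost:8080 instead.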

Put config files in volume with ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: patch
data:
  iconv.txt: |
    bla-bla
---
apiVersion: extensions/v1beta1
kind: Deployment
spec:
...
      containers:
        - ...
          volumeMounts:
            - name: patches
              mountPath: /opt/patches
      volumes:
        - name: patches
          configMap:
            name: patch

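Each key in the ConfigMap data becomes a file in the mounted directory, so the container will see /opt/patches/iconv.txt. A quick check (assuming your pod is named mypod):

kubectl exec mypod -- ls /opt/patches
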
Work with different clusters in different terminal tabs

Use the KUBECONFIG environment variable to specify a different cluster config file.

export KUBECONFIG=/Users/nexus/mycluster/config
kubectl get pods
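
KUBECONFIG also accepts a colon-separated list of config files if you prefer merged contexts, and you can always check which cluster a tab points at:

kubectl config current-context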

Update docker image inside a pod

For example, you'd like to update the image to v1.0.1 for the container mycontainer in deployment mydeployment. Note that the name on the left of the = is the container name from the pod spec, not the pod name.

kubectl set image deployment mydeployment mycontainer=myimage:1.0.1
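
You can then watch the rollout complete:

kubectl rollout status deployment mydeployment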

Pod status change watch

Say you have a dev-classic-bla-bla pod and you just updated its deployment. With this command, you can watch what's happening with your pod.

kubectl get pod --watch | grep classic
dev-classic-6586754cb8-kt5fz   0/1       Terminating        0          4m
dev-classic-6586754cb8-kt5fz   0/1       Terminating   0         4m
dev-classic-6586754cb8-kt5fz   0/1       Terminating   0         4m
dev-classic-85b958f486-p4vt2   0/1       Pending   0         0s
dev-classic-85b958f486-p4vt2   0/1       Pending   0         0s
dev-classic-85b958f486-p4vt2   0/1       ContainerCreating   0         0s
dev-classic-85b958f486-p4vt2   1/1       Running   0         7s

Use the “record” option for easier rollbacks

When applying a yaml, use the --record flag:

kubectl apply -f deployment.yaml --record

With this option, every time there is an update, it gets saved to the history of the deployment, which gives you the ability to roll back a change.

kubectl rollout history deployments my-deployment
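
If a release goes wrong, you can roll back to the previous revision, or to a specific one from that history (revision 2 here is just an example):

kubectl rollout undo deployment my-deployment
kubectl rollout undo deployment my-deployment --to-revision=2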

Kubectx

This tool is helpful when you have a lot of k8s clusters to manage.
https://github.com/ahmetb/kubectx

Kube PS1

This tool helps you install a nice shell prompt showing the k8s cluster name and current context.
https://github.com/jonmosco/kube-ps1

This shows the last 250 lines of the containers' logs and follows new output.

docker-compose logs -f --tail 250
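
You can also limit the output to a single service (assuming a service named web in your compose file):

docker-compose logs -f --tail 250 web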
dockertips
  1. bundle package creates a cached copy of your gems in vendor/cache, so bundler in Docker will not fetch all dependencies on every build.
bundle package
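
With the cache in place, the image build can install gems without touching the network (a minimal Dockerfile sketch; /app is an assumed workdir):

COPY vendor/cache /app/vendor/cache
RUN bundle install --local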
  2. Create a separate bundler data volume to persist the bundle between builds, and set BUNDLE_PATH to the data volume. You can include this option in the development docker-compose.yml file only and leave it out in production.
version: "2"

services:
  memcached:
    image: memcached
    networks:
      - back-tier
  redis:
    image: redis
    ports: ["6379"]
    networks:
      - back-tier
  db:
    image: mysql:5
    volumes:
      - ./sql:/docker-entrypoint-initdb.d
      - mysql:/var/lib/mysql
    networks:
      - back-tier

  sse:
    image: mprokopov/sse
    build:
      context: sse/.
    command: "bundle exec rackup --host 0.0.0.0 --port 9292"
    environment:
      - RACK_ENV=production ## docker database settings in config.yml
    ports:
      - "9292:9292"
    links:
      - redis
      - db
    depends_on:
      - db
      - redis
    networks:
      - back-tier
      - front-tier

  worker:
    image: mprokopov/itservice_web_dev
    command: "bundle exec rake environment resque:work"
    environment:
      - QUEUE=*
    links:
      - db
      - redis
    depends_on:
      - db
      - redis
    networks:
      - back-tier

  worker-schedule:
    image: mprokopov/itservice_web_dev
    command: "bundle exec rake environment resque:scheduler"
    links:
      - db
      - redis
    depends_on:
      - redis
    networks:
      - back-tier

  search:
    image: mprokopov/itservice_search
    build: ./search
    volumes:
      - search-data:/search
    depends_on:
      - db
    links:
      - db
    networks:
      - back-tier
    expose:
      - "9306"
  web:
    ports:
      - "3000:3000"
    environment:
      - LETTER_OPENER=letter_opener_web
      - RAILS_SERVE_STATIC_FILES=true
      - SLACK_NOTIFICATION=false
      - EMAIL_NOTIFICATION=false
      - SLACK_WEBHOOK_CHANNEL=#events_test
      - STREAM_API=http://localhost:9292
    depends_on:
      - db
      - redis
    links:
      - db
      - redis
      - search
    networks:
      - back-tier
      - front-tier
    volumes:
      - search-data:/search

volumes:
  search-data:
  mysql:

networks:
  back-tier:
  front-tier:
  3. Use docker-compose.override.yml for development and docker-compose.prod.yml for production builds. Create a docker-compose.yml which contains the common services configuration.
version: "2"

services:
  db:
    environment:
      - MYSQL_DATABASE=itservice_development
      - MYSQL_USER=
      - MYSQL_ROOT_PASSWORD=
      - MYSQL_PASSWORD=

  sse:
    environment:
      - MYSQL_DATABASE=itservice_development
      - MYSQL_USER=
      - MYSQL_PASSWORD=
      - MYSQL_HOST=db
      - REDIS_HOST=redis
      - RACK_ENV=production ## docker database settings in config.yml

  worker:
    environment:
      - RAILS_ENV=development

  worker-schedule:
    environment:
      - RAILS_ENV=development

  search:
    environment:
      - SPHINX_ENV=development
  web:
    image: mprokopov/itservice_web_dev
    command: bundle exec rails s -b 0.0.0.0 -p 3000
    environment:
      - RAILS_ENV=development
      - LETTER_OPENER=letter_opener_web
      - RAILS_SERVE_STATIC_FILES=true
      - SLACK_NOTIFICATION=false
      - EMAIL_NOTIFICATION=false
      - SLACK_WEBHOOK_CHANNEL=#events_test
      - STREAM_API=http://localhost:9292
      - BUNDLE_PATH=/bundle
    volumes:
      - bundle:/bundle
      - ./app:/app

volumes:
  bundle:
  4. Use docker-compose.prod.yml as docker-compose.override.yml in production, so you save keystrokes, because docker-compose picks up the override file by default.
  5. Use the nginx-proxy container in production together with the unicorn, puma or thin gem.
    Connect the nginx-proxy container to the frontend network like this:
docker network connect itservice_front-tier nginx-proxy

This enables nginx-proxy to reach services defined with the docker-compose v2 networks syntax.
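
nginx-proxy itself is usually started with the docker socket mounted so it can discover containers (the canonical invocation from the jwilder/nginx-proxy README):

docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro --name nginx-proxy jwilder/nginx-proxy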

  6. In case you're using CoreOS or systemd, you can create container backups via a custom backup service and a Timer for that service, as sketched below.
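
A minimal sketch of such a unit pair (the container name, volume and paths are hypothetical placeholders for your setup):

# /etc/systemd/system/docker-backup.service
[Unit]
Description=Backup docker container data

[Service]
Type=oneshot
# mycontainer, /data and /backups are placeholders
ExecStart=/usr/bin/docker run --rm --volumes-from mycontainer -v /backups:/backup alpine tar czf /backup/data.tar.gz /data

# /etc/systemd/system/docker-backup.timer
[Unit]
Description=Run docker-backup daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

Enable it with systemctl enable --now docker-backup.timer.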

It's a common task: you do not want to publish your image on Docker Hub, but just deploy it to a remote host.

In this case, you can use this beautiful oneliner:

docker save myimage | ssh -C user@host docker load

That transfers the image to the remote host via ssh with compression and loads it into Docker there. In case an image with the same name already exists, Docker untags the old one and gives the name to the new image.

Cool!

Anyway, it transfers the whole bunch of layers every time; using a registry instead will definitely save you bandwidth, since only changed layers are pushed.

dockertips