83 posts tagged 'docker'

What I previously did in docker-compose deployments with the jwilder/nginx-proxy container setup and environment variables like VIRTUAL_HOST is solved in Kubernetes by Ingress-nginx load balancing and SSL offloading, and, frankly speaking, it is awesome!

docker kubernetes

Compared to docker-compose, kubectl sometimes gives you better tooling for container access.

Let me share with you my useful findings.

Check a deployment and a pod with a one-liner.

First, label your deployment and pod, say with the label service_group=classic.
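For example, as a quick sketch (the names come from the output below; in practice you would usually set the label in the Deployment's pod template so that pods inherit it):

kubectl label deployment dev-kayako-classic service_group=classic
kubectl label pod dev-kayako-classic-7b7d7b6777-lm2n8 service_group=classic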

Then you can do this

kubectl get pod,deployment -l=service_group=classic

and get a response similar to this:

NAME                                      READY     STATUS              RESTARTS   AGE
pod/dev-kayako-classic-7b7d7b6777-lm2n8   0/1       ContainerCreating   0          18s

NAME                                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/dev-kayako-classic   1         1         1            0           18s

Port forwarding to a Pod or Deployment

With port-forward, you can easily connect to a pod's service and debug it.

kubectl port-forward pod/podname port

or

kubectl port-forward deployment/mydeployment port

For example:

kubectl port-forward sharepass-78d566f866-4dvv5 3000
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
Handling connection for 3000
Handling connection for 3000

This will forward port 3000 to localhost, so you can open http://localhost:3000 and enjoy access to your service.

Put config files in a volume with ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: patch
data:
  iconv.txt: |
    bla-bla
---
apiVersion: extensions/v1beta1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: app
          ...
          volumeMounts:
            - name: patches
              mountPath: /opt/patches
      volumes:
        - name: patches
          configMap:
            name: patch
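Assuming the manifest above is saved as patch.yaml (a hypothetical filename), apply it and check that the file shows up in the container:

kubectl apply -f patch.yaml
kubectl exec <pod-name> -- ls /opt/patches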

Work with different clusters in different terminal tabs

Use the KUBECONFIG environment variable to specify a different cluster config file:

export KUBECONFIG=/Users/nexus/mycluster/config
kubectl get pods

Watch pod status changes

Say you have a dev-classic-bla-bla pod and you just updated this deployment. With this command you can watch what's happening with your pod:

kubectl get pod --watch | grep classic
dev-classic-6586754cb8-kt5fz   0/1       Terminating        0          4m
dev-classic-6586754cb8-kt5fz   0/1       Terminating   0         4m
dev-classic-6586754cb8-kt5fz   0/1       Terminating   0         4m
dev-classic-85b958f486-p4vt2   0/1       Pending   0         0s
dev-classic-85b958f486-p4vt2   0/1       Pending   0         0s
dev-classic-85b958f486-p4vt2   0/1       ContainerCreating   0         0s
dev-classic-85b958f486-p4vt2   1/1       Running   0         7s

Kubectx

This tool is helpful when you have a lot of k8s clusters to manage.
https://github.com/ahmetb/kubectx
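Typical usage (the context name is illustrative):

kubectx              # list all contexts
kubectx mycluster    # switch to the mycluster context
kubectx -            # switch back to the previous context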

Kube PS1

This tool helps you install a nice prompt with the k8s cluster name and current context.
https://github.com/jonmosco/kube-ps1
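A minimal bash setup, as a sketch (the path to the cloned script is illustrative):

source /opt/kube-ps1/kube-ps1.sh
PS1='[\u@\h \W $(kube_ps1)]\$ '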

Kubernetes is the winner in docker cloud orchestration, so let's take a brief look at what it is and what kinds of problems it solves.

Kubernetes is a set of tools designed to solve the problem of deploying your lovely tailor-made application to the cloud. And it does not matter which cloud you choose (AWS, Azure, Google, or even IBM), because Kubernetes provides you a set of tools which are platform independent.

For almost every application you need to talk to a database, store some assets, and receive requests and respond with useful content to users. So basically you need 'storage' for storing files, a 'network' to receive requests and respond to users, and 'computing power with memory' to run your application. All of this is quite simple if you run your application on a single computer (node): you have local storage on an HDD, i.e. a filesystem, you have CPU and memory to run your application, and a single network interface to talk to users and to other services like an external database or API. But suddenly you decide to deploy your application to some cloud provider, users start using your application at huge scale, and then you decide to scale your application across cloud nodes.

And this is still not the moment when you need Kubernetes, because you can use cloud-provided facilities like Amazon S3 for storage, Amazon RDS for automatically scaled databases, and more Amazon EC2 instances to run your applications.

But wait, why do we need Kubernetes then? Well, every reasonably complex application has its own technological stack and its own unique Continuous Integration and Continuous Deployment processes. So if you work for a large enterprise, which in turn has a lot of such custom-made applications, one day the enterprise decides to unify the rollout and upgrade processes to reduce costs, and creates special forces responsible for deployment and maintenance. We start calling them DevOps.

And here is the moment when everything becomes more and more complicated.

These guys started dreaming about better abstractions for unifying application deployment and invented the concept of Docker. Docker is about having everything an application needs to run in one 'container' which can be stored and transferred over the network. But every nifty feature comes at a price. For the luxury of platform-independent movable containers we pay with a much larger application size and an extra build step. Both are very time- and resource-consuming.

Okay, now we have a bunch of containers, cool. How should we monitor all of them and restart them when they fail? We also run containers across different servers, and Docker does not provide a way for one part of our application in one container to connect across the network and talk to a part of the application in another container. And there is nothing for cleaning up failed docker containers and stale images beyond doing it manually, or in a semi-automated, human-controlled way.

And there was blood. A lot of different third-party solutions, and even Docker themselves, were trying to compete and provide better tools for solving these problems. And finally, more and more people started considering Kubernetes by Google as the preferred solution.

Let me briefly describe the core concepts of Kubernetes.

  • minikube is a utility to get your home-made cluster up and running as a virtual machine on your single computer.
  • kubectl is a utility to control every operation on a cluster, such as the one brought up by minikube.
  • In a Deployment you specify which docker image to download and run, and what volumes and ports this particular container will use. A Deployment spawns Pods.
  • Basically, a Pod is a running container; you can just delete a running Pod and a new one will be created automatically. In the Deployment you can specify how many identical containers you need to spawn.
  • You need to create a Service in two cases. First, when you want to provide a DNS name for your container, for example redis; this allows other containers to discover and connect to redis by name resolution. Second, when you want to make your application visible from outside your cloud, i.e. to 'publish' the application for users.
  • You should use a Persistent Volume Claim if your application has data which should survive container deletions. Usually a database container will store its data in such volumes. Otherwise, say 'Hasta la vista' to all the data in your live containers, because they are intended to be ephemeral: easily killed and spawned again when necessary.
  • Another nice concept in Kubernetes is the Secret. This is a special place and API to store your credentials and settings. You can refer to them in a Deployment when you need, for example, to supply a database password in one of the environment variables (see the sketch after this list).
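A minimal sketch of that last point, with hypothetical names (db-credentials, DB_PASSWORD):

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  password: changeme
---
# referenced from the Deployment's container spec:
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials
        key: password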

If you still intend to try deploying to a Kubernetes cluster, start playing with minikube and investigate this nice example of deploying WordPress + MySQL: https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/

docker kubernetes

Sometimes everything in minikube becomes unresponsive; that's probably due to the low memory setting, which defaults to 2 GB for a default installation.

Adjust the memory and CPU settings by running from the CLI:

minikube start --memory 8096 --cpus 2

docker kubernetes
docker exec -it gitlab /opt/gitlab/embedded/bin/registry garbage-collect /var/opt/gitlab/registry/config.yml

will run the registry cleanup procedure.
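As an assumption based on general GitLab omnibus usage (not from the original post), it may be safer to stop the registry before collecting garbage and start it again afterwards:

docker exec -it gitlab gitlab-ctl stop registry
docker exec -it gitlab /opt/gitlab/embedded/bin/registry garbage-collect /var/opt/gitlab/registry/config.yml
docker exec -it gitlab gitlab-ctl start registry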

docker

Check your image list with

docker images

and find your latest image SHA, in my case 269...
Then use this command, where your app image is registry.it-expert.com.ua/nexus/it-service:

docker images -f=reference='registry.it-expert.com.ua/nexus/it-service*:*' -f before=269 -q | xargs docker rmi

This will remove all your previous images that were created before the selected image.

You can also use

docker images -f=dangling=true

to find all dangling images.
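To remove them in one go, a standard docker idiom:

docker images -f dangling=true -q | xargs docker rmi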

docker

Imagine you have several containers and nginx-proxy in front, and the server runs CoreOS. What's the proper configuration for startup?

Here is the schema I use in production on CoreOS. We need to create a separate network for all frontend containers; in my case the network name is 'nginx-proxy'.

  1. Create the following nginx-proxy.service in /etc/systemd/system:
[Unit]
Description=nginx-proxy.service
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill nginx-proxy
ExecStartPre=-/usr/bin/docker rm nginx-proxy
ExecStartPre=-/usr/bin/docker pull jwilder/nginx-proxy
ExecStartPre=-/usr/bin/docker network create nginx-proxy
ExecStart=/usr/bin/docker run -p 80:80 -p 443:443 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro  \
  -v /home/core/certificates:/etc/nginx/certs:ro \
  -v /home/core/vhost.d:/etc/nginx/vhost.d:ro \
  -v /home/core/conf.d/external.conf:/etc/nginx/conf.d/external.conf \
  -v /usr/share/nginx/html \
  --net=nginx-proxy \
  --label com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy \
  --name nginx-proxy \
  jwilder/nginx-proxy
ExecStop=/usr/bin/docker stop nginx-proxy

[Install]
WantedBy=multi-user.target
  2. Connect the certbot container to the nginx-proxy container with the following certbot.service:
[Unit]
Description=certbot.service
After=nginx-proxy.service
Requires=nginx-proxy.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill certbot
ExecStartPre=-/usr/bin/docker rm certbot
ExecStartPre=/usr/bin/docker pull jrcs/letsencrypt-nginx-proxy-companion
ExecStart=/usr/bin/docker run \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /home/core/certificates:/etc/nginx/certs:rw \
  -v /home/core/vhost.d:/etc/nginx/vhost.d \
  --net=nginx-proxy \
  --volumes-from nginx-proxy \
  --name certbot \
  jrcs/letsencrypt-nginx-proxy-companion

[Install]
WantedBy=multi-user.target
  3. Create /etc/systemd/system/itservice.service:
[Unit]
Description=itservice
After=nginx-proxy.service
Requires=nginx-proxy.service

[Service]
TimeoutStartSec=0
Type=simple
WorkingDirectory=/home/core/itservice
ExecStart=/opt/bin/docker-compose -f /home/core/itservice/docker-compose.yml -f /home/core/itservice/docker-compose.override.yml up
ExecStop=/opt/bin/docker-compose -f /home/core/itservice/docker-compose.yml -f /home/core/itservice/docker-compose.override.yml stop

[Install]
WantedBy=multi-user.target
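Then reload systemd and enable the units so they start on boot (standard systemd commands):

sudo systemctl daemon-reload
sudo systemctl enable nginx-proxy.service certbot.service itservice.service
sudo systemctl start nginx-proxy.service certbot.service itservice.service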

It's important to have a docker-compose.yml of version 2+ and connect the frontend network like this:

version: "2"
services:
  app:
    image: myapp
    networks:
      - frontend
networks:
  frontend:
    external:
      name: nginx-proxy

In this way the container should be accessible to nginx-proxy right when you run:

docker-compose up


docker

Lifehack:

docker ps -s

will show all your containers and the disk space each one occupies.

docker

Well, this issue was my pain in the ass until I fully understood what was going on with nginx-proxy container and docker-compose v2.

When you have a docker-compose.yml with a separate frontend network, you need to connect this network to the nginx-proxy container in order for it to work. Obviously, there is no frontend network until you bring your containers up for the first time!
When you do

docker-compose up

the network will be created, but nginx-proxy will not be attached to it! So you need to shut down your app containers, run

docker network connect itservice_frontend-tier nginx-proxy

and then fire your containers up again! But wait, you also need to restart the nginx-proxy container to connect it to this new network!

As you can see, this process cannot be a single-step deployment solution on CoreOS, but now you know why, and how to fix it.
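The whole dance, recapped as a sketch (names as above):

docker-compose up -d        # the itservice_frontend-tier network gets created
docker-compose stop         # shut the app containers down
docker network connect itservice_frontend-tier nginx-proxy
docker restart nginx-proxy  # nginx-proxy must be restarted to pick up the new network
docker-compose up -d        # fire the containers up again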

Here is a docker container (https://hub.docker.com/r/schickling/mailcatcher/) you can use for testing mail sending from your lovely application.

Add this to your docker-compose.yml:

mail:
  image: schickling/mailcatcher
  ports:
     - 1080:1080

Do not forget to link this container to your app container (see the sketch after the settings), then use these SMTP settings:

SMTP:
  server: mail
  port: 1025
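A minimal sketch of the wiring in a docker-compose v2 file (the app service name is illustrative):

version: "2"
services:
  app:
    image: myapp
    links:
      - mail
  mail:
    image: schickling/mailcatcher
    ports:
      - "1080:1080"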

and voila!

Open your browser at http://host:1080 and enjoy the preview of caught emails.
