Discover Docker, CoreOS and Xen Server with Maksym Prokopov

Insights, discussion, bugs and how-tos about Docker, CoreOS, orchestration and other interesting things.
How to start all containers matching a name filter, e.g. redmine:

core@coreos04itpremumlocalbak ~ $ docker ps -a -f name=redmine
CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS               NAMES
5c4c2c13eb0b        sameersbn/redmine:3.4.2   "/sbin/entrypoint.sh…"   14 months ago       Up 6 days           80/tcp, 443/tcp     redmine_redmine_1
ec9532ae1b24        sameersbn/postgresql      "/sbin/entrypoint.sh"    14 months ago       Up 6 days           5432/tcp            redmine_postgresql_1
core@coreos04itpremumlocalbak ~ $ docker ps -a -f name=redmine -q | xargs docker start
5c4c2c13eb0b
ec9532ae1b24

Sometimes you can get an error similar to this:
Object does not exist on the server: [404] Object does not exist on the server

That means the upload of your artifacts to LFS got messed up: the files changed at some point and were committed straight into the repo.
The only way to fix that is to remove these files from the commits. That will rewrite your history, hence you have to push your branch with the --force flag to overwrite your remote repository.

So here is the hard way to fix that:

  1. Search for the commit that contains your assets. In my case it is Runtime/platforms/hpux/hpux_rebuilt.tar
git log --raw --all --full-history -- Runtime/platforms/hpux/hpux_rebuilt.tar
tree 901cb036d4dfef5c36c66d6452b3d496d0e22772
parent 19848337754164ea17afcd1eb7bd55ef512b3c34
author Maksym Prokopov  1548405977 +0700
committer Maksym Prokopov  1548405977 +0700

added script clause

Search for your file with

git cat-file -p 901cb

and drill down until you find your asset's blob sha1:
100644 blob 496ee2ca6a2f08396a4076fe43dedf3dc0da8b6d	.gitignore
040000 tree 1c93b94f6ec752528a46741a7ff94e81d46afdcf	AppServer
040000 tree b32efb40aa560188970fc4f470c697e06f7c7030	Build
100644 blob 050d454e84afc229f35144ef8f13087c58d11f8a	Jenkinsfile_AppServer_Cobol
040000 tree ceac52278585b596885410e38021bbcbb95cd7b6	Runtime
git cat-file -p ceac52
040000 tree ff05d245ea0782b43355affc5fe8237ec8619bdc	platforms
git cat-file -p ff05d245ea0782
040000 tree c9c55c219bc5cf5436cdb4d4dcf761a088677bc2	aix
040000 tree 65cc3c3899d9bfecbf95999fdaed24c2235fc887	hpux
040000 tree 13bada715fbde55b9d1e3be0aa0c82a8ed886ca0	intel_nt
040000 tree ad1b0ec603f2895a91a3241fa3ad034c97645b44	solaris
git cat-file -p 65cc3c3899d9bfecbf95999fda
040000 tree 3477a0392bd9733297c52284a85776bf2461f7d8	SDK
100644 blob ae6c4b1828104c732c36a680c5b298fb5769fbaf	cshp1_list
040000 tree 1bfe688c02bbf87836d70709318d750b89f17f6a	exec
100644 blob 808487ec8d13531604c58a632dec9373c2629647	executables_list
100644 blob f2033d65e09b598f6a19e141cae59c980f8595ae	hpux_rebuilt.tar
100644 blob 82cf5d66ffdecde3f059f9bc9914f1fc8498b5f0	non_executables_list
040000 tree 182465cb595fa768df9376f558110eff8cc88afc	sbin

Here is your blob id:

100644 blob f2033d65e09b598f6a19e141cae59c980f8595ae	hpux_rebuilt.tar
  2. Find the asset's blob_id and put it into some file.

Put f2033d65e09b598f6a19e141cae59c980f8595ae into a file named blob-ids.
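
For example, appending it from the shell:

echo f2033d65e09b598f6a19e141cae59c980f8595ae >> blob-ids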

Repeat step 1 for every file you're missing.

  3. Download the BFG utility (https://rtyley.github.io/bfg-repo-cleaner/) and execute it with the -bi flag pointing at your file with blob ids.
java -jar ~/Downloads/bfg-1.13.0.jar -bi blob-ids
  4. Revert the last commit and remove the files. You'll see:
Updating references:    100% (10/10)
...Ref update completed in 15 ms.

Commit Tree-Dirt History
------------------------

	Earliest                                              Latest
	|                                                          |
	DmDmDDmDmDmmDmDmDDDmmDmDmDmDDDmDmDmmDmDmDDDmDmDmDmDDmDDmmmDm

	D = dirty commits (file tree fixed)
	m = modified commits (commit message or parents changed)
	. = clean commits (no changes to file tree)

	                        Before     After
	--------------------------------------
	First modified commit | 39384790 | 5c5a7a09
	Last dirty commit     | 71d09db3 | 495a1329

Deleted files
-------------

	Filename                  Git id
	-----------------------------------
	15schemas.tar           | b795d231 (134 B)
	ciar_vb4_15.tar         | 2d06bbb5 (133 B)
	ciar_xsl_15.tar         | 26c79c30 (132 B)
	civt_vb4_15.tar         | 23d54549 (132 B)
	civt_xsl_15.tar         | 7201bc36 (131 B)
	mysql.tar               | e3d9d62d (131 B)
	wrunora.15.1.0.0_11.2.0 | d752af96 (133 B)

In total, 109 object ids were changed. Full details are logged here:

  5. Redo the commit.
  6. Execute git garbage collection.
git reflog expire --expire=now --all && git gc --prune=now --aggressive
  7. Push your changes with --force to the remote repo.
git push origin --force

If you want to re-add the LFS files, you can do it with:

git lfs track "filename"
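
A minimal sketch of the full sequence, assuming you re-add the tarball from the example above:

git lfs track "Runtime/platforms/hpux/hpux_rebuilt.tar"   # writes the pattern to .gitattributes
git add .gitattributes
git add Runtime/platforms/hpux/hpux_rebuilt.tar           # now stored as an LFS pointer
git commit -m "re-add hpux_rebuilt.tar via LFS"
git push origin --force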

This one-liner removes all images whose name matches myname:

docker images | grep myname | awk '{print $3}' | xargs docker rmi -f

This one-liner queries for the last 5 containers and removes them.

docker ps -a -n 5 -q | xargs docker rm

It is useful when you have dangling containers left over after failed builds.

Afterwards, you can run image cleanup, because the images are no longer held by stopped containers.

docker rmi `docker images -qf dangling=true`
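
On modern Docker versions the same cleanup is built in:

docker container prune   # remove all stopped containers
docker image prune       # remove dangling images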

Sometimes it happens that you need to update just the Docker image and not do a deployment again. Here is a one-liner to do this for the Kubernetes deployment myapp; it patches a date label in the pod template, so the pods get recreated.

printf '{"spec":{"template":{"metadata":{"labels":{"date":"%s"}}}}}' `date +%s`  | xargs -0 kubectl patch deployment myapp -p
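
On newer kubectl versions (1.15 and up) there is a built-in command with the same effect; assuming imagePullPolicy is Always, the recreated pods will pull a fresh image:

kubectl rollout restart deployment myapp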

The truth is that when you copy a file or a directory with ADD, it acts exactly as COPY does. But if you use ADD with a tarball archive, it will extract it automatically into the directory supplied as the second argument.

Example:

ADD file.tar.gz /code

is exactly the same as

COPY    file.tar.gz /code/
RUN     tar -xvzf /code/file.tar.gz -C /code

and will save you one line. (Strictly speaking, ADD also avoids leaving the archive itself in /code, which the COPY variant does.)

Jul 16, 2018, 11:53

K8s insight about Ingress

What I previously did in docker-compose deployments with a jwilder/nginx-proxy container setup and environment variables like VIRTUAL_HOST is solved in Kubernetes by Ingress-nginx load balancing and SSL offloading, and, frankly speaking, it is awesome!
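
For illustration, a minimal Ingress of that era might look like this (the hostname and the service name myapp are placeholders):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp
  annotations:
    kubernetes.io/ingress.class: nginx   # let ingress-nginx handle this resource
spec:
  rules:
  - host: myapp.example.com              # the role VIRTUAL_HOST used to play
    http:
      paths:
      - backend:
          serviceName: myapp
          servicePort: 80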

Jul 16, 2018, 11:33

Useful Kubernetes tools and tips

Unlike docker-compose, kubectl sometimes provides you with better tooling for Docker container access.

Let me share with you my useful findings.

Check a deployment and a pod with a one-liner.

First, label your deployment and pod, say with the label service_group=classic.
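
For example, a quick way to attach the label from the CLI (using the deployment name from the output below; in practice the pods inherit labels from the Deployment's pod template):

kubectl label deployment dev-kayako-classic service_group=classic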

Then you can do this

kubectl get pod,deployment -l=service_group=classic

and get a response similar to this:

NAME                                      READY     STATUS              RESTARTS   AGE
pod/dev-kayako-classic-7b7d7b6777-lm2n8   0/1       ContainerCreating   0          18s

NAME                                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/dev-kayako-classic   1         1         1            0           18s

Port forwarding to a pod or deployment

With port-forward, you can easily connect to a pod's service and debug it.

kubectl port-forward pod/podname port

or

kubectl port-forward deployment/mydeployment port

For example

kubectl port-forward sharepass-78d566f866-4dvv5 3000
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
Handling connection for 3000
Handling connection for 3000

This will forward port 3000 to localhost, so you can open the URL http://localhost:3000 and enjoy access to your service.

Put config files in volume with ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: patch
data:
  iconv.txt: |
    bla-bla
---
apiVersion: extensions/v1beta1
kind: Deployment
spec:
  ...
  template:
    spec:
      containers:
        - ...
          volumeMounts:
            - name: patches
              mountPath: /opt/patches
      volumes:
        - name: patches
          configMap:
            name: patch
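
If you'd rather not inline the file contents in yaml, the same ConfigMap can be created straight from a local file (assuming iconv.txt exists in the current directory):

kubectl create configmap patch --from-file=iconv.txt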

Work with different clusters in different terminal tabs

Use the KUBECONFIG environment variable to specify a different cluster config file:

export KUBECONFIG=/Users/nexus/mycluster/config
kubectl get pods

Update docker image inside a pod

For example, you'd like to update the image to v1.0.1 for the container mypod in the deployment mydeployment.

kubectl set image deployment mydeployment mypod=myimage:1.0.1
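
To watch the resulting rollout complete:

kubectl rollout status deployment mydeployment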

Pod status change watch

Say you have a dev-classic-bla-bla pod and you just made an update to this deployment. With this command, you can watch what's happening with your pod:

kubectl get pod --watch | grep classic
dev-classic-6586754cb8-kt5fz   0/1       Terminating        0          4m
dev-classic-6586754cb8-kt5fz   0/1       Terminating   0         4m
dev-classic-6586754cb8-kt5fz   0/1       Terminating   0         4m
dev-classic-85b958f486-p4vt2   0/1       Pending   0         0s
dev-classic-85b958f486-p4vt2   0/1       Pending   0         0s
dev-classic-85b958f486-p4vt2   0/1       ContainerCreating   0         0s
dev-classic-85b958f486-p4vt2   1/1       Running   0         7s

Use the “record” option for easier rollbacks

When applying a yaml, use the --record flag:

kubectl apply -f deployment.yaml --record

With this option, every time there is an update it gets saved to the history of those deployments, and it provides you with the ability to roll back a change.

kubectl rollout history deployments my-deployment
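
And to actually roll back, pick a revision from that history (the revision number here is just an example):

kubectl rollout undo deployments my-deployment --to-revision=2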

Kubectx

This tool is helpful when you have a lot of k8s clusters to manage.
https://github.com/ahmetb/kubectx
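
Typical usage (the context name mycluster is a placeholder):

kubectx              # list all contexts
kubectx mycluster    # switch to the mycluster context
kubectx -            # switch back to the previous context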

Kube PS1

This tool will help you install a nice prompt with the k8s cluster name and current context.
https://github.com/jonmosco/kube-ps1

Kubernetes is the winner in Docker cloud orchestration, so let's take a brief look at what it is and what kinds of problems it solves.

Kubernetes is a set of tools designed to solve the problem of deploying your lovely tailor-made application to the cloud. And it does not matter which cloud you choose, AWS, Azure, Google or even IBM, because Kubernetes provides you with a set of tools that are platform independent.

Almost every application needs to talk to a database, store some assets, receive requests and respond to users with useful content. So basically you need 'storage' for files, a 'network' to receive requests and send responses to users, and 'computing power with memory' to run your application. All of this is quite simple while you run your application on a single computer (node): you have local storage on an HDD, i.e. a filesystem, you have CPU and memory to run your application, and a single network interface to talk to users and to other services like an external database or API. But then suddenly you decide to deploy your application to some cloud provider, users start using it at a huge scale, and you decide to scale your application across cloud nodes.

And this is still not the moment when you need Kubernetes, because you can use cloud-provided facilities like Amazon S3 for storage, Amazon RDS for automatically scaled databases, and more Amazon EC2 instances to run your applications.

But wait, why do we need Kubernetes then? Well, every reasonably complex application has its own technology stack and its own unique Continuous Integration and Continuous Deployment processes. So if you work for a large enterprise, which in turn has a lot of such custom-made applications, there is one day when the enterprise decides to unify the rollout and upgrade processes to reduce costs and to create special forces responsible for deployment and maintenance. We start calling them DevOps.

And this is the moment when everything becomes more and more complicated.

These guys started dreaming about better abstractions for unifying application deployment and invented the concept of Docker. Docker is about having everything an application needs to run in one 'container' which can be stored and transferred over the network. But every nifty feature comes with its price: for the luxury of having platform-independent movable containers we pay with a much larger application size and an extra build step, both very time- and resource-consuming.

Okay, now we have a bunch of containers, cool. How should we monitor all of them and restart them when they fail? We also run containers across different servers, and Docker does not provide a way for one part of our application in one container to connect over the network and talk to a part of the application in another container. And there is nothing for cleaning up failed Docker containers and stale images beyond doing it manually, or in a semi-automated way controlled by a human.

And then there was blood: a lot of different third-party solutions, and even Docker themselves, were competing to provide better tools for solving these problems. Finally, more and more people started considering Kubernetes by Google the preferred solution.

Let me briefly describe the concepts of Kubernetes.

  • minikube is a utility to get a home-made cluster up and running as a virtual machine on a single computer.
  • kubectl is a utility to control every operation on a cluster, such as the one brought up by minikube.
  • In a Deployment you specify which Docker image to download and run, and what volumes and ports this particular container will use. A Deployment will spawn Pods (see the sketch after this list).
  • Basically a Pod is a running container; you can just delete a running Pod and a new one will be created automatically. In the Deployment you can specify how many identical containers you need to spawn.
  • You need to create a Service in two cases. First, when you want to provide a DNS name for your container, for example redis; this will allow other containers to discover redis and connect to it by name resolution. Second, when you want to make your application visible from outside your cloud, i.e. 'publish' the application for users.
  • You should use a Persistent Volume Claim if your application has data which should survive container deletions. Usually a database container will store its data in such volumes. Otherwise say 'Hasta la vista' to all the data in your live containers, because they are intended to be ephemeral, easily killed and spawned again when necessary.
  • Another nice concept in Kubernetes is the Secret. This is a special place and API to store your credentials and settings. You can refer to them in a Deployment when you need, for example, to supply a database password in one of the environment variables.
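
To make these concepts concrete, here is a minimal hypothetical sketch of a Deployment plus a Service; all names (myapp, myimage, myapp-secrets) are placeholders:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2                  # how many identical Pods to spawn
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myimage:1.0.0
        ports:
        - containerPort: 3000
        env:
        - name: DB_PASSWORD    # credential taken from a Secret
          valueFrom:
            secretKeyRef:
              name: myapp-secrets
              key: db-password
---
apiVersion: v1
kind: Service
metadata:
  name: myapp                  # other Pods can now resolve 'myapp' by DNS
spec:
  selector:
    app: myapp
  ports:
  - port: 3000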

If you still intend to try deploying to a Kubernetes cluster, start playing with minikube and investigate this nice example of deploying WordPress + MySQL: https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/

Sometimes everything becomes unresponsive; that's probably due to the low memory setting, which defaults to 2GB for a default installation.

Adjust the memory and CPU settings by running from the CLI:

minikube start --memory 8096 --cpus 2