5 posts tagged

kubernetes

Sometimes you need to redeploy just the Docker image without going through a full deployment again. Here is a one-liner that does this for the Kubernetes deployment myapp by patching a timestamp label into the pod template, which triggers a rolling restart.

printf '{"spec":{"template":{"metadata":{"labels":{"date":"%s"}}}}}' "$(date +%s)" | xargs -0 kubectl patch deployment myapp -p
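On newer kubectl versions (1.15+), the same rolling restart is available as a built-in command, so the JSON patch above is no longer necessary:

```shell
# Triggers a rolling restart of all pods in the deployment
kubectl rollout restart deployment myapp
```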
Tags: docker, hint, kubernetes

What I previously did with docker-compose deployments, using the jwilder/nginx-proxy container and environment variables like VIRTUAL_HOST, is solved in Kubernetes by ingress-nginx load balancing and SSL offloading, and, frankly speaking, it is awesome!

Tags: docker, kubernetes

Compared to docker-compose, kubectl sometimes gives you better tooling for accessing Docker containers.

Let me share with you my useful findings.

Check a deployment and a pod with a one-liner

First, label your deployment and pod, say with the label service_group=classic.

Then you can run

kubectl get pod,deployment -l=service_group=classic

and get response similar to

NAME                                      READY     STATUS              RESTARTS   AGE
pod/dev-kayako-classic-7b7d7b6777-lm2n8   0/1       ContainerCreating   0          18s

NAME                                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/dev-kayako-classic   1         1         1            0           18s
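For completeness, here is a hypothetical manifest fragment showing where that label has to live: on the Deployment's own metadata (so the deployment matches the selector) and on the pod template (so the spawned pods match too). The names and image below are made up:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-kayako-classic
  labels:
    service_group: classic        # matched by: kubectl get deployment -l service_group=classic
spec:
  selector:
    matchLabels:
      service_group: classic
  template:
    metadata:
      labels:
        service_group: classic    # inherited by every pod the deployment spawns
    spec:
      containers:
        - name: app
          image: myimage:latest
```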

Port forwarding to a pod or deployment

With port-forward, you can easily connect to a pod's service and debug it.

kubectl port-forward pod/podname port

or

kubectl port-forward deployment/mydeployment port

For example

kubectl port-forward sharepass-78d566f866-4dvv5 3000
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
Handling connection for 3000
Handling connection for 3000

This will forward port 3000 to localhost, so you can open URL http://localhost:3000 and enjoy access to your service.
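You can also map a different local port to the container port; the first number is the local port, the second is the pod's. The pod name here is reused from the example above:

```shell
# Listen on localhost:8080 and forward to port 3000 inside the pod
kubectl port-forward sharepass-78d566f866-4dvv5 8080:3000
```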

Put config files in a volume with a ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: patch
data:
  iconv.txt: |
    bla-bla
---
apiVersion: apps/v1
kind: Deployment
spec:
  ...
  template:
    spec:
      containers:
        - name: myapp
          ...
          volumeMounts:
            - name: patches
              mountPath: /opt/patches
      volumes:
        - name: patches
          configMap:
            name: patch

Work with different clusters in different terminal tabs

Use the KUBECONFIG environment variable to point kubectl at a different cluster config file:

export KUBECONFIG=/Users/nexus/mycluster/config
kubectl get pods
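KUBECONFIG also accepts a colon-separated list of files, which kubectl merges into a single view; the file paths here are just examples:

```shell
# Merge contexts from two cluster config files and list them
export KUBECONFIG=$HOME/mycluster/config:$HOME/othercluster/config
kubectl config get-contexts
```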

Update the Docker image inside a deployment

For example, you'd like to update the image to v1.0.1 for the container mycontainer in the deployment mydeployment. Note that the name before the = is the container name, not the pod name.

kubectl set image deployment mydeployment mycontainer=myimage:1.0.1
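After changing the image, you can watch the rollout complete (deployment name as above):

```shell
# Blocks until the new pods are up or the rollout fails
kubectl rollout status deployment mydeployment
```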

Watch pod status changes

Say you have dev-classic-bla-bla pod and you just did an update to this deployment. With this command, you can watch what's happening with your pod.

kubectl get pod --watch | grep classic
dev-classic-6586754cb8-kt5fz   0/1       Terminating        0          4m
dev-classic-6586754cb8-kt5fz   0/1       Terminating   0         4m
dev-classic-6586754cb8-kt5fz   0/1       Terminating   0         4m
dev-classic-85b958f486-p4vt2   0/1       Pending   0         0s
dev-classic-85b958f486-p4vt2   0/1       Pending   0         0s
dev-classic-85b958f486-p4vt2   0/1       ContainerCreating   0         0s
dev-classic-85b958f486-p4vt2   1/1       Running   0         7s

Use the “record” option for easier rollbacks

When applying a YAML file, use the --record flag:

kubectl apply -f deployment.yaml --record

With this option, every time there is an update it gets saved to the deployment's rollout history, which gives you the ability to roll back a change.

kubectl rollout history deployments my-deployment
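And when a revision turns out to be bad, you can roll back to the previous one, or to a specific revision number shown in the history output:

```shell
# Roll back to the previous revision
kubectl rollout undo deployment my-deployment

# Or roll back to a specific recorded revision
kubectl rollout undo deployment my-deployment --to-revision=2
```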

Kubectx

This tool is helpful when you manage a lot of k8s clusters.
https://github.com/ahmetb/kubectx

Kube PS1

This tool will help you install a nice prompt showing the k8s cluster name and current context.
https://github.com/jonmosco/kube-ps1

Tags: docker, kubernetes, tips

Kubernetes is the winner in Docker cloud orchestration, so let's take a brief look at what it is and what kinds of problems it solves.

Kubernetes is a set of tools designed to solve the problem of deploying your lovely tailor-made application to the cloud. And it does not matter which cloud you choose, AWS, Azure, Google, or even IBM, because Kubernetes provides you with a set of tools that are platform independent.

Almost every application needs to talk to a database, store some assets, receive requests, and respond to users with useful content. So, basically, you need 'storage' for files, a 'network' to receive requests and send responses, and 'computing power with memory' to run your application. All of this is quite simple if you run your application on a single computer (node): you have local storage on an HDD, i.e. a filesystem, you have CPU and memory to run your application, and a single network interface to talk to users and to other services like an external database or API. But then you decide to deploy your application to some cloud provider, users start using it at huge scale, and you need to scale your application across cloud nodes.

And this is still not the moment when you need Kubernetes, because you can use cloud-provided facilities like Amazon S3 for storage, Amazon RDS for automatically scaled databases, and more Amazon EC2 instances to run your applications.

But wait, why do we need Kubernetes then? Well, every reasonably complex application has its own technology stack and its own unique Continuous Integration and Continuous Deployment processes. So, if you work for a large enterprise that has a lot of such custom-made applications, one day the enterprise decides to unify the rollout and upgrade processes to reduce costs, and creates special forces responsible for deployment and maintenance. We start calling them DevOps.

And here is the moment when everything becomes more and more complicated.

These guys started dreaming about better abstractions for unifying application deployment and invented the concept of Docker. Docker is about packing everything an application needs to run into one 'container' which can be stored and transferred over the network. But every nifty feature comes with its price. For the luxury of having platform-independent, movable containers, we pay with a much larger application size and an extra build step. Both are very time- and resource-consuming.

Okay, now we have a bunch of containers, cool. How should we monitor all of them and restart them when they fail? We also run containers across different servers, and Docker alone does not give us a way for one part of our application in one container to connect over the network and talk to a part running in another container. And there is nothing better than cleaning up failed Docker containers and stale images manually, or in a semi-automated, human-controlled way.

And then there was blood. A lot of different third-party solutions, and even Docker themselves, tried to compete and provide better tools for solving these problems. In the end, more and more people came to consider Kubernetes by Google the preferred solution.

Let me briefly describe the concepts of Kubernetes.

  • minikube is a utility that gets a home-made cluster up and running as a virtual machine on your single computer.
  • kubectl is a utility to control every operation on the cluster brought up by minikube.
  • In a Deployment you specify which Docker image to download and run, and which volumes and ports that particular container will use. A Deployment spawns Pods.
  • Basically, a Pod is a running container; you can just delete a running Pod and a new one will be created automatically. In the Deployment you specify how many identical containers to spawn.
  • You need to create a Service in two cases. First, when you want to provide a DNS name for your container, for example redis; this allows other containers to discover and connect to redis by name resolution. Second, when you want to make your application visible from outside your cloud, i.e. 'publish' the application for users.
  • You should use a PersistentVolumeClaim if your application has data that should survive container deletions. Usually a database container stores its data in such volumes. Otherwise, say 'Hasta la vista' to all the data in your live containers, because they are intended to be ephemeral, easily killed and spawned again when necessary.
  • Another nice concept in Kubernetes is the Secret. This is a special place and API to store your credentials and settings. You can refer to them in a Deployment when you need, for example, to supply a database password in an environment variable.
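To make the Service bullet concrete, here is a hypothetical minimal Service that gives the DNS name redis to pods labeled app: redis, so other containers in the cluster can reach them as redis:6379:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis          # becomes the DNS name inside the cluster
spec:
  selector:
    app: redis         # routes traffic to pods carrying this label
  ports:
    - port: 6379
      targetPort: 6379
```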

If you still intend to try deploying to a Kubernetes cluster, start playing with minikube and investigate this nice example of deploying WordPress + MySQL: https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/

Tags: docker, kubernetes

Sometimes everything becomes unresponsive; that's probably due to the low memory setting, which defaults to 2 GB for a default installation.

Adjust the memory and CPU settings by running from the CLI:

minikube start --memory 8096 --cpus 2
Tags: docker, kubernetes