Jun 11, 2020, 10:08

CentOS 7 Docker install

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce.x86_64
systemctl start docker
systemctl enable docker

In case you do this as an unprivileged user, don't forget to grant permission to docker.sock:

sudo groupadd docker
sudo usermod -aG docker $USER
Truncate all Docker container logs in place (note the quoted glob, so the shell doesn't expand it before find sees it):

cd /var/lib/docker/containers && find . -type f -name '*-json.log' | xargs truncate -s0
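The same find/xargs pattern can be verified safely on a throwaway directory first (the directory and file name below are stand-ins, not real Docker state):

```shell
# Throwaway demo of the truncate pattern; the path and file name are
# stand-ins for /var/lib/docker/containers content.
dir=$(mktemp -d)
printf 'old log data\n' > "$dir/abc123-json.log"
find "$dir" -type f -name '*-json.log' | xargs truncate -s0
wc -c < "$dir/abc123-json.log"   # prints 0
```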
Run Rails with environment variables loaded from a .env file:

env $(cat .env) rails s
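A quick self-contained check of the env $(cat .env) trick, with a plain sh command standing in for rails s (note this only works for simple KEY=value lines without spaces or quotes):

```shell
# Demo of passing .env contents via env; sh -c stands in for rails s.
cd "$(mktemp -d)"
printf 'RAILS_ENV=production\nPORT=3000\n' > .env
env $(cat .env) sh -c 'echo "$RAILS_ENV on port $PORT"'   # prints: production on port 3000
```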

yaml2json helper

You need to have Ruby installed, which is always the case on macOS.

function yaml2json()
{
    ruby -ryaml -rjson -e \
         'puts JSON.pretty_generate(YAML.load(ARGF))' "$@"
}

usage:

yaml2json my.yml
Mar 25, 2020, 8:51

GitLab peculiarities

For a very long time I worked mostly with Jenkins, but since starting my new job I have to spend more time with GitLab, and I started collecting a bunch of WTFs. So here is my list:

  1. Caching. Distributed caching requires an S3 implementation. Here is a free one: https://min.io/
  2. Stages. Stages are not the same as Jenkins stages; they are effectively separate jobs with separate BUILD_IDs, so you'd better use CI_PIPELINE_ID.
  3. Declarative pipeline. Only YAML can be used, with its pros and cons, while GitHub Actions is mainstream nowadays.
  4. Files. You can't store an SSH key as a file in GitLab, only as a variable. That means you need to create the file with echo "$variable" in your script, with proper permissions.
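Point 4 above can be sketched as a small job-script fragment (SSH_PRIVATE_KEY is an assumed variable name, configured in the project's CI/CD settings):

```shell
# Hypothetical job script: rebuild a key file from a CI variable.
# The fallback value exists only so the snippet runs standalone.
: "${SSH_PRIVATE_KEY:=dummy-key-material}"
mkdir -p "$HOME/.ssh"
printf '%s\n' "$SSH_PRIVATE_KEY" > "$HOME/.ssh/id_ed25519"
chmod 600 "$HOME/.ssh/id_ed25519"   # proper permissions, or ssh refuses the key
```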

The caching behaviour seems to be a peculiarity of GitLab's nature.

The thing is that the local cache is really local to a particular runner, so if you use shared runners, each subsequent job will most probably be assigned to a different runner (node). That way you won't get the same cache unless you use a distributed cache stored on S3.

The easiest way to tackle this is to use a tag for each of your stages, so the jobs stick to a particular runner.
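A minimal sketch of that tagging, assuming a runner registered with the tag my-runner and a hypothetical .gitlab-ci.yml:

```yaml
# Hypothetical fragment: both jobs carry the same tag, so they are
# scheduled on the same runner and see the same local cache.
build:
  stage: build
  tags: [my-runner]
  script: [make build]

test:
  stage: test
  tags: [my-runner]
  script: [make test]
```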

Mar 22, 2020, 10:06

Ansible Vault quick encryption

It was convenient for me to use a zsh function for string encryption.

Add this to your .zshrc:

vault() {
	echo -n "$1" | ansible-vault encrypt_string --vault-id=myvault
}

and use it like this:

vault my-password

The output should be similar to this:

Reading plaintext input from stdin. (ctrl-d to end input)
!vault |
          $ANSIBLE_VAULT;1.1;AES256
          39383538336133613537376463373062363639343761633365666530313363343766663662336530
          6637336536383438333038623865386636383737393165340a663236336463306261386466326262
          31333664393130313734303230356364626335346336363430303036633962343536353137376665
          3464363163346433350a653230336636643562363030383363336166636365313133343563393261
          38396530616261616338626161363133323430323361623164393466333038326637
Encryption successful
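The printed block can be pasted straight into a vars file; a hypothetical group_vars entry (db_password is an assumed variable name):

```yaml
# Hypothetical group_vars/all.yml entry; the ciphertext is the block
# printed by the vault() function above (shortened here).
db_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  39383538336133613537376463373062...
```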

Install jq with:

apt-get install -y jq

or the respective yum command. Then you can inspect a container's environment variables:

docker inspect container_name | jq ".[]|.Config.Env"

Ship Docker container logs to CloudWatch:

  1. Configure a role for the EC2 instance with permissions to write to CloudWatch.
  2. Assign that role to the EC2 instance running Docker.
  3. Open CloudWatch and create a docker-logs log group.
  4. Log in to the EC2 node and create /etc/docker/daemon.json:

{
  "log-driver": "awslogs",
  "log-opts": {
    "awslogs-group": "docker-logs"
  }
}

  5. Restart Docker with

systemctl restart docker

  6. Go to the CloudWatch group and check the logs; everything should be there :)

That's it.

For the command

kubectl top node

to work, you need to deploy the Metrics Server, which is an easy task:

DOWNLOAD_URL=$(curl -Ls "https://api.github.com/repos/kubernetes-sigs/metrics-server/releases/latest" | jq -r .tarball_url)
DOWNLOAD_VERSION=$(grep -o '[^/v]*$' <<< "$DOWNLOAD_URL")
curl -Ls "$DOWNLOAD_URL" -o "metrics-server-$DOWNLOAD_VERSION.tar.gz"
mkdir "metrics-server-$DOWNLOAD_VERSION"
tar -xzf "metrics-server-$DOWNLOAD_VERSION.tar.gz" --directory "metrics-server-$DOWNLOAD_VERSION" --strip-components 1
kubectl apply -f "metrics-server-$DOWNLOAD_VERSION/deploy/1.8+/"

After a little while you can execute the command

kubectl top node

and see something similar to this
