12 posts tagged

nginx

Well, this issue was a pain in the ass until I fully understood what was going on with the nginx-proxy container and docker-compose v2.

When your docker-compose.yml defines a separate frontend network, you need to connect that network to the nginx-proxy container for things to work. Obviously, the frontend network does not exist until you bring your containers up for the first time!
When you do

docker-compose up

the network will be created, but nginx-proxy will not be attached to it! So you need to shut down your app containers, run

docker network connect itservice_frontend-tier nginx-proxy

and then fire your containers up again! But wait, you also need to restart the nginx-proxy container so it connects to this new network!

As you can see, this process cannot be a single-step deployment on CoreOS, but now you know why, and how to fix it.
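Put together, the workaround looks roughly like this (network and container names are from my setup; substitute your own):

```shell
# first run creates the frontend network, but nginx-proxy is not on it
docker-compose up -d

# stop the app containers
docker-compose stop

# attach nginx-proxy to the freshly created network
docker network connect itservice_frontend-tier nginx-proxy

# restart nginx-proxy so it picks up the new network
docker restart nginx-proxy

# bring the app containers back up
docker-compose up -d
```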

curl -H 'Host: stage.it-premium.com.ua' https://192.168.150.62

where Host is your virtual host and 192.168.150.62 is your nginx server. (You may need curl's -k flag, since the certificate won't match the bare IP.)


For CoreOS, it's enough to use something like this:

docker run -d -p 80:80 -p 443:443 \
  --name nginx-proxy \
  -v /home/core/certificates:/etc/nginx/certs:ro \
  -v /etc/nginx/vhost.d \
  -v /usr/share/nginx/html \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  -v /home/core/conf.d/external.conf:/etc/nginx/conf.d/external.conf \
  --restart always \
  jwilder/nginx-proxy

docker run -d \
  -v /home/core/certificates:/etc/nginx/certs:rw \
  --volumes-from nginx-proxy \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  jrcs/letsencrypt-nginx-proxy-companion

(nginx-proxy has to be started first, because the companion container references its volumes via --volumes-from.)

and then don't forget to specify

LETSENCRYPT_HOST=mydomain.com
LETSENCRYPT_EMAIL=my@email.com

in the environment section of your docker-compose service.
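In context, that looks something like this in docker-compose (the service name, image, and domains are just placeholders for your own):

```yaml
myapp:
  image: myapp:latest
  environment:
    VIRTUAL_HOST: mydomain.com
    LETSENCRYPT_HOST: mydomain.com
    LETSENCRYPT_EMAIL: my@email.com
```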

More details here: https://hub.docker.com/r/mickaelperrin/docker-letsencrypt-nginx-proxy-companion/

When your site does not work properly in docker, do the following:

  1. Check that the nginx container is running:
docker ps -a
3fbec7a5431f        jwilder/nginx-proxy                                "/app/docker-entrypoi"   20 hours ago        Up 7 minutes                0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   nginx-proxy
  2. Check that it has a proper configuration:
docker exec nginx-proxy nginx -t

You should receive

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

There is also

docker exec nginx-proxy nginx -T

which prints the full nginx configuration. So there is no need to bash into the nginx container and cat /etc/nginx/conf.d/default.conf; this is much faster.

  3. Check that your container and the nginx container share a network.
    If not, connect them with
docker network connect mycontainer_default nginx-proxy
  4. Check the container logs:
docker logs nginx-proxy -f --tail 250

That will show the last 250 lines and keep following new output.
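For repeated use, the checks above can be rolled into a small script (this assumes the container is named nginx-proxy):

```shell
#!/bin/sh
# quick health check for a jwilder/nginx-proxy setup
set -e

# 1. is the container running?
docker ps --filter name=nginx-proxy --format '{{.Names}}: {{.Status}}'

# 2. is the generated configuration valid?
docker exec nginx-proxy nginx -t

# 3. tail recent logs for errors
docker logs --tail 50 nginx-proxy
```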


Short answer: *httpry* is an excellent utility for HTTP debugging on Linux.

apt-get install -y httpry

root@d9fbedf5d17d:/app# httpry -i eth0
httpry version 0.1.7 -- HTTP logging and information retrieval tool
Copyright (c) 2005-2012 Jason Bittel <jason.bittel@gmail.com>
----------------------------
Hash buckets:       64
Nodes inserted:     10
Buckets in use:     10
Hash collisions:    0
Longest hash chain: 1
----------------------------
Starting capture on eth0 interface
2017-01-17 07:44:48	185.49.14.190	172.17.0.4	>	GET	testp3.pospr.waw.pl	http://testp3.pospr.waw.pl/testproxy.php	HTTP/1.1	-	-
2017-01-17 07:44:48	172.17.0.4	185.49.14.190	<	-	-	-	HTTP/1.1	503	Service Temporarily Unavailable
2017-01-17 07:45:05	95.67.19.75	172.17.0.4	>	HEAD	photo.kiev.ua	/	HTTP/1.1	-	-
2017-01-17 07:45:06	172.17.0.4	95.67.19.75	<	-	-	-	HTTP/1.1	200	OK
2017-01-17 07:45:06	95.67.19.75	172.17.0.4	>	HEAD	blog.it-premium.com.ua	/	HTTP/1.1	-	-
2017-01-17 07:45:06	172.17.0.4	172.17.0.6	>	HEAD	blog.it-premium.com.ua	/	HTTP/1.1	-	-
2017-01-17 07:45:06	172.17.0.6	172.17.0.4	<	-	-	-	HTTP/1.1	200	OK
2017-01-17 07:45:06	172.17.0.4	95.67.19.75	<	-	-	-	HTTP/1.1	200	OK
2017-01-17 07:46:35	172.17.0.4	172.17.0.2	>	GET	neovo.kiev.ua	/	HTTP/1.1	-	-
2017-01-17 07:46:35	172.17.0.2	172.17.0.4	<	-	-	-	HTTP/1.1	200	OK
2017-01-17 07:46:51	62.80.171.198	172.17.0.4	>	GET	condom.org.ua	/	HTTP/1.1	-	-
2017-01-17 07:46:51	172.17.0.4	62.80.171.198	<	-	-	-	HTTP/1.1	503	Service Temporarily Unavailable
2017-01-17 07:47:19	163.172.65.114	172.17.0.4	>	GET	photo.kiev.ua	/photo_files/75/original/5050.jpg?1458503622	HT

Source http://xmodulo.com/sniff-http-traffic-command-line-linux.html


For instance, you have several small dockerized websites on different servers in your internal network, with webserver IPs like 192.168.0.2, 192.168.0.3, 192.168.0.4, and you would like to expose your sites to the public via a single IP address like 95.67.123.18.

Is that possible? Sure, and you can even use the great jwilder/nginx-proxy image for that.

Just create the folder /home/user/conf.d on the public webserver, and put a file external.conf there with a config like this:

log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';
upstream server.it-premium.com.ua {
  server 192.168.0.2:80;
}
server {
  server_name server.it-premium.com.ua;
  listen 80;
  location / {
    proxy_pass http://server.it-premium.com.ua;
  }
  access_log /var/log/nginx/access.log vhost;
}

then launch the nginx-proxy container with an extra argument

-v /home/user/conf.d/external.conf:/etc/nginx/conf.d/external.conf

which mounts the single config file into the nginx config folder, so nginx reads the extra configuration from there.
Beware: if you edit external.conf manually, you need to restart your nginx container afterwards. Docker bind-mounts a single file by inode, so an editor that replaces the file leaves the container looking at the stale copy.
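After editing the file on the host, the restart-and-verify cycle is just:

```shell
# restart the proxy so it re-reads the bind-mounted file
docker restart nginx-proxy

# confirm the merged configuration is still valid
docker exec nginx-proxy nginx -t
```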


For me, the following configuration worked with the jwilder/nginx-proxy container.

web:
  image: 'gitlab/gitlab-ce:latest'
  hostname: 'gitlab.it-expert.com.ua'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'https://gitlab.it-expert.com.ua'
      registry_external_url 'https://registry.it-expert.com.ua'
    VIRTUAL_HOST: gitlab.it-expert.com.ua,registry.it-expert.com.ua
    VIRTUAL_PORT: 443
    VIRTUAL_PROTO: https
  volumes:
    - './data/config:/etc/gitlab'
    - './data/logs:/var/log/gitlab'
    - './data/data:/var/opt/gitlab'

The tricky part was to figure out how the containers are connected and which one should handle SSL.

For this configuration you should supply SSL certificates to both the nginx-proxy and gitlab-ce containers, because the communication between them also uses SSL. For gitlab-ce, use the ./data/config/ssl folder.

I've encountered an issue with dropped SSE connections when I deployed an SSE-based sinatra app with SSL certificates behind the nginx-proxy docker container.

First, create a vhost.d directory on the host and mount it to /etc/nginx/vhost.d in the nginx-proxy container as a volume. Then create a file named after your VIRTUAL_HOST; it will be included on nginx startup. In my case that was the SSE host stream.it-premium.com.ua.
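Mounting that directory looks roughly like this (host path from my setup; the other flags are the usual nginx-proxy invocation):

```shell
docker run -d -p 80:80 -p 443:443 \
  --name nginx-proxy \
  -v /home/nexus/vhost.d:/etc/nginx/vhost.d:ro \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy
```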

My /home/nexus/vhost.d/stream.it-premium.com.ua file is the following:

proxy_set_header Connection '';
proxy_http_version 1.1;
chunked_transfer_encoding off;
proxy_buffering off;

But it still had issues with dropped connections.
Finally, I updated the app code with the following headers:

response['Cache-control'] = 'no-cache'
response['X-Accel-Buffering'] = 'no'

And everything ran smoothly after the deploy.

Sep 16, 2016, 12:00

How to debug jwilder/nginx-proxy

It's common to use nginx-proxy by jwilder: it supports SSL in a clever way and offers handy per-container configuration via environment variables, but sometimes issues happen. How can you debug it?

First, check container logs with

docker logs nginx-proxy

then run nginx configuration test with

docker exec nginx-proxy nginx -t

The nginx-proxy container should be up and running for this to work.

If there are no issues, you should receive output like this:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Otherwise, the number of the line with the issue will be reported.

Then you should verify the generated configuration; the simplest way is to run the following command:

docker exec nginx-proxy cat /etc/nginx/conf.d/default.conf

I tackled some weird issues when:

  1. I used docker-compose version 2, and nginx-proxy did not have access to the frontend network. In this case you should run
docker network ls

to find out your frontend network name.
NETWORK ID     NAME                  DRIVER
7fcacd901d8b   bridge                bridge
4c795ffb838a   itservice_front-tier  bridge
ac4c6aeaf804   itservice_back-tier   bridge
55d3d40d2390   none                  null
d0c498886500   host                  host

then

docker network connect itservice_front-tier nginx-proxy

and the container becomes accessible.

  2. A container exposes several ports, like the odoo container, and none of them is 80 or 443. In that case you should point nginx-proxy at the right port with VIRTUAL_PORT. In my case the generated default.conf of nginx showed the docker container as
server 172.17.0.5 down;

i.e. visible, but down. After specifying VIRTUAL_PORT everything was resolved.
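For example, odoo listens on 8069 by default, so a sketch of the fix looks like this (image name and domain are placeholders):

```shell
# tell nginx-proxy explicitly which of the exposed ports to proxy to
docker run -d \
  -e VIRTUAL_HOST=odoo.example.com \
  -e VIRTUAL_PORT=8069 \
  odoo
```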


You just need to add the container to the same network as nginx-proxy, roughly like this:

docker network connect mynet_default nginx-proxy

and everything works.

Alternatively, declare the network as external (bridge) in docker-compose.
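The docker-compose variant looks roughly like this (the network name is a placeholder); it tells compose to join an existing bridge network instead of creating its own:

```yaml
networks:
  default:
    external:
      name: mynet_default
```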
