Use J. Wilder's nginx-proxy in multiple docker-compose projects
There is an awesome project for Docker if you want to run e.g. multiple webserver containers on the same ports on one host machine, say Apache or Nginx on port 80: jwilder/nginx-proxy.
Nginx-Proxy for Docker
You have to expose port 80 in the Dockerfile as usual, but you don't explicitly map the port in your docker-compose.yml or when using "docker run …". Instead, you let nginx-proxy do the heavy lifting and forward the requests to the right container. To make that work, you add an environment variable for the proxy:
```yaml
environment:
  VIRTUAL_HOST: myapp.dev.local
```
so that it knows which request to forward to which container.
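The same mechanism also works outside of docker-compose with plain "docker run". Here is a minimal sketch, assuming the docker CLI is available and the shared nginx-proxy network already exists; the image (nginx:alpine) and the host name are just placeholders:

```shell
# Hypothetical example: register a container with nginx-proxy via
# the VIRTUAL_HOST environment variable. nginx-proxy notices the new
# container through the mounted docker socket and starts routing
# requests for this host name to it.
VHOST=myapp.dev.local

# "|| true" keeps the sketch harmless on machines without a docker daemon
docker run -d --name myapp \
  -e VIRTUAL_HOST="$VHOST" \
  --network=nginx-proxy \
  nginx:alpine 2>/dev/null || true

echo "registered virtual host: $VHOST"
```

No ports are published here either; the proxy is the only container that binds port 80 on the host.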
If you want to start multiple docker-compose.yml files, you can’t just add the nginx-proxy container to all the docker-compose.yml files though. If you only had one docker-compose project with e.g. multiple webservers on port 80, you could just add one proxy container to your YAML:
```yaml
nginx-proxy:
  image: jwilder/nginx-proxy
  container_name: nginx-proxy
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
```
The Problem
But if you have multiple projects, this approach leads to conflicts, since there can only be one container with any given name; and you only want one nginx-proxy across all projects after all! Unfortunately, Docker (Compose) does not (yet?) allow reusing an existing container and throws an error if you try to start the same container a second time.
If you want to share the proxy container for different projects, you should also use an external network in your docker-compose.yml files like so (see github.com/jwilder/nginx-proxy/issues/552):
```yaml
networks:
  default:
    external:
      name: nginx-proxy
```
Be aware that if you do this, you have to create the network manually before you run "docker-compose up -d":
docker network create nginx-proxy
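If you run that command twice, docker complains that the network already exists. A small sketch of an idempotent variant, assuming only the docker CLI is on the PATH (`ensure_proxy_network` is a made-up helper name, not part of nginx-proxy):

```shell
# Idempotent network creation as a small helper function.
# "docker network inspect" exits non-zero when the network is missing,
# so it doubles as the existence check.
ensure_proxy_network() {
  docker network inspect nginx-proxy >/dev/null 2>&1 \
    || docker network create nginx-proxy
}
```

You could call `ensure_proxy_network` at the top of the run-proxy.sh script shown further down instead of grepping the output of `docker network ls`.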
The Solution
A solution for using the proxy across projects is to check for the network and the nginx-proxy container before each call to "docker-compose up -d". One way to do this is with a Makefile: in your "make start" or "make up" targets, you call a shell script which does those checks for you:
```makefile
start:
	./config/run-proxy.sh
	docker-compose start

up:
	./config/run-proxy.sh
	docker-compose up -d
```
This way, the script would create the required network and/or the proxy container if either of them doesn’t exist yet. So all the running projects / containers can share the global proxy container in the global network.
The Details
So, here is an example docker-compose.yml and also an example bash script (run-proxy.sh):
```bash
#!/bin/bash
##########################################################################
# script to check if the jwilder proxy container is already running
# and if the nginx-proxy network exists
# should be called before "docker-compose up -d"
##########################################################################
if [ ! "$(docker network ls | grep nginx-proxy)" ]; then
  echo "Creating nginx-proxy network ..."
  docker network create nginx-proxy
else
  echo "nginx-proxy network exists."
fi

if [ ! "$(docker ps | grep nginx-proxy)" ]; then
  if [ "$(docker ps -aq -f name=nginx-proxy)" ]; then
    # cleanup: remove the stopped leftover container
    echo "Cleaning Nginx Proxy ..."
    docker rm nginx-proxy
  fi
  # run the proxy in our global network shared by different projects
  echo "Running Nginx Proxy in global nginx-proxy network ..."
  docker run -d --name nginx-proxy -p 80:80 --network=nginx-proxy \
    -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
else
  echo "Nginx Proxy already running."
fi
```
And, for reference – an example docker-compose.yml:
```yaml
version: '2'
services:
  shopware:
    image: docker.myregistry.de/docker/php7-apache/image
    container_name: appswdemo
    environment:
      VIRTUAL_HOST: shopware.dev.local
      VIRTUAL_PORT: 80
      DB_HOST: db
      SHOPWARE_VERSION: 5.3
    volumes:
      - ./config/config.php:/var/www/html/config.php
      - ./src/pluginslocal:/var/www/html/engine/Shopware/Plugins/Local
      - ./src/plugins:/var/www/html/custom/plugins
      - ./src/customtheme:/var/www/html/themes/customtheme
    links:
      - db

  # data only container for persistence
  dbdata:
    container_name: dbdataswdemo
    image: mysql:5.6
    entrypoint: /bin/bash

  db:
    image: mysql:5.6
    container_name: dbswdemo
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: shopware
      MYSQL_USER: shopware
      MYSQL_PASSWORD: shopware
      TERM: xterm
    volumes_from:
      - dbdata

  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    environment:
      VIRTUAL_HOST: shopwaredb.dev.local
      VIRTUAL_PORT: 8080
      PMA_ARBITRARY: 1
      MYSQL_USER: shopware
      MYSQL_PASSWORD: shopware
      MYSQL_ROOT_PASSWORD: root
    links:
      - "db:db"

networks:
  default:
    external:
      name: nginx-proxy
```
As you can see, the web container ("shopware" in this example), which runs Apache and PHP 7, doesn't map any explicit ports. It only tells the proxy its URL and "virtual port"; there is no "ports:" section in the YML file. The same goes for the "phpmyadmin" container.
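You can check the routing by hand by talking to the proxy directly and setting the Host header yourself. A small sketch, assuming curl is installed and the proxy is listening on port 80 of the local machine (`probe` is a hypothetical helper; with nothing listening, curl reports status 000):

```shell
# Probe nginx-proxy on port 80 and select the backend purely via the
# Host header, without any /etc/hosts entries for the .dev.local names.
probe() {
  # prints the HTTP status code, or 000 if nothing answers
  curl -s -o /dev/null -w "%{http_code}" -H "Host: $1" "http://127.0.0.1/" || true
  echo
}

probe shopware.dev.local      # should be routed to the "shopware" container
probe shopwaredb.dev.local    # should be routed to the "phpmyadmin" container
```

This also illustrates why the backends need no published ports: only the proxy is reachable from the host, and it dispatches on the requested host name.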
And finally, the relevant parts of the Makefile:
```makefile
ARGS = $(filter-out $@,$(MAKECMDGOALS))
MAKEFLAGS += --silent

start:
	./run-proxy.sh
	docker-compose start

up:
	./run-proxy.sh
	docker-compose up -d

#############################
# Argument fix workaround
#############################
%:
	@:
```
nginx-proxy will now forward all requests for shopware.dev.local to the PHP / Apache container on port 80 and all requests for shopwaredb.dev.local to the phpMyAdmin container on port 8080, and you could start even more containers on ports 80 and 8080 without any port conflicts on your host!