I'm not sure if this is the "right place" for this question, but I'll ask here since I am definitely doing this on Ubuntu Server 20.04.
I have a number of Docker containers that are started with Docker Compose. For most of these, multiple containers (services) are started from a single `docker-compose.yml` file. Some, but not all, of these "stacks" of containers ("stacks" in quotes because I'm technically not using Docker Swarm) include services that depend on NFS shares mounted on the host. On my host, I mount the shares via `/etc/fstab` like so:
```
192.168.1.51:/volume2/media     /media/plex      nfs _netdev,nofail,nfsvers=3,proto=tcp,rsize=65536,wsize=65536 0 0
192.168.1.51:/volume2/nextcloud /media/nextcloud nfs _netdev,nofail,nfsvers=3,proto=tcp 0 0
```
In my `docker-compose.yml` for Nextcloud, for example, I mount `/media/nextcloud` into the container. The crux of the issue is that it has been difficult to ensure the mounts are ready and available before the Docker daemon starts the containers that depend on them.
There are a few solutions I've attempted or considered; I'll walk through each of them.
1. Put a direct dependency on the NFS mounts inside the systemd service unit for the Docker daemon. This involved creating a file named `/etc/systemd/system/docker.service.d/mounts.conf` with the following contents:

   ```
   [Unit]
   RequiresMountsFor=/media/nextcloud
   ```

   I haven't had an opportunity to test this yet, but I'm hoping it will hold off starting the Docker daemon until this specific mount is established. I'm not sure it will actually wait, though, since I'm using `nofail` in `/etc/fstab`. I added `nofail` in an attempt to get `_netdev` working; when my NAS booted up after my Ubuntu server, the mounts were not getting set up and I'd have to run `sudo mount -a` by hand (but this was before I added `nofail`).

2. I created a custom systemd template unit for individual compose files (shown further down), but the issue with this is that if a single service in the compose "stack" goes down, `docker-compose up` will not exit, I believe. Basically, there's a conflict between Docker's restart policies for containers and the restart policies enforced by systemd units. I feel like this issue alone makes the approach complicated enough that I wanted to avoid it, but it "feels" like the right path, assuming the restart policy problem can be solved.

3. This solution I did not try, but it was an idea shared with me: similar to the point above, but without Docker Compose. Instead, set up one systemd service unit per individual Docker container and use `docker run --rm`. I was against this idea because I cannot give up the ability to keep container setup and configuration in YAML files. (A rough sketch of what such a unit might look like is shown after this list.)

4. Another one I tried a long time ago, but unfortunately have since forgotten the details of, was to use the NFS volume driver in Docker to mount the NFS share directly into a Docker volume. However, this had issues when doing an `up`/`down`, I think; the volumes weren't recreated as expected somehow, and I ended up scrapping the idea. I'm willing to try this again if there's a better way (a compose snippet showing the kind of volume definition I mean is also included after this list).
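Regarding option 3, here is a rough, untested sketch of the kind of per-container unit I think was being suggested; the unit name, image, and the mount path inside the container are hypothetical placeholders, not something I actually run:

```
# /etc/systemd/system/docker-nextcloud.service  (hypothetical example, untested)
[Unit]
Description=Nextcloud container
After=docker.service
Requires=docker.service
RequiresMountsFor=/media/nextcloud

[Service]
Restart=always
RestartSec=30
# --rm removes the container when it exits; a fixed --name lets ExecStop address it
ExecStart=/usr/bin/docker run --rm --name nextcloud \
    -v /media/nextcloud:/var/www/html/data \
    nextcloud:latest
ExecStop=/usr/bin/docker stop nextcloud

[Install]
WantedBy=multi-user.target
```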
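And for option 4, the kind of volume definition I mean is a top-level compose volume that uses the `local` driver's NFS support, roughly like this (a sketch, not my exact old configuration; the container-side path is a placeholder, the server address and export match my fstab entries):

```yaml
# docker-compose.yml excerpt (sketch)
services:
  nextcloud:
    image: nextcloud
    volumes:
      - nextcloud_data:/var/www/html/data

volumes:
  nextcloud_data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.51,nfsvers=3,proto=tcp,rw"
      device: ":/volume2/nextcloud"
```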
Managing the dependency at startup is not the only issue, though. What if these NFS mounts are lost while the containers are already running? Ideally, I do not want the containers to keep running in that case, because not all of the services inside them are aware that they are writing to an unreliable filesystem/mount. I'm not sure whether the OS propagates mount failures up to the container in a way that the service process has to handle, or whether it would be better to shut down the containers in response to the NFS mount becoming unavailable (say, if I power down my NAS, which is the source of the NFS shares).
Regarding point #2 above, the systemd template unit, here's what I was using, for those who are curious. I came up with it by hand, so I'm not sure if it's the "right way".

`/etc/systemd/system/docker-compose@.service`:
```
[Unit]
Description=Docker Compose Service for %i
After=docker.service
Requires=docker.service

[Service]
Type=exec
Restart=always
RestartSec=30
User=myuser
Group=myuser
WorkingDirectory=/home/myuser/docker_services/%i
Environment="DOCKER_UID=1000" "DOCKER_GID=1000"
ExecStartPre=/usr/local/bin/docker-compose pull
ExecStart=/usr/local/bin/docker-compose up
ExecStop=/usr/local/bin/docker-compose down

[Install]
WantedBy=multi-user.target
```
Note: above, the path `/home/myuser/docker_services/nextcloud` would have a `docker-compose.yml` inside it.
`/etc/systemd/system/docker-compose@nextcloud.service.d/mounts.conf`:

```
[Unit]
RequiresMountsFor=/media/nextcloud
```
The command I use to start it:

```
sudo systemctl start docker-compose@nextcloud.service
```
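For completeness, the usual companion commands would be a `daemon-reload` after creating or editing the unit and drop-in files, plus an `enable` if the instance should also start at boot:

```
sudo systemctl daemon-reload
sudo systemctl enable docker-compose@nextcloud.service
```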
There are a lot of things I'm not sure about:

- Why should I set `ExecStop` to `docker-compose down` if `docker-compose up` is a blocking operation that responds to signals (SIGTERM, I believe)? I'm just not sure how systemd manages long-running foreground processes like this. Should I use detached mode (`docker-compose up -d`) instead?
- I could have used `up -d`/`start`/`stop` instead of `pull`/`up`/`down`, but I really liked the idea of pulling the latest image each time I start the services or reboot the machine. Then again, that might be too much work for a systemd service. I did not use this solution for long enough to know which approach is best. (A sketch of the `up -d` variant I have in mind is below.)
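For reference, the `up -d` variant would presumably look something like the following, swapping the Exec lines and the service type in the template above (a sketch, untested):

```
# docker-compose@.service variant using detached mode (sketch, untested)
[Unit]
Description=Docker Compose Service for %i
After=docker.service
Requires=docker.service

[Service]
# `up -d` returns once the containers are started, so the unit tracks state
# with RemainAfterExit instead of a long-running foreground process
Type=oneshot
RemainAfterExit=yes
User=myuser
Group=myuser
WorkingDirectory=/home/myuser/docker_services/%i
ExecStartPre=/usr/local/bin/docker-compose pull
ExecStart=/usr/local/bin/docker-compose up -d
ExecStop=/usr/local/bin/docker-compose stop

[Install]
WantedBy=multi-user.target
```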
So after all of this, if you're still with me, my question is: How should I manage dependencies between NFS mounts on the host and Docker containers that depend on them? I'm hoping for adjustments to solutions I've already tried, or completely new ideas.