I've run into an issue deploying my app with a CI/CD pipeline to a Docker Swarm cluster.
Deploys keep failing with "no space left on device", which is odd: my images are all under 500 MB, and there isn't much data on the server to begin with.
So I start investigating.
sudo du -a -h / | sort -r -h | head -n 5
6G   /var/lib/docker/overlay2/98f9e5f2c28a7ee7972cadfeaa069210238c06b5f806c2f5e039da9d57778817/merged/etc
5G   /var/lib/docker/overlay2/ec1a3324f4cb66327ff13907af28b101ab15d1a0a27a04f0adedf50017f1612e/merged/etc
2G   /var/lib/docker/overlay2/7fe5364228810e035090c86448b5327150f7372c9d2216b8ab4f8c626e679ba0/merged/etc
1G   /var/lib/docker/overlay2/5f80f0b1a72b83553c9089a54226c260b2e695dbba69b9e06ecc18fc18e3d107/merged/etc
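To figure out which containers those overlay2 hashes actually belong to, something like the following should work (a sketch, assuming the overlay2 storage driver and the default /var/lib/docker data root):

```shell
# Print each running container's name next to its overlay2 "merged" directory,
# so the du output above can be matched back to a container.
# Guarded so the script is a no-op on hosts without docker installed.
if command -v docker >/dev/null 2>&1; then
  docker ps -q | xargs -r docker inspect \
    --format '{{.Name}}: {{.GraphDriver.Data.MergedDir}}'
fi
```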
And I see that the Docker overlay2 folders are taking up huge amounts of space.
So I clean them up with docker system prune -a -f --volumes.
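Before resorting to a full prune, docker system df gives a breakdown of what is actually reclaimable (a sketch; the -v flag adds per-image/per-container/per-volume detail):

```shell
# Show how much space images, containers, local volumes and build cache
# are using, and how much of that Docker considers reclaimable.
# Guarded so the script is a no-op on hosts without docker installed.
if command -v docker >/dev/null 2>&1; then
  docker system df
  docker system df -v   # verbose: per-image / per-container / per-volume
fi
```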
But I am wondering why this happens.
I suspect that when a new instance of my service is deployed, the volumes are attached to the new container while the old container keeps writing to its own filesystem.
What actually happens to volumes when you deploy a new Docker image on a Docker Swarm cluster? Does it disconnect the volume mapping from the old container and reconnect it to the new one, leaving the old instance writing to its own container filesystem?
What steps should I put in place to avoid this?
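For now, the only stopgap I can think of is scheduling a periodic prune of unused images on each node (a sketch; the schedule and the until filter value are placeholders to tune to the deploy cadence), but I'd rather understand the root cause:

```shell
# Stopgap: remove images unused by any container and older than 24h.
# The filter value is a placeholder; adjust it to how often you deploy.
# Guarded so the script is a no-op on hosts without docker installed.
if command -v docker >/dev/null 2>&1; then
  docker image prune -a -f --filter "until=24h"
fi

# Example crontab entry to run this nightly (add via `crontab -e`):
# 0 3 * * * /usr/bin/docker image prune -a -f --filter "until=24h"
```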
Example deploy-stack.yml
version: "3.9"
services:
  myApp:
    image: myRepo/myApp:latest
    depends_on:
      - db
    volumes:
      - /var/data/uploads:/app/uploads
      - /var/data/logs:/app/logs
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        order: start-first
        failure_action: rollback
        monitor: 30s
      restart_policy:
        condition: any
    ports:
      - "80:80"
 
  db:
    image: "postgres:15beta3-alpine"
    container_name: db_pg
    environment:
      POSTGRES_PASSWORD: XXXXXXXXXXXX
      PGDATA: /var/lib/postgresql/data
    volumes:
      - /var/data/db_pg:/var/lib/postgresql/data
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        failure_action: rollback
        monitor: 30s
      restart_policy:
        condition: any
  seq:
    image: datalust/seq:latest
    environment:
      ACCEPT_EULA: "Y"
      SEQ_FIRSTRUN_ADMINPASSWORDHASH: XXXXXXXXXXXXXXX
    ports:
      - 8888:80
    volumes:
      - /var/data/seq:/data
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        failure_action: rollback
        monitor: 30s
      restart_policy:
        condition: any
networks:
  default:
    external: true
    name: app-network
Is the myApp.deploy.update_config.order: start-first causing this?