Sorry if this is a duplicate question, but I've been searching for a while, and can't find a solution for my use case. Tried AI bots... that was just a huge waste of time and energy.
I have an Ubuntu server that I've split into 3 VLANs using netplan: vlan2, vlan20 and vlan30. There is no functional "default network". DHCP on the router assigns an IP to each of those VLAN interfaces, so the machine has these IPs assigned:
- 192.168.2.101
- 192.168.20.2
- 192.168.30.2
Now I want to run some Docker containers, each on its own specific VLAN, but I just can't find a way to configure this properly. I would like to achieve the following:
- each container should run on a single VLAN interface and be published ONLY on that interface, not on all of them
- I don't want new IPs assigned for each Docker container. I know this is "normal", but I honestly don't see the point. If I run 20 containers, who can maintain that? Hidden IPs assigned somewhere else, completely invisible to the router. It's not a question of whether I will get an IP conflict, just a matter of when and how many
- I want to do all the config with docker compose, if possible. I'm not a networking expert, and I'll eventually forget any manual commands
- I can compromise on this, but ideally I don't want to hardcode any machine IPs in docker compose. I'd prefer the machine (all 3 VLANs) to get its IPs from DHCP
So, for example, let's say I want to run ollama and ollama-webui. I want ollama deployed on 192.168.2.101:7869 and ollama-webui on 192.168.2.101:8080. I don't want them on different IPs, and I don't want them on IPs different from the host's IP.
What I have tried so far:
host network mode - the problem here is that this uses the host's "default" network interface, and my "default" interface is basically a no-op. I can't find a way to pick a VLAN in host network mode
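For reference, this is roughly what my host-mode attempt looked like (a minimal sketch; ollama is just a placeholder service here):

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    network_mode: host   # container shares the host's network stack; listens on ALL host interfaces
```

With this, whatever the app binds to 0.0.0.0 is reachable on every host interface, and there's no compose option under network_mode: host to restrict it to a single VLAN interface.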
specifying/hardcoding the IP in the ports binding - e.g. 192.168.2.101:8080:8080. This works a little better, but it's still not what I want. The container is still accessible from all VLANs; I can still reach it through e.g. 192.168.20.2:8080. It does throw a server error on any other IP, but I think that's just some internal error in ollama-webui. From the networking perspective, there is nothing binding it to vlan2. And if I later want to run another container on 192.168.20.2:8080, I can't - I don't want a container to permanently burn up the port for all IPs
macvlan/ipvlan - honestly, I don't know the difference between these, but in my use case they behave the same. They do eliminate the problem of leaking access to other VLANs. However, each container now gets a new IP assigned (not by the router), which is not what I want. I want all containers running on the same VLAN to use the same IP (the same as the host machine). I have too many devices; I really can't manage "hidden" static IPs. I could live with internal IPs if they were only internal - I would need to set up a reverse proxy or something that binds those internal IPs to the external/host IP. That seems complex, I don't know how to do it properly, and I was hoping for a simpler solution
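For completeness, the macvlan variant I tried looked roughly like this (a sketch; the vlan20 subnet/gateway and the static address are examples - that ipv4_address is exactly the kind of "hidden" IP I'm trying to avoid):

```yaml
networks:
  vlan20_macvlan:
    driver: macvlan
    driver_opts:
      parent: vlan20          # the host's VLAN interface
    ipam:
      config:
        - subnet: 192.168.20.0/24
          gateway: 192.168.20.1

services:
  some-service:
    image: ollama/ollama:latest
    networks:
      vlan20_macvlan:
        ipv4_address: 192.168.20.50   # each container needs its own address on the VLAN
```

If I leave out ipv4_address, Docker's IPAM just picks an address from the subnet on its own - either way the router's DHCP server never hears about it.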
I'm pretty much stuck; neither of these solutions does what I want. I could potentially compromise on hardcoding the IPs in docker compose, but that is the only thing I'm willing to compromise on. The bottom line is the same: I want multiple containers running on the same VLAN to share one IP, the same as the host machine's IP on that VLAN. Basically, I want to pretend that I have 3 PCs, each connected to a different access port (and VLAN), and that all containers on the same "PC" use the same host network interface.
I'd appreciate any help; I'm in way over my head with VLANs. Naively, once I got out of the netplan mess, I was hoping it would be smooth sailing from there on...
Here is the docker compose that I'm playing around with, in case it helps. I can also post the netplan config if it's needed.
version: '3.8'

services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - 192.168.2.101:7869:11434
    volumes:
      - ./ollama/ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    tty: true
    restart: always
    environment:
      - OLLAMA_KEEP_ALIVE=24h
      - OLLAMA_HOST=0.0.0.0
    networks:
      - vlan2_ipvlan

  ollama-webui:
    image: ghcr.io/open-webui/open-webui:main
    volumes:
      - ./ollama/ollama-webui:/app/backend/data
    container_name: ollama-webui
    depends_on:
      - ollama
    ports:
      - 192.168.2.101:8080:8080
    environment:
      # container-to-container traffic goes to ollama's internal port 11434,
      # not the host-published 7869
      - OLLAMA_BASE_URLS=http://ollama:11434
      - ENV=dev
      - WEBUI_AUTH=False
      - WEBUI_NAME=Local AI
      - WEBUI_SECRET_KEY=some_key
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped
    networks:
      - vlan2_ipvlan

networks:
  vlan2_ipvlan:
    driver: ipvlan
    driver_opts:
      parent: vlan2
    ipam:
      config:
        - subnet: 192.168.2.0/24
          gateway: 192.168.2.1