Routing Specific Docker Containers Through WireGuard VPN with systemd-networkd

2020-04-08

I heard that dramatic article images heavy with meaning are a meme, so here you have a picture of a subway tunnel because VPNs are network tunnels.

I recently reorganized my self-hosted stuff to use Docker. While Docker doesn’t really fit my philosophy, the broad availability and low maintenance of images for pretty much all software convinced me to switch, and so far I’m happy: it’s significantly less work than before, I can check the Docker Compose files into version control, and backups are easy with everything inside Docker volumes.

The Problem

Anyway – here is the scenario I want to talk about: you have one or more Docker containers and you want to route all their traffic through a WireGuard VPN, but not the other containers’ or the host’s traffic. You have root access to the host machine.

The Way to the Solution

wg-quick

The most straightforward way of using WireGuard is wg-quick. You just need a configuration file, about 10 lines long (take a look at an OpenVPN config file and you will appreciate this shortness), run sudo wg-quick up {config file}, and your VPN is up and running. These files also work with the Android/iOS/macOS/Windows apps.

For example, the VPN provider Mullvad, which I can recommend 100%, lets you download wg-quick files for easy setup.
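For reference, such a wg-quick config file looks roughly like this (the braces are placeholders; the exact fields vary by provider):

[Interface]
PrivateKey = {your private key}
Address = {your address inside the VPN}
DNS = {the provider's DNS server}

[Peer]
PublicKey = {the server's public key}
AllowedIPs = 0.0.0.0/0,::/0
Endpoint = {server address}:51820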

wg-quick is easy, but it routes all traffic through the VPN, which is what you want most of the time, but not in our use case. Watch out: the allowed IP range does not help here the way you might think. It lets you tell WireGuard that only traffic to specific IPs should be routed through the VPN, which makes sense for something like an employee VPN: only traffic to the company’s network should go through it. We, however, need to filter by source. wg-quick can’t do that.

Using the Tools directly

After quite a lot of searching I finally found a great blog article detailing a solution to our exact problem using the wg and ip tools directly (and one using a WireGuard client inside another container). This article is mostly based on that one.

The gist of that method is: you set up a WireGuard interface manually, the same way wg-quick does internally, but without any routing to it yet. Then you add a routing rule via ip that sends all traffic from a specific subnet to the VPN. Lastly, you configure the desired Docker container to use exactly that subnet using Docker Compose or docker network.
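As a minimal sketch (assuming the interface name wg0 and the subnet and table number used later in this article), the manual approach looks something like this:

# create the WireGuard interface without any routes to it
sudo ip link add wg0 type wireguard
sudo wg setconf wg0 wg0.conf   # keys and peers only, no Address/DNS lines
sudo ip address add {address inside the VPN} dev wg0
sudo ip link set wg0 up
# route traffic originating from the Docker subnet through the tunnel
sudo ip rule add from 10.123.0.0/16 table 242
sudo ip route add default dev wg0 table 242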

While it is a nice and elegant solution, I think it is kind of cumbersome to configure, so I tried to find a more comfortable way of setting this up.

systemd-networkd

While I agree with some of the criticism against systemd and its policies, systemd-networkd really is the best thing that ever happened to network configuration on Linux. Instead of fiddling around with awfully complex tools like ip or weird network managers, you can set up your network with a few short, well-documented plain-text config files. I love it. Turns out it also has everything we need for tunneling our Docker containers, in a nice and easy way. This is the solution I went with and want to show you.

Instructions in Short

For the impatient. For detailed instructions see below.

To tunnel a container through a WireGuard VPN given a wg-quick config file from your VPN provider, add these files to /etc/systemd/network/:

80-wg0.netdev:

[NetDev]
Name = wg0
Kind = wireguard
Description = WireGuard VPN

[WireGuard]
PrivateKey = {Private key, same as in wg-quick config}
RouteTable = off

[WireGuardPeer]
PublicKey = {Public key, same as in wg-quick config}
AllowedIPs = 0.0.0.0/0,::0/0
Endpoint = {Endpoint, same as in wg-quick config}

85-wg0.network:

[Match]
Name=wg0

[Network]
# If you need multiple addresses, e.g. for IPv4 and 6, use multiple Address lines.
Address = {Address to bind to inside the VPN, same as in wg-quick config}

[RoutingPolicyRule]
From = 10.123.0.0/16
Table = 242

[Route]
Gateway = {The address of the interface, same as above in [Network] in Address}
Table = 242

[Route]
Destination = 0.0.0.0/0
Type = blackhole
Metric = 1
Table = 242

Then run sudo docker network create tunneled0 --subnet 10.123.0.0/16. Now you can run Docker containers with --net=tunneled0 to tunnel them.
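To verify the tunnel, you can run a throwaway container in that network and check that the returned IP is the VPN’s (same check as in the detailed section below):

sudo docker run --rm --net=tunneled0 curlimages/curl icanhazip.com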

Alternatively use Docker Compose to create and use a Docker network in that subnet:

version: "3.7"
services:
  app:
    image: {image}
    dns: "{DNS server to use}"
    networks:
      tunneled0: {}
networks:
  tunneled0:
    ipam:
      config:
        - subnet: 10.123.0.0/16

That’s it!

The Detailed Solution

Preparation

Make sure that your host has:

- working WireGuard support, i.e. the kernel module and the wg tool from wireguard-tools
- systemd-networkd installed and enabled
- Docker installed (and Docker Compose, if you want to use it)
- the wg-quick config file from your VPN provider at hand to copy the values from

Setting up the Interface

First we have to get the WireGuard interface running. We can’t do it with wg-quick, as that automatically routes all traffic through it, and using wg directly is cumbersome, so we use systemd-networkd. All we have to do is add two files in /etc/systemd/network/:

80-wg0.netdev:

[NetDev]
# Or any other name
Name = wg0
Kind = wireguard
# Or your own description
Description = WireGuard VPN

[WireGuard]
PrivateKey = {Private key, same as in wg-quick config}
RouteTable = off

[WireGuardPeer]
PublicKey = {Public key, same as in wg-quick config}
# Remember, these are allowed target IPs, not source IPs, therefore we allow all
AllowedIPs = 0.0.0.0/0,::0/0
Endpoint = {Endpoint, same as in wg-quick config}

85-wg0.network:

[Match]
# Same as in .netdev file
Name=wg0

[Network]
# If you need multiple addresses, e.g. for IPv4 and 6, use multiple Address lines.
Address = {Address to bind to inside the VPN, same as in wg-quick config}

As you can see, it’s very similar to and just as easy as a wg-quick config file, and most values can be taken straight from said file. For more info, take a look at the systemd.netdev and systemd.network man pages.

The names of the files can be adjusted to your liking. Note that systemd-networkd reads config files in alphabetical order, so adjust the number prefixes in the names if necessary.

Run sudo systemctl restart systemd-networkd (or reboot to be sure) to apply the configs. Now you can verify that the interface is actually working:

$ curl -4 icanhazip.com
$ sudo curl -4 --interface wg0 icanhazip.com

The results of the two curl calls should differ: the first shows your normal IP, the second should yield the VPN IP address. Note that for me the second curl only works as root (binding to a specific interface requires root privileges). With sudo wg and networkctl status wg0 you can get further info about the interface.

Routing

Now that we’ve got the WireGuard interface up and running, we have to arrange for the traffic of our Docker containers to actually go through it. It turns out all we have to do is add a few lines to 85-wg0.network. This is how it should look:

Updated 85-wg0.network:

[Match]
Name=wg0

[Network]
# If you need multiple addresses, e.g. for IPv4 and 6, use multiple Address lines.
Address = {Address to bind to inside the VPN, same as in wg-quick config}

[RoutingPolicyRule]
# Or any other unused private subnet
From = 10.123.0.0/16
# Or any other unused table number
Table = 242

[Route]
Gateway = {The address of the interface, same as above}
# Same table number as above
Table = 242

[Route]
Destination = 0.0.0.0/0
Type = blackhole
Metric = 1
# Same table number as above
Table = 242

What the [RoutingPolicyRule] section does is take all traffic from the specified subnet and look up the routes for it in routing table 242. With the first [Route] section we add a route to the (hopefully previously empty) table 242, and that route sends the traffic to our WireGuard interface, because we set the interface’s address as the gateway.

The second [Route] section sets a blackhole route in the same table with a metric of 1, which means a lower priority than the default metric of 0. If the VPN gateway is down, this discards all traffic (instead of routing it through the default network without any VPN) and therefore prevents leaks.
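If you want to double-check the resulting policy routing, ip can show it on the host; with the subnet and table number from above, the output should look roughly like this:

$ ip rule show
0:      from all lookup local
32765:  from 10.123.0.0/16 lookup 242
32766:  from all lookup main
32767:  from all lookup default

$ ip route show table 242
default via {interface address} dev wg0 proto static
blackhole default proto static metric 1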

That should be all we have to do on the system side!

Using it with Docker

To actually get Docker to use the interface for specific containers, we have two possibilities.

Note for both methods that published ports will not be available on localhost on the host as they normally would, because all container traffic goes through the VPN (which is what we wanted, of course). So if you publish a port, it must be accessed through the VPN’s outside address.

Docker Directly

Create a Docker network in the subnet we used in the systemd-networkd config file with sudo docker network create tunneled0 --subnet 10.123.0.0/16 (or use any name other than tunneled0), then run containers in that network by using the --net=tunneled0 option. With the --dns option you can set a custom DNS server so that no DNS traffic gets leaked.

For example, you can use sudo docker run -t --net=tunneled0 curlimages/curl icanhazip.com to check that the returned IP is actually the VPN’s IP.
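To also use a custom DNS server as mentioned above, add the --dns flag to the same command ({DNS server to use} is a placeholder, e.g. your VPN provider’s resolver):

sudo docker run -t --net=tunneled0 --dns {DNS server to use} curlimages/curl icanhazip.com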

Docker Compose

This is the more comfortable method. You can use this as a base for your own compose files:

version: "3.7"
services:
  app:
    image: {image}
    dns: "{DNS server to use}"
    networks:
      # Or your own name
      tunneled0:
networks:
  # Same name as above
  tunneled0:
    ipam:
      config:
        - subnet: 10.123.0.0/16

Port Forwarding

You can use Docker’s normal port publishing options to make ports available through the VPN. So, for example, if your VPN provider gives you port 1234 and you want port 80 inside your container to be available through the VPN, call Docker with -p 1234:80 (do not forget the other required options explained above) or add

ports:
  - "1234:80"

to the corresponding service’s section in the Docker Compose file.
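With the example numbers above, the service should then be reachable at the VPN’s public address (but not on the host’s localhost, see below), e.g.:

$ curl http://{VPN exit IP}:1234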

Note that published ports of tunneled containers are not reachable on localhost, only through the VPN. Sadly, I haven’t yet found a possibility to fix that. (See Maren’s workaround with an additional, non-tunneled Docker network in the archived comments below.)

Conclusion

We got Docker containers running on a WireGuard VPN with only two short and simple config files. If you have any questions or comments, please post them in the discussion forum or contact me.

A big thank you goes out to Nick Babcock for the great article this one is based on!


Update 1: Added a blackhole route to prevent leaks when VPN gateway is down. Thanks to tchamb for the suggestion!

Update 2: Added a section explaining port forwarding. Thanks to Maren for the idea!

Update 3: This post was posted on Hacker News and even reached the front page for some time! I’m honored!

Update 4: Since systemd version 250 systemd-networkd creates routes for addresses specified in AllowedIPs for WireGuard (see changelog). This interferes with the route we create manually. This is fixed by adding RouteTable = off in the [WireGuard] section in the .netdev file. I updated the instructions accordingly.

Update 5: There was a mistake in the Docker section: when creating a Docker network via CLI you need to specify a prefix size, just as you need to in a Docker Compose file. So, instead of sudo docker network create tunneled0 --subnet 10.123.0.0 you need to run sudo docker network create tunneled0 --subnet 10.123.0.0/16. I fixed it in the article, thanks to Elluvean of Light for the hint!


Archived Comments from forum.eisfunke.com

by tchamb on 2020-04-21

Thanks for the article, very simple and works well.

Per your warning, I’ve added the following to the .network file for my ethernet interface to handle container traffic routing if the WireGuard interface is somehow destroyed:

[Route]
Destination=0.0.0.0/0
Type=blackhole
Metric=1
Table=242

This seems to do the trick after a test using ip link set down dev wg1. I can no longer curl nor ping from within the container until the wg1 device and its route are restored.

by Eisfunke on 2020-04-23

I’m glad that the article was helpful!

And thank you for the config tip. Using metrics looks very clean and simple. I’ll test it on my machine when I get around to it and add it to the article :+1:

by Eisfunke on 2020-08-03

Sorry that it took so long. I finally tested your solution, it seems to work for me as well! I added it to the article. Thanks again!

by Maren on 2020-09-05

This is exactly what I’ve been looking for: a kill-switch which is active regardless of the state of the WireGuard connection. Most other tutorials set up a kill-switch in the WireGuard config which is only active when there is a connection; changing the private key, hostname (IP, or address) then exposes all traffic sent through.

Speaking of the solution, would it be possible to make a shell script of all the steps which users can easily run to set up everything? And perhaps make a git repository of it as well? Moreover, does port forwarding from a service, such as a BitTorrent container, work out-of-the-box without doing anything? Let’s say my VPN provider gives me port 42891; do I simply have to set up that port in the BitTorrent client and port forwarding should work?

by Eisfunke on 2020-09-06

Creating a script that generates the two systemd-networkd files from an existing wg-quick config is an interesting idea that should definitely be possible. If I get around to it, I’ll be sure to put it here!

Port forwarding through the VPN works almost out of the box, you’ll still have to tell docker to publish the corresponding ports as you normally would, e.g. using -p 1234:80 in a Docker call or

ports:
  - "1234:80"

in a Docker Compose file would make port 80 inside the container available as port 1234 through the VPN.

Both can be the same, so if your VPN provider gives you port 42891, run the container with port 42891 and use 42891:42891 as port option.

I added an explanation for that to the article. Thanks for the feedback!

by Maren on 2020-09-06

Thank you very much for the additional information concerning port forwarding :+1: I’ll look more into this in the coming week. What’s your take on implementing a firewall with the setup? Is it really needed if you have a firewall set up for the system outside of the Docker containers?

by Maren on 2020-09-09

To harden the security, I’m thinking about encrypting the private key. However, I see that PostUp is not possible with systemd-networkd, given the error: “Unknown key name ‘PostUp’ in section ‘WireGuard’, ignoring.” (inspiration from the Arch Linux wiki). So please let me know if you can think of any solution to accomplish this with pass.

by Maren on 2020-09-09

Reading from your article: “Note that published ports of tunneled containers are not reachable on localhost, only through the VPN. Sadly, I haven’t yet found a possibility to fix that.”

This is actually possible with multi-host-networking. The easiest way is to create two networks and add them both to the second container which wants to access ports from the first container.

Fictive example:
networks:
  default:
  tunneled0:
    ipam:
      config:
        - subnet: 10.123.0.0/16

services:
  container1:
    image: ....
    networks:
      tunneled0:

  container2:
    image: ...
    networks:
      - default
      - tunneled0

Here is a real-world example which I use, where I’ve added three networks to get an inter-container network. ruTorrent runs in the front network (tunneled0) and exposes ports 80 and 5000 to the inter network; NGINX accesses them via the back network to make the reverse proxy work.

Real world example
version: "3.8"

networks:
  tunneled0:
    ipam:
      config:
        - subnet: 10.123.0.0/16
  inter:
    name: inter
    driver: bridge
    internal: true
    driver_opts:
      com.docker.network.bridge.name: dockerinter
  back:
    name: back
    driver: bridge
    driver_opts:
      com.docker.network.bridge.name: dockerback

services:
  rutorrent:
    image: linuxserver/rutorrent
    expose:
      - "80" #(ruTorrent web)
      - "5000" #(ruTorrent SCGI)
    networks:
      - tunneled0
      - inter
    ...

  swag:
    image: linuxserver/swag
    cap_add:
      - NET_ADMIN
    environment:
      ...
    networks:
      - back
      - inter
    ports:
      - "443:443"
    ...

by Eisfunke on 2020-09-10

What’s your take on implementing a firewall with the setup? Is it really needed if you have a firewall set up for the system outside of the Docker containers?

I’m no expert on that, but you should have a firewall somewhere in front of Docker; e.g. I have Docker running on my home server and my router is doing the firewalling.

So please let me know if you can think of any solution to accomplish this with pass.

I haven’t tried it, but you should be able to not set the private key in the .netdev file and then use a custom oneshot systemd unit that runs the command used in PostUp from the ArchWiki on startup to enter the key.

This is actually possible with multi-host-networking. The easiest way is to create two networks and add them both to the second container which wants to access ports from the first container.

Thank you very much! That solution seems to work. I’ll add it to the blog post when I get around to it.

by Maren on 2020-09-10

What’s your take on implementing a firewall with the setup? Is it really needed if you have a firewall set up for the system outside of the Docker containers?

I’m no expert on that, but you should have a firewall somewhere in front of Docker; e.g. I have Docker running on my home server and my router is doing the firewalling.

I have a firewall set up in my router and have set up iptables for my server outside Docker. I was thinking about whether it is needed to add iptables rules inside the Docker setup as well. But I guess it isn’t, and the reason why most VPN clients have it is probably to use it as a kill-switch.

by rodrigorodrigo on 2020-09-11

Hello! First of all, thanks for the guide. It is exactly the kind of approach I was looking for when rerouting Docker containers. I have a problem though: even though I can $ sudo curl -4 --interface wg0 icanhazip.com and it gives me the VPN IP, when I add it to a Docker network and attach it to a container (via Portainer, which is how I am managing containers) and enter it, it has no connection. Using $ networkctl status wg0 shows its status as “routable (configured)”. I can connect to the same peer in the same local network using another user from the same (self-hosted) VPN peer. Oh, also, the peer of access is using Algo (https://github.com/trailofbits/algo/issues) to manage VPN profiles (which adds a layer of complexity to the equation, as it uses PresharedKeys along with Public and Private ones).

Do you have any idea where I might be falling short? Once again, thank you.

by Maren on 2020-09-11

@rodrigorodrigo, it sounds like a routing issue. Can you show me your 85-wg0.network file and your Algo-generated config file without the pub/priv key and endpoint?

by rodrigorodrigo on 2020-09-11

Sure thing @Maren, thanks for taking the time to help. Here are both files that compose the networkd config, as well as the Algo-generated config file:

user@server:/etc/systemd/network$ cat /etc/systemd/network/85-wg0.network
[Match]
Name=wg0

[Network]
Address = IPv4, IPv6

[RoutingPolicyRule]
From = 10.123.0.0/16
Table = 242

[Route]
Gateway = IPv4, IPv6 (same as Network / Address)
Table = 242

[Route]
Destination = 0.0.0.0/0
Type = blackhole
Metric = 1
Table = 242
user@server:/etc/systemd/network$ cat /etc/systemd/network/80-wg0.netdev
[NetDev]
Name = wg0
Kind = wireguard
Description = WireGuard VPN

[WireGuard]
PrivateKey = privkey

[WireGuardPeer]
PublicKey = publicKey
PresharedKey = PresharedKey
AllowedIPs = 0.0.0.0/0
Endpoint= VPNIP:PORT
user@VpnPeer: cat ~/algo/configs/config/wireguard/nas.conf
[Interface]
PrivateKey = PrivKey
Address = IPV4, IPV6
DNS =  DNSIPv4, DNSIPv6

[Peer]
PublicKey = PublicKey
PresharedKey = PresharedKey
AllowedIPs = 0.0.0.0/0,::/0
Endpoint = IP:port

I believe everything is right, I have double (tripled, quadrupled) checked everything according to the guide.

EDIT: I have just noticed that I have a DNS parameter that is not in the guide. It was included in the Algo profile, so I just inserted it there.

by nulledcoffee on 2020-09-12

UPDATE: I think DietPi has a bugged version of networkd that seems to be floating around. It seemed to restart successfully, but the connection never established. I tried the same approach on Ubuntu Mate and at least got further (connection established, handshake, basic tests).

I did run into an issue on my Ubuntu Mate build though. wg0 can curl a request properly, but my Docker tunneled requests all fail to resolve. Not sure if I am missing something about DNS, but literally nothing can escape the Docker container if I set its network to tunneled.

Any thoughts appreciated but I am stopping working on this for the day … ha

OLD Info

I’ve been banging my head on this one. Trying to set this up on DietPi - I ran their client-type install for WireGuard. I think the type more or less only changes the config. Feel free to take a look if you want: https://dietpi.com/phpbb/viewtopic.php?p=16308#p16308

I can’t seem to establish a connection through my wg0. Config values were populated from the auto-generated Mullvad WireGuard file.

80-wg0.netdev

[NetDev]
Name = wg0
Kind = wireguard
Description = WireGuard VPN

[WireGuard]
PrivateKey = <PVKey>

[WireGuardPeer]
PublicKey = <PubKey>
AllowedIPs = 0.0.0.0/0,::0/0
Endpoint= <EPIP:PORT>

85-wg0.network

[Match]
Name=wg0

[Network]
Address = IPV4/CIDR
Address = IPV6/CIDR

[RoutingPolicyRule]
From = 10.123.0.0/16
Table = 242

[Route]
Gateway = IPV4/CIDR -- Same as Network
Gatewau = IPV6/CIDR -- Same as Network
Table = 242

[Route]
Destination = 0.0.0.0/0
Type = blackhole
Metric = 1
Table = 242

networkctl

IDX LINK             TYPE               OPERATIONAL SETUP
  1 lo               loopback           carrier     unmanaged
  2 eth0             ether              off         unmanaged
  3 wlan0            wlan               routable    unmanaged
  4 wg0              wireguard          off         unmanaged
  5 br-dbc8c1c558d9  bridge             no-carrier  unmanaged
  6 docker0          bridge             no-carrier  unmanaged

Testing

sudo curl -4 --interface wg0 icanhazip.com
curl: (7) Couldn't connect to server

sudo curl -4 --interface wlan0 icanhazip.com
MY.IP.SERVER.IP.NON.VPN

I dunno if I missed something here - should I have to set any config directly with wireguard? I think networkd should have just picked it up from files.

Anyways - thoughts?

Unsure how to get any logs - I wonder if it’s related to resolvconf and networkd

by Maren on 2020-09-13

In 85-wg0.network, there is a typo in the second Gateway (Gatewau). Furthermore, try to use one Address and Gateway first. In my case, I had to use a minor difference between them, where Address has /16 at the end while Gateway does not (only the IP, for example 10.100.100.100).

by nulledcoffee on 2020-09-13

I appreciate the reply. The typo was only on here (I did some edits on my post and the remote and tried to keep them in sync). I will update it here, but you were right that it’s a typo on the website.

That said - the wg0 network interface is working perfectly as described by this file.

It is only docker that is failing to make it to the internet.

Here is a cleaned snapshot direct from the computer of info I find relevant - also, I know there are keys and IPs, I scrambled them.

Testing out current setup

feeder@ubuntu-mate:~$ sudo curl -4 --interface wg0 https://icanhazip.com
69.135.55.1 //fake vpn ip
feeder@ubuntu-mate:~$ sudo curl -4  https://icanhazip.com
70.124.178.211 // fake public ip
feeder@ubuntu-mate:~$ sudo docker exec -it 7d87c09e1c9a /bin/bash
root@7d87c09e1c9a:/# curl -4 https://icanhazip.com --verbose
* Could not resolve host: icanhazip.com
* Closing connection 0
curl: (6) Could not resolve host: icanhazip.com

Checking IP inside Docker Container

root@7d87c09e1c9a:/# cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
10.123.0.2      7d87c09e1c9a
root@7d87c09e1c9a:/# exit
exit

docker-compose.yml

feeder@ubuntu-mate:~$ cat compose/feeder/docker-compose.yml
version: "3.7"

networks:
  tunneled0:
    ipam:
      config:
        - subnet: 10.123.0.0/16
  inter:
    driver: bridge
    internal: true
    driver_opts:
      com.docker.network.bridge.name: dockerinter
  back:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.name: dockerback

services:
  something:
    image: linuxserver/something
    container_name: something
    dns: "8.8.8.8" /// I tried it with and without dns
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Chicago
    volumes:
      - ./config:/config
    ports:
      - 6789:6789
    networks:
      - tunneled0
    restart: unless-stopped

sudo wg

feeder@ubuntu-mate:~$ sudo wg
interface: wg0
  public key: 6uE0bdqHNZpgvt75qGaXYJxfSJ6ACiWg8zElTpogcls= // scrambled key - matches public key that pairs to the private key of my wg-quick
  private key: (hidden)
  listening port: 43750

peer: Z67ACpoiW0YHNGabJ6tgE5JxfSXpgv8zFlTdqqugcls= // scrambled key - matches WQ Quick and 80-wg0.netdev
  endpoint: 69.211.21.69:51820 // fake VPN endpoint (scrambled mullvad), matches 80-wg0.netdev
  allowed ips: ::/0, 0.0.0.0/0
  latest handshake: 2 minutes, 48 seconds ago
  transfer: 22.71 KiB received, 9.68 KiB sent

networkctl

feeder@ubuntu-mate:~$ networkctl
IDX LINK            TYPE      OPERATIONAL SETUP
  1 lo              loopback  carrier     unmanaged
  2 eth0            ether     routable    unmanaged
  3 wlan0           wlan      no-carrier  unmanaged
  4 wg0             wireguard routable    configured
  5 docker0         bridge    no-carrier  unmanaged
  6 br-5d292cc9a7f9 bridge    no-carrier  unmanaged
  7 br-e0c2770b5c6a bridge    routable    unmanaged
  9 veth36fae70     ether     degraded    unmanaged

cat 80-wg0.netdev

  feeder@ubuntu-mate:~$ cat /etc/systemd/network/80-wg0.netdev
[NetDev]
Name = wg0
Kind = wireguard
Description = WireGuard VPN

[WireGuard]
PrivateKey = 94gsq5SN2JGYC2hzVs/u1hVPNBJFXLMt4ZgZDFleOnY= // scrambled mullvad key straight from wg quick - is the private key pair to public key output in wg command

[WireGuardPeer]
PublicKey = Z67ACpoiW0YHNGabJ6tgE5JxfSXpgv8zFlTdqqugcls= // scrambled mullvad key straight from wg quick - matches output of wg
AllowedIPs = 0.0.0.0/0,::0/0
Endpoint= 69.211.21.69:51820 // scrambled vpn endpoint straight from wg-quick matches wg output

cat 85-wg0.network

feeder@ubuntu-mate:~$ cat /etc/systemd/network/85-wg0.network
[Match]
Name=wg0

[Network]
Address = 10.22.33.44/32 // straight from wg-quick, scrambled
Address = fde0:ccfc:cfcc:ff01::2:1a17/128 // straight from wg-quick, scrambled

[RoutingPolicyRule]
From = 10.123.0.0/16
Table = 242

[Route]
Gateway = 10.22.33.44/32 // straight from wg-quick, scrambled
Gateway = fde0:ccfc:cfcc:ff01::2:1a17/128 // straight from wg-quick, scrambled
Table = 242

[Route]
Destination = 0.0.0.0/0
Type = blackhole
Metric = 1
Table = 242

by Maren on 2020-09-13

I still think this is a routing issue. Could you please try what I recommended in the last post? Remove /32 and /128 from Gateway so you get the following in your 85-wg0.network file:

85-wg0.network
[Match]
Name		= wg0

[Network]
Address 	= 10.22.33.44/32
Address 	= fde0:ccfc:cfcc:ff01::2:1a17/128

[RoutingPolicyRule]
From 		= 10.123.0.0/16
Table 		= 242

[Route]
Gateway 	= 10.22.33.44
Gateway 	= fde0:ccfc:cfcc:ff01::2:1a17
Table 		= 242

[Route]
Destination = 0.0.0.0/0
Type 		= blackhole
Metric 		= 1
Table 		= 242

by nulledcoffee on 2020-09-13

Went ahead and tried that out - I assumed that since the connection is working, just not from Docker, it wouldn’t have an impact.

Switched, restarted networkd, and even rebooted - Docker still cannot connect to icanhazip.com.

Thanks for the idea though.

by Maren on 2020-09-13

I faced the same issue as you, where my wg0 connection was up and running but my Docker containers didn’t have any internet connection. Doing what I told you solved the issue for me. Unfortunately that didn’t solve it for you. I guess you have not added any kernel parameters to /etc/sysctl.conf, changed iptables rules, etc., correct?

by nulledcoffee on 2020-09-13

Nah - nothing on this build really.

Clean build of Ubuntu Mate targeting Raspberry Pis (x64). I did try turning on IPv4 forwarding at one point, but that would only help - can’t hurt. (And that was after all the failures.)

by nulledcoffee on 2020-09-13

EDIT: I guess the spec for .network does specify that Gateway and Address should each only include one IPv4 or IPv6 address, but you can specify multiple, and it does call out CIDR, so there has to be some way to work this all out. I am not a networking guru so not sure if that has any impact. It does say you can specify multiple. https://www.freedesktop.org/software/systemd/man/systemd.network.html#Address=

Edit 3: One interesting thing to note: Mullvad’s files specify a CIDR, but it is a single-address CIDR. I retried with CIDR for Address (/32) and a single IP for Gateway and it continued to work. I don’t think it’s a concern, since the VPN is only giving me one address to use, whether it’s static or CIDR.

I tried your solution with both the Gateway AND Address set to a single address and it did work…. I have no idea why that worked, ha.

Also - feel free to tell me how you made those collapsible sections so my posts can be cleaned up.

feeder@ubuntu-mate:~$ cat /etc/systemd/network/85-wg0.network
[Match]
Name=wg0

[Network]
Address = 10.23.23.23

[RoutingPolicyRule]
From = 10.123.0.0/16
Table = 242

[Route]
Gateway = 10.23.23.23
Table = 242

[Route]
Destination = 0.0.0.0/0
Type = blackhole
Metric = 1
Table = 242

by Maren on 2020-09-14

Glad to hear that you figured it out :)

I’m probably not the person to tell you whether to avoid IPv6, as it’s not my expertise; however, I follow the ideology of avoiding IPv6 at all costs. There are of course several reasons for that, from privacy to increased performance. For instance, I’m using Debian and Ubuntu, so disabling IPv6 for apt (package management) makes it faster.

Also - feel free to tell me how you made those collapsible sections so my posts can be cleaned up.

At the bar to the right above where you type there is a cog you can press where you’ll find “Hide Details” :+1:

by andyleadbetter on 2020-12-29

@Maren can you post your ip route output from when it was fixed? I am having the same issues on a clean install of Arch Linux: wg0 is up, Docker containers are on the subnet, but there is no routing from the Docker subnet to wg.

by andyleadbetter on 2020-12-29

Scratch that, after hours of banging my head: it was the order in which systemd created the route and device. I had given both netdev and network the prefix 85, instead of 80 for netdev and 85 for network. The network seemed to run first, and nothing worked. Changed netdev to 80 and it’s fine now.

by agokhandemir on 2021-06-03

Thank you for this tutorial. There is one thing I want to add to it though.

I was able to open the web UIs of the containers when I was browsing them via VPN. However, they were not accessible when I tried opening them with my regular connection (i.e. from outside the VPN network or without the VPN connection).

Creating the docker network with the MTU option solved this issue for me. Example: docker network create tunnel0 --subnet 10.123.0.0/16 -o "com.docker.network.driver.mtu"="1420"

Also, you need to provide the subnet with a CIDR prefix, otherwise you get an error and the network is not created. Nevertheless, those two details could have been added to Docker after this article was written.

by _dT on 2021-08-27

Could you (or anyone else) please expand on this? I am trying to access a VPN-tunneled container WebUI via non-VPN LAN connection. I have also tried Maren’s multi-host-networking example above, but ran into a roadblock as my use case isn’t quite the same as in his example.

by webash on 2022-03-20

Has anyone been ridiculous enough to try to get this working on Docker Swarm? The obvious problem is that external connectivity is given to containers via the docker_gwbridge, of which there is only one, no matter how many networks you define. Ideally you’d be able to have multiple docker_gwbridges and bind a specific one for the WireGuard VPN, associated with the overlay network used by VPN-client containers. Otherwise, if you wanted to try to avoid docker_gwbridge, you can make the network --internal, but then attached containers appear to have no routing, so any non-link-local packets get blackholed. I imagine there’s probably something that could be done here to have Docker forward the packets per the routing rules, but I’m at the edge of my knowledge.

by justbendev on 2022-05-02

Thanks a lot for this really good tutorial.

I came back here to report a bug: the blackhole is NOT bulletproof.

For some reason, after 2 weeks of running my setup, the host IP leaked and all traffic was now available on the host LAN (in my case 192.168.2.1/24).

The container was still using the Docker interface 10.123.0.1/16.

I couldn’t keep the setup running that way to investigate, but I have created a test container to investigate and reproduce the bug.

Hope this will help us track down the issue.


  1. Image source, licensed under CC-BY-2.0


Thank you for reading!

Follow me on Mastodon / the Fediverse! I'm @eisfunke@inductive.space.

If you have any comments, feedback or questions about this post, or if you just want to say hi, you can ping me there!