
Docker bridge network leaks internal IP addresses (masquerade not working) #1126

Open
2 of 3 tasks
flobernd opened this issue Oct 11, 2020 · 22 comments
Comments

@flobernd

  • This is a bug report
  • This is a feature request
  • I searched existing issues before opening this one

Expected behavior

Docker containers using only the bridge network should not send any packets with internal IP addresses to the outside.

Actual behavior

Docker containers using the bridge network sometimes send packets from the internal (172.17.0.X) IP to the network interface without masquerading them.

Steps to reproduce the behavior

Run a docker container of your choice (in my case portainer/portainer-ce) using the default bridge network. Inspect outgoing traffic using tcpdump (e.g. on the router device).
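A minimal way to watch for the leak from the Docker host itself (a sketch: `eth0` as the uplink interface and the default `docker0` subnet are assumptions; adjust both for your setup):

```shell
IFACE="${IFACE:-eth0}"     # uplink interface (assumption)
SUBNET="172.17.0.0/16"     # default docker0 bridge subnet
CMD="tcpdump -ni $IFACE src net $SUBNET"
# Any packet matching this filter on the uplink left the host un-masqueraded:
echo "sudo $CMD"
```

Running the printed command during normal container traffic should stay silent; any hits are packets that escaped MASQUERADE.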

Related:

Output of docker version:

Client: Docker Engine - Community
 Version:           19.03.13
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        4484c46d9d
 Built:             Wed Sep 16 17:02:55 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.13
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       4484c46d9d
  Built:            Wed Sep 16 17:01:25 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.3.7
  GitCommit:        8fba4e9a7d01810a393d5d25a3621dc101981175
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Output of docker info:

Client:
 Debug Mode: false

Server:
 Containers: 1
  Running: 1
  Paused: 0
  Stopped: 0
 Images: 1
 Server Version: 19.03.13
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 8fba4e9a7d01810a393d5d25a3621dc101981175
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 4.19.0-11-amd64
 Operating System: Debian GNU/Linux 10 (buster)
 OSType: linux
 Architecture: x86_64
 CPUs: 1
 Total Memory: 986.3MiB
 Name: PORTAINER
 ID: GIFN:75F4:YB36:FWJV:HKOX:55OX:67OQ:HOZM:XWBO:TN3P:AEVT:7X6T
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

Additional environment details (AWS, VirtualBox, physical, etc.)

In my example the docker container is running on a Debian VM which is running on a VMware ESXi host.

I first noticed the leaked IP addresses in the "client overview" of my networking hardware (Ubiquiti UniFi). This list shows the currently assigned IP address for each connected client. For every VM running a Docker container on a bridge network, this IP changes to 172.17.0.X for a few seconds from time to time before switching back to the correct value.

@themylogin

themylogin commented Nov 4, 2020

I can confirm that. We found out this is happening after our server was blocked by Hetzner for sending packets with an invalid source IP.

@jakelamotta

We most likely have the same issue, running the container in an LXC.

@marlic7

marlic7 commented Nov 9, 2020

In my case the problem appears on Ubuntu 20.04 (no problems on 18.04) and only with one output interface: tun (VPN).

tcpdump problematic output:
09:00:15.224652 IP 192.168.1.93.52672 > 10.7.x.x.80: Flags [S], seq 3396909146, win 64240, options [mss 1460,sackOK,TS val 1825245854 ecr 0,nop,wscale 7], length 0
(source IP from eth0 interface instead of tun)

On other computer tcpdump output (working scenario):
09:04:35.061067 IP 10.50.x.x.60280 > 10.7.x.x.80: Flags [S], seq 628596480, win 64240, options [mss 1460,sackOK,TS val 1018754195 ecr 0,nop,wscale 7], length 0

@RRAlex

RRAlex commented May 12, 2021

Same here: traffic coming from a VPN tunnel inside Docker and exiting on the local LAN, using Ubuntu 18.04 LTS and the latest Docker.

@statop

statop commented May 13, 2021

Same, just noticed bridged IP addresses in my firewall this morning.

@Wqrld

Wqrld commented May 15, 2021

> I can confirm that. We've found out this is happening after our server was blocked by Hetzner for sending packets with invalid source IP.

Same here.

This is a huge issue and can be remotely exploited as a DoS in some cases. We had a very short incoming DDoS attack that caused the machine to send out these packets, which ended up getting our machine locked for a couple of hours.

A remotely triggerable DoS like this should really be fixed.

@flobernd
Author

Sadly, this issue has existed for over 7 months now, and I have seen similar reports from other people that were created years ago.

No response from any "official" source yet. That's kinda disappointing.

@mrpana

mrpana commented May 31, 2021

We are also having the same issue running on multiple different types of hosts.

@mcfriend99

I can confirm it's happening with the overlay network too.

@rmdevheart

Confirmed. The bridge network leaks the containers' internal IPs to the outside.
I tried to work around it with an additional SNAT iptables rule, but without success.
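For context on why an extra SNAT rule cannot catch these packets: the nat table is only consulted for the first packet of a tracked connection, and packets that conntrack classifies as INVALID bypass NAT entirely. A sketch of the masquerade rule Docker already installs for the default bridge (subnet assumed to be the default; shown for reference, not something to add manually):

```shell
# Docker's standard masquerade rule for the default bridge network.
# MASQUERADE is applied via conntrack state on NEW connections only;
# INVALID packets skip the nat table, so no SNAT rule can rewrite them.
SUBNET="172.17.0.0/16"
RULE="-t nat -A POSTROUTING -s $SUBNET ! -o docker0 -j MASQUERADE"
echo "iptables $RULE"   # Docker adds this itself
```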

@Wqrld

Wqrld commented Jul 5, 2021

> Confirmed. bridge network leaks with container's inner IPs outside.
> Trying to bypass it by additional SNAT iptables rule, but withous success.

Untested, but an iptables rule from a friend of mine that should fix this:
iptables -I FORWARD -m conntrack --ctstate INVALID -j DROP
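If you apply this on a Docker host, the DOCKER-USER chain is where Docker's documentation recommends placing user rules, since rules there survive Docker re-creating its own chains (a sketch; making the rule persistent across reboots is left out):

```shell
# Drop forwarded packets that conntrack classifies as INVALID, so they
# are never sent out un-masqueraded.
RULE="-m conntrack --ctstate INVALID -j DROP"
echo "iptables -I DOCKER-USER $RULE"   # preferred when Docker manages iptables
echo "iptables -I FORWARD 1 $RULE"     # plain alternative from the comment above
```

Run the printed commands with root privileges to apply them.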

@rmdevheart

> that should fix this

This rule just drops the problematic packets; it does not fix their origin.

@ZPrimed

ZPrimed commented May 2, 2022

I'm seeing this happening with Docker running on Ubuntu 20.04 LTS as well... In my case, it's the Ubiquiti "UNMS" / "UISP" controller system (which uses a bunch of docker containers).

It has several bridge networks defined, and I know that at least one of them leaks IPs to the physical network based on the packets logged in some firewall rules...

docker network ls
NETWORK ID     NAME            DRIVER    SCOPE
fdfe7232dd05   bridge          bridge    local
29632a79e61c   host            host      local
bf6196de91d9   none            null      local
53872b1005fc   unms_internal   bridge    local
ff4a7fb8e469   unms_public     bridge    local

I am mostly seeing traffic from 172.18.251.5, source port 443, random destination ports, going to different IPs across my internal network (point-to-point wireless gear that UNMS/UISP is managing).

docker network inspect unms_public
[
    {
        "Name": "unms_public",
        "Id": "ff4a7fb8e4690d4218b0e5c74acb40c839b71d5cea92518952abe4beaa293ccf",
        "Created": "2021-08-16T19:02:49.179292875-04:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.251.1/25",
                    "Gateway": "172.18.251.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "149251efd6d881b7f89dfcc81b69da5e9f297f5f2d6d157bccb65f5f67b3476c": {
                "Name": "unms",
                "EndpointID": "8f1ab646e832bfe4258a9056e31729c9b8b2b0b7960fedeb070c142f54a4ca28",
                "MacAddress": "02:42:ac:12:fb:03",
                "IPv4Address": "172.18.251.3/25",
                "IPv6Address": ""
            },
            "15af5d8659cf4f1e8010933449e77c04462895a344196de37d88b4d789f23d54": {
                "Name": "ucrm",
                "EndpointID": "e1de09780b56cd16e279e5de5f0d93f4ff8c44615cea77e007ee1fc5e9382ee4",
                "MacAddress": "02:42:ac:12:fb:06",
                "IPv4Address": "172.18.251.6/25",
                "IPv6Address": ""
            },
            "43074d6bae500c67b3eaae3899d8f568b75619e6d2fcd82573e99009540827eb": {
                "Name": "unms-netflow",
                "EndpointID": "6ebb76d9954450baeef4256d73619b578b4b24cbbcc304e88ccb7c2271dbc189",
                "MacAddress": "02:42:ac:12:fb:04",
                "IPv4Address": "172.18.251.4/25",
                "IPv6Address": ""
            },
            "c341c3f5bc8736338bbe837eb6221046aaf9e92494eaf08e281b56fc032b92e4": {
                "Name": "unms-nginx",
                "EndpointID": "7588f6708b4e8818cbf6e3d6ab86c35647fc6ef44fa78bedb15ee5d00f480e97",
                "MacAddress": "02:42:ac:12:fb:05",
                "IPv4Address": "172.18.251.5/25",
                "IPv6Address": ""
            },
            "f29ad48117e1304a051771931fb11a918998976053ac6dbbe9e411a421be3bec": {
                "Name": "unms-fluentd",
                "EndpointID": "1bcda7f1d1b745ff846600f46823ffb88a09095e79a4dc852a8972db5e527d76",
                "MacAddress": "02:42:ac:12:fb:02",
                "IPv4Address": "172.18.251.2/25",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
Server: Docker Engine - Community
 Engine:
  Version:          20.10.14
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.15
  Git commit:       87a90dc
  Built:            Thu Mar 24 01:45:53 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.5.11
  GitCommit:        3df54a852345ae127d1fa3092b95168e4a88e2f8
 runc:
  Version:          1.0.3
  GitCommit:        v1.0.3-0-gf46b6ba
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

@k4pu77

k4pu77 commented Aug 8, 2022

Can confirm that bridged IP addresses show up in the log of my firewall...

Details of the host running the docker containers:

$ docker version
Client: Docker Engine - Community
 Version:           20.10.17
 API version:       1.41
 Go version:        go1.17.11
 Git commit:        100c701
 Built:             Mon Jun  6 23:03:17 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.17
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.17.11
  Git commit:       a89b842
  Built:            Mon Jun  6 23:01:23 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.6
  GitCommit:        10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
 runc:
  Version:          1.1.2
  GitCommit:        v1.1.2-0-ga916309
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

$ cat /etc/debian_version 
11.4

@karniemi

This is a legacy project, if you have a look at the readme. Odd that they keep this legacy repository open. Should the bug be reported to the moby project instead?

@flobernd
Author

@karniemi Well, I reported that issue nearly 2 days ago. The notice in the readme is new (12 days ago).

@flobernd
Author

flobernd commented Aug 23, 2022

I re-created the issue in the new repository, but judging by the 4000 open issues, community management there seems just as good as in this repo. So maybe in 10 years we will get an official answer 🙃

@karniemi

@flobernd ... you meant 2 years, not 2 days :-). And yes, I did notice, but I wanted to point out why there's probably no progress. Furthermore, I'm seeing the same problem; that's why I'm concerned about it.

@dwinston

  1. This sucks.
  2. I used sudo ufw deny out from any to 172.16.0.0/12 to stop the bleeding so that Hetzner will stop blocking my server IP.
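The same mitigation, extended to every RFC 1918 range, might look like the sketch below (whether blocking all private-range egress is acceptable depends on your network, and the range list is the only part taken as standard; the rest follows the comment above):

```shell
# Print ufw commands denying egress toward each RFC 1918 private range.
RFC1918="10.0.0.0/8 172.16.0.0/12 192.168.0.0/16"
for NET in $RFC1918; do
  echo "ufw deny out from any to $NET"   # run with sudo to apply
done
```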

@flobernd
Author

It's "funny" that it's still not fixed after 3 years.

@gabrielmocanu

This problem is still painful...

@msartor92

msartor92 commented Jun 13, 2024

Hi all,
we have the same problem with Docker on Linux, on both CentOS 7 and Ubuntu 22.04, with Docker 25.0.x and 26.0.x and containerd 1.6.28.
Tons of FIN packets from containers to remote hosts are sent without masquerading the source IP. This causes tons of packet drops on our firewall and connections that are not closed cleanly.
Below is an example of the firewall log:

Jun 12 14:22:43 FIREWALL-PKTLOG: 1dca0e40 INET match PASS 13390 OUT 52 TCP 10.159.0.5/46366->172.22.26.28/443 FA 
Jun 12 14:22:43 FIREWALL-PKTLOG: 1dca0e40 INET match PASS 13390 OUT 52 TCP 10.159.0.5/46036->172.22.26.28/443 FA 
Jun 12 14:22:43 FIREWALL-PKTLOG: 06055e16 INET match PASS 13390 OUT 52 TCP 10.159.10.5/48444->172.22.26.26/443 FA
Jun 12 14:22:43 FIREWALL-PKTLOG: 1dca0e40 INET match PASS 13390 OUT 52 TCP 10.159.0.5/37394->172.22.26.26/443 FA 
Jun 12 14:22:43 FIREWALL-PKTLOG: 1dca0e40 INET match PASS 13390 OUT 52 TCP 10.159.0.5/49457->172.22.26.27/443 FA 
Jun 12 14:22:43 FIREWALL-PKTLOG: 06055e16 INET match PASS 13390 OUT 52 TCP 10.159.10.5/56342->172.22.26.26/443 FA
Jun 12 14:22:43 FIREWALL-PKTLOG: 06055e16 INET match PASS 13390 OUT 52 TCP 10.159.10.5/45235->172.22.26.28/443 FA
Jun 12 14:22:43 FIREWALL-PKTLOG: 06055e16 INET match PASS 13390 OUT 52 TCP 10.159.10.5/33241->172.22.26.28/443 FA
Jun 12 14:22:44 FIREWALL-PKTLOG: ddbfe6a8 INET match PASS 13390 IN 40 TCP 10.151.30.4/42554->172.22.26.103/6551 R

Our rule that drops this kind of traffic gets 300 hits per second.
