
IPTables rules missing from Flannel/CNI on Kubernetes installation #799

Closed
limited opened this issue Aug 24, 2017 · 19 comments

Comments


limited commented Aug 24, 2017

The following iptables rules are missing, which breaks routing between containers on different nodes. I can ping between hosts, but not between containers running on those hosts.

sudo /sbin/iptables -I FORWARD 1 -i cni0 -j ACCEPT -m comment --comment "flannel subnet"
sudo /sbin/iptables -I FORWARD 1 -o cni0 -j ACCEPT -m comment --comment "flannel subnet"
sudo /sbin/iptables -t nat -A POSTROUTING -s 10.244.0.0/16 ! -d 10.244.0.0/16 -j MASQUERADE
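For reference, to check whether these rules are already present on a node, something like the following should list them (a rough sketch; adjust to your setup):

# look for the flannel FORWARD rules (sketch)
sudo /sbin/iptables -L FORWARD -v --line-numbers | grep "flannel subnet"
# look for the MASQUERADE rule in the NAT POSTROUTING chain (sketch)
sudo /sbin/iptables -t nat -L POSTROUTING -n | grep MASQUERADE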

Expected Behavior

By default, without special modifications to iptables, I expect to be able to connect to containers running on other flannel nodes (i.e. kube master/api-server and kube-worker).

Current Behavior

IP connectivity between containers running on different flannel nodes is broken.

Possible Solution

Add the iptables rules listed above.

Steps to Reproduce (for bugs)

Install a k8s v1.6 cluster using kubeadm with the CNI and flannel plugins.

Context

Your Environment

  • Flannel version: v0.7.1-amd64
  • Backend used (e.g. vxlan or udp): vxlan
  • Etcd version: 3.1
  • Kubernetes version (if used): 1.6 via kubeadm and CNI/Flannel plugins
  • Operating System and version: centos7
  • Link to your project (optional):

cdyue commented Aug 25, 2017

@limited Thanks! It works!

  • flannel version: v0.8.0

@domino14

I see this issue too. How do we make sure those iptables rules run on reboot?
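One approach I'm considering (untested; assumes CentOS 7 with the iptables-services package) is to persist the running rule set so it is restored at boot:

# install and enable the iptables persistence service (assumption: CentOS 7)
sudo yum install -y iptables-services
sudo systemctl enable iptables
# after adding the rules above, save the running rules to /etc/sysconfig/iptables
sudo service iptables save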


cdyue commented Aug 29, 2017

I have the same issue, but I don't think it's a bug in flannel.
My environment:

  • centos 7.2
  • flannel: v0.8.0
  • docker: 17.06.0-ce
  • kubernetes:v1.7.4

It seems Docker >= 1.13 sets the default iptables FORWARD policy as shown below, which causes this issue:

iptables -P FORWARD DROP

All you need to do is set the policy back to ACCEPT:

iptables -P FORWARD ACCEPT
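To confirm whether Docker has switched the policy on a host, the policy line of the FORWARD chain can be checked directly (sketch):

# prints "-P FORWARD DROP" on affected hosts, "-P FORWARD ACCEPT" otherwise
iptables -S FORWARD | head -n 1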


limited commented Aug 29, 2017

I'm using Docker 1.12, so I think the behavior must start in an earlier version. Also, I don't think changing the default iptables FORWARD policy is an acceptable solution. My two rules are a more precise fix.


tomdee commented Sep 18, 2017

The default changed with Docker v1.13 - https://docs.docker.com/engine/userguide/networking/default_network/container-communication/#container-communication-between-hosts

It's currently unclear to me how this issue should be fixed. Maybe flannel should automatically change the iptables rules, or just document the Docker change, or maybe the bridge CNI plugin should be doing something about it.

Also @limited - for NAT you should just pass the ip-masq option to flannel
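For reference, that option is a flag on the flanneld binary; in a kube-flannel DaemonSet it typically ends up in the container args, roughly like this (a sketch; the exact binary path and arguments depend on your manifest):

# flanneld with built-in masquerade handling (sketch; path/args vary by manifest)
/opt/bin/flanneld --ip-masq --kube-subnet-mgr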


limited commented Sep 19, 2017

Thanks, will give ip-masq a shot.

rhuss added a commit to rhuss/ansible-kubernetes-openshift-pi3 that referenced this issue Oct 12, 2017
+ some work on flannel integration, not completed yet.
See also flannel-io/flannel#799 for an issue why iptables rules need to be changed.
@jeffmhastings

This seems related: containernetworking/plugins#75


roffe commented Oct 26, 2017

containernetworking/plugins#75 originates from kubernetes/kubernetes#40182, I believe.

@GheRivero
Contributor

I can confirm this issue with flannel 0.9.0 (both vxlan & host-gw), k8s 1.8.2, docker 17.05
Applying the iptables rules solves the problem.

tomdee added a commit to tomdee/flannel that referenced this issue Nov 11, 2017
To work around the Docker change from v1.13 which
changed the default FORWARD policy to DROP.

The change has bitten many many users.

The troubleshooting documentation is also updated to talk about the issue.

Replaces PR flannel-io#862
Fixes flannel-io#834
Fixes flannel-io#823
Fixes flannel-io#609
Fixes flannel-io#799
tomdee added a commit to tomdee/flannel that referenced this issue Nov 16, 2017
To work around the Docker change from v1.13 which
changed the default FORWARD policy to DROP.

The change has bitten many many users.

The troubleshooting documentation is also updated to talk about the issue.

Replaces PR flannel-io#862
Fixes flannel-io#834
Fixes flannel-io#823
Fixes flannel-io#609
Fixes flannel-io#799
@balbalas

@tomdee
Do you know which flannel version has the fix? We are seeing it with 0.10.0.

[bbalasubram@cirrus-vm1 Demo]$ docker version
Client:
Version: 17.12.1-ce
API version: 1.35
Go version: go1.9.4
Git commit: 7390fc6
Built: Tue Feb 27 22:15:20 2018
OS/Arch: linux/amd64

Server:
Engine:
Version: 17.12.1-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.9.4
Git commit: 7390fc6
Built: Tue Feb 27 22:17:54 2018
OS/Arch: linux/amd64
Experimental: false
[bbalasubram@cirrus-vm1 Demo]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.4", GitCommit:"bee2d1505c4fe820744d26d41ecd3fdd4a3d6546", GitTreeState:"clean", BuildDate:"2018-03-12T16:29:47Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.4", GitCommit:"bee2d1505c4fe820744d26d41ecd3fdd4a3d6546", GitTreeState:"clean", BuildDate:"2018-03-12T16:21:35Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
[bbalasubram@cirrus-vm1 Demo]$

@hectorqin

I see it with 0.10.0 too, and it doesn't work even after I apply those iptables rules.

root@XMT01-VIDEO01:~# docker version
Client:
 Version:      1.13.1
 API version:  1.26
 Go version:   go1.6.2
 Git commit:   092cba3
 Built:        Thu Nov  2 20:40:23 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.13.1
 API version:  1.26 (minimum version 1.12)
 Go version:   go1.6.2
 Git commit:   092cba3
 Built:        Thu Nov  2 20:40:23 2017
 OS/Arch:      linux/amd64
 Experimental: false
root@XMT01-VIDEO01:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
root@XMT01-VIDEO01:~# kubectl get po -o wide
NAME                                   READY     STATUS    RESTARTS   AGE       IP            NODE
kubernetes-bootcamp-7799cbcb86-cdsdx   1/1       Running   0          2d        10.244.3.9    xmt01-middleware01
kubernetes-bootcamp-7799cbcb86-wxglw   1/1       Running   0          2d        10.244.5.25   xmt01-web02
root@XMT01-VIDEO01:~# kubectl describe pod/kubernetes-bootcamp-7799cbcb86-cdsdx
Name:           kubernetes-bootcamp-7799cbcb86-cdsdx
Namespace:      default
Node:           xmt01-middleware01/192.168.82.113
Start Time:     Tue, 15 May 2018 10:40:38 +0800
Labels:         pod-template-hash=3355767642
                run=kubernetes-bootcamp
Annotations:    <none>
Status:         Running
IP:             10.244.3.9
Controlled By:  ReplicaSet/kubernetes-bootcamp-7799cbcb86
Containers:
  kubernetes-bootcamp:
    Container ID:   docker://f09a0af14d43335a9982ab991dcde30ca75491a879c7ca6acaed27c98370452a
    Image:          jocatalin/kubernetes-bootcamp:v2
    Image ID:       docker-pullable://jocatalin/kubernetes-bootcamp@sha256:fb1a3ced00cecfc1f83f18ab5cd14199e30adc1b49aa4244f5d65ad3f5feb2a5
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 15 May 2018 10:40:41 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6x9qk (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  default-token-6x9qk:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6x9qk
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
root@XMT01-VIDEO01:~# curl 10.244.3.9:8080
curl: (7) Failed to connect to 10.244.3.9 port 8080: Connection timed out
root@XMT01-VIDEO01:~# iptables -nL
Chain INPUT (policy DROP)
target     prot opt source               destination         
KUBE-EXTERNAL-SERVICES  all  --  0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL  all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     112  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:22
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:80
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:21
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpts:20000:30000
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:443
ACCEPT     tcp  --  192.168.82.0/24      0.0.0.0/0            tcp dpt:111
ACCEPT     udp  --  192.168.82.0/24      0.0.0.0/0            udp dpt:111
ACCEPT     tcp  --  192.168.82.0/24      0.0.0.0/0            tcp dpt:2049
ACCEPT     udp  --  192.168.82.0/24      0.0.0.0/0            udp dpt:2049
ACCEPT     tcp  --  192.168.82.0/24      0.0.0.0/0            tcp dpts:30001:30004
ACCEPT     udp  --  192.168.82.0/24      0.0.0.0/0            udp dpts:30001:30004
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:6443
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpts:10250:10252
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:10255
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpts:30000:32767
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpts:2379:2380
ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0            limit: avg 1/sec burst 10
ACCEPT     all  -f  0.0.0.0/0            0.0.0.0/0            limit: avg 100/sec burst 100
syn-flood  tcp  --  0.0.0.0/0            0.0.0.0/0            tcp flags:0x17/0x02
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* flannel subnet */
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* flannel subnet */
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* flannel subnet */
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* flannel subnet */
KUBE-FORWARD  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
DOCKER-ISOLATION  all  --  0.0.0.0/0            0.0.0.0/0           
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     all  --  10.244.0.0/16        0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            10.244.0.0/16       

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-SERVICES  all  --  0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL  all  --  0.0.0.0/0            0.0.0.0/0           

Chain DOCKER (1 references)
target     prot opt source               destination         

Chain DOCKER-ISOLATION (1 references)
target     prot opt source               destination         
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           

Chain KUBE-EXTERNAL-SERVICES (1 references)
target     prot opt source               destination         

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination         
DROP       all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination         
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT     all  --  10.244.0.0/16        0.0.0.0/0            /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            10.244.0.0/16        /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-SERVICES (1 references)
target     prot opt source               destination         

Chain syn-flood (1 references)
target     prot opt source               destination         
RETURN     tcp  --  0.0.0.0/0            0.0.0.0/0            limit: avg 3/sec burst 6
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
root@XMT01-VIDEO01:~# 
root@XMT01-MIDDLEWARE01:~# iptables -nL
Chain INPUT (policy DROP)
target     prot opt source               destination         
KUBE-EXTERNAL-SERVICES  all  --  0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL  all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:22
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:80
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:443
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:4869 /* zimg server */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpts:8000:8100 /* proxy server */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:6443
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpts:10250:10252
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:10255
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpts:30000:32767
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpts:2379:2380
ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0            limit: avg 1/sec burst 10
ACCEPT     all  -f  0.0.0.0/0            0.0.0.0/0            limit: avg 100/sec burst 100
syn-flood  tcp  --  0.0.0.0/0            0.0.0.0/0            tcp flags:0x17/0x02
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* flannel subnet */
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* flannel subnet */
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* flannel subnet */
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* flannel subnet */
KUBE-FORWARD  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
DOCKER-ISOLATION  all  --  0.0.0.0/0            0.0.0.0/0           
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     all  --  10.244.0.0/16        0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            10.244.0.0/16       

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-SERVICES  all  --  0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL  all  --  0.0.0.0/0            0.0.0.0/0           

Chain DOCKER (1 references)
target     prot opt source               destination         

Chain DOCKER-ISOLATION (1 references)
target     prot opt source               destination         
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           

Chain KUBE-EXTERNAL-SERVICES (1 references)
target     prot opt source               destination         

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination         
DROP       all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination         
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT     all  --  10.244.0.0/16        0.0.0.0/0            /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            10.244.0.0/16        /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-SERVICES (1 references)
target     prot opt source               destination         

Chain syn-flood (1 references)
target     prot opt source               destination         
RETURN     tcp  --  0.0.0.0/0            0.0.0.0/0            limit: avg 3/sec burst 6
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable


strigazi commented Nov 20, 2018

I think this issue needs to be re-opened. With [0], I still need to apply iptables -P FORWARD ACCEPT.

[0] quay.io/coreos/flannel:v0.10.0-amd64

cc @tomdee

@FengyunPan2

I see it with 0.10.0 too.
/reopen

@ajay-alef

I was also facing the same issue, until I allowed "All Traffic" in the AWS security group.

@rajeshmuraleedharan

Flushing all my firewall rules with iptables --flush and iptables -t nat --flush and then restarting Docker fixed it.

check this link

willgorman pushed a commit to willgorman/flannel that referenced this issue Jun 19, 2019
To work around the Docker change from v1.13 which
changed the default FORWARD policy to DROP.

The change has bitten many many users.

The troubleshooting documentation is also updated to talk about the issue.

Replaces PR flannel-io#862
Fixes flannel-io#834
Fixes flannel-io#823
Fixes flannel-io#609
Fixes flannel-io#799

HindrikStegenga commented Oct 18, 2019

I fixed it permanently by doing this:
Edit /etc/sysctl.conf
Add line: net.ipv4.ip_forward=1
Reboot
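For what it's worth, the same setting can also be applied without a reboot (sketch):

# enable IPv4 forwarding immediately, then reload /etc/sysctl.conf
sudo sysctl -w net.ipv4.ip_forward=1
sudo sysctl -p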

@mcoto004CR

Modifying /etc/sysctl.conf did the trick, thanks.

@Peter0x48

For me too. After that change, the iptables FORWARD policy is set to ACCEPT; before, it was DROP, and traffic worked only if I set the policy to ACCEPT manually.

Is this really the correct solution? I would prefer that the policy stay at DROP, with appropriate rules allowing the traffic that is needed.

@echoface

[UFW BLOCK] IN=flannel.1 OUT=cni0 MAC=72:b6:26:dd:65:45:8a:ad:1c:19:5b:d5:08:00 SRC=10.244.1.8 DST=10.244.0.14 LEN=93 TOS=0x00 PREC=0x00 TTL=62 ID=22646 DF PROTO=UDP SPT=52343 DPT=53 LEN=73

sudo ufw allow in on flannel.1 && sudo ufw allow out on flannel.1
sudo ufw allow in on cni0 && sudo ufw allow out on cni0
Both are configured, but it still does not work.
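One thing I have not tried yet is ufw's own forward policy, which defaults to DROP and is not covered by per-interface allow rules; if that is the cause, something like this might be needed (untested):

# /etc/default/ufw -- allow forwarded traffic by default (assumption: the DROP forward policy is the cause)
DEFAULT_FORWARD_POLICY="ACCEPT"
# then reload ufw
sudo ufw reload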

Any suggestions?
