source ip is not preserved if out of clustercidr #8812
Comments
This seems related to the iptables dataplane. Do you see a MASQUERADE rule being hit, and is it from kube-proxy or Calico? It looks like this is kube-proxy's doing. I wonder whether you would see the same with the eBPF dataplane, as there we (a) replace kube-proxy and (b) treat any service traffic arriving on the main node interfaces pretty much like a nodeport. Also, we always preserve the source IP regardless of the externalTrafficPolicy.
Yeah, the external traffic policy is implemented by kube-proxy in this case.
Source: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/ I suspect the Calico docs are subtly misleading if they state that ClusterIP advertisement will preserve the source IP in this scenario, since kube-proxy will NAT this traffic if it's coming from outside the cluster CIDR (a cluster IP is really intended for use within the cluster).
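To make the NAT behavior above concrete, here is a minimal sketch of the decision kube-proxy's iptables rules encode (the `! -s <clusterCIDR> ... -j KUBE-MARK-MASQ` match): traffic to a ClusterIP is masqueraded whenever the source address falls outside `--cluster-cidr`. The CIDR and addresses below are placeholders, not taken from the reporter's cluster.

```python
import ipaddress

def will_masquerade(source_ip: str, cluster_cidr: str) -> bool:
    """Mirror kube-proxy's masquerade decision for ClusterIP traffic:
    SNAT applies when the source is outside --cluster-cidr."""
    return ipaddress.ip_address(source_ip) not in ipaddress.ip_network(cluster_cidr)

# A pod inside the cluster CIDR keeps its source IP:
print(will_masquerade("10.244.1.7", "10.244.0.0/16"))    # False
# An external client reaching the advertised ClusterIP is SNATed:
print(will_masquerade("192.168.10.5", "10.244.0.0/16"))  # True
```

This is why the reporter sees the source IP preserved only for in-cluster sources: the distinction is the source range, not the traffic policy.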
Agreed, this requires a bit more clarification.
Tried with eBPF on multiple clusters and managed to fix it; it's working as expected for now.
@tomastigera Noticed one different behavior when eBPF is enabled: when the IPPool's natOutgoing is set to false, connections to the Kubernetes API through the service IP start timing out from pods wanting to use it, since there is no iptables rule sending the service IP traffic out to the correct node. How does this work in eBPF mode? I am forced to enable outgoing NAT to solve this issue; is this a bug or expected behavior?
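For reference, the natOutgoing setting mentioned above lives on the Calico IPPool resource. A minimal sketch of the manifest, assuming a placeholder pool name and CIDR (not taken from the reporter's cluster):

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool   # placeholder; use the cluster's actual pool name
spec:
  cidr: 10.244.0.0/16         # placeholder; must match the cluster's pod CIDR
  natOutgoing: true           # SNAT pod traffic leaving the pool (the workaround described above)
```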
We have a cluster up and running, peered with a ToR switch, with ClusterIPs advertised.
We have noticed that if traffic is received on a ClusterIP, it gets masqueraded unless it is received from the range set as kube-proxy's clusterCIDR, which matches our Calico default IPPool.
According to the docs:
https://docs.tigera.io/calico/latest/about/kubernetes-training/about-kubernetes-services#externaltrafficpolicylocal
When the traffic policy is set to Local, I was expecting to see the source IP preserved down to the pod, but it only seems to work if I use the NodePort directly, sending traffic to the node IP address rather than the ServiceIP. Is this expected?
I deployed my cluster via Kubespray and am using version 2.24.1.
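For context, a Service configured the way the report describes looks roughly like the sketch below; the name, selector, and ports are placeholders. Note that `externalTrafficPolicy` only applies to NodePort and LoadBalancer services, which is consistent with source IP preservation being observed on the NodePort path:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                    # placeholder name
spec:
  type: NodePort                  # externalTrafficPolicy requires NodePort or LoadBalancer
  externalTrafficPolicy: Local    # preserve client source IP; route only to local endpoints
  selector:
    app: my-app                   # placeholder selector
  ports:
  - port: 80
    targetPort: 8080
```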