
bpf: lxc: simplify RevNAT path for loopback replies #32480

Open

julianwiedmann wants to merge 2 commits into main from 1.16-bpf-loopback-revnat

Conversation

@julianwiedmann (Member) commented May 11, 2024

This PR aims to clean up some long-standing tech debt in how we handle reply traffic for a loopback connection (a client connecting to itself through a service).

Quoting from the first patch:

The usual flow for handling service traffic to a local backend is as
follows:
* requests are load-balanced in from-container. This entails selecting
a backend (and caching the selection in a CT_SERVICE entry), DNATing the
packet, creating a CT_EGRESS entry for the resulting `client -> backend`
flow, applying egress network policy, and delivering the packet locally
to the backend pod. As part of the local delivery, we also create a
CT_INGRESS entry and apply ingress network policy.
* replies bypass the backend's egress network policy (because the CT
lookup returns CT_REPLY), and pass to the client via local delivery. In
the client's ingress path they bypass ingress network policy (the packets
match as a reply against the CT_EGRESS entry), and we apply RevDNAT based
on the `rev_nat_index` in the CT_EGRESS entry (see the sketch below).
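
To make the reply handling above concrete, here is a minimal, self-contained C sketch. It is not the actual bpf/lxc.c code: `struct ct_entry`, `client_ingress()` and the example addresses are hypothetical stand-ins.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct flow {
	uint32_t saddr;          /* current source IP of the packet      */
	uint32_t daddr;          /* current destination IP of the packet */
};

struct ct_entry {
	uint16_t rev_nat_index;  /* which service mapping to reverse on replies  */
	bool     is_reply;       /* packet matched this entry in reply direction */
};

/* Client-side ingress handling: a packet that matches the CT_EGRESS entry
 * as a reply bypasses ingress network policy and gets RevDNATed, i.e. its
 * source is rewritten from the backend IP back to the service VIP that
 * rev_nat_index refers to. */
static void client_ingress(struct flow *pkt, const struct ct_entry *ct_egress,
			   uint32_t service_vip)
{
	if (ct_egress->is_reply) {
		pkt->saddr = service_vip;  /* RevDNAT via rev_nat_index */
		printf("reply: policy bypass, RevDNAT with rev_nat_index %u\n",
		       (unsigned int)ct_egress->rev_nat_index);
		return;
	}

	printf("forward direction: apply ingress network policy\n");
}

int main(void)
{
	struct ct_entry ct_egress = { .rev_nat_index = 7, .is_reply = true };
	struct flow reply = {
		.saddr = 0x0a00010b,  /* backend pod IP (example) */
		.daddr = 0x0a000101,  /* client pod IP (example)  */
	};

	client_ingress(&reply, &ct_egress, 0x0a600001 /* service VIP, example */);
	return 0;
}
```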

For a loopback connection (where the client pod is selected as the backend
for its own connection) this looks slightly more complicated:
* As we can't establish a `client -> client` connection, the requests are
additionally SNATed to IPV4_LOOPBACK. Network policy in the forward
direction is explicitly skipped (as the matched CT entries have the
`.loopback` flag set).
* In the reply direction, we can't deliver to IPV4_LOOPBACK (as that's not
a valid IP for an endpoint lookup). So a reply already gets fully RevNATed
by from-container, using the CT_INGRESS entry's `rev_nat_index`. But this
means that when passing into the client pod (either via to-container, or
via the ingress policy tail call), the packet doesn't match as a reply to
the CT_EGRESS entry - and so we don't benefit from the automatic network
policy bypass. We ended up with two workarounds for this aspect (sketched
after this list):
(1) when to-container is installed, it contains custom logic to match the
    packet as a loopback reply, and skip ingress policy
    (see https://github.com/cilium/cilium/pull/27798).
(2) otherwise we skip the ingress policy tail call, and forward the packet
    straight into the client pod.
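
For illustration, a hedged sketch of the forward-direction hairpin handling described in the first bullet; the `IPV4_LOOPBACK` value, `lb_request()` and the struct layouts are stand-ins invented for this example, not the real datapath implementation.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define IPV4_LOOPBACK 0xa9fe00fe  /* placeholder value, for illustration only */

struct flow {
	uint32_t saddr;
	uint32_t daddr;
};

struct ct_entry {
	uint16_t rev_nat_index;
	bool     loopback;  /* forward-direction network policy is skipped */
};

/* Request handling in from-container: DNAT the service VIP to the selected
 * backend. If the selected backend is the client itself (hairpin case), the
 * source is additionally SNATed to IPV4_LOOPBACK and the CT entries are
 * flagged as loopback. */
static void lb_request(struct flow *pkt, uint32_t backend_ip,
		       struct ct_entry *ct)
{
	pkt->daddr = backend_ip;          /* DNAT: service VIP -> backend */

	if (pkt->saddr == backend_ip) {   /* backend == client: loopback */
		pkt->saddr = IPV4_LOOPBACK;
		ct->loopback = true;
	}
}

int main(void)
{
	struct ct_entry ct = { .rev_nat_index = 7, .loopback = false };
	struct flow req = {
		.saddr = 0x0a000101,  /* client pod IP (example) */
		.daddr = 0x0a600001,  /* service VIP (example)   */
	};

	/* the client pod is selected as its own backend */
	lb_request(&req, 0x0a000101, &ct);
	printf("loopback=%d saddr=0x%08x daddr=0x%08x\n",
	       ct.loopback, (unsigned int)req.saddr, (unsigned int)req.daddr);
	return 0;
}
```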

The downside of these workarounds is that we bypass the *whole* ingress
program, not just the network policy part. So the CT_EGRESS entry doesn't
get updated (lifetime, statistics, observed packet flags, ...), and we run
the hidden risk that any logic we later add to the ingress program won't
be executed for loopback replies.

This patch aims to eliminate the need for such workarounds. At its core,
it detects loopback replies in from-container and overrides the packet's
destination IP. Instead of attempting an endpoint lookup for IPV4_LOOPBACK,
we can now look up the actual client endpoint - and deliver to the ingress
policy program, *without* needing to early-RevNAT the packet. Instead, the
replies follow the usual packet flow, match the CT_EGRESS entry in the
ingress program, naturally bypass ingress network policy, and are only
*then* RevNATed based on the CT_EGRESS entry's `rev_nat_index`.

Consequently we follow the standard datapath, without needing to skip over
policy programs. The CT_EGRESS entry is updated for every reply.
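
A rough sketch of the idea, under the assumption that the real client address can be recovered from conntrack state when the reply arrives; `struct ct_state`, `delivery_addr()` and the cached `client_addr` field are illustrative assumptions, not the code in this PR.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define IPV4_LOOPBACK 0xa9fe00fe  /* placeholder value, for illustration only */

/* Hypothetical per-connection state; assume the real client address can be
 * recovered from connection tracking when the reply comes in. */
struct ct_state {
	bool     loopback;      /* reply belongs to a hairpin connection      */
	uint32_t client_addr;   /* real client IP (assumed to be recoverable) */
};

/* from-container reply handling: for loopback replies, only the address used
 * for local delivery is overridden so that the endpoint lookup finds the
 * client pod. The packet itself is NOT RevNATed here; it is handed to the
 * client's ingress policy program, where it matches the CT_EGRESS entry as a
 * reply and is RevNATed there. */
static uint32_t delivery_addr(uint32_t daddr, const struct ct_state *ct)
{
	if (ct->loopback && daddr == IPV4_LOOPBACK)
		return ct->client_addr;  /* deliver to the real client endpoint */

	return daddr;
}

int main(void)
{
	struct ct_state ct = { .loopback = true, .client_addr = 0x0a000101 };

	printf("deliver to 0x%08x\n",
	       (unsigned int)delivery_addr(IPV4_LOOPBACK, &ct));
	return 0;
}
```

The point of the sketch is that only the address used for the endpoint lookup changes in from-container; the packet headers stay untouched until the client's ingress program performs the RevNAT.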

Thus we can also remove the manual policy bypass for loopback replies when
using per-EP routing. It's no longer needed, and in fact the replies would
no longer match its lookup logic, as at that point they haven't been
RevNATed yet. This effectively reverts
e2829a061a53 ("bpf: lxc: support Pod->Service->Pod hairpinning with endpoint routes").

@julianwiedmann added the release-note/misc, kind/tech-debt, area/loadbalancing and sig/datapath labels on May 11, 2024
@julianwiedmann (Member, Author) commented:

/test

@julianwiedmann (Member, Author) commented:

/test

@julianwiedmann force-pushed the 1.16-bpf-loopback-revnat branch 2 times, most recently from c4a430e to afd51f7 on May 11, 2024 09:33
@julianwiedmann (Member, Author) commented:

/test

@julianwiedmann (Member, Author) commented:

/test

@julianwiedmann changed the title from "1.16 bpf loopback revnat" to "bpf: lxc: simplify RevNAT path for loopback replies" on May 17, 2024
@julianwiedmann (Member, Author) commented:

/test

@julianwiedmann added the dont-merge/preview-only label on May 17, 2024
@julianwiedmann marked this pull request as ready for review on May 17, 2024 08:20
@julianwiedmann requested a review from a team as a code owner on May 17, 2024 08:20
@julianwiedmann requested review from jibi and ti-mo, and removed the request for jibi, on May 17, 2024 08:20
The `hairpin_flow` parameter was previously needed so that loopback replies
could bypass the ingress network policy. But now that all callers set
this parameter to `false`, we can safely remove the corresponding logic.

Signed-off-by: Julian Wiedmann <[email protected]>
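
As an illustration of what dropping such an always-false parameter amounts to, a small hypothetical before/after sketch; `policy_check_old()` and `policy_check_new()` are stand-ins and do not mirror the real function touched by this commit.

```c
#include <stdbool.h>
#include <stdio.h>

enum verdict { VERDICT_DROP = 0, VERDICT_ALLOW = 1 };

/* Old shape: an extra parameter let loopback replies skip the policy check. */
static enum verdict policy_check_old(bool allowed_by_policy, bool hairpin_flow)
{
	if (hairpin_flow)
		return VERDICT_ALLOW;  /* bypass ingress network policy */

	return allowed_by_policy ? VERDICT_ALLOW : VERDICT_DROP;
}

/* New shape: with every caller passing hairpin_flow = false, the parameter
 * and its branch are dead code and can simply be dropped. */
static enum verdict policy_check_new(bool allowed_by_policy)
{
	return allowed_by_policy ? VERDICT_ALLOW : VERDICT_DROP;
}

int main(void)
{
	printf("old: %d, new: %d\n",
	       policy_check_old(true, false), policy_check_new(true));
	return 0;
}
```
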
@julianwiedmann (Member, Author) commented:

Rebase to resolve a trivial conflict.

@julianwiedmann (Member, Author) commented:

/test

@ti-mo (Contributor) left a comment:

Glad the hack is gone. 👍 Can't really give a meaningful comprehensive review, though.
