
Bug: "Default Networking" mode not compatible with Azure CNI Overlay #618

Open
pjlewisuk opened this issue Jul 12, 2023 · 1 comment
Labels: bug · good first issue · helper-ui 🧙‍♀️

@pjlewisuk (Contributor)
Describe the bug
Currently, it is possible to select both the "Default Networking" and "CNI Overlay Network" options on the "Networking Details" tab, but this results in a cluster with nodes and pods running in the same subnet (i.e. pods are not placed in the overlay CIDR as expected). We should add a warning to make it clear to users that this combination doesn't work (or even prevent them from copying the shell scripts, as happens when allowed IP ranges is enabled on the cluster but no IP address is provided).
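For anyone wanting to confirm the symptom on a cluster deployed with this combination, something like the following should show it (a minimal sketch; the resource group and cluster names are placeholders):

```bash
# Print the cluster's network profile: this shows whether it ended up with an
# overlay pod CIDR at all, alongside the plugin and plugin mode.
az aks show -g <resource-group> -n <cluster-name> \
  --query "networkProfile.{plugin:networkPlugin,pluginMode:networkPluginMode,podCidr:podCidr}" -o table

# Compare pod IPs with node IPs: with this bug the pods draw addresses
# from the node subnet instead of a separate overlay CIDR.
kubectl get nodes -o wide
kubectl get pods -A -o wide
```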

To Reproduce
Steps to reproduce the behavior:

  1. Go to the AKS Construction homepage
  2. Navigate to "Networking Details"
  3. Check the "CNI Overlay Networking" box under "CNI Features"
  4. Select the "Default Networking" option under "Default or Custom VNET"
  5. Navigate to the "Bash" or "PowerShell" tab under "Deploy Cluster" and examine the output
  6. Note the missing podCidr parameter (even though networkPluginMode=Overlay is set)

If you deploy this cluster, the pods end up running in the same subnet as the nodes rather than in a separate overlay CIDR, which is not what you would expect. It doesn't seem to be possible to fix this once the cluster is deployed.
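For comparison, a correctly configured CNI Overlay deployment specifies a pod CIDR alongside the overlay plugin mode. A minimal hand-written equivalent of what the generated script should express (values illustrative; AKS Construction drives the same settings through its networkPluginMode and podCidr template parameters):

```bash
# Illustrative sketch of the settings a working CNI Overlay cluster uses.
# --pod-cidr sets the dedicated overlay address space for pods; the generated
# script currently omits the equivalent podCidr parameter.
az aks create \
  -g <resource-group> -n <cluster-name> \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16
```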

When "Default Networking" is selected, a pop-over shows the following information, but it doesn't mention CNI Overlay (presumably because the feature was in preview until 30th May):
[screenshot: "Default Networking" pop-over]

Expected behavior
I think a stronger notification is required to inform the user that the selected options are not a compatible configuration. Something like the warning provided when Azure AppGW Ingress Controller add-on is selected with "Enable KeyVault Integration for TLS certificates", but the Secrets Store CSI driver is not provisioned into the cluster:
[screenshot: AppGW Ingress Controller / KeyVault integration warning]

Or maybe provide a warning and prevent them from copying the shell scripts, like in the case where "allowed IP ranges" is enabled on the cluster but an IP address/range is not provided:
[screenshot: "allowed IP ranges" validation warning]

AKS Construction should warn me that I have created an invalid configuration which will not work in the way I might be expecting.


Additional context
Related to #617

@pjlewisuk changed the title to Bug: "Default Networking" mode not compatible with Azure CNI Overlay on Jul 12, 2023
@khowling added the bug, good first issue, and helper-ui 🧙‍♀️ labels on Jul 20, 2023
@pjlewisuk self-assigned this on Jul 27, 2023
@samaea (Contributor) commented Aug 10, 2023

@pjlewisuk regarding the pods not landing in their overlay subnet, this should now be resolved by #633. However, let me know if that's not the case. Regarding the other feedback, if there's anything else pending, you might want to spin up another issue and close this one, or attach another PR and rephrase the issue description? I'll leave that for you to decide. :)
