Describe the bug
Currently, it is possible to select both the "Default Networking" and "CNI Overlay Network" options on the "Networking Details" page, but this results in a cluster where nodes and pods run in the same subnet (i.e. pods do not run in the overlay subnet, as expected). We should add a warning to make this clear to users (or even prevent them from copying the shell scripts, as happens when "allowed IP ranges" is enabled on the cluster but an IP address is missing).
To Reproduce
Steps to reproduce the behavior:
1. Check the "CNI Overlay Networking" box under "CNI Features"
2. Select the "Default Networking" option under "Default or Custom VNET"
3. Navigate to the "Bash" or "PowerShell" tabs under "Deploy Cluster", and examine the output
4. Note the missing podCidr parameter (even though networkPluginMode=Overlay is set)
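For context, the Overlay-related flags that the generated script would need look roughly like this (a sketch with illustrative values, not AKS Construction's actual output):

```shell
# Sketch of the flags an Azure CNI Overlay cluster needs (CIDR value is illustrative).
# The bug: the generated script sets --network-plugin-mode overlay but omits --pod-cidr.
OVERLAY_ARGS="--network-plugin azure --network-plugin-mode overlay --pod-cidr 10.244.0.0/16"
# az aks create -g <resource-group> -n <cluster-name> $OVERLAY_ARGS   # not executed here
echo "az aks create ... $OVERLAY_ARGS"
```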
If you deploy this cluster, you will get pods running in the same subnet as the nodes, which is not as expected. It doesn't seem to be possible to fix this once the cluster is deployed.
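To confirm the symptom on a deployed cluster, the pod CIDR can be inspected via the CLI (sketch only; the `az` command is shown but not executed, and the resource names are placeholders):

```shell
# Sketch: check whether a deployed cluster actually got a pod CIDR.
# An empty podCidr alongside networkPluginMode=Overlay matches the symptom
# described above (pods landing in the node subnet).
CHECK_CMD='az aks show -g <resource-group> -n <cluster-name> --query "networkProfile.podCidr" -o tsv'
echo "$CHECK_CMD"   # shown for reference only; not run here
```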
When selecting "Default Networking", there is a pop-over with the following information, but it doesn't mention CNI Overlay (presumably because it was in preview until 30th May):
Expected behavior
I think a stronger notification is required to inform the user that the selected options are not a compatible configuration, something like the warning provided when the Azure AppGW Ingress Controller add-on is selected with "Enable KeyVault Integration for TLS certificates" but the Secrets Store CSI driver is not provisioned into the cluster:
Or maybe provide a warning and prevent the user from copying the shell scripts, as in the case when "allowed IP ranges" is enabled on the cluster but an IP address/range is not provided:
AKS Construction should warn me that I have created an invalid configuration which will not work in the way I might be expecting.
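As a sketch of the kind of check being requested (hypothetical logic, not AKS Construction's actual validation code):

```shell
# Hypothetical validation: warn when Overlay is selected but no pod CIDR is set.
networkPluginMode="Overlay"   # what "CNI Overlay Networking" selects
podCidr=""                    # what "Default Networking" currently leaves empty
if [ "$networkPluginMode" = "Overlay" ] && [ -z "$podCidr" ]; then
  echo "WARNING: CNI Overlay selected without a pod CIDR; pods will run in the node subnet." >&2
fi
```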
pjlewisuk changed the title from "Default Networking" mode not compatible with Azure CNI Overlay to Bug: "Default Networking" mode not compatible with Azure CNI Overlay on Jul 12, 2023
@pjlewisuk regarding the pods not going into their Overlay subnet, this should now be resolved by #633; however, let me know if that's not the case. As for the other feedback, if anything is still pending, you might want to spin up another issue and close this one, or attach another PR and rephrase the issue description? I'll leave that for you to decide. :)
Additional context
Related to #617