Create an Internal Load Balancer if configured #1222
Conversation
Hi @bfournie. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
✅ Deploy Preview for kubernetes-sigs-cluster-api-gcp ready!
/ok-to-test
Force-pushed from 2d101b7 to e312d9c (compare)
```go
// APIInternalAddress is the IPV4 regional address assigned to the
// internal Load Balancer.
// +optional
APIInternalAddress *string `json:"apiInternalIpAddress,omitempty"`
```
I understand that this would be a breaking change, but should we consider a structure to hold these values? The added fields are the same as those for the API server, so a lot of duplicated fields appear.
The main reason I didn't do that is that it would be a breaking change; I tried to keep the same API for consistency and just add the additional fields.
If the consensus is that it's OK to change this API, I can make it an array.
It makes sense to add this as a struct, since we are adding these fields now anyway.
wdyt @damdo
OK, how would we handle the API in this case though? Would the existing fields (APIServerAddress, APIServerHealthCheck, etc.) need to be marked as deprecated but still handled for a few releases until the deprecation is complete? It seems like that would add a lot of complexity to this API.
I tend to agree with @bfournie here.
I'd probably stick with a non-breaking change, and then schedule a full API re-review for a new v1beta2 where we can improve the situation and plan the breaking changes we want to make to the API to clean it up.
cc. @nrb
Force-pushed from e312d9c to 173c005 (compare)
/hold
Force-pushed from 6db2752 to f5670bf (compare)
Remove the configuration of the GCP Internal Load Balancer from the installer, as it's now being done in the CAPG provider. This allows only the Internal LB to be created, to support Private Clusters. This depends on kubernetes-sigs/cluster-api-provider-gcp#1222, which is still under review.
/unhold
Correct, it will not break existing clusters on upgrade: they will not have LoadBalancerType set, so only an External LB will be created, which is the current behavior. See https://github.com/kubernetes-sigs/cluster-api-provider-gcp/pull/1222/files#diff-a7274316ed5d60f25cc61e1bcb771a612c3566832a59ecc3a13b279a849c95ebR61
/approve thanks!
will wait for the e2e tests :)
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: bfournie, cpanato The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/unhold
Thank you
Adds an e2e test when the internal load balancer is configured. This requires kubernetes-sigs#1222
I'm having an issue with the
Seems super weird. Creating a new identical frontend/forwarding rule, or recreating the old one, causes the health check to start passing and the LB to start accepting requests. This is for a 1-node cluster. If there's a better place to post this, I can do that.
So was this with just the Internal LB, or both External and Internal LBs? It's probably best to create an issue under https://github.com/kubernetes-sigs/cluster-api-provider-gcp/issues so we can track it.
Internal-only LB, no external. I'll work to create an issue soon.
What type of PR is this?
/kind feature
What this PR does / why we need it:
Provide the ability to configure the types of Load Balancers to be created (Internal and/or External). By default, an External Proxy Load Balancer will be created per the current implementation. If set for an Internal Load Balancer, an Internal Passthrough Load Balancer will be created using resources in the specified region.
Which issue(s) this PR fixes:
Fixes #1221
Special notes for your reviewer:
TODOs:
Release note: