
Ingresses load balancer status should be updated #12

Open
excieve opened this issue May 5, 2019 · 3 comments
Labels: enhancement (New feature or request), stabilisation (Issues required to stabilise the IC)

Comments

excieve (Member) commented May 5, 2019

When the Tyk service is of type LoadBalancer, the ingress controller should update the status of all its ingresses with the service's IP/hostname; otherwise the status is left empty and tools such as k8s ExternalDNS, Rancher, etc. can't discover the connection properly.

See how status.loadBalancer is empty in this ingress:

        {
            "apiVersion": "extensions/v1beta1",
            "kind": "Ingress",
            "metadata": {
                "annotations": {
                    "kubernetes.io/ingress.class": "tyk"
                },
                "creationTimestamp": "2019-05-05T00:57:27Z",
                "generation": 1,
                "labels": {
                    "app": "dash-ing-tyk-pro",
                    "chart": "tyk-pro-0.2.0",
                    "heritage": "Tiller",
                    "io.cattle.field/appId": "tyk-pro",
                    "release": "tyk-pro"
                },
                "name": "dash-ing-tyk-pro",
                "namespace": "tyk-ingress",
                "resourceVersion": "265698",
                "selfLink": "/apis/extensions/v1beta1/namespaces/tyk-ingress/ingresses/dash-ing-tyk-pro",
                "uid": "c07d0141-6ed0-11e9-b7f0-067176ca22be"
            },
            "spec": {
                "rules": [
                    {
                        "host": "tyk-dashboard.local",
                        "http": {
                            "paths": [
                                {
                                    "backend": {
                                        "serviceName": "dashboard-svc-tyk-pro",
                                        "servicePort": 3000
                                    },
                                    "path": "/"
                                }
                            ]
                        }
                    }
                ]
            },
            "status": {
                "loadBalancer": {}
            }
        }

While there's a LB service for the gateway:

        {
            "apiVersion": "v1",
            "kind": "Service",
            "metadata": {
                "annotations": {
                    "field.cattle.io/publicEndpoints": "[{\"addresses\":[\"a1cc6f66d6ebb11e9b7f0067176ca22b-457263279.us-east-2.elb.amazonaws.com\"],\"port\":443,\"protocol\":\"TCP\",\"serviceName\":\"tyk-ingress:gateway-svc-tyk-pro\",\"allNodes\":false}]"
                },
                "creationTimestamp": "2019-05-04T22:22:33Z",
                "labels": {
                    "app": "gateway-svc-tyk-pro",
                    "chart": "tyk-pro-0.2.0",
                    "heritage": "Tiller",
                    "io.cattle.field/appId": "tyk-pro",
                    "release": "tyk-pro"
                },
                "name": "gateway-svc-tyk-pro",
                "namespace": "tyk-ingress",
                "resourceVersion": "261904",
                "selfLink": "/api/v1/namespaces/tyk-ingress/services/gateway-svc-tyk-pro",
                "uid": "1cc6f66d-6ebb-11e9-b7f0-067176ca22be"
            },
            "spec": {
                "clusterIP": "10.100.134.70",
                "externalTrafficPolicy": "Local",
                "healthCheckNodePort": 30713,
                "ports": [
                    {
                        "nodePort": 32327,
                        "port": 443,
                        "protocol": "TCP",
                        "targetPort": 443
                    }
                ],
                "selector": {
                    "app": "gateway-tyk-pro",
                    "release": "tyk-pro"
                },
                "sessionAffinity": "None",
                "type": "LoadBalancer"
            },
            "status": {
                "loadBalancer": {
                    "ingress": [
                        {
                            "hostname": "a1cc6f66d6ebb11e9b7f0067176ca22b-457263279.us-east-2.elb.amazonaws.com"
                        }
                    ]
                }
            }
        }

Possible solutions: either specify in the config options which service should provide the LB info (as Traefik does), or auto-detect it in the namespace (as nginx does).

This also needs to be kept in sync in case the LB service changes.
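
A minimal sketch of what the status update itself could look like, assuming the client-go and extensions/v1beta1 APIs of that era (the function name and wiring are hypothetical, not existing tyk-k8s code):

    package controller

    import (
        extv1beta1 "k8s.io/api/extensions/v1beta1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // syncIngressStatus copies the external IP/hostname of the gateway's
    // LoadBalancer service into the status of one managed ingress.
    // (Hypothetical helper, not existing tyk-k8s code.)
    func syncIngressStatus(cs kubernetes.Interface, svcNamespace, svcName string, ing *extv1beta1.Ingress) error {
        svc, err := cs.CoreV1().Services(svcNamespace).Get(svcName, metav1.GetOptions{})
        if err != nil {
            return err
        }

        // extensions/v1beta1 ingresses reuse the core LoadBalancerStatus
        // type, so the service's status can be copied across as-is.
        ing.Status.LoadBalancer = svc.Status.LoadBalancer

        _, err = cs.ExtensionsV1beta1().Ingresses(ing.Namespace).UpdateStatus(ing)
        return err
    }

Note that an UpdateStatus on a stale object can fail with a resourceVersion conflict, so a real implementation would retry on conflict or use a patch.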

excieve added the enhancement label on May 5, 2019
lonelycode (Member) commented
So the workflow would be:

  1. Ingress spec is pushed with Tyk annotation
  2. tyk-k8s detects the ingress, generates API Definition + route and pushes to Dashboard / Gateway API
  3. If tyk-k8s has been configured to know that the Ingress type is LoadBalancer, then modify the ingress object to include the Tyk load-balancer service's LB endpoint (and poll it to keep it current)?

Or is it: if tyk-k8s has been configured to know that the Ingress type is LoadBalancer, then modify the ingress object to carry the URL that has been specified for the ingress (i.e. host+path), since the LB address alone won't actually route anything: the gateway requires the Host header to route properly?

The thing is, with either of those options the controller doesn't update the ingress specs at all at the moment, so that's a whole new K8s API integration.

Also, to keep them in sync, would the controller need to detect all ingresses that have the correct annotation (across namespaces) and then regularly poll them to update the LB value? That could get very tricky with the OSS approach, where the controller is a sidecar: there's the possibility of multiple wasteful writes to the API.
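
For the sync problem, the usual client-go pattern would be a shared informer watching the LB service rather than a timed poll, writing back only when the endpoints actually change. A sketch under the same assumptions (watchLBService and the resync callback are hypothetical):

    package controller

    import (
        "reflect"
        "time"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
    )

    // watchLBService invokes resync whenever the gateway's LoadBalancer
    // service gains or loses an external endpoint. The caller runs the
    // returned informer. (Hypothetical, not existing tyk-k8s code.)
    func watchLBService(cs kubernetes.Interface, ns string, resync func(*corev1.Service)) cache.SharedIndexInformer {
        factory := informers.NewSharedInformerFactoryWithOptions(cs, 10*time.Minute,
            informers.WithNamespace(ns))

        inf := factory.Core().V1().Services().Informer()
        inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
            UpdateFunc: func(oldObj, newObj interface{}) {
                oldSvc, newSvc := oldObj.(*corev1.Service), newObj.(*corev1.Service)
                // Skip no-op updates so the controller doesn't spam
                // the API with identical status writes.
                if !reflect.DeepEqual(oldSvc.Status.LoadBalancer, newSvc.Status.LoadBalancer) {
                    resync(newSvc)
                }
            },
        })
        return inf
    }

That avoids the polling cost; the multiple-sidecar concern would still need deduplication, e.g. only issuing a write when the ingress status actually differs from the service's.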

stale bot commented Mar 18, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs; please comment if you would like this issue to remain open. Thank you for your contributions.

stale bot added the wontfix label on Mar 18, 2020
stale bot closed this as completed on Apr 1, 2020
excieve (Member, Author) commented Apr 13, 2020

This is not stale.

excieve reopened this on Apr 13, 2020
stale bot removed the wontfix label on Apr 13, 2020
excieve added the stabilisation label on May 14, 2020