
Cross-DC discovery + updates? #17

Open
e271828- opened this issue Jul 29, 2019 · 6 comments

@e271828-

e271828- commented Jul 29, 2019

@thrawn01 The architecture docs don’t seem to cover this. How is multi-geo/multi-cluster intended to work? Are you doing full peering across all of your clusters?

Also, would be interesting to understand better why you chose this route rather than simply shipping a rate limiting library or sidecar hitting a pre-existing memory cache. By “no deployment synchronization with a dependent service” are you referring to data structure changes?

@thrawn01
Contributor

Hi @e271828-, sorry it took so long to respond. I was on a short hiatus when all this went public.

@thrawn01 The architecture docs don’t seem to cover this. How is multi-geo/multi-cluster intended to work? Are you doing full peering across all of your clusters?

We currently run only a single cluster per region, which handles all of our traffic. You can set up a separate cluster within the same region by changing how the peers discover each other. For etcd you would change GUBER_ETCD_KEY_PREFIX to something like /cluster-1, /cluster-2, etc. For Kubernetes you would change GUBER_K8S_ENDPOINTS_SELECTOR to something that selects the Gubernator peers of one cluster over another, e.g. app=guber-cluster-1.
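For illustration, the separation might look something like this (the cluster names and label are just examples; only the two environment variables above come from Gubernator itself):

```
# etcd discovery: give each cluster its own key prefix
GUBER_ETCD_KEY_PREFIX=/cluster-1          # use /cluster-2 on the second cluster's nodes

# Kubernetes discovery: select only the pods that belong to this cluster
GUBER_K8S_ENDPOINTS_SELECTOR=app=guber-cluster-1
```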

Also, would be interesting to understand better why you chose this route rather than simply shipping a rate limiting library or sidecar hitting a pre-existing memory cache.

I attempted to explain our rationale for not using Redis in this blog post; it also touches on the library vs. service/sidecar question. https://www.mailgun.com/blog/gubernator-cloud-native-distributed-rate-limiting-microservices

By “no deployment synchronization with a dependent service” are you referring to data structure changes?

Yes, but mostly I was highlighting config synchronization. If Gubernator required a config file of available rate limit definitions at startup, then any new rate limit definitions needed by dependent services would require an out-of-band process (typically Chef or Puppet) to ensure the config had the proper definitions before the dependent service could make use of them. That kind of deployment synchronization is cumbersome and error prone.

@e271828-
Author

e271828- commented Aug 4, 2019

@thrawn01 Are your rate limits not shared across regions, then?

@thrawn01
Contributor

thrawn01 commented Aug 5, 2019

@e271828- Currently we do not share rate limits across regions. The simplest answer is to shard accounts across regions such that an account always uses the same region. This works great if your rate limits are tied to an account (like Mailgun's are).
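As a toy illustration of that sharding idea (the region names and the FNV hash here are just assumptions for the example, not anything Gubernator provides):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// regions is a hypothetical list of deployed regions.
var regions = []string{"us-east", "us-west", "eu-central"}

// regionForAccount deterministically pins an account to one region, so all
// of that account's rate limit traffic lands in the same cluster.
func regionForAccount(accountID string) string {
	h := fnv.New32a()
	h.Write([]byte(accountID))
	return regions[h.Sum32()%uint32(len(regions))]
}

func main() {
	fmt.Println(regionForAccount("acct-12345")) // always the same region for this account
}
```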

But for a more flexible use case you could have Gubernator forward requests across datacenters. You would incur the latency of a round trip to the remote datacenter, but Gubernator's batching feature would make this efficient when handling many requests. Provided the user is OK with the cross-datacenter latency, I feel this could be a decent solution for some. (Cassandra works this way.)

The real problem for cross-datacenter clusters is keeping the peer list up to date. I don't know of any multi-datacenter key/value store that would do this well, and any Raft-based system will be sensitive to latency and might not be a good fit. I'm open to suggestions on how best to solve this. Gubernator's peer discovery is pluggable by design, so implementing a solution would not be hard; finding the right solution is the hard part.

I'm open to alternative solutions, as this sounds like a fun problem to solve!

Thanks for bringing this up!

@thrawn01
Contributor

thrawn01 commented Aug 5, 2019

One of the guys here mentioned using a gossip protocol for peer discovery across datacenters.

This might be a good solution. https://github.com/hashicorp/memberlist
https://dev.to/davidsbond/golang-creating-distributed-systems-using-memberlist-2fa9

I'll take a stab at making a plugin using this library; at a minimum it would be nice to have a peer discovery setup that has no external dependencies.
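For anyone curious, a minimal sketch of what discovery via memberlist could look like (the node name, addresses, and use of the WAN defaults are placeholder assumptions, not the actual plugin):

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/memberlist"
)

func main() {
	// WAN defaults loosen timeouts for higher cross-datacenter latency.
	cfg := memberlist.DefaultWANConfig()
	cfg.Name = "gubernator-dc1-node1" // must be unique per node

	list, err := memberlist.Create(cfg)
	if err != nil {
		log.Fatalf("failed to create memberlist: %v", err)
	}

	// Join via any known node in either datacenter; gossip propagates the rest.
	if _, err := list.Join([]string{"10.0.1.5:7946", "10.1.1.5:7946"}); err != nil {
		log.Fatalf("failed to join cluster: %v", err)
	}

	// The resulting membership could feed Gubernator's pluggable peer discovery.
	for _, node := range list.Members() {
		fmt.Printf("peer: %s at %s\n", node.Name, node.Addr)
	}
}
```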

@e271828-
Author

e271828- commented Aug 6, 2019

Sounds interesting, look forward to taking a look!

@thrawn01 thrawn01 self-assigned this Jan 5, 2020
@thrawn01
Contributor

thrawn01 commented Jan 5, 2020

Finally getting started on true multi-datacenter support.

The approach I'm taking here is that the rate limit is shared across datacenters, but responses to the client don't need to wait for a call to an owning node in a different datacenter. Responses always come from the local datacenter, while hit updates are asynchronously aggregated and batch-updated across datacenters.

To accomplish this, each DC will have its own hash ring, and rate limits will be "owned" by both rings, so rate limit keys will need to be hashed against each datacenter's ring. The local owner will aggregate hits much like when using Behavior = GLOBAL and forward them to the owning node in the other datacenter after a configurable interval, again just like Behavior = GLOBAL, while responses are returned immediately by the local owning node.
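To make the per-datacenter ring idea concrete, here's a rough sketch; the Ring type is a toy modulo-hash stand-in, not Gubernator's actual consistent hash:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// Ring is a toy stand-in for a consistent hash ring: it just picks a peer by modulo.
type Ring struct {
	peers []string
}

func (r *Ring) Owner(key string) string {
	h := fnv.New32a()
	h.Write([]byte(key))
	return r.peers[h.Sum32()%uint32(len(r.peers))]
}

func main() {
	localRing := &Ring{peers: []string{"dc1-node1", "dc1-node2"}}
	remoteRing := &Ring{peers: []string{"dc2-node1", "dc2-node2"}}

	key := "account-42_requests_per_minute"

	// Respond to the client from the local owner right away...
	fmt.Println("local owner:", localRing.Owner(key))
	// ...and asynchronously batch hit updates to the owner in the other datacenter.
	fmt.Println("remote owner:", remoteRing.Owner(key))
}
```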

I'm going to use memberlist to discover the nodes in each datacenter. Static node configuration will also be an option.

I had toyed with the idea of making one large cluster across datacenters, where there would be only one owning node and all requests for a rate limit would be forwarded to that owner across DC boundaries. You could still do this with a properly implemented memberlist-based discovery, but it would introduce a ton of cross-datacenter chatter plus the latency of waiting on the other datacenter. It also defeats the purpose of running your app in multiple DCs, since if one goes down you lose half of all the rate limits currently in flight. With my proposed approach the rate limits are owned by both datacenters and only batched hits are transferred across DC boundaries.
