Adding Producers/Consumers to the spec #54

Open

TristenHarr opened this issue Nov 7, 2023 · 0 comments

@TristenHarr
Perhaps we should add Producers/Consumers to the spec such that a connector could emit or consume events.

This would allow us to build connectors for things like Kafka/RabbitMQ and, more importantly, give connector developers a way to handle eventing.

Connectors could emit events that could be "self-consumed" (handled by the same connector) or "consumed" by a different connector.

We could make it so a connector emits events either by calling a web-hook with the event data (an easy escape hatch, similar to v2), or, if the event is self-consumed, via an action that the connector exposes on the graph.
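To make the shape of this concrete, here is a minimal sketch (entirely hypothetical, none of these fields exist in the spec today) of how a connector's schema response might advertise the events it produces and the delivery modes it supports:

```typescript
// Hypothetical event declarations; all field names here are assumptions.
interface EventDefinition {
  name: string;                         // e.g. "new_order_inserted"
  payloadType: Record<string, string>;  // payload field name -> scalar type
}

interface EventCapabilities {
  produces: EventDefinition[];
  delivery: Array<"webhook" | "self_consumed">;
}

// Example: a Postgres connector that emits an event per inserted order.
const ordersConnectorEvents: EventCapabilities = {
  produces: [
    {
      name: "new_order_inserted",
      payloadType: { order_id: "Int", customer_id: "Int", total: "Float" },
    },
  ],
  delivery: ["webhook", "self_consumed"],
};
```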

An example of this might be: a user wants an event that calls some custom business logic whenever a new order is inserted in the orders table. Now that Hasura will be able to run custom business logic, this would let the user set the connector as the event target directly. Ideally this would also provide a way to implicitly handle auth for that connector action: there might be a shared secret key, loaded from the environment in the engine and passed via headers to the connector, which loads the same secret from its own environment. This would let users lock down each event target as coarsely or as finely as they want: one API key for the entire backend of events, one API key per backend service, or one API key per event target.
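As a sketch of the shared-secret idea, assuming the engine forwards the secret in a hypothetical `x-event-secret` header and the connector loads the same value from its own environment:

```typescript
import { timingSafeEqual } from "node:crypto";

// Hypothetical header name and env var; nothing here is part of the spec.
function isAuthorizedEventCall(headers: Record<string, string>): boolean {
  const expected = process.env.EVENT_TARGET_SECRET ?? "";
  const received = headers["x-event-secret"] ?? "";
  const a = Buffer.from(expected);
  const b = Buffer.from(received);
  // timingSafeEqual throws on length mismatch, so check lengths first.
  return a.length === b.length && timingSafeEqual(a, b);
}
```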

If the connector self-consumes the event, the connector could expose a connector-specific mutation, perhaps something like newOrderInserted(orderData); when the event occurs, newOrderInserted is called and passed the event data. At the same time, if a connector emits an event, a different connector should also be able to consume it. Kind of like "remote eventing": one connector emits an event which is then passed through and handled by a different connector.
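Roughly, self-consumption could look like the following sketch, where the connector dispatches its own emitted events to a local mutation handler and only falls back to a (hypothetical) broker for events consumed elsewhere. All names here are illustrative:

```typescript
type OrderEvent = { order_id: number; customer_id: number; total: number };

// Connector-specific mutation handlers, keyed by mutation name.
const mutationHandlers: Record<string, (payload: unknown) => Promise<void>> = {
  newOrderInserted: async (payload) => {
    const order = payload as OrderEvent;
    // ...custom business logic, e.g. send a confirmation email
    console.log(`handling order ${order.order_id}`);
  },
};

async function emitEvent(name: string, payload: unknown): Promise<void> {
  const handler = mutationHandlers[name];
  if (handler) {
    await handler(payload); // self-consumed: no round-trip to the engine
  } else {
    await publishToBroker(name, payload); // consumed by another connector
  }
}

async function publishToBroker(name: string, payload: unknown): Promise<void> {
  // Placeholder: hand the event off to the engine-level broker.
}
```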

The exact design for this might be tricky. I feel like you'd almost want to design a module that lives at the engine level and functions as a universal message broker whose job is to handle all event delivery across the entire graph, but this might be a lot of work. In a perfect world, producers and consumers would be loosely coupled: a connector with producers self-describes the events it produces, a connector with consumers self-describes the events it can consume, and the metadata level links producers to consumers, with an edge case being producers that consume their own events. (Ideally, if a connector emits an event that it should also consume itself, it should be able to consume it immediately even if it also pushes it to the broker; otherwise it's a redundant round-trip to the engine and back.) The message broker's job would be to make sure that events emitted by producers are delivered to the proper consumers. This would let developers wrap existing APIs from either side: any generic SaaS platform that can call a web-hook whenever XYZ happens could have a connector that wraps it and turns it into a producer/event-source.
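The metadata link might look something like this sketch: the (hypothetical) engine-level broker reads route entries and fans each emitted event out to the subscribed consumers, including the producer itself for self-consumption:

```typescript
interface EventRoute {
  producer: { connector: string; event: string };
  consumers: Array<{ connector: string; mutation: string }>;
}

// Illustrative wiring only; connector and mutation names are made up.
const routes: EventRoute[] = [
  {
    producer: { connector: "postgres_orders", event: "new_order_inserted" },
    consumers: [
      // Self-consumption: the producing connector handles its own event.
      { connector: "postgres_orders", mutation: "newOrderInserted" },
      // "Remote eventing": a different connector also receives it.
      { connector: "email_service", mutation: "sendOrderConfirmation" },
    ],
  },
];
```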

Depending on the implementation this could get complex, especially because it could easily become a bottleneck. For enterprise-level traffic you'd want multiple message queues: one that is async with at-least-once delivery and "unlimited throughput" that never clogs, plus the ability to have separate per-service FIFO queues with exactly-once delivery for services that strictly need events to be synchronous.
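One way to express that split, assuming the broker offered both channel types (again, purely hypothetical), would be a per-service delivery setting:

```typescript
type DeliveryMode =
  | { kind: "at_least_once" }                     // async, never clogs
  | { kind: "fifo_exactly_once"; queue: string }; // strict per-service ordering

// Each consuming service picks the delivery guarantees it actually needs.
const deliveryConfig: Record<string, DeliveryMode> = {
  email_service: { kind: "at_least_once" },
  billing_service: { kind: "fifo_exactly_once", queue: "billing-fifo" },
};
```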

A caveat of this is that connectors containing producers would need to run with one instance kept warm to work properly; connectors with only consumers, however, could spin up on request.

These are just thoughts/ideas I've had lately; would love others' thoughts.
