Add sequential flag to mqtrigger #1814
base: master
Conversation
Codecov Report
@@ Coverage Diff @@
## master #1814 +/- ##
==========================================
- Coverage 28.41% 21.17% -7.25%
==========================================
Files 69 52 -17
Lines 5040 3708 -1332
==========================================
- Hits 1432 785 -647
+ Misses 3390 2850 -540
+ Partials 218 73 -145
Continue to review full report at Codecov.
Force-pushed from f9ada3e to 514479c
var resp *http.Response
for attempt := 0; attempt <= trigger.Spec.MaxRetries; attempt++ {
	// Make the request
Hey @daniel-shuy , can you please set the headers inside the for loop? This is to avoid an error I encountered at other places: https://stackoverflow.com/questions/31337891/net-http-http-contentlength-222-with-body-length-0
@harshthakur9030 should this be in a separate PR? I'm actually just moving the entire code block into an anonymous function so that I can conditionally call it as a goroutine; I didn't change any of the existing logic.
Yes, that bug existed before you touched the code base. Since you have already touched this part of the code base, I was hoping you could do it. Everything remains the same, just the for loop which sets the headers needs to go into the loop which makes the requests.
Ok sure, I can do it
Done!
Thanks !!
}
if err == nil && resp.StatusCode == http.StatusOK {
	// Success, quit retrying
	break
Can you please change this to an appropriate return statement?
? Why? If we return here the rest of the function (closing the response body, checking if the response returned an error, trigger ack message, publishing to response topic) won't execute
Oh yes! My bad. One quick question though: shouldn't we consider status codes between 200 and 300 to be successful, rather than just 200?
I don't think the router can return anything other than HTTP 200 for a successful request? Maybe @vishal-biyani can confirm.
Hey @therahulbhati, this will also impact all Keda connectors, I believe?
@vishal-biyani I don't think so, don't Keda connectors have their own implementation (https://github.com/fission/keda-connectors)? Looking at the implementation of the Keda Kafka HTTP Connector (https://github.com/fission/keda-connectors/tree/master/kafka-http-connector), I don't think it will have this issue, as it consumes messages sequentially, unlike the mqtrigger Kafka connector, which consumes messages concurrently.
Hi @daniel-shuy, thanks for this PR; it looks great except for 2 really minor comments. Please let me know your thoughts and I will merge this one.
msgHandler := func() { kafkaMsgHandler(&kafka, producer, trigger, msg, consumer) }
if trigger.Spec.Sequential {
	msgHandler()
} else {
	go msgHandler()
}
Won't we need a wg.Add & wg.Done similar to AzureQueue here?
if trigger.Spec.Sequential {
	cb()
} else {
	go cb()
Just checking if wg.Add and wg.Done similar to AzureQueue would be needed here?
I took a look at the AzureQueue mqtrigger implementation, and it seems like it is using sync.WaitGroup to notify AzureQueueSubscription.done (a chan bool) (see asq.go#L275). The other mqtrigger implementations do not have anything equivalent, so it shouldn't be needed.
Thanks @daniel-shuy, this looks good. I will merge in a day or so; again, thanks for your effort on this one 🎉
Hello @daniel-shuy, thank you for contributing to the Fission project. Could you please fill out this form, so we can send you the well-deserved awesome swag? :) Team Fission
@chetanpdeshmukh yay, thanks! The form is restricted though
Hey @daniel-shuy, can you give it a try now? It should work smoothly.
@chetanpdeshmukh It works now, I've filled it in, thanks!
@vishal-biyani just noticed that this PR hasn't been merged!
Hello, @daniel-shuy! Nice work! Any updates on this pull request?
@AnatoliyYakimov Unfortunately I don't have permission to merge, I'm also wondering why the PR hasn't been merged after so long 😅
Maybe they just forgot :)
Force-pushed from 469d8b7 to 2a3f88c
I've resolved the conflicts with the latest
Also, looking at the code, I saw that we continue to consume messages from the topic if the lambda returned an error. So if the lambda returns an error, we still lose ordering, if I get it right. And a personal question: I'm not very strong in Go and don't know the Fission code that deeply, so I don't understand what happens if some messages weren't committed after polling. Does sarama (the Go Kafka library, if I spelled it right) return these messages from a cache, commit all polled messages anyway, or will it try to poll again from the previous commit?
@AnatoliyYakimov Oh wow, you're right, I didn't think of that. We should break out of the loop on error and poll Kafka again (since the failed message is not committed, it will be polled again), good catch!
If the error is not transient and we poll again and again, we will DDoS our lambdas. So maybe we need some kind of error retry count, and then stop the trigger. I think in Kubeless you can set a retry count before sending to a DLQ. Maybe the spec could look like this:
@AnatoliyYakimov actually, let me get back to you on this. sarama returns a channel, which is looped over sequentially. I'll need to look into sarama's code to see if it's caching messages.
Fixes #1569

Adds a sequential flag to the mqtrigger function. If set to true, consumes messages sequentially instead of concurrently.

I wasn't sure whether to add support for all 3 MQs (Kafka, Azure Queue Storage, NATS), as they currently have different behaviors (the Kafka and Azure Queue Storage MQs currently consume messages concurrently, while the NATS MQ consumes messages sequentially). Therefore I separated the implementation for each MQ into its own commit so that they can be dropped if we decide not to support them.

Note that if support for the sequential flag is added to the NATS MQ, it is a breaking change, as it would mean changing the default behavior from sequential to concurrent.