exactlyOne in List of SourceRecords #827
Hi, please can you help me out by defining what you mean by "debulk"? Which connector are you referring to? There are some implementation differences between the connectors. The connectors can be configured for different strategies depending on the number of records. For example, the S3 connector has a FLUSH_COUNT property that lets you set the number of records after which you would like to flush. Regards, David.
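For context, the flush count in the Lenses S3 sink connector is expressed through its KCQL setting. This is an illustrative sketch only; exact property names and KCQL syntax vary by connector version, so check the connector's documentation for your release:

```properties
# Hypothetical S3 sink config fragment: flush to S3 every 5000 records.
# Bucket/topic names here are placeholders.
connector.class=io.lenses.streamreactor.connect.aws.s3.sink.S3SinkConnector
topics=my-topic
connect.s3.kcql=INSERT INTO my-bucket:my-prefix SELECT * FROM my-topic WITH_FLUSH_COUNT = 5000
```

Note that a flush count controls batching granularity, not transactional atomicity: it bounds how many records are buffered before a write, but does not roll back records already written if the task fails mid-stream.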
Hi David, I'm referring to the FTP connector. "Debulk" means I receive a file A.xml and break it down into a list of SourceRecords. We need transactionality: if we cannot send all the records atomically, the transaction should be reverted.
Good morning, I have a question about the Kafka connectors and the custom converters. If I debulk a SourceRecord into a list of 10k SourceRecords, does the connector publish this list of 10k as a single batch, guaranteeing that all of them have been published?
Let's say the connector dies after sending 5k of them. Will those 5k have been sent, or not?
In our case we have an atomic scenario, where we need to guarantee that all records of a debulk process are sent; otherwise we must revert and start over.
So I guess my question would be: does the whole list of SourceRecords have an exactly-once delivery guarantee?
Regards.
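To make the failure mode concrete: in Kafka Connect, a SourceTask's poll() returns a List&lt;SourceRecord&gt;, but the framework sends those records to Kafka and commits source offsets periodically, not atomically per list. The default guarantee is at-least-once, so a crash mid-batch leaves already-flushed records published with no rollback. The toy simulation below (all names are hypothetical; it does not use the real Connect API) illustrates what survives a crash halfway through a 10k-record debulk:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of batch delivery with periodic offset flushes.
// Records delivered before the last committed offset are NOT resent on
// restart; records after it are re-polled. There is no batch-wide rollback.
public class DebulkSimulation {
    static int lastCommittedOffset = 0;

    // Simulate sending a debulked file, crashing after `crashAfter` records,
    // with offsets committed every `flushCount` records.
    static List<Integer> send(List<Integer> records, int flushCount, int crashAfter) {
        List<Integer> delivered = new ArrayList<>();
        for (int i = 0; i < records.size(); i++) {
            if (i == crashAfter) break;          // worker dies here
            delivered.add(records.get(i));
            if ((i + 1) % flushCount == 0) {
                lastCommittedOffset = i + 1;     // periodic offset commit
            }
        }
        return delivered;
    }

    public static void main(String[] args) {
        List<Integer> file = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) file.add(i);

        // Crash after 5_000 records, flushing offsets every 1_000 records:
        List<Integer> delivered = send(file, 1_000, 5_000);
        System.out.println("delivered=" + delivered.size()
                + " committedOffset=" + lastCommittedOffset);
        // On restart the task resumes from the committed offset, so the first
        // 5_000 records stay published -- the partial batch is not reverted.
    }
}
```

So under the default semantics the answer to the question above is no: the 5k records already flushed will have been sent, and an application-level dedup or staging step is needed if the whole file must appear atomically.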