
StreamMessageListenerContainer stops consuming messages after connection timeout to Redis #2833

Open
radjavi opened this issue Jan 16, 2024 · 3 comments
Labels
status: feedback-provided (Feedback has been provided)
status: waiting-for-triage (An issue we've not yet triaged)

Comments

radjavi commented Jan 16, 2024

Problem

I have been following this guide for implementing asynchronous message listeners for Redis streams. The listeners work as expected at first. However, if a temporary connection timeout occurs toward Redis (e.g. Redis becomes unavailable), the listeners stop consuming messages indefinitely, even after Redis becomes available again.

How to reproduce

  1. Create a message listener using this guide (with a StreamMessageListenerContainer).
  2. Publish an event (to make sure the message is successfully received).
  3. Restart the Redis instance (to simulate a connection timeout).
  4. Publish another event (it is not received).
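For reference, a minimal listener setup along the lines of the referenced guide looks roughly like this (a sketch only; the stream key "my-stream", the bean wiring, and the poll timeout are illustrative, not taken from the actual project):

```java
import java.time.Duration;

import org.springframework.context.annotation.Bean;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.stream.MapRecord;
import org.springframework.data.redis.connection.stream.StreamOffset;
import org.springframework.data.redis.stream.StreamMessageListenerContainer;
import org.springframework.data.redis.stream.StreamMessageListenerContainer.StreamMessageListenerContainerOptions;

public class StreamListenerConfig {

    @Bean
    public StreamMessageListenerContainer<String, MapRecord<String, String, String>> listenerContainer(
            RedisConnectionFactory connectionFactory) {

        StreamMessageListenerContainerOptions<String, MapRecord<String, String, String>> options =
                StreamMessageListenerContainerOptions.builder()
                        .pollTimeout(Duration.ofSeconds(1)) // how long each poll blocks
                        .build();

        StreamMessageListenerContainer<String, MapRecord<String, String, String>> container =
                StreamMessageListenerContainer.create(connectionFactory, options);

        // "my-stream" is a placeholder key; the lambda is the StreamListener callback
        container.receive(StreamOffset.fromStart("my-stream"),
                record -> System.out.println("Received: " + record));

        container.start();
        return container;
    }
}
```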
@spring-projects-issues spring-projects-issues added the status: waiting-for-triage An issue we've not yet triaged label Jan 16, 2024
@christophstrobl
Member

Thank you for getting in touch. Which Redis driver are you using? Lettuce has a connection watchdog that can trigger reconnect automatically depending on the options set.

If you'd like us to spend some time investigating, please take the time to provide a complete minimal sample (something that we can unzip or git clone, build, and deploy) that reproduces the problem.
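For context on the watchdog mentioned above: with Lettuce, automatic reconnection is controlled by ClientOptions.autoReconnect, which can be set explicitly through the client configuration. A sketch, assuming a standalone Redis on localhost (the host/port values are illustrative):

```java
import io.lettuce.core.ClientOptions;

import org.springframework.context.annotation.Bean;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

public class RedisConfig {

    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        // autoReconnect is true by default; shown explicitly for clarity
        ClientOptions clientOptions = ClientOptions.builder()
                .autoReconnect(true)
                .build();

        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
                .clientOptions(clientOptions)
                .build();

        return new LettuceConnectionFactory(
                new RedisStandaloneConfiguration("localhost", 6379), clientConfig);
    }
}
```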

@christophstrobl christophstrobl added the status: waiting-for-feedback We need additional information before we can continue label Jan 17, 2024

radjavi commented Jan 17, 2024

I have tried both Lettuce and Jedis; the issue occurs with both. Worth mentioning: other code that connects to Redis (e.g. an event publisher) works fine after Redis becomes available again. Only the message listeners stop working.

I'll see if I can find some time to create an example project.

@spring-projects-issues spring-projects-issues added status: feedback-provided Feedback has been provided and removed status: waiting-for-feedback We need additional information before we can continue labels Jan 17, 2024
zhanglc commented Jun 19, 2024

It seems that spring-data-redis automatically cancels the subscription when an exception occurs:

```java
if (cancelSubscriptionOnError.test(ex)) {
    cancel();
}
```

The default cancelSubscriptionOnError predicate returns true for every error:

```java
Predicate<Throwable> cancelSubscriptionOnError = t -> true;
```

So once a RuntimeException occurs, Spring stops consuming messages. To keep the subscription alive, we just need to change this option in the builder:

```java
Subscription subscription = listenerContainer.register(
        StreamMessageListenerContainer.StreamReadRequest
                .builder(StreamOffset.create(REDIS_STREAM_KEY_REFRESH_RELATION, ReadOffset.lastConsumed()))
                // keep polling after errors instead of cancelling the subscription
                .cancelOnError(t -> false)
                .consumer(Consumer.from(REDIS_STREAM_KEY_REFRESH_RELATION_GROUP_NAME,
                        REDIS_STREAM_KEY_REFRESH_RELATION_CONSUMER_NAME))
                .autoAcknowledge(true)
                .build(),
        streamListener);
```

By the way, I think the default value of cancelSubscriptionOnError should be t -> false.
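A related knob: besides cancelOnError on the read request, the container options builder also accepts an errorHandler, which can at least surface errors that would otherwise pass silently. A sketch (the logging call is illustrative):

```java
import java.time.Duration;

import org.springframework.data.redis.connection.stream.MapRecord;
import org.springframework.data.redis.stream.StreamMessageListenerContainer.StreamMessageListenerContainerOptions;

public class ContainerOptionsSketch {

    public static StreamMessageListenerContainerOptions<String, MapRecord<String, String, String>> options() {
        return StreamMessageListenerContainerOptions.builder()
                .pollTimeout(Duration.ofSeconds(1))
                // invoked for errors raised while polling, instead of failing silently
                .errorHandler(e -> System.err.println("Stream polling error: " + e))
                .build();
    }
}
```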
