[Bug] Redelivering messages doesn't take dispatcherMaxReadSizeBytes into account in Shared and Key_Shared subscriptions #23505
Comments
@ZhaoGuorui666 I'll share details, possibly tomorrow. Just curious, are you facing this issue?
No, I'm just trying to solve an issue and want to contribute to Pulsar. I'm just starting out now, I'll take a look at your previous PRs and learn from them.
@ZhaoGuorui666 One good way to start making valuable contributions is to fix flaky tests. We have plenty of them: https://github.com/apache/pulsar/issues?q=is%3Aissue+is%3Aopen+flaky . In many cases, there could also be a production code issue that is causing the flakiness. You'll learn a lot about Pulsar while addressing flaky tests too. Usually you can reproduce a flaky test by temporarily replacing …
@ZhaoGuorui666 For prioritizing flaky tests, you can check one of the recent reports in https://github.com/lhotari/pulsar-flakes. I triggered new runs to get reports of the most flaky tests in the last 2 weeks (runs: https://github.com/lhotari/pulsar-flakes/actions).
list of flaky tests to address: https://github.com/lhotari/pulsar-flakes/tree/master/2024-10-23-14d-master |
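As a side note on reproducing flaky tests locally: the exact technique suggested in the truncated comment above is not preserved here, but one common approach is to run the same test body many times so a rare ordering or race surfaces. A minimal plain-Java sketch of that idea (hypothetical helper, not Pulsar test code; in JUnit 5 the same effect is typically achieved by temporarily swapping `@Test` for `@RepeatedTest`):

```java
// Minimal sketch: repeat a test body many times so a rare failure surfaces.
public class RepeatFlakyTest {
    // Runs testBody `times` times; throws on the first failing iteration.
    static int runRepeatedly(Runnable testBody, int times) {
        int completed = 0;
        for (int i = 0; i < times; i++) {
            testBody.run();
            completed++;
        }
        return completed;
    }

    public static void main(String[] args) {
        int completed = runRepeatedly(() -> {
            // stand-in for the flaky assertion under investigation
            if (System.nanoTime() == 0) {
                throw new AssertionError("flaky failure reproduced");
            }
        }, 100);
        System.out.println(completed + " iterations passed");
    }
}
```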
Thank you for your suggestion. This method sounds very helpful.
Search before asking
Read release policy
Version
all released versions
Minimal reproduce step
Problem description:
In the Shared subscription, messages get added to the replay queue when a consumer disconnects. In the Key_Shared subscription, the replay queue is also used when messages cannot be dispatched to a target consumer due to insufficient permits or when the hash is blocked.
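To illustrate what applying the size limit to replay reads would mean, here is a simplified sketch (hypothetical names and logic, not the broker's actual dispatcher code): entries pending redelivery are taken for a single read round only while their cumulative size stays within `dispatcherMaxReadSizeBytes`, with at least one entry always read so the dispatcher keeps making progress.

```java
import java.util.ArrayList;
import java.util.List;

public class ReplayReadSizeCap {
    // Hypothetical helper: cap one replay read round so the cumulative
    // entry size stays within maxReadSizeBytes (default 5 MB in the broker).
    static List<Long> capByReadSize(List<Long> replayEntrySizes, long maxReadSizeBytes) {
        List<Long> toRead = new ArrayList<>();
        long total = 0;
        for (long size : replayEntrySizes) {
            // Always accept the first entry to guarantee forward progress,
            // even if it alone exceeds the cap.
            if (!toRead.isEmpty() && total + size > maxReadSizeBytes) {
                break; // defer remaining entries to the next read round
            }
            toRead.add(size);
            total += size;
        }
        return toRead;
    }

    public static void main(String[] args) {
        long fiveMb = 5L * 1024 * 1024;
        // Two 3 MB entries: only the first fits under the 5 MB cap.
        List<Long> round = capByReadSize(List.of(3_000_000L, 3_000_000L), fiveMb);
        System.out.println(round.size() + " entry read this round");
    }
}
```

Without a cap like this, a single replay read can pull the entire backlog of redelivered entries in one go, which is the behavior this issue reports.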
There's a problem in the current implementation since the `dispatcherMaxReadSizeBytes` (default 5MB) setting isn't taken into account in these reads. The impact of this is that consumers will receive a large batch of messages at once if the reads succeed. The exact implication of this isn't fully known at this time. However, it's against the design to ignore the `dispatcherMaxReadSizeBytes` setting, which helps make smaller incremental progress on individual dispatchers and service all active dispatchers in the broker one-by-one as fairly as possible.

What did you expect to see?
That `dispatcherMaxReadSizeBytes` is used for redelivering messages.

What did you see instead?
`dispatcherMaxReadSizeBytes` is ignored.

Anything else?
No response
Are you willing to submit a PR?