stream: fix spurious event on termination when Debounce is used #29347
Currently, stream.Debounce can emit a spurious event when the source stream terminates. At that point, the items channel gets closed and the completion error is published to a separate error channel. Depending on the (nondeterministic) order in which the select unblocks, we may first observe the items channel closure, and hence emit the zero value of the item type, only afterwards propagating the termination error.
Let's fix this issue by checking whether the items channel unblocked because it had just been closed, and not emitting any item in that case. Additionally, let's slightly adapt the corresponding test to catch this issue: it went undetected because another event had just been emitted, so the spurious one was merely queued. Adding an extra delay instead causes the test (without this fix) to almost always hang, because nobody ever receives the spurious event.
Additionally, let's prevent a busy loop between the items channel being closed and the error being received on the errors channel, by setting the items channel to nil immediately after observing the closure (a receive from a nil channel blocks forever, so that select case simply stops firing). Thanks to David Bimmler for pointing out this issue.
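Below is a minimal, self-contained sketch of the resulting select pattern, under assumed names (`items`, `errs`, `emit`, `complete`, and the simplified `debounce` helper are illustrative, not the actual stream.Debounce internals): the comma-ok receive detects the channel closure without emitting a zero value, and nil-ing the channel removes it from the select so we just wait for the completion error.

```go
package main

import (
	"fmt"
	"time"
)

// debounce is a simplified, hypothetical stand-in for the real operator:
// it forwards at most one pending item per interval and completes once an
// error (or nil, for graceful termination) arrives on errs.
func debounce[T any](items <-chan T, errs <-chan error, interval time.Duration,
	emit func(T), complete func(error)) {

	var pending T
	havePending := false
	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for {
		select {
		case item, ok := <-items:
			if !ok {
				// The source terminated: the closure must not be mistaken
				// for a real item, so do not emit the zero value of T.
				// Setting items to nil makes this case block forever,
				// avoiding a busy loop until errs unblocks.
				items = nil
				continue
			}
			pending, havePending = item, true
		case <-ticker.C:
			if havePending {
				emit(pending)
				havePending = false
			}
		case err := <-errs:
			complete(err)
			return
		}
	}
}

func main() {
	items := make(chan int)
	errs := make(chan error, 1)

	go func() {
		items <- 1
		time.Sleep(30 * time.Millisecond) // let the pending item be emitted first
		close(items)                      // termination: close items, then publish the error
		errs <- nil                       // nil signals graceful completion
	}()

	debounce[int](items, errs, 10*time.Millisecond,
		func(v int) { fmt.Println("item:", v) },
		func(err error) { fmt.Println("completed, err =", err) })
}
```

The same comma-ok plus nil-channel idiom applies regardless of how the debouncing itself is implemented; the only requirement is that the closed channel is taken out of the select instead of being allowed to fire repeatedly.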