Fix various timing issues with docker pusher #8379
Conversation
Hi @jedevc. Thanks for your PR. I'm waiting for a containerd member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/cc
@akhilerm: GitHub didn't allow me to request PR reviews from the following users: akhilerm. Note that only containerd members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-pushed from b52596f to d9df7c3.
Force-pushed from d9df7c3 to e491f1f.
remotes/docker/pusher.go (Outdated)

    case err2 := <-pw.errC:
    	err = err2
    default:
    case <-pw.done:
Let me make sure I understand the 3 cases correctly...

> - The request has failed, we need to attempt a reset, so we can expect a new pipe incoming on pipeC.

A redirect, or something that retries where the http library requests a new body.

> - The request has failed, we don't need to attempt a reset, so we can expect an incoming error on errC.

Either the request fails or an unexpected status code is returned.

> - Something else externally has called Close, so we can expect the done channel to be closed.

Straightforward, but I don't see any case where this happens.
There is a condition where the registry (unexpectedly, not to spec) returns 201 or 204 on the put before the body is fully written. I would expect that the http library would issue close and could fall into a deadlock here. We could just read `respC` and call `setResponse`. In that case `ErrClosedPipe` would get returned and `Commit` shouldn't be called anyway.
Sorry for the delay in getting back to this! ❤️
> Let me make sure I understand the 3 cases correctly...

Yup, that's how I understand it as well.

> There is a condition where the registry (unexpectedly, not to spec) returns 201 or 204 on the put before the body is fully written. I would expect that the http library would issue close and could fall into a deadlock here.
Yeah, I think this is also possible. I've updated to do this as well.
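Roughly, the shape under discussion looks like the sketch below. This is an illustrative reconstruction, not the actual pusher.go code: the channel names (`pipeC`, `errC`, `respC`, `done`) come from the conversation above, while the struct name, the `errReset` stand-in, and everything else are invented for the example.

```go
package pusher

import (
	"errors"
	"io"
	"net/http"
)

// errReset mimics a content.ErrReset-style error: it asks the caller to
// restart the upload from the beginning.
var errReset = errors.New("writer has been reset")

// pushWriterSketch is an illustrative stand-in for the pusher's writer type;
// the channel names mirror the discussion above, everything else is invented.
type pushWriterSketch struct {
	pipe  *io.PipeWriter
	pipeC chan *io.PipeWriter // a fresh pipe arrives here when the request is retried
	errC  chan error          // a terminal error arrives here when it will not be retried
	respC chan *http.Response // the response arrives here if the registry replied early
	done  chan struct{}       // closed when Close is called externally
}

func (pw *pushWriterSketch) Write(p []byte) (int, error) {
	n, err := pw.pipe.Write(p)
	if !errors.Is(err, io.ErrClosedPipe) {
		return n, err
	}

	// The pipe was closed underneath us: block until one of the cases tells us why.
	select {
	case newPipe, ok := <-pw.pipeC:
		if !ok {
			return 0, io.ErrClosedPipe
		}
		// Case 1: the request is being reset; adopt the new pipe and ask the
		// caller to restart the upload.
		pw.pipe = newPipe
		return 0, errReset
	case err := <-pw.errC:
		// Case 2: the request failed and will not be retried.
		return 0, err
	case resp := <-pw.respC:
		// The registry answered (e.g. 201/204) before the body was fully
		// written: a real implementation would record resp for Commit and
		// still return ErrClosedPipe so the caller stops writing.
		_ = resp
		return 0, io.ErrClosedPipe
	case <-pw.done:
		// Case 3: something external called Close.
		return 0, io.ErrClosedPipe
	}
}
```

The important property is that `Write` blocks on one of these outcomes rather than falling through a `default:` case, so the caller always learns why the pipe was closed.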
@dmcgowan could you take another look? 🙏
Sorry, I know this one is fairly hellish and concurrent.
@dmcgowan sorry to bother, could you take another look? If there are concerns about changing the complexity of the code here, I'm happy to try and put together some tests to run with the race checker, if that helps move things forward.
Put on my list to review; I'm just coming back from a break and this one is not a quick review due to the complexity. Thanks for updating!
@jedevc added to 2.0 milestone, current change LGTM if you want to rebase
io.Pipe produces a PipeReader and a PipeWriter: a close on the write side causes an error on both the read and write sides, while a close on the read side causes an error on only the read side.

Previously, we explicitly prohibited closing from the read side. However, http.Request.Body requires that "calling Close should unblock a Read waiting for input". Our reader will not do this - calling Close becomes a no-op. This can cause a deadlock, because client.Do may never terminate in some circumstances.

We need the reader side to close its side of the pipe as well, which it already does using the Go standard library - otherwise, we can hang forever, writing to a pipe that will never be closed.

Allowing the requester to close the body should be safe - we never reuse the same reader between requests, as the result of body() will never be reused, by the guarantees of the standard library.

Signed-off-by: Justin Chadwell <me@jedevc.com>
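The pipe semantics this commit relies on can be seen in a small standalone program (not part of the patch): closing the read side unblocks a pending Write and makes it return io.ErrClosedPipe, which is exactly what lets client.Do terminate.

```go
package main

import (
	"errors"
	"fmt"
	"io"
)

func main() {
	r, w := io.Pipe()

	writeErr := make(chan error, 1)
	go func() {
		// Blocks until something reads from r or the pipe is closed.
		_, err := w.Write([]byte("layer data"))
		writeErr <- err
	}()

	// No one is reading. If the reader's Close were a no-op (as before this
	// change), the goroutine above would hang forever and client.Do would
	// never return. Closing the read side releases it instead.
	r.Close()

	fmt.Println(errors.Is(<-writeErr, io.ErrClosedPipe)) // prints: true
}
```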
If Close is called externally before a request is attempted, then we will accidentally attempt to send to a closed channel, causing a panic.

To avoid this, we can check whether Close has been called, using a done channel. If this channel is ever done, we drop any incoming errors, requests or pipes - we don't need them, since we're done.

Signed-off-by: Justin Chadwell <me@jedevc.com>
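As a sketch of the pattern (hypothetical type and field names, not the containerd implementation): Close only signals through the done channel, and anything that would deliver a value afterwards selects against it instead of sending unconditionally.

```go
package pusher

import "net/http"

// guardedWriter illustrates the done-channel guard described above.
type guardedWriter struct {
	respC chan *http.Response
	errC  chan error
	done  chan struct{}
}

// Close signals shutdown via done only; it does not close the data channels.
// (A real implementation would also protect against double Close.)
func (w *guardedWriter) Close() error {
	close(w.done)
	return nil
}

// setError races the send against done, so an error arriving after Close is
// simply dropped rather than panicking or blocking forever.
func (w *guardedWriter) setError(err error) {
	select {
	case w.errC <- err:
	case <-w.done:
	}
}

// setResponse does the same for responses.
func (w *guardedWriter) setResponse(resp *http.Response) {
	select {
	case w.respC <- resp:
	case <-w.done:
	}
}
```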
If we get io.ErrClosedPipe in pushWriter.Write, there are three possible scenarios:

- The request has failed, we need to attempt a reset, so we can expect a new pipe incoming on pipeC.
- The request has failed, we don't need to attempt a reset, so we can expect an incoming error on errC.
- Something else externally has called Close, so we can expect the done channel to be closed.

This patch ensures that we block for as long as possible (while still handling each of the above cases, so we avoid hanging), to make sure that we properly return an appropriate error message each time.

Signed-off-by: Justin Chadwell <me@jedevc.com>
Signed-off-by: Justin Chadwell <me@jedevc.com>
If a writer continually asks to be reset then it should always succeed - it should be the responsibility of the underlying content.Writer to stop producing ErrReset after some amount of time and to instead return the underlying issue - which pushWriter already does today, using the doWithRetries function.

doWithRetries already has a separate cap for retries of 6 requests (5 retries after the original failure), and it seems like this was previously overridden by content.Copy's maximum of 5 attempts, hiding the original error.

Signed-off-by: Justin Chadwell <me@jedevc.com>
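A minimal sketch of the kind of bounded retry described here (containerd's real doWithRetries has a different signature and retry conditions; the helper name and parameters below are invented): the cap lives in one place, and once it is exhausted the underlying error is surfaced rather than a generic reset.

```go
package pusher

import (
	"fmt"
	"net/http"
)

// doWithRetriesSketch performs at most 6 requests in total (the original
// attempt plus 5 retries) and, once the cap is hit, returns the underlying
// failure instead of hiding it behind a reset-style error.
func doWithRetriesSketch(client *http.Client, newRequest func() (*http.Request, error)) (*http.Response, error) {
	const maxRequests = 6

	var lastErr error
	for attempt := 1; attempt <= maxRequests; attempt++ {
		req, err := newRequest()
		if err != nil {
			return nil, err
		}
		resp, err := client.Do(req)
		if err == nil {
			return resp, nil
		}
		lastErr = err
	}
	return nil, fmt.Errorf("push failed after %d requests: %w", maxRequests, lastErr)
}
```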
If sending two messages from goroutine X:

    a <- 1
    b <- 2

and receiving them in goroutine Y:

    select {
    case <-a:
    case <-b:
    }

either branch of the select can trigger first - so when we call .setError and .Close next to each other, we don't know whether the done channel will close first or the error channel will receive first - so sometimes, we get an incorrect error message.

We resolve this by not sending both signals - instead, we can have .setError *imply* .Close, by having the pushWriter call .Close on itself after receiving an error.

Signed-off-by: Justin Chadwell <me@jedevc.com>
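A sketch of that fix under the same assumptions as the earlier examples (hypothetical names, not the containerd code): the sender emits only the error, and the receiver closes the writer itself, so the "error" and "closed" signals can never be observed in the wrong order.

```go
package pusher

// selfClosingWriter shows "setError implies Close": there is exactly one
// signal in flight, and closing happens on the receiving side.
type selfClosingWriter struct {
	errC chan error
	done chan struct{}
}

// fail is called by the request goroutine. It only sends the error; it does
// not also close done, so there is no second racing signal.
func (w *selfClosingWriter) fail(err error) {
	select {
	case w.errC <- err:
	case <-w.done:
	}
}

// wait is the receiving side: on receiving an error it closes the writer
// itself, so Close is implied by the error by construction.
func (w *selfClosingWriter) wait() error {
	select {
	case err := <-w.errC:
		close(w.done) // no separate Close signal needed
		return err
	case <-w.done:
		return nil // closed externally without an error
	}
}
```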
We also need an additional check to avoid setting both the error and response, which can create a race where they can arrive in the receiving thread in either order. If we hit an error, we don't need to send the response.

> There is a condition where the registry (unexpectedly, not to spec) returns 201 or 204 on the put before the body is fully written. I would expect that the http library would issue close and could fall into a deadlock here. We could just read respC and call setResponse. In that case ErrClosedPipe would get returned and Commit shouldn't be called anyway.

Signed-off-by: Justin Chadwell <me@jedevc.com>
Force-pushed from 7a1b8c8 to a9152eb.
Rebased 🎉 Thanks @dmcgowan! Sorry this one was such a pain.
While investigating docker/buildx#1726, I found a couple of timing issues when using the docker pusher.
I discovered these issues in a test environment; they should hopefully be reproducible from: https://gist.github.com/jedevc/ffb16d1319618d116df89e327d3bd589.
Copying the information from the commit messages (see them individually for more details):
Not quite sure about trying to add tests for these - they're very dependent on exact timings, since the patches are about correcting racy behavior. I have confirmed, using my test environment shared above (repeating commands hundreds of times in a row), that we no longer see the deadlocks, panics, and incorrect error messages.
Also happy to split these up into separate PRs, if some of them are easier reviews than others.