
Conversation

danicheg (Member, author):

This PR brings several proposals to the blaze server:

  1. Reduces lock contention when saving the cancel token.
  2. Avoids canceling the previous request when the current one fails.

@mergify mergify bot added series/0.22 PRs targeting 0.22.x module:blaze-server labels Mar 24, 2022
```scala
closeConnection()
)
IO(logger.error(t)(s"Error running request: $req")).attempt *>
IO.delay(cancelToken.set(None)) *>
```
danicheg (Member, author):

The closeConnection() cancels the previous request. After successful request processing, that cancellation is probably a no-op, but I think it's better to avoid it entirely.
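The concern above can be sketched roughly like this (a minimal illustration with hypothetical names, not the actual blaze code): clearing the stored token once a request completes means a later error path can no longer cancel the previous, already-finished request.

```scala
import java.util.concurrent.atomic.AtomicReference

// Hypothetical sketch; the real http4s stage stores a CancelToken[F].
final class TokenHolder {
  private val cancelToken = new AtomicReference[Option[() => Unit]](None)

  // When a request starts, remember how to cancel it.
  def requestStarted(cancel: () => Unit): Unit =
    cancelToken.set(Some(cancel))

  // On successful completion, drop the token so later error handling
  // cannot cancel an already-finished request.
  def requestCompleted(): Unit =
    cancelToken.set(None)

  // On error, cancel whatever request is still in flight, if any.
  def errored(): Unit =
    cancelToken.getAndSet(None).foreach(_.apply())
}
```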

```scala
}.unsafeRunSync()
)

parser.synchronized {
```
danicheg (Member, author):

I actually don't get why we need that synchronization here. If anyone can explain this, I'd be very grateful.

Member:

cancel() is also synchronized on parser, so it linearizes those. I'm not awake enough yet to be 100% sure to what end.

Member:

@RafalSumislawski moved the running of the action out of this block in #4761, but left the synchronization. I'd be interested to hear if he can think of any reason it needs to remain.

RafalSumislawski (Member), Mar 27, 2022:

We need this synchronized for its Java Memory Model effects: the write cancelToken = theCancelToken must happen-before the read in cancelToken.foreach {...}. The synchronized provides that, because cancelToken.foreach {...} sits in the cancel method, which is called only from the parser.synchronized {...} block inside stageShutdown.

volatile will do the job as well.
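The two options being discussed can be sketched like this (illustrative names, not the blaze code): either both the write and the read of the token happen inside blocks synchronized on the same monitor, or the field is marked @volatile. Both establish the needed happens-before edge; only the synchronized variant also makes the read-then-clear sequence atomic.

```scala
// Illustrative only: both variants guarantee that a token written by one
// thread is visible to another thread that later cancels.
final class SyncHolder[A] {
  private[this] var token: Option[A] = None
  // Write under the monitor...
  def set(a: A): Unit = synchronized { token = Some(a) }
  // ...and read under the same monitor: unlock/lock on the same monitor
  // gives the write a happens-before edge to this read.
  def cancel(f: A => Unit): Unit = synchronized {
    token.foreach(f)
    token = None
  }
}

final class VolatileHolder[A] {
  // A volatile write happens-before every subsequent volatile read,
  // giving the same visibility without taking a lock. What it does NOT
  // give is atomicity of the read-then-clear sequence below.
  @volatile private[this] var token: Option[A] = None
  def set(a: A): Unit = token = Some(a)
  def cancel(f: A => Unit): Unit = {
    token.foreach(f)
    token = None
  }
}
```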

wjoel (Contributor) commented Mar 26, 2022:

Cool. A port of this to the 1.x branch improves performance by 3% on plaintext and 13% on JSON in the TechEmpower framework benchmarks I'm playing around with. Nice work!

danicheg (Member, author):

@wjoel wow, thank you so much for benchmarking this!

rossabaker (Member) left a comment:

That is good news. Thanks, @wjoel!

Unfortunately, my other comment got stuck in pending...

```diff
@@ -124,7 +124,7 @@ private[blaze] class Http1ServerStage[F[_]](
   // protected by synchronization on `parser`
   private[this] val parser = new Http1ServerParser[F](logger, maxRequestLineLen, maxHeadersLen)
   private[this] var isClosed = false
-  private[this] var cancelToken: Option[CancelToken[F]] = None
+  private[this] val cancelToken = new AtomicReference[Option[CancelToken[F]]](None)
```
Member:

Was this change due to threading? A volatile would be cheaper.

danicheg (Member, author):

Hmm, probably we could do that. Though I think an atomic has roughly the same performance as a volatile for plain get/set operations.
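For what it's worth, on the JVM AtomicReference.get/set have volatile read/write semantics, so for plain get/set the two should cost about the same; the atomic only pays extra (and buys extra) on read-modify-write operations. A small illustration:

```scala
import java.util.concurrent.atomic.AtomicReference

val ref = new AtomicReference[Option[String]](None)

ref.set(Some("token"))   // behaves like a volatile write
val current = ref.get()  // behaves like a volatile read

// Where the atomic earns its keep: a single atomic read-and-clear,
// which a plain @volatile field cannot express without a lock.
val taken = ref.getAndSet(None)
```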

danicheg (Member, author):

And, yeah, sorry, I didn't address your question fully. As I said above, I didn't understand why that synchronization was needed, so my original intent was to reduce lock contention by changing synchronized to a non-blocking atomic.

danicheg (Member, author):

@wjoel If you have the resources, it would be super helpful if you ran the performance tests one more time with the recent changes. 🙏🏻 I bet there will be no change in the results.

wjoel (Contributor) commented Mar 26, 2022:

@danicheg Sure, happy to run them again tomorrow, but I won't have time today.

wjoel (Contributor) commented Mar 26, 2022:

@danicheg I managed to find the time after all, and the news is good: the wins might be within the margin of error, but it looks like it's doing better at lower levels of concurrency, and maybe absolutely better on plaintext.

AtomicReference: https://www.techempower.com/benchmarks/#section=test&shareid=32961ba7-8804-4348-8928-5997525ed9c5&hw=ph&test=json&a=2
volatile: https://www.techempower.com/benchmarks/#section=test&shareid=a75aa08c-0529-43c1-b0f7-15fd76b67d39&hw=ph&test=json&a=2

danicheg (Member, author):

Perfect, thanks again @wjoel

@rossabaker rossabaker merged commit 53570c6 into http4s:series/0.22 Mar 27, 2022