Blaze server enhancements #6179
Conversation
closeConnection()
)
IO(logger.error(t)(s"Error running request: $req")).attempt *>
IO.delay(cancelToken.set(None)) *>
The closeConnection() cancels the previous request. After a request has been processed successfully, that cancelation probably does nothing, but I think it's better to avoid it altogether.
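A minimal, hypothetical sketch of that idea (using cats-effect 3 fibers rather than the CE2 CancelToken in this PR, with invented names like RequestLifecycle): clear the stored cancellation handle once a request completes, so a later connection shutdown finds nothing left to cancel.

```scala
import java.util.concurrent.atomic.AtomicReference

import cats.effect.IO

final class RequestLifecycle {
  // Holds the cancellation action of the request currently in flight, if any.
  private[this] val cancel = new AtomicReference[Option[IO[Unit]]](None)

  def run(action: IO[Unit]): IO[Unit] =
    for {
      fiber <- action.guarantee(IO(cancel.set(None))).start // clear the handle once the request finishes
      _ <- IO(cancel.set(Some(fiber.cancel))) // remember how to cancel the in-flight request
    } yield ()

  def shutdown: IO[Unit] =
    IO(cancel.getAndSet(None)).flatMap(_.getOrElse(IO.unit)) // no-op if the request already completed
}
```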
}.unsafeRunSync()
)

parser.synchronized {
I actually don't get why we need that synchronization here. If anyone can explain this, I'd be very grateful.
cancel() is also synchronized on parser, so it linearizes those. I'm not awake enough yet to be 100% sure to what end.
@RafalSumislawski moved the running of the action out of this block in #4761, but left the synchronization. I'd be interested to hear if he can think of any reason it needs to remain.
We need this synchronized for its Java Memory Model effects. We need a happens-before edge from cancelToken = theCancelToken to cancelToken.foreach {...}. The synchronized provides that, thanks to the fact that cancelToken.foreach {...} sits in the cancel method, which is called only from the parser.synchronized {...} block inside stageShutdown. A volatile will do the job as well.
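A minimal, hypothetical sketch of that visibility argument (an invented TokenSlot class, not Http1ServerStage): the write that publishes the token on one thread must happen-before the read in cancel() on another thread, and marking the field @volatile establishes that edge under the Java Memory Model, just as synchronizing both sides on the same monitor does.

```scala
final class TokenSlot[A] {
  @volatile private[this] var cancelToken: Option[A] = None

  def publish(token: A): Unit =
    cancelToken = Some(token) // volatile write publishes the token

  def cancel(run: A => Unit): Unit =
    cancelToken.foreach(run) // volatile read is guaranteed to observe the published write
}
```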
Cool. A port of this to the 1.x branch improves performance by 3% on plaintext and 13% on JSON in the TechEmpower framework benchmarks that I'm playing around with. Nice work!
@wjoel wow, thank you so much for benchmarking this!
That is good news. Thanks, @wjoel!
Unfortunately, my other comment got stuck in pending...
@@ -124,7 +124,7 @@ private[blaze] class Http1ServerStage[F[_]](
   // protected by synchronization on `parser`
   private[this] val parser = new Http1ServerParser[F](logger, maxRequestLineLen, maxHeadersLen)
   private[this] var isClosed = false
-  private[this] var cancelToken: Option[CancelToken[F]] = None
+  private[this] val cancelToken = new AtomicReference[Option[CancelToken[F]]](None)
Was this change due to threading? A volatile would be cheaper.
Hmm, we probably could. Though for plain get/set operations an atomic has roughly the same performance, I think.
And, yeah, sorry, I didn't address your question fully. As I said above, I didn't get why that synchronization was needed, so my original intent was to reduce lock contention by changing synchronized to a non-blocking atomic.
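A minimal, hypothetical sketch of that trade-off (an invented TokenHolder class, not the actual Http1ServerStage field): a @volatile var covers plain publish/read visibility, while an AtomicReference additionally offers lock-free read-modify-write operations such as getAndSet, which lets a reader claim and clear the token in one step without taking a monitor.

```scala
import java.util.concurrent.atomic.AtomicReference

final class TokenHolder[A] {
  // Visibility only: fine when one side writes and the other side merely reads.
  @volatile private[this] var viaVolatile: Option[A] = None

  // Visibility plus atomic swaps: the reader can take and clear the token in one step.
  private[this] val viaAtomic = new AtomicReference[Option[A]](None)

  def publishVolatile(a: A): Unit = viaVolatile = Some(a)
  def readVolatile: Option[A] = viaVolatile

  def publishAtomic(a: A): Unit = viaAtomic.set(Some(a))
  def takeAtomic(): Option[A] = viaAtomic.getAndSet(None) // claim the token exactly once
}
```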
@wjoel If you have the resources, it would be super helpful if you could run the performance tests with the recent changes one more time. 🙏🏻 I bet there'll be no change in the results.
@danicheg Sure, happy to run them again tomorrow, but I won't have time today. |
@danicheg I managed to find the time after all, and the news is good: the wins might be within the margin of error, but it looks like it's doing better at lower levels of concurrency, and maybe absolutely better on plaintext. AtomicReference: https://www.techempower.com/benchmarks/#section=test&shareid=32961ba7-8804-4348-8928-5997525ed9c5&hw=ph&test=json&a=2
Perfect, thanks again @wjoel |
This PR brings several proposals to the blaze server: