
Conversation

olix0r
Member

@olix0r olix0r commented Sep 5, 2020

There's a lot of needless boilerplate around the HTTP client, mostly due
to manual future implementations. These have been converted to boxed
futures.

Furthermore, the HasSettings trait has been eliminated, in favor of
AsRef.
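
For context on the two changes above, here is a minimal, hypothetical sketch of the patterns involved; the names (`Target`, `Settings`, `Client`, `BoxFuture`) are placeholders rather than the proxy's actual types:

```rust
use std::{
    future::Future,
    pin::Pin,
    task::{Context, Poll},
};

use tower::Service;

// Hypothetical stand-ins for the proxy's types; the names are illustrative only.
#[derive(Clone, Debug)]
pub struct Settings;

pub struct Target {
    settings: Settings,
}

// Instead of a bespoke `HasSettings` trait, the target exposes its settings
// through the standard `AsRef` conversion trait...
impl AsRef<Settings> for Target {
    fn as_ref(&self) -> &Settings {
        &self.settings
    }
}

// ...so generic stack layers can bound on `T: AsRef<Settings>` instead of a
// custom trait.
pub fn settings_of<T: AsRef<Settings>>(target: &T) -> &Settings {
    target.as_ref()
}

// A boxed response future replaces each hand-written `Future` impl: the async
// block is type-erased behind a single alias.
pub type BoxFuture<T, E> = Pin<Box<dyn Future<Output = Result<T, E>> + Send + 'static>>;

pub struct Client;

impl Service<String> for Client {
    type Response = String;
    type Error = std::convert::Infallible;
    type Future = BoxFuture<Self::Response, Self::Error>;

    fn poll_ready(&mut self, _: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        Poll::Ready(Ok(()))
    }

    fn call(&mut self, req: String) -> Self::Future {
        // The body can be a plain async block; no manual future state machine.
        Box::pin(async move { Ok(req) })
    }
}
```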

@olix0r olix0r requested a review from a team September 5, 2020 00:27
@olix0r olix0r changed the title from proxy-http: Box response futures to proxy-http: Remove unneeded boilerplate on Sep 5, 2020
Contributor

@hawkw hawkw left a comment

The new code definitely looks nicer!

As a follow-up, it might be nice to switch to nightly Rust and change the boxed futures to impl Future, at least to run the benchmarks. I'm not too concerned about the performance implications, but it would be nice to do a comparison.

I noticed that we're no longer polling Connect readiness. Is that potentially an issue?

-    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
-        self.connect.poll_ready(cx).map_err(Into::into)
+    fn poll_ready(&mut self, _: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
+        Poll::Ready(Ok(()))
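
As a point of reference for the `impl Future` follow-up suggested above, here is a small, hypothetical sketch of the trade-off (the function names are made up): on stable Rust, `impl Future` works in return position, but a `Service::Future` associated type still has to be boxed or rely on a nightly feature.

```rust
use std::{future::Future, pin::Pin};

// Boxed: type-erased behind a trait object, at the cost of one heap allocation
// per call.
pub fn get_boxed(uri: String) -> Pin<Box<dyn Future<Output = Result<String, ()>> + Send>> {
    Box::pin(async move {
        // ...issue the request...
        Ok(uri)
    })
}

// `impl Future`: no allocation and no type erasure, but the concrete type
// cannot be named, which is why a `Service::Future` associated type still
// needs boxing (or a nightly feature) today.
pub fn get(uri: String) -> impl Future<Output = Result<String, ()>> + Send {
    async move {
        // ...issue the request...
        Ok(uri)
    }
}
```
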
Contributor

Shouldn't we still poll the Connect for readiness? I don't know whether it's ever actually not ready, but this seems like it could become a problem if the impl changed...

Member Author

We clone the connect and oneshot it, so any readiness we poll here is never the readiness that's actually used, anyway.
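
In other words, the readiness that matters is the clone's, and tower's `Oneshot` polls that itself before dispatching the request. A minimal sketch of the pattern, with illustrative names rather than the proxy's actual types:

```rust
use std::task::{Context, Poll};

use tower::{util::Oneshot, Service, ServiceExt};

// Illustrative only: a client that builds each connection by cloning an inner
// connect service and driving the clone to completion with `oneshot`.
pub struct Client<C> {
    connect: C,
}

impl<C, T> Service<T> for Client<C>
where
    C: Service<T> + Clone,
{
    type Response = C::Response;
    type Error = C::Error;
    type Future = Oneshot<C, T>;

    fn poll_ready(&mut self, _: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        // The readiness of `self.connect` is irrelevant here: `call` clones it,
        // and the clone's readiness is polled by `Oneshot` before it is invoked.
        Poll::Ready(Ok(()))
    }

    fn call(&mut self, target: T) -> Self::Future {
        self.connect.clone().oneshot(target)
    }
}
```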

Contributor

Ah, yup, I guess polling it here was never necessary. It should maybe have a comment, but it's not a big deal.

@olix0r olix0r merged commit b97909f into main Sep 8, 2020
@olix0r olix0r deleted the ver/http-async branch September 8, 2020 17:32
olix0r added a commit to linkerd/linkerd2 that referenced this pull request Sep 10, 2020
This release includes several major changes to the proxy's behavior:

- Service profile lookups are now necessary and fundamental to outbound
  discovery for HTTP traffic. That is, if a service profile lookup is
  rejected, endpoint discovery will not be performed; and endpoint
  discovery must succeed for all destinations that are permitted by
  service profiles. This simplifies caching and buffering to reduce
  latency (especially under concurrency).
- Service discovery is now performed for all TCP traffic, and
  connections are balanced over endpoints according to connection
  latency.
- This enables mTLS for **all** meshed connections, not just HTTP.
- Outbound TCP metrics are now hydrated with endpoint-specific labels.

---

* outbound: Cache balancers within profile stack (linkerd/linkerd2-proxy#641)
* outbound: Remove unused error type (linkerd/linkerd2-proxy#648)
* Eliminate the ConnectAddr trait (linkerd/linkerd2-proxy#649)
* profiles: Do not rely on tuples as stack targets (linkerd/linkerd2-proxy#650)
* proxy-http: Remove unneeded boilerplate (linkerd/linkerd2-proxy#651)
* outbound: Clarify Http target types (linkerd/linkerd2-proxy#653)
* outbound: TCP discovery and load balancing (linkerd/linkerd2-proxy#652)
* metrics: Add endpoint labels to outbound TCP metrics (linkerd/linkerd2-proxy#654)