
Conversation

zuercher
Member

Converts TcpProxy::Filter and WebSocket::WsHandlerImpl to use
Tcp::ConnectionPool to obtain connections. Much of the stats
handling, along with the connection timeouts, is now handled
by the connection pool.

Stats were manually verified by comparing stats produced by
the tcp_proxy_integration_test with and without the connection
pool change.

Relates to #3818.

Risk Level: medium
Testing: unit/integration testing
Docs Changes: n/a
Release Notes: n/a

Signed-off-by: Stephan Zuercher <stephan@turbinelabs.io>
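For readers skimming the change, a rough sketch of the new flow is below. The types here (ConnPool, PoolCallbacks, ConnectionData) are simplified stand-ins rather than the actual Envoy Tcp::ConnectionPool interfaces; the point is the shape of the interaction: the filter asks the pool for an upstream connection and reacts through callbacks, while the pool owns the connect timeout and the connection stats.

```cpp
#include <iostream>
#include <memory>
#include <string>

// Simplified stand-in for a pooled upstream connection.
struct ConnectionData {
  std::string upstream;
};
using ConnectionDataPtr = std::unique_ptr<ConnectionData>;

// Callbacks the pool invokes once a connection attempt resolves.
class PoolCallbacks {
public:
  virtual ~PoolCallbacks() = default;
  virtual void onPoolReady(ConnectionDataPtr&& conn) = 0;
  virtual void onPoolFailure(const std::string& reason) = 0;
};

// Toy pool. In the real change this is also where the connect timeout and
// the upstream connection stats live, which is why they drop out of the filter.
class ConnPool {
public:
  void newConnection(PoolCallbacks& callbacks) {
    // Resolve immediately for the sketch; a real pool is asynchronous and
    // returns a handle that can cancel the pending request.
    callbacks.onPoolReady(std::make_unique<ConnectionData>(ConnectionData{"10.0.0.1:443"}));
  }
};

// Filter-like consumer: instead of creating and timing the upstream
// connection itself, it asks the pool and reacts to the outcome.
class ProxyFilter : public PoolCallbacks {
public:
  void initializeUpstream(ConnPool& pool) { pool.newConnection(*this); }

  void onPoolReady(ConnectionDataPtr&& conn) override {
    upstream_connection_data_ = std::move(conn);
    std::cout << "proxying to " << upstream_connection_data_->upstream << "\n";
  }
  void onPoolFailure(const std::string& reason) override {
    std::cout << "upstream connect failed: " << reason << "\n";
  }

private:
  ConnectionDataPtr upstream_connection_data_;
};

int main() {
  ConnPool pool;
  ProxyFilter filter;
  filter.initializeUpstream(pool);
  return 0;
}
```

The wiring of the upstream connection's read path back into the downstream connection is omitted here; the sketch only covers how the connection is obtained.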

@zuercher zuercher requested a review from ggreenway July 23, 2018 23:52
@ggreenway
Member

One initial question/discussion: It seems weird to me that TcpProxy holds a bare pointer to ConnectionData, and that ConnectionData has a release() method that must be called. Could/should this be reworked so that it holds a unique_ptr, and the destructor takes the place of release()?

@zuercher
Member Author

I'll see if I can get that to work. I had originally wanted release-on-destruction, but wasn't able to make it work for reasons that I can't remember.

zuercher added 3 commits July 25, 2018 14:13
Signed-off-by: Stephan Zuercher <stephan@turbinelabs.io>
Signed-off-by: Stephan Zuercher <stephan@turbinelabs.io>
Signed-off-by: Stephan Zuercher <stephan@turbinelabs.io>
@zuercher
Member Author

@ggreenway PTAL -- the ConnectionData is now passed as a unique_ptr and destroying it releases the connection.
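One way to get that destroy-releases behavior, sketched below with stand-in types; the actual PR may implement it differently (for example by having the ConnectionData destructor notify the pool rather than using a custom deleter).

```cpp
#include <iostream>
#include <memory>

// Stand-in for a pooled upstream connection.
struct Connection {
  int id;
};

class ConnPool {
public:
  // Deleter used by ConnectionDataPtr: handing the pointer back to the pool
  // takes the place of an explicit release() call.
  struct Releaser {
    ConnPool* pool;
    void operator()(Connection* conn) const { pool->returnConnection(conn); }
  };
  using ConnectionDataPtr = std::unique_ptr<Connection, Releaser>;

  ConnectionDataPtr acquire() {
    auto* conn = new Connection{next_id_++};
    std::cout << "acquired connection " << conn->id << "\n";
    return ConnectionDataPtr(conn, Releaser{this});
  }

private:
  void returnConnection(Connection* conn) {
    std::cout << "connection " << conn->id << " returned to pool\n";
    delete conn; // a real pool would keep the connection around for reuse
  }
  int next_id_{0};
};

int main() {
  ConnPool pool;
  // The owner just holds the unique_ptr; resetting it (or the owner being
  // destroyed) returns the connection with no manual bookkeeping.
  ConnPool::ConnectionDataPtr upstream_connection_data = pool.acquire();
  upstream_connection_data.reset();
  return 0;
}
```

Either way, the filter only holds a unique_ptr, so dropping it on any code path gives the connection back without a separate release() call.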

Signed-off-by: Stephan Zuercher <stephan@turbinelabs.io>
@ggreenway ggreenway self-assigned this Jul 26, 2018
Signed-off-by: Stephan Zuercher <stephan@turbinelabs.io>

@ggreenway ggreenway left a comment

This looks great. I really like that the stats, connect timeouts, etc. all moved out of TcpProxy and into the pool.

  void onIdleTimeout();
  void resetIdleTimer();
  void disableIdleTimer();

  const ConfigSharedPtr config_;
  Upstream::ClusterManager& cluster_manager_;
  Network::ReadFilterCallbacks* read_callbacks_{};
- Network::ClientConnectionPtr upstream_connection_;
+ Tcp::ConnectionPool::Cancellable* upstream_handle_{};
+ Tcp::ConnectionPool::ConnectionDataPtr upstream_connection_;
Member

Please rename to upstream_connection_data_ or similar.

Signed-off-by: Stephan Zuercher <stephan@turbinelabs.io>
@ggreenway
Member

@mattklein123 Do you want to give this a final pass?

@mattklein123 mattklein123 self-assigned this Jul 26, 2018
@mattklein123
Member

Yup will do.

@mattklein123 mattklein123 left a comment

Nice, love the code sharing.

std::move(idle_timer), upstream_host,
std::move(connected_timespan)));
const Upstream::HostDescriptionConstSharedPtr& upstream_host) {
DrainerPtr drainer(new Drainer(*this, config, callbacks, std::move(upstream_conn_data),
Member

Not actionable, but I wonder if the drain manager could now be moved into the cluster manager to help similar situations elsewhere? cc @ggreenway

Member

That's an interesting idea. I think that would work fine. It could also live in the tcp pool. That may also avoid some of the complicated lifetime issues around stats in the current implementation.

Member Author

I added a note to follow up on this in #3818.

@zuercher zuercher merged commit 028387a into envoyproxy:master Jul 30, 2018
danielhochman added a commit that referenced this pull request Aug 3, 2018
This reverts commit 028387a.

Signed-off-by: Daniel Hochman <danielhochman@users.noreply.github.com>
mattklein123 pushed a commit that referenced this pull request Aug 3, 2018
…" (#4043)

This reverts commit 028387a.

Signed-off-by: Daniel Hochman <danielhochman@users.noreply.github.com>