Flow control for Http::Filters #1417
Conversation
  scope_.counter(fmt::format("{}rq_too_large", stat_prefix_)).inc();
  Http::Utility::sendLocalReply(*decoder_callbacks_, stream_reset_, Http::Code::PayloadTooLarge,
                                Http::CodeUtility::toString(Http::Code::PayloadTooLarge));
  return Http::FilterDataStatus::StopIterationAndBuffer;
Why not StopIterationNoBuffer?
I was modeling what buffer_filter.cc does. Granted, it just falls through, so if you think NoBuffer makes more sense I'm happy to switch.
source/common/dynamo/dynamo_filter.h
@@ -27,7 +27,8 @@ class DynamoFilter : public Http::StreamFilter {
  }

  // Http::StreamFilterBase
  void onDestroy() override {}
  void setBufferLimit(uint32_t limit) override { buffer_limit_ = limit; }
  void onDestroy() override { stream_reset_ = true; }
Can decodeData ever be called after onDestroy? If not, then stream_reset_ isn't doing anything.
I think this is required for sendLocalReply in the event the stream is destroyed mid-call. See source/common/http/utility.h and #1283.
source/common/dynamo/dynamo_filter.h
  std::string operation_{};
  RequestParser::TableDescriptor table_descriptor_{"", true};
  std::string error_type_{};
  MonotonicTime start_decode_;
  Http::HeaderMap* response_headers_;
  Http::StreamDecoderFilterCallbacks* decoder_callbacks_{};
  Http::StreamEncoderFilterCallbacks* encoder_callbacks_{};
  uint32_t buffer_limit_{0};
  bool enabled_{};
Nit: consistency in primitive type initialization with below.
  Http::HeaderMapPtr response_headers{new HeaderMapImpl{
      {Headers::get().Status, std::to_string(enumToInt(Http::Code::PayloadTooLarge))}}};
  callbacks_->encodeHeaders(std::move(response_headers), true);
  Http::Utility::sendLocalReply(*callbacks_, stream_reset_, Http::Code::PayloadTooLarge,
Vaguely wondering if we should have a StreamDecoderFilterWithBoundedBuffer class to inherit from that handles some of this automagically, but don't feel that strongly given how simple this watermark check is.
I was thinking of a base class with onDestroy(), which I found more annoying to have all over. But I figured the odds of forgetting the up-call in an override outweighed the code savings.
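For illustration, a rough sketch of what such a helper base class might have looked like; the class and helper names below are hypothetical and this approach was not adopted:

// Hypothetical sketch: a base class that latches the limit and handles onDestroy(),
// leaving subclasses to decide what to do when data exceeds the limit.
class BoundedBufferDecoderFilter : public Http::StreamDecoderFilter {
public:
  // Http::StreamFilterBase
  void setBufferLimit(uint32_t limit) override { buffer_limit_ = limit; }
  void onDestroy() override { stream_reset_ = true; }

protected:
  // True if buffering |data| would exceed the latched limit (0 means unlimited).
  bool overBufferLimit(const Buffer::Instance& data) const {
    return buffer_limit_ > 0 && data.length() > buffer_limit_;
  }

  uint32_t buffer_limit_{0};
  bool stream_reset_{false};
};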
  // accumulating data during a delay.
  if (limiting_buffers_ && !high_watermark_called_) {
    high_watermark_called_ = true;
    callbacks_->onDecoderFilterAboveWriteBufferHighWatermark();
This seems... quite aggressive. Why can't we rely on ActiveStream just taking care of this when we hit the real limit?
This is a DoS filter, no? I figured this is just the sort of place we want to be aggressive, because if DoS protection is kicking in you want to reserve your memory for users who aren't flagged as bad.
No, this filter is for injecting faults for testing (latency and failures).
@@ -54,6 +54,7 @@ class WatermarkBuffer : public LibEventInstance {
  void postProcess() override { checkLowWatermark(); }

  void setWatermarks(uint32_t low_watermark, uint32_t high_watermark);
  uint32_t high_watermark() const { return high_watermark_; }
Should it be highWatermark()?
Huh, looks like!
source/common/router/config_impl.h: uint32_t retryOn() const override { return retry_on_; }
source/common/router/config_impl.h: uint32_t numRetries() const override { return num_retries_; }
source/common/http/access_log/request_info_impl.h: const Optional<uint32_t>& responseCode() const override { return response_code_; }
I'll send a separate PR to clarify the style guide.
FWIW I have no great opinion on this one way or the other, but @dnoe is correct that this is what most of the code is doing.
  if (state_ != State::Calling) {
    return FilterDataStatus::Continue;
  }
  // The fault filter mimizes buffering even more aggressively than configured, to avoid
typo: minimizes
My general comment here is that we should push most of this logic into the connection manager to avoid filters becoming more complicated and/or having to duplicate it all over the place. Thoughts?
  // If the request is too large, send a 413 to the user. This could be applied before the check
  // above, but as Envoy buffer limits are soft limits, if we already have the data buffered might
  // as well pass it through.
  if (buffer_limit_ > 0 && data.length() > buffer_limit_) {
This filter pushes buffering down to the connection manager. In this case, should we rely on the connection manager to actually send the 413 and reset? Very few filters will end up doing internal buffering that is outside of what the connection manager will provide IMO. Just something to think about.
source/common/grpc/grpc_web_filter.h
@@ -21,7 +21,9 @@ class GrpcWebFilter : public Http::StreamFilter, NonCopyable {
  virtual ~GrpcWebFilter(){};

  // Http::StreamFilterBase
  void onDestroy() override{};
  // Ignore buffer limtis: see ASSERT in decodeData: decoding_buffer_ buffers less than 4 bytes.
typo "limtis"
  if (buffer_limit_ > 0 && data.length() > buffer_limit_) {
    Http::Utility::sendLocalReply(*decoder_callbacks_, stream_reset_,
                                  Http::Code::InternalServerError,
                                  Http::CodeUtility::toString(Http::Code::InternalServerError));
InternalServerError vs. PayloadTooLarge? Also same comment about centralized 413 logic?
@@ -51,6 +51,7 @@ class BufferFilter : public StreamDecoderFilter {
  static BufferFilterStats generateStats(const std::string& prefix, Stats::Scope& scope);

  // Http::StreamFilterBase
  void setBufferLimit(uint32_t) override {} // Buffer filter uses its own configured limits.
Mostly a thought exercise: could we actually allow the filter to set the buffer limit and push the 413 to the connection manager? This would make this filter trivial, in that all it does is provide config to set the buffer limit to min(configured_limit, stream_limit).
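If that thought exercise were pursued, the buffer filter could reduce to roughly the sketch below. This is hypothetical: it assumes the callbacks expose both a getter and a setter for the stream's decoder buffer limit, which is exactly the API shape being debated in this thread, and the member names are made up.

// Hypothetical sketch: the buffer filter only narrows the stream's buffer limit and
// leaves the buffering and the 413 to the connection manager.
void BufferFilter::setDecoderFilterCallbacks(Http::StreamDecoderFilterCallbacks& callbacks) {
  callbacks_ = &callbacks;
  const uint32_t stream_limit = callbacks.decoderBufferLimit(); // 0 means unlimited
  const uint32_t filter_limit = config_->max_request_bytes_;
  callbacks.setDecoderBufferLimit(stream_limit == 0 ? filter_limit
                                                    : std::min(filter_limit, stream_limit));
}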
  if (delay_timer_ == nullptr) {
    return FilterDataStatus::Continue;
  }
  if (buffer_limit_ > 0 && data.length() > buffer_limit_ && !high_watermark_called_) {
In this case, perhaps the filter can somehow tell the connection manager whether to reset and return 413 vs. fire callbacks?
Ah, the connection manager hadn't sent replies thus far, so I didn't realize it was an option. Hm, so how about if setBufferLimit returned an enum { BUFFERING_FILTER, STREAMING_FILTER } (better names welcome: hard_limit / watermark_limits?), where if any filter is non-streaming/non-watermark the connection manager will 413/500, and if all are streaming it'll do watermarks?
The one thing that gets weird there is the buffering filter. I'd be inclined to say it's a configuration error to have a different limit for the buffering filter than for any other filter, but that's not terribly reverse-compatible. Should it be able to just override the buffer limit with its own callback through streamDecoderFilterCallbacks? Seems messy.
This sounds good to me. (Haven't thought much about names but the concept seems good.)
Just thinking out loud, what if setBufferLimit returns a struct that includes additional optional data, for example the buffer limit? Then the connection manager can take the min of the internal limit and the filter limit?
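A hedged sketch of what such a struct-valued return might look like; BufferLimitSettings matches the name in the WIP patch below, while the enum and field names are placeholders (and the thread later moves away from this design):

#include <cstdint>

// Hypothetical sketch of a struct-valued setBufferLimit().
enum class FilterBufferMode { Buffering, Streaming };

struct BufferLimitSettings {
  // Filters that must see the whole body (buffer, dynamo, gRPC bridge) would be
  // Buffering and get a 413/500 at the limit; Streaming filters get watermarks.
  FilterBufferMode mode{FilterBufferMode::Streaming};
  // 0 means "use the stream default"; otherwise the connection manager could take
  // min(stream limit, this value).
  uint32_t buffer_limit{0};
};

class StreamDecoderFilter {
public:
  virtual ~StreamDecoderFilter() = default;
  // The filter reports its intent instead of mutating an out-parameter.
  virtual BufferLimitSettings setDecoderBufferLimit(uint32_t default_limit) = 0;
};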
So something like this? WIP patch (only integration tests build + pass) moving the logic to the connection manager. Code worth looking at is in conn_manager_impl.* and the filter *.h files.
Dropped some comments in. In general I like this approach much better. I have been trying to push as much of the complex logic into the connection manager as possible so that filters can stay as simple as possible. WDYT?
include/envoy/http/filter.h
   *
   * @param settings supplies the default buffer limit and filter type for this filter.
   */
  virtual void setDecoderBufferLimit(BufferLimitSettings& settings) PURE;
thought: It might be better to just return a struct here to make it more clear that the filter needs to tell the connection manager what it wants. We could even make the default constructor for the struct set sane defaults. I think for this type of struct the compiler will optimize well.
@@ -966,7 +986,7 @@ void ConnectionManagerImpl::ActiveStreamFilterBase::commonHandleBufferData(
  // rebuffer, because we assume the filter has modified the buffer as it wishes in place.
  if (bufferedData().get() != &provided_data) {
    if (!bufferedData()) {
      bufferedData().reset(new Buffer::OwnedImpl());
      bufferedData().reset(createBuffer().release());
nit: bufferedData() = createBuffer();
(might need std::move())
  if (parent_.encoder_filters_all_streaming_) {
    onDecoderFilterAboveWriteBufferHighWatermark();
  } else {
    HeaderMapPtr response_headers{new HeaderMapImpl{
Should probably use sendLocalReply here.
    onEncoderFilterAboveWriteBufferHighWatermark();
  } else {
    // FIXME 500
    HeaderMapPtr response_headers{new HeaderMapImpl{
I think this case needs a bit of thinking. It is technically possible for the response to start, but then have buffering occur, so I think we need to check if response has started and if so reset, otherwise we can send a 500.
I guess one could argue this is a filter bug. Not sure. (If a filter continues headers but then buffers data, it's really a streaming filter.) Anyway, needs a bit of thinking IMO.
@@ -352,7 +354,16 @@ class ConnectionManagerImpl : Logger::Loggable<Logger::Id::http>,
      : ActiveStreamFilterBase(parent, dual_filter), handle_(filter) {}

  // ActiveStreamFilterBase
  Buffer::InstancePtr& bufferedData() override { return parent_.buffered_request_data_; }
  Buffer::WatermarkBufferPtr createBuffer() override {
    auto buffer = new Buffer::WatermarkBuffer(Buffer::InstancePtr{new Buffer::OwnedImpl()},
Please immediately assign to a unique_ptr; same below.
@alyssawilk random quick thought before I forget. I'm not sure why I didn't think about this before, but I think we can simplify this even further. Basically, IMO the connection manager will automatically handle flow control callbacks and assume the filters are all streaming. The buffer has access to the current buffer, and if current buffer + new data > limit it can just return 413? We could probably even check that the buffer filter limit < per-stream limit during config load? Anyway, just another thought.
Hmm. I guess my previous comment isn't really fleshed out, since as you point out we also need limits for dynamo and other filters where we must do full buffering. We can discuss more Monday.
"Basically, IMO the connection manager will automatically handle flow control callbacks and assume the filters are all streaming. The buffer has access to the current buffer, and if current buffer + new data > it can just return 413? We could probably even check that buffer filter limit < per stream limit during config load? Anyway just another thought." Assume all buffers are streaming and if the buffer ever gets too large 413? I don't think that works but correct me if I'm missing something. Say we are willing to bufffer 10k. We assume things are streaming, so we buffer the 10k then fire watermark callbacks to stop reading more data. We now wait until the session times out because the gRPC filter is waiting on the full request but downstream won't send it because we've stopped consuming. We really do want to halt reading in some cases and give up in others and I think the filters are the only ones who know which to do. |
I think what I'm trying to sort out here is whether the filter's intention could be inferred from what it does in the callback APIs. For example, does it stop iteration and buffer? If it doesn't, it is streaming. If it does, it's buffering, and we should apply some kind of hard limit on the buffering and return 413 otherwise. This gets really muddy because even in the case of the router, if it does retries with body, it is a buffering filter sometimes and a streaming filter other times. I feel like if the filter ever does stop iteration and buffer it has to be buffering, and if we hit the buffer limit we could just return 413 directly? Sorry, this is kind of confusing; if you want to brainstorm about this in realtime let me know.
Thinking more about this, I think this approach is too simplistic. Let's say we have some message-based thing like gRPC where you can have protos within an HTTP body, but in this case there are usually multiple protos within each H2 stream. This makes it a buffering filter (if the proto is too large we should drop it on the floor). Say we have a route with a protoFilter and a router filter. With the current logic, if we took a long time to establish an upstream connection, the connection manager couldn't disambiguate whether the protoFilter was causing the buffering (overly large proto: 413) or the router filter (too much transient data: watermark), and would make the wrong call. It's not really that the entire chain has some state; what matters is which entity is causing the chain to back up. So we can let that entity make the call and have some duplicate code in the filters, or we can communicate more information from the filters to the connection manager. If you want to keep the logic in the connection manager, I think rather than getting the filter type at creation time we have to get it at decodeData time. I think the easiest way to do that would be to split StopIterationAndBuffer into [StopIterationAndBuffer and StopIterationAndWatermark], which would be latched, and then request/responseDataTooLarge() would act based on which entity most recently caused things to back up. I think this will be a little hairy, so I'm interested in your thoughts before I start hacking away.
We definitely can't 413 just because the buffer gets too large, because you could have something like a DoS delay where you want to inject latency (but not 413), or a router waiting on a slow upstream where we really do want to stop reading data for a while rather than process more incoming data and return an error. As said above (our emails raced), I think we could handle this by splitting the FilterDataStatus return, so I'll wait and see what you think of that!
Yes, I think we are converging on the same thing. I think splitting the return value of decodeData() is the way to go, since then the filter's intention is clear and it's still very easy for the filter writer. What do you think?
It comes out pretty cleanly, except for the router filter which is suboptimal but "works".
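For concreteness, the split return value being converged on here would look roughly like this (a sketch; the enumerator names come from the discussion above):

// Sketch: decodeData()/encodeData() return values with the buffering intent split out,
// so the connection manager knows whether to fail the request (413/500) or to apply
// back-pressure via watermark callbacks when its buffer fills up.
enum class FilterDataStatus {
  // Continue filter iteration.
  Continue,
  // Stop iteration and buffer; exceeding the limit is a hard failure (413 on the
  // request path, 500/reset on the response path).
  StopIterationAndBuffer,
  // Stop iteration and buffer; exceeding the limit triggers watermark callbacks instead.
  StopIterationAndWatermark,
  // Stop iteration without buffering this data.
  StopIterationNoBuffer,
};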
@alyssawilk took a look through. Other than one API comment, IMO this looks good to me. What do you think? Feel free to push back if you think a different solution would be better. In general, as we discussed I'm trying to optimize towards making filters as simple as possible to write in the common case even if it pushes more complexity into the connection manager.
include/envoy/http/filter.h
   * @param settings supplies the default buffer limit and filter type for this filter.
   * @return the buffer limit the filter will apply.
   */
  virtual uint32_t setDecoderBufferLimit(uint32_t limit) PURE;
I'm still not crazy about having every filter need to implement this function. Thoughts on ways of making this optional with a sane default? My thought here is to make this a callback function that needs to be called when set*Callbacks() is called on the filter? Open to other thoughts and your general opinion here.
I can either make it optional or make it a call a filter may perform on callbacks. My only concern there is that if someone implements a new filter which buffers, it's non-obvious they're picking up the defaults. In this case I lean towards making filters do an extra line of copy-paste busywork for clarity, but I'm happy to be overruled if you think it's better off tucked away somewhere.
My vote is less boilerplate and more documentation, but I don't have a super strong opinion. Can you ask some folks sitting around you what they think?
Less boilerplate it is then! Will upload another patch in a bit :-)
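A minimal sketch of that less-boilerplate direction, assuming the limit is pulled from the decoder filter callbacks when they are installed (the decoderBufferLimit() accessor matches the snippet quoted later in this thread; the filter name is made up):

// Sketch: filters that care about the limit latch it from the callbacks; filters that
// don't care implement nothing extra.
void SomeBufferingFilter::setDecoderFilterCallbacks(Http::StreamDecoderFilterCallbacks& callbacks) {
  callbacks_ = &callbacks;
  buffer_limit_ = callbacks.decoderBufferLimit(); // 0 means unlimited
}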
Sorry, which changes are propagated?
Making sure if a filter changes the limit, other filters are notified. That said, I already "handled" that with the only caller of decoderBufferLimit():
// As the decoder filter only pushes back via watermarks once data has reached
// it, it can latch the current buffer limit and does not need to update the
// limit if another filter increases it.
buffer_limit_ = callbacks_->decoderBufferLimit();
Got it, yes, a TODO is totally fine with me. Can you merge master also?
Sorry if I didn't clarify enough - the TODO is not necessary - filters can check the callbacks for the latest buffer limits, and the only filter which uses the limit is documented as OK with latching the limit it was set up with. Merged again since last week's push was a bit stale :-)
@alyssawilk can you merge master again and I will look at this over the weekend? (I actually would like to defer merging for a couple of days, just to make sure the other flow control issue is actually fixed when we finish the deploy tomorrow.)
Sorry for the long review delay. A few more comments, but overall this looks great to me. Not going to lie, this is a scary change (particularly around all of the call-stack permutations if we reset / send a reply, etc. during a watermark event), so I will probably do some quick smoke tests once we merge.
Buffer::WatermarkBufferPtr ConnectionManagerImpl::ActiveStreamEncoderFilter::createBuffer() {
  auto buffer = new Buffer::WatermarkBuffer([this]() -> void { this->responseDataDrained(); },
                                            [this]() -> void { this->responseDataTooLarge(); });
  if (parent_.buffer_limit_ > 0) {
Q: Why don't we need this same guard in the request version of this function?
@@ -23,6 +23,7 @@
#include "envoy/tracing/http_tracer.h"
#include "envoy/upstream/upstream.h"

#include "common/buffer/watermark_buffer.h"
Q: With the new factory stuff that we have from the other PRs, should we expose watermark directly to conn_manager_impl? Or just use the factory to create a watermark buffer and return an Instance?
I mildly prefer doing this for now - the only reason we did the factory is to allow injecting buffers in end-to-end tests, and it'll complicate the test logic if we use it in places where we don't need to inject mock buffers. 100% fine if you think it's more consistent though - I can go tweak the tests.
Nah I don't care. Was just curious.
@@ -503,21 +520,27 @@ class ConnectionManagerImpl : Logger::Loggable<Logger::Id::http>,
    State() : remote_complete_(false), local_complete_(false), saw_connection_close_(false) {}

  uint32_t filter_call_state_{0};
  bool encoder_filters_streaming_{true};
Sorry, I think we are talking past each other on this state stuff. The only reason the booleans below are a bitfield is to save space; it has nothing to do with mutual exclusion or flag-like variables. Given that, is there any reason not to just define the three new variables as part of the bitfield?
bool& is not exchangeable for bitfields in arguments.
destroyed_ is handed to sendLocalReply, and apparently the const bool& is good enough to keep the compiler happy. Unfortunately it doesn't actually work - the const convinces the compiler to convert the bitfield to a boolean, but if you change the value during header serialization it just fails to pick up the edit. Highly related: I'm so glad I wrote (and deleted) a unit test for this one.
EXPECT_CALL(callbacks, encodeHeaders_(, false)).WillOnce(InvokeWithoutArgs(& -> void {
bitfield.destroyed = true; // fails to have an effect
}));
EXPECT_CALL(callbacks, encodeData(_, true)).Times(0); // unexpectedly called, test fails.
For commonHandleAfterDataCallback the bitfield/bool conversion fails outright. This one we could arguably refactor to take the State struct instead of the specific boolean it edits, I just prefer the clarity of passing the argument to eb edited.. Let me know if you prefer the compactness and I'll get right on it :-)
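A small standalone illustration of the bit-field/reference behavior described above:

#include <cstdio>

struct State {
  bool destroyed_ : 1; // bit-field member, as in ActiveStream::State
};

// A const bool& initialized from a bit-field binds to a temporary copy of the bit,
// so later writes to the bit-field are invisible through the reference.
void observe(const bool& destroyed) { std::printf("destroyed=%d\n", destroyed); }

int main() {
  State state{};
  const bool& ref = state.destroyed_; // binds to a temporary, not to the bit itself
  state.destroyed_ = true;            // this edit is never seen through ref
  observe(ref);                       // prints destroyed=0
  // bool& mutable_ref = state.destroyed_; // non-const reference: does not compile
  return 0;
}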
Yikes, OK. TIL something new. It's not a big deal either way. I guess I would say pick one of these:
- Just get rid of bitfield entirely in there for consistency.
- Add a comment on why some are bitfield and some are not.
- Refactor to pass state around as you mentioned.
  // As the decoder filter only pushes back via watermarks once data has reached
  // it, it can latch the current buffer limit and does not need to update the
  // limit if another filter increases it.
  buffer_limit_ = callbacks_->decoderBufferLimit();
random thought: Should we assert inside connection manager that set/get buffer limit is only called in the context of the set callbacks calls?
I'm skeptical it will be called willy-nilly, but I'd be happy to add scope guards for setBufferLimits if you'd like. I don't think we ought to for getting buffer limits - there could be a filter which did not latch limits and wants to check whether another filter changed it.
I don't feel that strongly about it. Either way. Was just a random thought that might help find bugs later.
source/common/router/router.cc
  buffered_request_body_.reset(
      new Buffer::WatermarkBuffer([this]() -> void { this->enableDataFromDownstream(); },
                                  [this]() -> void { this->disableDataFromDownstream(); }));
  if (parent_.buffer_limit_ > 0) {
I see you removed the > 0 check elsewhere. Is it still needed here?
This PR adds buffer limits to the Http::Filters.
Fixes #150
There are basically three modes of filter flow control:
- The dynamo, buffer, and HTTP/1 bridge filters buffer up to |limit| bytes and then send a 413.
- The rate limit filter starts back-pressure immediately, for better DoS protection.
- The router filter and fault filter respect watermarks.
Most of the other filters don't buffer or already have strict limits, and are commented up accordingly.
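For reference, the "buffer up to |limit| bytes then 413" mode follows the shape already shown in the review snippets above; a condensed sketch (the filter name is illustrative and the member names are assumed from those snippets):

// Sketch of the buffering-filter pattern: reject with a 413 once the body exceeds the
// latched limit, otherwise keep buffering until the full request has arrived.
Http::FilterDataStatus ExampleBufferingFilter::decodeData(Buffer::Instance& data, bool) {
  if (buffer_limit_ > 0 && data.length() > buffer_limit_) {
    scope_.counter(fmt::format("{}rq_too_large", stat_prefix_)).inc();
    Http::Utility::sendLocalReply(*decoder_callbacks_, stream_reset_, Http::Code::PayloadTooLarge,
                                  Http::CodeUtility::toString(Http::Code::PayloadTooLarge));
    return Http::FilterDataStatus::StopIterationNoBuffer;
  }
  return Http::FilterDataStatus::StopIterationAndBuffer;
}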