p2p: Disconnect, not discourage, misbehaving NODE_BLOOM peers #20083
Conversation
          } else if (pfrom.m_tx_relay != nullptr) {
              LOCK(pfrom.m_tx_relay->cs_filter);
              if (pfrom.m_tx_relay->pfilter) {
                  pfrom.m_tx_relay->pfilter->insert(vData);
              } else {
-                 bad = true;
+                 pfrom.fDisconnect = true;
This behavior seems to have previously triggered the log messages "too-large bloom filter" or "bad filteradd message". I see that other instances of pfrom.fDisconnect are often preceded by a log message, e.g. LogPrintf("Disconnecting outbound peer %d -- headers chain has insufficient work\n", pfrom.GetId());
Should there be a log message here for the disconnect as well?
I think it's a good idea to log to the NET category whenever net_processing disconnects a peer. Perhaps we should make this into a function CConnman::DisconnectPeer(CNode& peer, std::string msg) and require a log message.
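For illustration, a minimal sketch of what such a helper could look like, using toy stand-in types rather than the real CConnman/CNode interfaces (the Peer struct and the exact log format below are assumptions, not the project's API):

#include <cstdio>
#include <string>

// Toy stand-in for CNode, holding only what the sketch needs.
struct Peer {
    int id{0};
    bool fDisconnect{false};
};

// Hypothetical helper in the spirit of the suggested
// CConnman::DisconnectPeer(CNode& peer, std::string msg):
// the required log message is emitted before the peer is
// flagged for disconnection, so no disconnect goes unlogged.
void DisconnectPeer(Peer& peer, const std::string& msg)
{
    std::printf("net: %s, disconnecting peer=%d\n", msg.c_str(), peer.id);
    peer.fDisconnect = true;
}

int main()
{
    Peer p;
    p.id = 7;
    DisconnectPeer(p, "bad filteradd message");
    return p.fDisconnect ? 0 : 1; // 0 on success
}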
Receiving an oversized FILTERLOAD or FILTERADD message is about as costly as receiving any large-sized unknown message. Light clients are already preferred for eviction, so no need to additionally discourage them.
Can you add a motivation to the PR description please? :)
Force-pushed from fa79ba7 to faf3044
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.
Conflicts
Reviewers, this pull request conflicts with the following ones:
If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
I'm not sure if this is an improvement. Setting fDisconnect directly bypasses the logic in MaybeDiscourageAndDisconnect() that prevents us from disconnecting peers with PF_NOBAN or manual connections. Is that intentional?
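For context, a toy model of the protection being described: peers with the noban permission or manually added peers are spared, everyone else gets the disconnect flag. The struct and field names below are simplified placeholders, not the actual net_processing types:

#include <iostream>

// Simplified stand-in for the relevant CNode state.
struct Peer {
    bool noban_permission{false};   // models the PF_NOBAN permission flag
    bool manual_connection{false};  // models peers added via -addnode
    bool fDisconnect{false};
};

// Toy version of the guard that MaybeDiscourageAndDisconnect() is said to
// provide: misbehaving peers that are whitelisted or manually connected are
// kept, all others are marked for disconnection.
bool MaybeDisconnect(Peer& peer)
{
    if (peer.noban_permission || peer.manual_connection) {
        return false; // keep the peer despite the misbehavior
    }
    peer.fDisconnect = true;
    return true;
}

int main()
{
    Peer manual;
    manual.manual_connection = true;
    Peer normal;
    std::cout << MaybeDisconnect(manual) << ' ' << MaybeDisconnect(normal) << '\n'; // prints: 0 1
}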
          if (vData.size() > MAX_SCRIPT_ELEMENT_SIZE) {
-             bad = true;
+             pfrom.fDisconnect = true;
          } else if (pfrom.m_tx_relay != nullptr) {
Should we also disconnect if pfrom.m_tx_relay == nullptr? That means this is a block-relay-only connection that we opened and the peer has asked us to load a bloom filter. Disconnecting seems like the polite thing to do.
Maybe out of scope for this PR.
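A toy sketch of the defensive check being suggested, with simplified stand-ins for CNode and its tx-relay state (the type and function names here are illustrative assumptions only):

#include <memory>

struct TxRelay { /* bloom filter state would live here */ };

// Simplified stand-in for CNode: m_tx_relay is null on block-relay-only connections.
struct Peer {
    std::unique_ptr<TxRelay> m_tx_relay;
    bool fDisconnect{false};
};

// Toy version of the suggested handling: a peer on a block-relay-only
// connection that nevertheless sends FILTERLOAD gets disconnected.
void HandleFilterLoad(Peer& peer)
{
    if (!peer.m_tx_relay) {
        peer.fDisconnect = true; // no tx relay state, so a filter makes no sense
        return;
    }
    // ... otherwise the filter would be installed into peer.m_tx_relay ...
}

int main()
{
    Peer block_relay_only; // m_tx_relay stays null
    HandleFilterLoad(block_relay_only);
    return block_relay_only.fDisconnect ? 0 : 1; // 0 on success
}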
Actually this is impossible, since we'll never advertise NODE_BLOOM for outbound peers.
-         with self.nodes[0].assert_debug_log(['Misbehaving']):
-             filter_peer.send_and_ping(msg_filteradd(data=b'\xcc'*(MAX_SCRIPT_ELEMENT_SIZE+1)))
+         with self.nodes[0].assert_debug_log(['disconnecting peer']):
+             filter_peer = self.send_and_wait_for_disconnect(filter_peer, msg_filteradd(data=b'\xcc' * (MAX_SCRIPT_ELEMENT_SIZE + 1)))

          filter_peer.send_and_ping(msg_filterclear())
Maybe not required now that the connection gets reset in the previous test?
@@ -197,21 +198,25 @@ def test_filter(self, filter_peer):
          assert not filter_peer.tx_received

          self.log.info('Check that sending "filteradd" if no filter is set is treated as misbehavior')
Maybe change the word misbehavior here to avoid ambiguity with the function in net_processing.
        self.log.info("Check that division-by-zero remote crash bug [CVE-2013-5700] is fixed")
        filter_peer.send_and_ping(msg_filterload(data=b'', nHashFuncs=1))
        filter_peer.send_and_ping(msg_filteradd(data=b'letstrytocrashthisnode'))
        self.nodes[0].disconnect_p2ps()

    def send_and_wait_for_disconnect(self, filter_peer, msg):
I like this function, and think it could be generally useful in other tests, but the name is slightly confusing for me, since it doesn't indicate that we'll also reconnect.
Sorry, this was opened accidentally (not ready yet)
This continues the story started in commit 3a10d93