script: return verification flag responsible for error upon validation failure #33012
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Code Coverage & Benchmarks: for details see https://corecheck.dev/bitcoin/bitcoin/pulls/33012.

Reviews: see the guideline for information on the review process.

Conflicts: reviewers, this pull request conflicts with other open pull requests. If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
Force-pushed from 00ce527 to 48a5d18.
🚧 At least one of the CI tasks failed.

Hints: try to run the tests locally, according to the documentation. However, a CI failure may still happen for reasons unrelated to this change. Leave a comment here if you need help tracking down a confusing failure.
I'm skeptical whether this behaviour is really worth preserving in a limited fashion. With this change, an attacker can waste your resources, without being discouraged or risking having to pay tx fees, by making a consensus-invalid tx that fails standardness before the consensus failure is seen; and any innocent third parties are more likely to waste your resources by relaying txs under different standardness policies, and won't be penalised at all. I think getting different errors for standardness vs consensus failures can be useful for debugging (and perhaps functional tests or network behaviour monitoring), so I could see it being worthwhile to retain a (default-off) config option to continue testing transactions twice.
The behaviour already exists in a limited fashion. Even without this change, an attacker can trivially create such a transaction (as discussed in OP). Here is for instance a patch:

```diff
diff --git a/test/functional/data/invalid_txs.py b/test/functional/data/invalid_txs.py
index 36b274efc27..46bb2bba367 100644
--- a/test/functional/data/invalid_txs.py
+++ b/test/functional/data/invalid_txs.py
@@ -286,6 +286,22 @@ class NonStandardAndInvalid(BadTxTemplate):
             self.spend_tx, 0, script_sig=b'\x00' * 3 + b'\xab\x6a',
             amount=(self.spend_avail // 2))

+
+class FirstInNonStdSecondInInvalid(BadTxTemplate):
+    """A transaction with first a standardness error and then a consensus error."""
+    reject_reason = "non-mandatory-script-verify-flag (Using OP_CODESEPARATOR in non-witness script)"
+    # NOTE: feature_block would need to be tweaked for the spent tx to have two outputs.
+    block_reject_reason = "mandatory-script-verify-flag-failed (OP_RETURN was encountered)"
+    expect_disconnect = False
+    valid_in_block = False
+
+    def get_tx(self):
+        tx = create_tx_with_script(self.spend_tx, 0, script_sig=b"", amount=(self.spend_avail // 2))
+        tx.vin.append(CTxIn(COutPoint(self.spend_tx.txid_int, 1)))
+        tx.vin[0].scriptSig = b"\xab"  # A CODESEP in the first input (non-standard)
+        tx.vin[1].scriptSig = b"\x6a"  # An OP_RETURN in the second input (invalid)
+        return tx
+
 # Disabled opcode tx templates (CVE-2010-5137)
 DisabledOpcodeTemplates = [getDisabledOpcodeTemplate(opcode) for opcode in [
     OP_CAT,
diff --git a/test/functional/p2p_invalid_tx.py b/test/functional/p2p_invalid_tx.py
index ae9771e7cb4..33082337cc8 100755
--- a/test/functional/p2p_invalid_tx.py
+++ b/test/functional/p2p_invalid_tx.py
@@ -14,6 +14,7 @@ from test_framework.messages import (
     CTxOut,
 )
 from test_framework.p2p import P2PDataStore
+from test_framework.script import CScript, OP_TRUE
 from test_framework.test_framework import BitcoinTestFramework
 from test_framework.util import (
     assert_equal,
@@ -56,7 +57,7 @@ class InvalidTxRequestTest(BitcoinTestFramework):
         self.log.info("Create a new block with an anyone-can-spend coinbase.")
         height = 1
-        block = create_block(tip, create_coinbase(height), block_time)
+        block = create_block(tip, create_coinbase(height, extra_output_script=CScript([OP_TRUE])), block_time)
         block.solve()
         # Save the coinbase for later
         block1 = block
```

I don't think this logic is actually meant to prevent an attacker from wasting resources anyways. I think it's rather to make sure we don't split the network when introducing new Script-runtime standardness rules. And this change doesn't affect this.
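The patch above relies on inputs being validated in order: the error class of the first failing input determines how the whole transaction is reported. A minimal Python sketch of that "first failure wins" behaviour (the names and structure here are invented for illustration, not Bitcoin Core's actual API):

```python
# Hypothetical sketch: inputs are validated sequentially, so the FIRST
# failing input decides whether the tx is reported as non-standard or
# consensus-invalid; later inputs are never even looked at.
STANDARDNESS = "non-mandatory-script-verify-flag"
CONSENSUS = "mandatory-script-verify-flag-failed"

def classify_tx(input_errors):
    """input_errors: per-input result, None if that input's script passes."""
    for err in input_errors:
        if err is not None:
            return err  # first failure wins
    return "valid"

# First input non-standard (OP_CODESEPARATOR), second consensus-invalid
# (OP_RETURN), as in the FirstInNonStdSecondInInvalid template above: the
# standardness error masks the consensus error.
assert classify_tx([STANDARDNESS, CONSENSUS]) == STANDARDNESS
```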
`typedef enum` is a C idiom which is unnecessary in C++
Force-pushed from 48a5d18 to f7dc2be.
Rebased to fix a silent merge conflict that was making the "test each commit" CI job fail.
…rrors

We distinguish between consensus and standardness errors when validating a transaction, to decide whether to disconnect the peer. When a transaction is both consensus-invalid and non-standard, we sometimes treat it as a consensus failure and sometimes not. This commit adds additional cases to invalid_txs.py to clarify the distinction and highlight the behaviour change in the next commit. We treat a non-standard transaction with an invalid Script as non-standard (and do not disconnect the peer), and always treat a standard transaction with an invalid Script (whether or not the Script itself is also non-standard) as a consensus failure. (That is, as long as no non-standard Script is encountered first.)
Force-pushed from f7dc2be to 5557cbb.
…nd standardness errors Change CScriptCheck() to return both the Script error type and the Script verification flag which caused the failure. This prevents having to run Script validation a second time upon failure. It does change behaviour slightly, in that a Script consensus failure after a Script standardness failure won't be detected anymore, and therefore won't lead to a disconnect. This is arguably by design though: this can only ever be detected by running Script validation without the standardness verification flags, which we are trying to avoid here.
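The core idea of this commit, returning the verification flag that caused the failure and then checking whether that flag is mandatory (consensus) or policy-only, can be sketched with flag bitmasks. This is a hedged Python illustration; the flag names and values below are invented stand-ins, not Bitcoin Core's actual `SCRIPT_VERIFY_*` constants or C++ API:

```python
# Hypothetical sketch of the approach in this commit: instead of re-running
# script validation with consensus-only flags after a failure, validation
# reports the single verification flag responsible for the failure, and the
# caller classifies it with a bitmask test. Flag values are illustrative.
SCRIPT_VERIFY_P2SH = 1 << 0             # consensus (mandatory) flag
SCRIPT_VERIFY_DISCOURAGE_UPGRADABLE = 1 << 1  # standardness (policy-only) flag

MANDATORY_FLAGS = SCRIPT_VERIFY_P2SH

def is_consensus_failure(failing_flag):
    # A failure is a consensus failure iff the flag that triggered it is in
    # the mandatory (consensus) set; otherwise it is a standardness failure
    # and the peer should not be disconnected.
    return bool(failing_flag & MANDATORY_FLAGS)

assert is_consensus_failure(SCRIPT_VERIFY_P2SH)
assert not is_consensus_failure(SCRIPT_VERIFY_DISCOURAGE_UPGRADABLE)
```

With this shape, the caller gets the classification for free from the first (and only) validation run, which is exactly why a consensus failure hiding behind an earlier standardness failure can no longer be observed.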
Force-pushed from 5557cbb to 648e5e2.
Okay, but that just means the extra work isn't providing much value today either? I'm suggesting we should not discourage for consensus-invalid txs either, since we can't do that both consistently and efficiently -- ie, drop
It doesn't seem too unreasonable to stop disconnecting on receiving a transaction which fails Script validation due to a consensus verification flag. However, it's not clear that it's necessarily better. It doesn't help in adversarial situations, but may still avoid wasting resources on innocently misbehaving peers. With this PR, opportunistically detecting such peers and evicting them comes at virtually no cost (unlike the current situation). So I suppose the tradeoff is between "maintain the opportunistic behaviour" and "avoid the cost of reviewing and merging this PR".

One example where the opportunistic disconnection could be useful is the case of a hardfork coin with signature replay protection whose nodes still connect to Bitcoin peers. It happened in the past but admittedly seems very unlikely nowadays.
In that case you'll already get a network split when the hard fork side mines a block.

Disconnecting peers is only useful in adversarial situations -- keeping slightly buggy but otherwise honest peers in sync with our view of the world is a desirable outcome. A particular example of where this makes the network more robust is when we take currently-standard txs and make them consensus-invalid: if we ban on consensus-invalid txs, then old nodes (that consider the tx standard) and new nodes (that consider the tx invalid) need to be bridged by intermediate nodes (that consider it valid but non-standard). If we simply ignore invalid txs, no such bridging is needed; banning makes the network more fragile.
Someone recently made the point that we already have a lot of code to maintain in Bitcoin Core, and that perhaps it would be beneficial to focus our attention on the parts that provide the most benefit...
A wise man! More seriously, I was merely pointing out the tradeoff here, not saying this PR is the better alternative to #33050. I am out of rationalizations for the existing behaviour anyway, so might as well take a red diff over a green one.
🐙 This pull request conflicts with the target branch and needs rebase.
Discussed this with AJ (and others). For reference, in this case the disconnection would happen after a second block being submitted on top of a first invalid one. We would not disconnect upon a first invalid block being submitted through compact blocks. It turns out that this logic was not tested at all, so I opened #33083 which exercises various cases in
…g an invalid compact block

c157438 qa: test that we do disconnect upon a second invalid compact block being announced (Antoine Poinsot)
fb2dcbb qa: test cached failure for compact block (Antoine Poinsot)
f12d8b1 qa: test a compact block with an invalid transaction (Antoine Poinsot)
d6c37b2 qa: remove unnecessary tx removal from compact block (Antoine Poinsot)

Pull request description:

In thinking about #33050 and #33012 (comment), I went through the code paths for peer disconnection upon submitting an invalid block. It turns out that the fact we exempt a peer from disconnection upon submitting an invalid compact block was not properly tested, as can be checked with these diffs:

```diff
diff --git a/src/net_processing.cpp b/src/net_processing.cpp
index 0c4a89c44cb..d243fb88d4b 100644
--- a/src/net_processing.cpp
+++ b/src/net_processing.cpp
@@ -1805,10 +1805,10 @@ void PeerManagerImpl::MaybePunishNodeForBlock(NodeId nodeid, const BlockValidati
     // The node is providing invalid data:
     case BlockValidationResult::BLOCK_CONSENSUS:
     case BlockValidationResult::BLOCK_MUTATED:
-        if (!via_compact_block) {
+        //if (!via_compact_block) {
             if (peer) Misbehaving(*peer, message);
             return;
-        }
+        //}
         break;
     case BlockValidationResult::BLOCK_CACHED_INVALID:
     {
```

```diff
diff --git a/src/net_processing.cpp b/src/net_processing.cpp
index 0c4a89c..e8e0c805367 100644
--- a/src/net_processing.cpp
+++ b/src/net_processing.cpp
@@ -1814,10 +1814,10 @@ void PeerManagerImpl::MaybePunishNodeForBlock(NodeId nodeid, const BlockValidati
     {
         // Discourage outbound (but not inbound) peers if on an invalid chain.
         // Exempt HB compact block peers. Manual connections are always protected from discouragement.
-        if (peer && !via_compact_block && !peer->m_is_inbound) {
+        //if (peer && !via_compact_block && !peer->m_is_inbound) {
             if (peer) Misbehaving(*peer, message);
             return;
-        }
+        //}
         break;
     }
     case BlockValidationResult::BLOCK_INVALID_HEADER:
```

We do have a test for this, but it actually uses a coinbase witness commitment error, which is checked much earlier in `FillBlock`. This PR adds coverage for the two exemptions in `MaybePunishNodeForBlock`.

ACKs for top commit:
  kevkevinpal: ACK c157438
  nervana21: tACK c157438
  instagibbs: crACK c157438
  stratospher: ACK c157438

Tree-SHA512: a77d5a9768c0d73f122b06db2e416e80d0b3c3fd261dae8e340ecec2ae92d947d31988894bc732cb6dad2e338b3c82f33e75eb3280f8b0933b285657cf3b212c
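The two exemptions exercised by the diffs above can be summarized in a small Python rendering of the decision logic. This is a paraphrase for illustration only; the real logic is the C++ `MaybePunishNodeForBlock` in net_processing.cpp, and the function below simplifies away details such as manual-connection protection:

```python
# Simplified, hypothetical rendering of the punishment decision quoted in
# the diffs above: peers relaying invalid data via compact blocks are
# exempt, and for a cached-invalid chain only outbound non-compact-block
# peers are discouraged.
def should_punish(result, via_compact_block, is_inbound):
    if result in ("BLOCK_CONSENSUS", "BLOCK_MUTATED"):
        # Compact-block relayers are exempt from punishment here.
        return not via_compact_block
    if result == "BLOCK_CACHED_INVALID":
        # Discourage outbound (but not inbound) peers on an invalid chain;
        # HB compact block peers are exempt.
        return not via_compact_block and not is_inbound
    return False

# An inbound peer sending a consensus-invalid block normally gets punished...
assert should_punish("BLOCK_CONSENSUS", via_compact_block=False, is_inbound=True)
# ...but not when the block arrived via compact block relay.
assert not should_punish("BLOCK_CONSENSUS", via_compact_block=True, is_inbound=True)
# Inbound peers on a cached-invalid chain are not discouraged.
assert not should_punish("BLOCK_CACHED_INVALID", via_compact_block=False, is_inbound=True)
```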
For 30.0 please consider reviewing #33050 instead of this PR. This can still be considered independently later on, but #33050 is a smaller patch to get the desired effects which should be easier to get in before feature freeze.
For unconfirmed transactions, we currently distinguish between Script validation failures due to standardness rules and those due to consensus rules. The distinction is used to decide whether to disconnect the peer which submitted this transaction.
This is currently achieved by repeating Script validation with only the consensus verification flags, after it failed with the standardness flags. It is wasteful, and potentially significantly more so since we have Taproot. This PR proposes to replace this detection by having Script validation return the verification flag responsible for the failure instead.
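The current "validate twice" pattern described above can be sketched as follows. This is a toy Python illustration of the control flow only, under invented names; the real implementation is C++ script validation with `SCRIPT_VERIFY_*` flag sets:

```python
# Hypothetical sketch of the CURRENT behaviour this PR replaces: on failure
# with the full (consensus + standardness) flag set, validation is re-run
# with consensus flags only to decide whether to disconnect the peer.
CONSENSUS_FLAGS = frozenset({"p2sh"})                 # illustrative
POLICY_FLAGS = frozenset({"discourage_upgradable"})   # illustrative

def check_script(script, flags):
    """Toy check: the script fails any flag listed in its 'violates' set."""
    return not (script["violates"] & flags)

def classify_failure_current(script):
    if check_script(script, CONSENSUS_FLAGS | POLICY_FLAGS):
        return "valid"
    # Failure: re-run validation a second time with consensus flags only.
    if not check_script(script, CONSENSUS_FLAGS):
        return "consensus"      # disconnect the peer
    return "standardness"       # reject without disconnecting

assert classify_failure_current({"violates": set()}) == "valid"
assert classify_failure_current({"violates": {"p2sh"}}) == "consensus"
assert classify_failure_current({"violates": {"discourage_upgradable"}}) == "standardness"
```

The second `check_script` call in the failure path is the redundant work the PR removes: with the failing flag returned directly, the classification is available from the first run.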
Note this is a slight behaviour change: a consensus-related Script validation failure that happens after a standardness-related Script validation failure will no longer be treated as a consensus error (and consequently the peer will not be disconnected). I don't think this is an issue, because we already do not treat all consensus-invalid transactions we get submitted as consensus errors. For instance, a non-standard transaction (for any reason other than its Scripts) with a consensus-invalid Script is treated as a standardness error. The same goes for a transaction with a consensus-invalid Script in a later input than one with a non-standard Script. Under these conditions, a Script that is consensus-invalid after being standardness-invalid is an edge case that seems unnecessary to detect. It also seems to me that the fact this edge case is detected at all is more an artifact of the existing logic (introduced in 97e7901 13 years ago) than a design decision. For all these reasons, this slight behaviour change seems fine to me (and definitely worth it when considering the upside of not having to run Script validation twice).
This change is quite verbose. If reviewers want, I can try to make it less so, but I first wanted to open this as a draft to get feedback on the broader approach of returning errors from Script validation directly, instead of having to run it multiple times with different flags to distinguish the cause of an error.