Reject transactions with excessive numbers of sigops #4150
Conversation
// sigops, making it impossible to mine. Since the coinbase transaction
// itself can contain sigops MAX_TX_SIGOPS is less than
// MAX_BLOCK_SIGOPS; we still consider this an invalid rather than
// merely non-standard transaction.
How is this considered invalid? This code path is inherently about non-IsStandard.
I agree with @luke-jr here. INVALID is the wrong terminology. It doesn't fail any consensus validation check, we just deny it into the mempool because of DoS risk.
To save you some time testing: https://github.com/petertodd/tx-flood-attack
I think we should probably introduce a new term for things which are not invalid but which no sane policy difference should change, e.g. version/nops/sane limits on SIGOPS count, canonization of pushes. We need to be careful about "invalid" since someone may accidentally incorporate it in a block validity rule.
@gmaxwell Agreed, especially given that IsStandard() may very well turn into a very permissive whitelist allowing everything but some carefully controlled edge cases and NOP functionality used for safe soft-forking.
I have previously suggested using a more generic "resource usage" metric, defined as max(bytes / max_block_bytes, sigops / max_block_sigops), and using that for prioritization and fee calculations instead of just size. That would naturally limit standardness of high-sigop transactions, make mining code require appropriate fees for them, and make them compete fairly with others for "sigop space" in blocks.
+1 @sipa |
@petertodd Right; not a criticism of this pull request. As far as I can tell, it has the exact same effect, but on more than just mempool acceptance.
@sipa Belt-and-suspenders; it's easy to write a resource usage metric that fails to notice a tx uses more resources than a block actually has! But anyway, rewriting IsStandard() to IsSoftForkSafe() is on my todo list, and will need a sigops resource usage metric.
I have done the following testing on testnet only. The patch appears to work as intended.
I used @petertodd's tx-flood-attack script to generate the transactions. As an aside, -debug=mempool does not log an "AcceptToMemoryPool: accepted" message for transactions accepted via RPC, but it does when received via a network tx message, so this was a bit confusing at first.
I like the concept, and we really need to address sigops. I don't like making an arbitrary limit which is technically below that which could be mined; thus it acts like a soft fork, one likely to escape notice. Mempool accept includes local TXs as well as remote ones. As such, leaning towards NAK. Want to find a better way to do the same thing.
Huh? This isn't a soft-fork at all; AcceptToMemoryPool() isn't called in the block validation code. Anyway, it may be an arbitrary limit, but it's a very high limit. Remember that we're fixing an actual real-world vulnerability that is getting some exploitation attempts; we can create a better fix later with more permissive rules. The only transactions on mainnet that would have actually run into this limit are some data insertion ones from the wikileaks cables stuff. |
Note BTW the similarities to the other arbitrary limit on max transaction size. Again it's a limit almost no legit transaction runs into in practice, so essentially zero impact on actual users. |
It is indeed not a softfork or any fork - it doesn't touch the consensus code. It checks transactions before going into the mempool not transactions in blocks. IMO these kinds of belt-and-suspenders DoS limits make sense. ACK (except from returning INVALID which I think is confusing as it implies that a deeper validation limit is hit). |
@laanwj I can change the INVALID, although I'm out of the country again for another two weeks. If you want to do it yourself sooner feel free. |
Fixed s/INVALID/NON_STANDARD/ nit. Should be ready for merging. |
Automatic sanity-testing: PASSED, see http://jenkins.bluematt.me/pull-tester/p4150_4fad8e6d831729efa1965fa2034e7e51d3d0a1be/ for binaries and test log. |
4fad8e6 Reject transactions with excessive numbers of sigops (Peter Todd)
EPROCESS. This appears to have been merged with only one ACK. |
OK - reverted again |
Quickly poking people for ACKs on IRC seems like a reasonable alternative to reverting, if you want to go that route. |
posthumous ACK |
utACK
Reverting was based on a misunderstanding, it appears. Github-Pull: #4150
This comment was added in commit 4fad8e6 (PR bitcoin#4150). The terminology used ("consider this an invalid rather than merely non-standard transaction") is incorrect, since the transaction is considered non-standard and not consensus invalid. MAX_BLOCK_SIGOPS was also replaced by MAX_BLOCK_SIGOPS_COST as part of segwit. Just remove this comment since it's confusing and the code speaks for itself.
What it says on the tin.