net: Provide block templates to peers on request #33191
Conversation
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Code Coverage & Benchmarks: for details see https://corecheck.dev/bitcoin/bitcoin/pulls/33191.

Reviews: see the guideline for information on the review process.

Conflicts: reviewers, this pull request conflicts with the following ones. If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
This removes "CreateNewBlock" messages from logs by default. Also changes "-printpriority" argument to automatically enable -debug=miner logging.
Force-pushed from 89a83ac to 355e0b2
🚧 At least one of the CI tasks failed.
Hints: try to run the tests locally, according to the documentation. However, a CI failure may still
Leave a comment here if you need help tracking down a confusing failure.
I like this, but is there a way to avoid reintroducing the mempool message transaction spying vulnerability? Construct the block skipping txs that have not yet been announced? What are your thoughts on the size of the template? I don't think it makes sense to just limit them to one block in size; perhaps it would be reasonable for them to be 2 blocks in size so it can always run one block ahead.
It may be hard to do that perfectly. For example, suppose a recent CPFP bumped a low-fee old parent up high enough to make it into the template, but the child has not been announced to the peer yet: the child can be skipped when sending, but the presence of the parent would stand out and reveal that the sender knew about the child. If latency is not very important, there may be a simpler solution: compute the template and schedule it for sending, but only send it once all transactions in it have been announced.
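A minimal sketch of that "hold until announced" approach. All data structures and helper names here are hypothetical, not code from this PR; the stand-in for sending a TEMPLATE message is just appending to a list:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Template:
    txids: list

@dataclass
class PeerState:
    announced_txids: set = field(default_factory=set)   # txids already INVed
    scheduled_template: Optional[Template] = None
    sent: list = field(default_factory=list)            # stand-in for TEMPLATE msgs

def try_send_scheduled_template(peer: PeerState) -> bool:
    """Send the queued template only once every tx in it has already
    been announced to this peer via the normal INV schedule."""
    tmpl = peer.scheduled_template
    if tmpl is None:
        return False
    if all(txid in peer.announced_txids for txid in tmpl.txids):
        peer.sent.append(tmpl)
        peer.scheduled_template = None
        return True
    return False
```

The key property is that the template then leaks nothing beyond the normal INV schedule: it only goes out once every transaction in it has already been announced.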
Each template is annotated with the mempool's Currently it will just immediately give an old template if the current template is too new; it would also be possible to delay providing the current template until
I think the above should also take care of this case already. I guess that reveals a little more ordering info: if you received two high fee txs (tx1 and tx2) in a single INV cycle, but generate a new template after receiving tx1 but before receiving tx2, then you'll reveal the ordering of tx1 and tx2 to your peers. So perhaps aligning template generation with INV messages to inbound peers would be a worthwhile improvement. That would perhaps be a slight leak to outbounds, but nothing they couldn't discover just by making an inbound connection to you.
That seems pretty appealing; I didn't do anything along those lines here, mostly to keep things simple, and because waiting for cluster mempool before building even bigger collections of packages seemed plausible. If miners were to enforce their own soft blocksize limits (perhaps to reduce blockchain growth and marginally improve IBD, or as a cartel-style way of limiting supply to push prices up), then a 4M weight template might already account for 2 or more blocks worth of transactions. (signet blocks are mostly limited to 1M weight, for example) A double-size block wouldn't give you an ideal predictor for the next block: it would miss timelocked transactions and transactions that exceed ancestor/descendant limits for a single block, though equally a sufficiently clever peer could just include those in a template anyway, especially if consensus/locktime checks weren't being performed. At least they could if they ever saw them in the first place, which is presumably doubtful. I'm not sure what level of template overlap is likely in the real world -- if it's just "everyone thinks these txs are valid, but some nodes don't think these are, and others don't think these other ones are", maybe that just adds up to ~6MB of memory total across all your peers (even if there are simultaneous surges across both unusual types of transactions) plus maybe 15% to account for churn. On the other hand, at times there might conceivably be a lot of double spends in the top of various mempools (perhaps posted to the network by chainanalysis-type nodes in order to try to discover the p2p network's structure, to help figure out where txs came from or which nodes are most likely to be building templates for mining). If that's the case, it might not be sensible to try to keep all your peers' conflicting template txs in memory, and I think there's probably a variety of different approaches that might be worth trying there.
The timelocks could be relaxed analogously to increasing the size, however: with MTP we essentially know the lock criteria one block out. I don't really think the memory usage is a concern, because there is just no need to constantly do this with all peers in the face of memory pressure. You could add an additional message that lets peers say "I have X fees in Y weight in my template" -- if the max size they were targeting was standardized, this would be a pretty good indicator that someone you haven't been syncing against has something you might want to see. Perhaps multiple breakpoints, like a little fee/weight curve communicated in a dozen bytes.
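To illustrate how compact such a summary could be, here is one possible, entirely hypothetical encoding of a two-breakpoint fee/weight curve in exactly 12 bytes; nothing like this exists in the PR, it only shows the idea fits in "a dozen bytes":

```python
import struct

# Hypothetical encoding: two breakpoints, each a cumulative weight in
# kWU (uint16) and a cumulative fee in sats (uint32), little-endian.
# 2 * (2 + 4) = 12 bytes total. Note uint32 caps cumulative fees at
# ~42.9 BTC, which is ample for the top of a template.
def encode_fee_curve(points):
    assert len(points) == 2
    return b"".join(struct.pack("<HI", w_kwu, fee_sat)
                    for w_kwu, fee_sat in points)

def decode_fee_curve(blob):
    return [struct.unpack_from("<HI", blob, off) for off in (0, 6)]
```

A receiver could compare such a curve against its own template's fee/weight profile to decide whether syncing templates with that peer is worthwhile.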
Nice idea. A couple of questions:
Maybe a stupid question and I'm not understanding something, but what is the point of this if we have to wait for all transactions to be announced? Isn't the whole idea to make sure our peers know about what we think will be the next block?
There's no need to fully validate these transactions you are given. If they violate your own policy you just won't include them in your own mempool (or include them in a block template).
The same way they build a block template for the
This doesn't aim to improve tx relay in most cases. The scenario this helps with is when a subset of the network has a relaxed policy compared to your node (eg, the librerelay nodes on the network vs you; or the core nodes on the network vs a default knots node; or nodes that have adopted newer policy/replacement rules (eg TRUC, lower min fee, pay to anchor, etc) vs an older/non-enabled node). In that case, if you happen to peer with one of those nodes that have a relaxed policy, and you request templates from that node, you'll be able to reconstruct blocks mined with that relaxed policy without needing a round trip. (The main case where it might improve relay is when a tx was evicted from many but not all mempools due to size limits, but eventually mempools clear and it's eligible again, yet has not already been widely rebroadcast for other reasons. In some circumstances it also might help with replacement cycling attacks, allowing the victim transaction to automatically be rebroadcast once the conflicting tx has been replaced.)
Note that this PR does not include code for requesting templates, only providing them; so there's no change here from this PR alone. The proof of concept code linked from https://delvingbitcoin.org/t/sharing-block-templates/1906/ that does request templates grabs templates from outbound/manual peers, and validates any txs in templates that aren't in the mempool according to the usual standardness rules, adding them to the mempool if they pass. That's no more harmful (CPU-wise) than receiving the same txs via INV/GETDATA, and the memory usage is limited by only keeping one template per selected peer, and rejecting templates that are more than 1MvB. If you were to validate txs in templates against consensus rules only (without the additional standardness rules), then that could be more costly (you'd be missing the tx size limit and BIP 54 rules so might be hitting significant amounts of hashing), though. The proof of concept code doesn't do that.
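The receiving-side behaviour described above (from the linked proof of concept, not from this PR) might look roughly like this; `process_received_template` and `passes_standardness` are hypothetical stand-ins for the PoC's logic and the usual mempool acceptance checks:

```python
MAX_TEMPLATE_VSIZE = 1_000_000  # 1 MvB cap, as in the proof of concept

def process_received_template(template_txs, mempool, passes_standardness):
    """Sketch of the receiving side: reject oversized templates, run any
    unknown txs through standardness-based acceptance, and return the
    template to be retained as the single template kept for this peer."""
    if sum(tx["vsize"] for tx in template_txs) > MAX_TEMPLATE_VSIZE:
        return None  # reject: template larger than 1 MvB
    for tx in template_txs:
        if tx["txid"] not in mempool and passes_standardness(tx):
            mempool[tx["txid"]] = tx  # no more costly than INV/GETDATA relay
    return template_txs
```

Validating only against consensus rules instead of `passes_standardness` would be the more expensive variant the comment warns about; this sketch keeps the cheap standardness path.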
If transactions in templates are ordered by (modified) fee rate, then you could keep the first transactions in a template, which might help you preserve transactions that were paid for out-of-band if you have a peer that's aware of the out-of-band payments. That could also work for things like mempool.space's accelerator product, where there's an API that will tell you about the prioritised transactions, provided a sufficient number of nodes use the API and prioritise transactions accordingly.
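Keeping the top of a feerate-ordered template could be as simple as the following sketch (hypothetical names; assumes the sender ordered txs by descending modified feerate, as discussed):

```python
def top_of_template(template_txs, keep_vsize):
    """Keep the prefix of a feerate-ordered template up to keep_vsize
    virtual bytes, e.g. to preserve txs a peer prioritised out-of-band."""
    kept, used = [], 0
    for tx in template_txs:
        if used + tx["vsize"] > keep_vsize:
            break
        kept.append(tx)
        used += tx["vsize"]
    return kept
```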
If you include a transaction in a template you send to a peer, that implicitly announces the tx to that peer. The reason we delay announcing txs to a peer is to add some uncertainty about when precisely we received a transaction, and the order in which we received similarly recent transactions -- knowing with precision when many nodes in the network first heard about transactions allows you to make a very good guess about how the network is structured and who heard about a transaction first, which makes it easier to work out who created the transaction (bad for privacy), and possibly to identify weak points in the p2p network to better plan DoS attacks. So preserving the same timing uncertainty when adding a potentially new way to announce txs seems worthwhile.
Force-pushed from 355e0b2 to 26dc385
Bumped the size of templates up to 2MvB, and added
🚧 At least one of the CI tasks failed.
Hints: try to run the tests locally, according to the documentation. However, a CI failure may still
Leave a comment here if you need help tracking down a confusing failure.
Constructing a BlockAssembler with ALLOW_OVERSIZED_BLOCKS allows oversized blocks.
This allows default initialisation of a PartiallyDownloadedBlock.
Also covers functional test setup.
This adds support for SENDTEMPLATE, GETTEMPLATE and TEMPLATE p2p messages to generate and send block templates to our peers. It does not support requesting templates from our peers.
Allows extra transactions to be provided from structures other than a vector of CTransactionRefs.
Shared templates can never predict a future block's coinbase, so don't waste bandwidth including it; likewise don't include the first transaction in the compact block prefill when it's not a coinbase.
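A toy illustration of the prefill behaviour that commit describes (hypothetical helper, not the PR's C++ code): since a template has no coinbase, the transaction at index 0 gets no special treatment and is prefilled whenever the peer isn't expected to already have it:

```python
def template_prefill(template_txs, peer_known_txids):
    """Choose (index, tx) prefill entries for a template's compact-block
    style encoding. Unlike a real block, there is no coinbase, so index 0
    is handled exactly like every other position."""
    return [(i, tx) for i, tx in enumerate(template_txs)
            if tx["txid"] not in peer_known_txids]
```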
Force-pushed from 26dc385 to 75365cf
Implements the sending side of the SENDTEMPLATE / GETTEMPLATE / TEMPLATE message scheme for sharing block templates with peers via compact block encoding/reconstruction.