net: Provide block templates to peers on request #33191

Open
ajtowns wants to merge 10 commits into master from 202508-sendtemplate1

Conversation

ajtowns
Contributor

@ajtowns ajtowns commented Aug 14, 2025

Implements the sending side of the SENDTEMPLATE / GETTEMPLATE / TEMPLATE message scheme for sharing block templates with peers via compact block encoding/reconstruction.
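
For reviewers skimming the thread, a rough sketch of what a TEMPLATE payload might carry, modeled on the compact block (BIP 152) encoding the PR reuses; the struct and field names below are illustrative assumptions, not the PR's actual serialization:

```cpp
// Illustrative only: assumed shape of a TEMPLATE payload, not the PR's code.
#include <cstdint>
#include <vector>

struct TemplateMsgSketch {
    uint64_t shorttxid_nonce;                     // salts the 6-byte short ids, as in BIP 152
    std::vector<uint64_t> short_txids;            // short ids of template txs (low 48 bits used)
    std::vector<std::vector<uint8_t>> prefilled;  // serialized txs the peer likely lacks; no coinbase
};
```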

@DrahtBot DrahtBot added the P2P label Aug 14, 2025
@DrahtBot
Contributor

DrahtBot commented Aug 14, 2025

The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Code Coverage & Benchmarks

For details see: https://corecheck.dev/bitcoin/bitcoin/pulls/33191.

Reviews

See the guideline for information on the review process.
A summary of reviews will appear here.

Conflicts

Reviewers, this pull request conflicts with the following ones:

  • #30595 (kernel: Introduce initial C header API by TheCharlatan)
  • #30277 ([DO NOT MERGE] Erlay: bandwidth-efficient transaction relay protocol (Full implementation) by sr-gi)
  • #29641 (scripted-diff: Use LogInfo over LogPrintf [WIP, NOMERGE, DRAFT] by maflcko)
  • #17783 (common: Disallow calling IsArgSet() on ALLOW_LIST options by ryanofsky)
  • #17581 (refactor: Remove settings merge reverse precedence code by ryanofsky)
  • #17580 (refactor: Add ALLOW_LIST flags and enforce usage in CheckArgFlags by ryanofsky)
  • #17493 (util: Forbid ambiguous multiple assignments in config file by ryanofsky)

If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

This removes "CreateNewBlock" messages from logs by default. Also changes
"-printpriority" argument to automatically enable -debug=miner logging.
@DrahtBot
Contributor

🚧 At least one of the CI tasks failed.
Task lint: https://github.com/bitcoin/bitcoin/runs/48132397637
LLM reason (✨ experimental): The CI failure is caused by a circular dependency detected during lint checks.

Hints

Try to run the tests locally, according to the documentation. However, a CI failure may still
happen due to a number of reasons, for example:

  • Possibly due to a silent merge conflict (the changes in this pull request being
    incompatible with the current code in the target branch). If so, make sure to rebase on the latest
    commit of the target branch.

  • A sanitizer issue, which can only be found by compiling with the sanitizer and running the
    affected test.

  • An intermittent issue.

Leave a comment here, if you need help tracking down a confusing failure.

@gmaxwell
Contributor

I like this, but is there a way to avoid reintroducing the mempool message transaction spying vulnerability? Construct the block skipping txn that have not yet been announced?

What are your thoughts on the size of the template? I don't think it makes sense to limit them to one block in size; perhaps it would be reasonable for them to be 2 blocks in size so it can always run one block ahead?

@sipa
Member

sipa commented Aug 18, 2025

It may be hard to do that perfectly; for example, if a recent CPFP occurred that bumped a low-fee old parent up high enough to make it into the template, but the child has not been announced to the peer yet. The child can be skipped when sending, but the presence of the parent would stand out and reveal the sender knew about the child.

If latency is not very important, there may be a simpler solution: compute the template, and schedule it for sending, but only send it as soon as all transactions in it have been announced.
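
A minimal sketch of that scheduling check, assuming a per-peer record of announced txids (the types and names here are placeholders, not Bitcoin Core's):

```cpp
#include <array>
#include <cstdint>
#include <set>
#include <vector>

using Txid = std::array<uint8_t, 32>;  // placeholder txid type

// Send the queued template only once every transaction in it has already
// been announced (INV'd) to this peer, so the template reveals nothing
// about when we first heard about those transactions.
bool ReadyToSend(const std::vector<Txid>& template_txids,
                 const std::set<Txid>& announced_to_peer)
{
    for (const auto& txid : template_txids) {
        if (!announced_to_peer.count(txid)) return false;
    }
    return true;
}
```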

@ajtowns
Contributor Author

ajtowns commented Aug 18, 2025

I like this, but is there a way to avoid reintroducing the mempool message transaction spying vulnerability? Construct the block skipping txn that have not yet been announced?

Each template is annotated with the mempool's GetSequence() and a template won't be presented to a peer if its m_last_inv_sequence isn't at least that value. Since we announce the highest fee txs first when updating m_last_inv_sequence, that should mean they'll have already seen any txs in the template via INV (or implicitly via INV'ing a child/descendant).

Currently it will just immediately give an old template if the current template is too new; it would also be possible to delay providing the current template until m_last_inv_sequence is bumped when the next INV is sent, but that would have been a little more complicated and I wanted to keep things as simple as possible.
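
Roughly, that check looks like this (a sketch; only GetSequence()/m_last_inv_sequence come from the description above, the surrounding structure is assumed):

```cpp
#include <cstdint>

struct CachedTemplate {
    uint64_t mempool_sequence;  // mempool GetSequence() captured when the template was built
    // ... compact-block-encoded template data ...
};

// Serve the newest template whose transactions the peer has already been
// told about via INV (i.e. m_last_inv_sequence has caught up); otherwise
// fall back to an older cached template rather than leaking fresh txs.
const CachedTemplate* SelectTemplateForPeer(const CachedTemplate& current,
                                            const CachedTemplate* previous,
                                            uint64_t peer_last_inv_sequence)
{
    if (peer_last_inv_sequence >= current.mempool_sequence) return &current;
    return previous;  // may be null if no older template exists yet
}
```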

It may be hard to do that perfectly; for example, if a recent CPFP occurred that bumped a low-fee old parent up high enough to make it into the template, but the child has not been announced to the peer yet.

I think the above should also take care of this case already.

I guess that reveals a little more ordering info: if you receive two high fee txs (tx1 and tx2) in a single INV cycle, but generate a new template after receiving tx1 but before receiving tx2, then you'll reveal the ordering of tx1 and tx2 to your peers. So perhaps aligning template generation with INV messages to inbound peers would be a worthwhile improvement. That would perhaps be a slight leak to outbounds, but nothing they couldn't discover just by making an inbound connection to you.

What are your thoughts on the size of template, I don't think it makes sense to just limit them to being one block in size, perhaps it would be reasonable to be 2 blocks in size so it always can run one block ahead?

That seems pretty appealing; I didn't do anything along those lines here mostly to keep things simple, and because waiting for cluster mempool before building even bigger collections of packages seemed plausible. If miners were to enforce their own soft blocksize limits (perhaps to reduce blockchain growth and marginally improve IBD, or as a cartel-style way of limiting supply to push prices up), then a 4M weight template might already account for 2 or more blocks worth of transactions. (signet blocks are mostly limited to 1M weight, eg)

A double-size block wouldn't give you an ideal predictor for the next block: it would miss timelocked transactions and transactions that exceed ancestor/descendant limits for a single block, though equally a sufficiently clever peer could just include those in a template anyway, especially if consensus/locktime checks weren't being performed. At least they could if they ever saw them in the first place, which is presumably doubtful.

I'm not sure what level of template overlap is likely in the real world -- if it's just "everyone thinks these txs are valid, but some nodes don't think these are, and others don't think these other ones are", maybe that just adds up to ~6MB of memory total across all your peers (even if there are simultaneous surges across both unusual types of transactions) plus maybe 15% to account for churn. On the other hand, at times there might conceivably be a lot of double spends in the top of various mempools (perhaps posted to the network by chain-analysis-type nodes in order to try to discover the p2p network's structure, to help figure out where txs came from or which nodes are most likely to be building templates for mining). If that's the case, it might not be sensible to try to keep all your peers' conflicting template txs in memory, and I think there's probably a variety of different approaches that might be worth trying there.

@gmaxwell
Contributor

gmaxwell commented Aug 18, 2025

The timelocks could be relaxed analogous to increasing the size however. With MTP we essentially know the lock criteria for one block out.

I don't really think the memory usage is a concern -- there is just not a need to constantly do this with all peers in the face of memory pressure.

You could add an additional message that lets peers say "I have X fees in Y weight in my template" -- if the max size they were targeting was standardized then this would be a pretty good indicator when someone you haven't been syncing against has something you might want to see. Perhaps multiple breakpoints, like a little fee/weight curve communicated in a dozen bytes.
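
Something like the following could fit that in a handful of bytes per breakpoint (purely illustrative; no such message exists in this PR):

```cpp
#include <cstdint>
#include <vector>

// One breakpoint of the advertised fee/weight curve: "within the first
// `weight` weight units of my template there are `fee` sats of fees".
struct FeeCurvePoint {
    uint32_t weight;  // cumulative weight (WU)
    uint64_t fee;     // cumulative fee (sats)
};

// e.g. breakpoints at 1M, 2M, 3M and 4M WU let a peer judge whether your
// template is worth requesting without transferring it.
using FeeCurve = std::vector<FeeCurvePoint>;
```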

@ismaelsadeeq
Member

Nice idea.

A couple of questions:

  1. How do nodes build the block template they share with peers? When they use only their mempool to build those templates, I don’t think that transactions with diverging mempool policy will relay through the network and improve the compact block reconstruction process via this mechanism. Only peers whose outbound connections are miners will benefit.

  2. We don’t accept transactions in the mempool that have divergent policy because we deem them non-standard. How does this proposal prevent a scenario where an adversary stuffs transactions that are harmful to peers, forcing them to validate and save them? (I think this is where weak blocks are superior to this proposal because they show that work has been done previously.)

  3. Is there a limit to the memory used to store the transactions that violate a node’s mempool policy but are received from peers? How do you decide which to keep and which to evict when the limit is reached?
    One approach could be linearization based on incentive compatibility and then evicting low-fee transactions, although this has the drawback of not preserving transactions paid for out-of-band.

@polespinasa
Contributor

If latency is not very important, there may be a simpler solution: compute the template, and schedule it for sending, but only send it as soon as all transactions in it have been announced.

Maybe a stupid question and I'm not understanding something, but what is the point of this if we have to wait for all transactions to be announced? Isn't the whole idea to make sure our peers know about what we think will be the next block?

@instagibbs
Member

@ismaelsadeeq

How does this proposal prevent a scenario where an adversary stuff transactions that are harmful to peers, forcing them to validate and save them

There's no need to fully validate these transactions you are given. If they violate your own policy you just won't include them in your own mempool (or include them in a block template).

@ajtowns
Contributor Author

ajtowns commented Aug 21, 2025

  1. How do nodes build the block template they share with peers?

The same way they build a block template for the getblocktemplate RPC (well, except ignoring config options that might otherwise reduce the block size). See here.

When they use only their mempool to build those templates, I don’t think that transactions with diverging mempool policy will relay through the network and improve the compact block reconstruction process via this mechanism.

This doesn't aim to improve tx relay in most cases. The scenario this helps with is when a subset of the network has a relaxed policy compared to your node (eg, the librerelay nodes on the network vs you; or the core nodes on the network vs a default knots node; or nodes that have adopted newer policy/replacement rules (eg TRUC, lower min fee, pay to anchor, etc) vs an older/non-enabled node). In that case, if you happen to peer with one of those nodes that have a relaxed policy, and you request templates from that node, you'll be able to reconstruct blocks mined with that relaxed policy without needing a round trip.

(The main case where it might improve relay is when a tx was evicted from many but not all mempools due to size limits, but eventually mempools clear and it's eligible again, yet it has not already been widely rebroadcast for other reasons. In some circumstances it also might help with replacement cycling attacks, allowing the victim transaction to automatically be rebroadcast once the conflicting tx has been replaced.)
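
To make the reconstruction benefit concrete, a sketch of the lookup order a receiving node could use (the map and function here are assumptions for illustration; this PR only implements the sending side, and a receiving side would plug into the existing compact block "extra transactions" machinery):

```cpp
#include <cstdint>
#include <map>
#include <memory>

struct Tx;                               // stand-in for CTransaction
using TxRef = std::shared_ptr<const Tx>;

// When filling in a compact block, try the mempool first, then transactions
// learned from a relaxed-policy peer's template, and only request the rest.
TxRef LookupForReconstruction(uint64_t short_id,
                              const std::map<uint64_t, TxRef>& mempool_by_short_id,
                              const std::map<uint64_t, TxRef>& peer_template_txs)
{
    if (auto it = mempool_by_short_id.find(short_id); it != mempool_by_short_id.end()) return it->second;
    if (auto it = peer_template_txs.find(short_id); it != peer_template_txs.end()) return it->second;
    return nullptr;  // falls back to a getblocktxn round trip for this one
}
```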

  2. We don’t accept transactions in the mempool that have divergent policy because we deem them non-standard. How does this proposal prevent a scenario where an adversary stuffs transactions that are harmful to peers, forcing them to validate and save them?

Note that this PR does not include code for requesting templates, only providing them; so there's no change here from this PR alone.

The proof of concept code linked from https://delvingbitcoin.org/t/sharing-block-templates/1906/ that does request templates grabs templates from outbound/manual peers, and validates any txs in templates that aren't in the mempool according to the usual standardness rules, adding them to the mempool if they pass. That's no more harmful (CPU-wise) than receiving the same txs via INV/GETDATA, and the memory usage is limited by only keeping one template per selected peer, and rejecting templates that are more than 1MvB.

If you were to validate txs in templates against consensus rules only (without the additional standardness rules), then that could be more costly (you'd be missing the tx size limit and BIP 54 rules so might be hitting significant amounts of hashing), though. The proof of concept code doesn't do that.
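
In other words, the requesting side's exposure can be bounded roughly like this (a sketch of the limits described above; names and constants are illustrative rather than taken from the linked proof of concept):

```cpp
#include <cstddef>

constexpr size_t MAX_TEMPLATE_VSIZE = 1'000'000;  // reject templates over 1MvB

struct ReceivedTemplate {
    size_t vsize;
    // ... transactions ...
};

bool AcceptTemplate(const ReceivedTemplate& tmpl, bool from_outbound_or_manual)
{
    if (!from_outbound_or_manual) return false;        // only selected peers, one template each
    if (tmpl.vsize > MAX_TEMPLATE_VSIZE) return false;  // bounds memory per peer
    // Any tx not already in the mempool is then run through the usual
    // standardness (policy) checks -- never consensus-only validation --
    // so CPU cost matches ordinary INV/GETDATA relay.
    return true;
}
```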

  3. Is there a limit to the memory used to store the transactions that violate a node’s mempool policy but are received from peers? How do you decide which to keep and which to evict when the limit is reached?
    One approach could be linearization based on incentive compatibility and then evicting low-fee transactions, although this has the drawback of not preserving transactions paid for out-of-band.

If transactions in templates are ordered by (modified) fee rate, then you could keep the first transactions in a template, which might help you preserve transactions that were paid for out-of-band if you have a peer that's aware of the out-of-band payments. That could also work for things like mempool.space's accelerator product, where there's an API that will tell you about the prioritised transactions, provided a sufficient number of nodes use the API and prioritise transactions accordingly.
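
A tiny sketch of that eviction rule, assuming the receiver orders template transactions by the sender's modified feerate (names here are placeholders):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct TemplateTx {
    double modified_feerate;  // sats/vB after prioritisation, as seen by the sender
    // ... tx data ...
};

// Keep only the highest-feerate prefix when memory is tight; if the sender
// prioritised out-of-band-paid txs, those survive eviction too.
void TrimTemplate(std::vector<TemplateTx>& txs, size_t max_txs)
{
    std::sort(txs.begin(), txs.end(),
              [](const TemplateTx& a, const TemplateTx& b) { return a.modified_feerate > b.modified_feerate; });
    if (txs.size() > max_txs) txs.resize(max_txs);
}
```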

@ajtowns
Contributor Author

ajtowns commented Aug 21, 2025

Maybe a stupid question and I'm not understanding something, but what is the point of this if we have to wait for all transactions to be announced? Isn't the whole idea to make sure our peers know about what we think will be the next block?

If you include a transaction in a template you send to a peer, that implicitly announces the tx to that peer.

The reason we delay announcing txs to a peer is to add some uncertainty about when precisely we received a transaction, and about the order in which we received similarly recent transactions. Knowing precisely when many nodes in the network first heard about a transaction allows you to make a very good guess about how the network is structured and who heard about it first, which makes it easier to work out who created the transaction (bad for privacy), and possibly to identify weak points in the p2p network to better plan DoS attacks.

So preserving the same timing uncertainty when adding a potentially new way to announce txs seems worthwhile.

@ajtowns ajtowns force-pushed the 202508-sendtemplate1 branch from 355e0b2 to 26dc385 on August 21, 2025 05:50
@ajtowns
Contributor Author

ajtowns commented Aug 21, 2025

Bumped the size of templates up to 2MvB, and added latest_template_weight and latest_template_tx to gettemplateinfo to report on the size of the current template.

@DrahtBot
Contributor

🚧 At least one of the CI tasks failed.
Task ARM, unit tests, no functional tests: https://github.com/bitcoin/bitcoin/runs/48553661676
LLM reason (✨ experimental): The failure is caused by a missing initializer for the constexpr static member 'ALLOW_OVERSIZED_BLOCKS' in miner.h.

Hints

Try to run the tests locally, according to the documentation. However, a CI failure may still
happen due to a number of reasons, for example:

  • Possibly due to a silent merge conflict (the changes in this pull request being
    incompatible with the current code in the target branch). If so, make sure to rebase on the latest
    commit of the target branch.

  • A sanitizer issue, which can only be found by compiling with the sanitizer and running the
    affected test.

  • An intermittent issue.

Leave a comment here, if you need help tracking down a confusing failure.

Constructing a BlockAssembler with ALLOW_OVERSIZED_BLOCKS allows oversized blocks.
This allows default initialisation of a PartiallyDownloadedBlock.
Also covers functional test setup.
This adds support for SENDTEMPLATE, GETTEMPLATE, TEMPLATE p2p messages to generate
and send block templates to our peers. It does not support requesting templates from
our peers.
Allows extra transactions to be provided from structures other than a
vector of CTransactionRefs.
Shared templates can never predict a future block's coinbase, so don't
waste bandwidth including it; likewise don't include the first transaction
in the compact block prefill when it's not a coinbase.
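
A one-function sketch of the prefill rule in that last commit message (illustrative only; the actual change lives in the compact block encoding code):

```cpp
#include <cstddef>

// In a real block, index 0 is the coinbase and is always prefilled; in a
// shared template the coinbase can't be predicted, so index 0 is just an
// ordinary transaction and follows the usual "prefill only if the peer is
// unlikely to have it" rule.
bool ShouldPrefill(std::size_t index, bool is_template, bool peer_likely_missing)
{
    if (!is_template && index == 0) return true;
    return peer_likely_missing;
}
```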