validation: mempool validation and submission for packages of 1 child + parents #22674
Conversation
Force-pushed from 8d1964c to db6451a
Concept ACK. Just reviewing my notes from the L2 onchain support workshops. It was agreed that restricting to 1 child in a package would be sufficient for the discussed L2 use cases (at least for now), and this PR implements multiple parents + 1 child packages rather than restricting to 1 parent + 1 child because of the minimal increase in implementation complexity.
Thank you @michaelfolkson. Indeed you are correct: we discussed that multiple parents + 1 child would be best, but 1 parent + 1 child would be sufficient for addressing L2 use cases. After speaking with @TheBlueMatt offline and getting further into the implementation, I realized that the multi-parent case would be significantly more helpful without being that much more complex.
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.
Conflicts: Reviewers, this pull request conflicts with the following ones:
If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
This came, in part, out of a discussion between @ariard and me about an implementation @ariard had for the lightning "anchor output" protocol in LDK[1], and later a discussion between @sdaftuar and me about how practical it is to avoid some of the complexity @ariard suggested there. Specifically, in order to work around issues with potential package relay systems, @ariard suggested a somewhat complicated approach involving chaining spends and broadcasting conflicts to see which one confirms first. This avoids us having to implement the "one on-chain output for future feerate bumping per open channel" logic that others have, which is incredibly wasteful of on-chain space and also adds a lot of complexity for LDK's downstream users. If we can do replacement and multiple (no-onchain-parent) parents without a lot of complexity, it will save us a lot of heartache later. @sdaftuar suggested that it likely won't add a lot of complexity, so I suggested that to @glozow, who seemed to agree.
No, I'm going to stick to the domino-bumping approach we previously discussed. Multiple parents aren't safe for lightning, as they allow a counterparty to delay the confirmation of a specific channel commitment by overbidding on any bundled commitment inside the same package. Let's say you broadcast the package A+B+C+D+E, where A, B, C, D are commitment transactions and E a common CPFP. Previously, your malicious counterparty has submitted a better-feerate package A'+F to the network mempools. When your package {A,B,C,D,E} is submitted for acceptance, it will fail on the better-feerate requirement: Line 830 in fdd80b0
(fwiw, not documented in bip125). As you don't know the failure reason from the honest LN node's viewpoint, it's hard to decide on the next broadcast strategy: either split the former package into X isolated one-parent+child ones, because you suspect your counterparty of meddling with your transaction confirmation (similar to lightning/bolts#803), or attribute the failure to feerate efficiency w.r.t. the top-of-mempool transactions. Note, your counterparty doesn't have to be malicious for your multiple-parents package to fail confirmation. An honest counterparty can just have decided to go on-chain concurrently, with better fee-estimation than you. Of course, I think this unsafety is only concerning in the case of time-sensitive confirmations. If all your commitments are devoid of HTLC outputs, it's not time-sensitive anymore, and it's okay to have confirmation failure in the edge cases; no funds are lost, beyond the timevalue of liquidity. Further, wearing a Core dev hat, I would be pretty conservative with multiple-transaction p2p packages. AFAIK, we already have DoS concerns when validating a single transaction that finally leads to mempool acceptance failure (see #20277 (comment)). IIRC we have not improved in this area due to a lack of agreement on an efficient anti-DoS strategy against malicious transaction submission (see #21224). If we make progress in this direction and start to penalize or slow down heavy-package propagation, we might in the future break LN software which would have made assumptions on multiple-parent support. Note, this can also end in a disaster scenario if we discover DoS vulns we wouldn't have thought of, and the Core project doesn't have a formalized process in case of internal wreckage with significant security implications for downstream projects. So back to an LN dev hat, I prefer to be conservative with the design of the LDK anchor output backend and not make assumptions on potential multiple-parents support.
For now, I think we already make far too many assumptions on Core mempool behavior, without fully understanding all the implications. Apart from the multi-parents point, I think the changes proposed are okay. Though I would recall once again, imho it would be better to present a BIP draft on the ml for this series of PRs, to agree once and for all on a consistent v0.1 package model, instead of all wasting time on local discussions split across project issues, slacks and irc...
@ariard: Core could support multiple parents, 1 child packages whilst the Lightning protocol could only support 1 parent, 1 child packages (assuming your safety assessment above for Lightning is correct)? If Lightning needs only a subset of what Core offers there is generally less of a problem. It is when Lightning needs a superset of what Core offers where there is a problem. This would need a separation of validation and submission with the Lightning protocol/implementations enforcing different submission rules to Core's submission rules. Unless you are worried about a rogue Lightning implementation not following a future Lightning spec and creating a multiple parent, 1 child package when it shouldn't?
I agree on the BIP draft.
Yes, I think I've made this comment already in #16401, about the batching use-case for LN. I think it's okay as long as you don't have a time-sensitive unilateral closure, where a counterparty might interfere with the spending of one funding output also covered by the package to decrease the confirmation odds of all other channel closures.
Please, it would be great to have more eyes on this point. Note, iirc "anchor output" is only deployed by lnd so far and the last version (
Note, the BOLTs only loosely specify the safe propagation of LN time-sensitive transactions. It's left to the knowledge of implementers for the most part (dust threshold, standard script flags, min tx size, ...). What I'm worried about is uninformed Lightning or L2 developers extending even further than today the scope of assumptions on Core's mempool/p2p behavior, with safety implications for their protocols. Especially when it's potentially fragile behavior that we might heavily revise in the future, such as the handling of DoSy transactions. Again, with the lack of a clear project philosophy on the valid set of assumptions one can make on the stability of what Core offers at the mempool/p2p level, I'm learning the lesson: better to be conservative as an upper-layer dev!
If you, Lightning devs etc. said to @glozow, Core etc. that Lightning can't cope with multiple parent, 1 child packages and can't enforce the restriction to 1 parent, 1 child packages at Layer 2 (assuming Core goes with multiple parent, 1 child), I am pretty sure Core would implement 1 parent, 1 child. This package relay project afaiu is primarily for Lightning (although there are other use cases). There are clearly some tricky things as we move between layers, but I think this particular one is easy to navigate, assuming Lightning devs can assess what they are comfortable with Core doing.
Force-pushed from db6451a to ac3e422
Force-pushed from ac3e422 to b8f2769
Rebased since #22675 was merged. I've reduced this PR's size a little bit. It no longer adds the submitrawpackage RPC, but has unit tests. Mailing list post is up.
Central place for putting package-related info. This document or parts of it can also be easily ported to other places if deemed appropriate.
Note that this code path is not ever executed yet, because ProcessNewPackage asserts test_accept=true.
As node operators are free to set their mempool policies however they please, it's possible for package transaction(s) to already be in the mempool. We definitely don't want to reject the entire package in that case (as that could be a censorship vector). We should still return the successful result to the caller, so add another result type to MempoolAcceptResult.
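The new result type described above can be sketched as follows. This is a Python toy, not Bitcoin Core code (the real type is the C++ `MempoolAcceptResult`); the member names are taken from this thread, and `classify` is a hypothetical helper showing where the already-in-mempool case fits:

```python
from enum import Enum, auto

# Result types, with names as used in this discussion; the real enum lives
# in C++ (MempoolAcceptResult::ResultType).
class ResultType(Enum):
    VALID = auto()          # fully validated, okay to submit
    INVALID = auto()        # failed at least one check
    MEMPOOL_ENTRY = auto()  # new: the tx was already in the mempool ("deduplicated")

def classify(txid, mempool_txids, passes_checks):
    """Toy dispatcher: an in-mempool tx must not fail the whole package."""
    if txid in mempool_txids:
        return ResultType.MEMPOOL_ENTRY  # report success-by-presence to the caller
    return ResultType.VALID if passes_checks else ResultType.INVALID

assert classify("a", {"a"}, True) is ResultType.MEMPOOL_ENTRY
assert classify("b", {"a"}, True) is ResultType.VALID
```

The point of the extra variant is that the caller can distinguish "we validated and accepted it" from "it was already there", without treating the latter as a rejection.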
Force-pushed from af76228 to 046e8ff
Thanks for the reviews! I've addressed comments and added the commit handling transactions already in mempool since it was the answer to a lot of review questions. Also added some tests and a doc/policy/README.md + link in doc/README.md.
I think it's better to either revert e12fafd or have past reviewers look at it.
This commit introduces a good chunk of context with implications for the rest of package relay support; it was not included in the branches previously reviewed, and it had only been ACK'ed by 1 reviewer before merge.
// different wtxid) already exists in the mempool.
//
// We don't allow replacement transactions right now, so just swap the package
// transaction for the mempool one. Note that we are ignoring the validity of the
I think this is unsafe behavior for the unwary user.
In the context of a multi-party protocol, a malicious counterparty broadcasts a transaction A' with an inflated witness. When the package A+B issued by an honest participant enters the mempool, A' is swapped in place of A. However, A has a far smaller witness, so swapping in A' effectively downgrades the package feerate. This breaks the fee-bumping expectation of the honest participant, as a malleated package is now propagating on the network instead of the higher-feerate one.
As of today, I don't know of deployed protocols which would be affected by this package feerate-downgrade issue. That said, with the activation of Taproot it's now far easier to introduce vulnerable applications/protocols, as shared owners of a utxo might have witnesses unfairly asymmetric in weight due to the script-path-spend control block.
I assume this behavior should be corrected if witness replacement is implemented. However, I think we shouldn't assume that's done before package relay. In the meantime, I think it's better not to introduce unsafe or obscure behaviors, whose correction might be missed by future reviewers. I would say it's better to just fail package acceptance if a same-txid, different-wtxid transaction already exists in the mempool (same behavior as single-transaction acceptance in PreChecks).
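To make the weight asymmetry concrete, here is a minimal arithmetic sketch of the feerate-downgrade scenario. All fee and size numbers are made up for illustration; the only structural assumption is that A and A' share a txid (same inputs/outputs, hence same fee) but differ in witness weight:

```python
def feerate(fee_sats, vsize_vb):
    """Feerate in sat/vB."""
    return fee_sats / vsize_vb

# Hypothetical numbers:
fee_a, vsize_a = 500, 150        # honest A, compact witness
vsize_a_prime = 400              # malleated A': same fee, bloated witness
fee_b, vsize_b = 5000, 200       # child B carries the fee bump

honest_rate = feerate(fee_a + fee_b, vsize_a + vsize_b)
mutated_rate = feerate(fee_a + fee_b, vsize_a_prime + vsize_b)

# The malleated bundle propagates at a lower feerate than intended:
assert mutated_rate < honest_rate
```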
In the context of a multi-party protocol, a malicious counterparty broadcasts a transaction A' with an inflated witness. When the package A+B issued by an honest participant enters the mempool, A' is swapped in place of A. However, A has a far smaller witness, so swapping in A' effectively downgrades the package feerate.
As specified in the proposal: "We use the package feerate of the package after deduplication" which is intended to avoid an issue like this, and others. More rationale is included in the proposal.
This breaks the fee-bumping expectation of the honest participant, as a malleated package is now propagating on the network instead of the higher-feerate one.
Given that the feerate of A' was sufficient to enter the mempool, this is made no worse or better with the introduction of package mempool accept, as we're treating A' vs A identically to how we treat individual transactions. We will deduplicate A from the package and assess B individually; its package feerate only includes the transactions that are not already in the mempool. Of course this means the ancestor score of B once everything is submitted is lower, but the danger has not been made worse by package mempool acceptance, as we would have the same result if B was received as an individual transaction.
I assume this behavior should be corrected if witness replacement is implemented.
Yes exactly. I still maintain that package RBF and witness replacements can be worked on in parallel; both are improvements but neither requires the other. I've tried to document the limitation (known to both single transactions and packages) in the code, as seen in this diff.
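The deduplication rule quoted above ("package feerate of the package after deduplication") can be sketched with a toy helper. This is a hypothetical function with made-up numbers, not Bitcoin Core code; it only illustrates that an in-mempool A' leaves B assessed exactly as if B had arrived alone:

```python
def package_feerate(txs, mempool_txids):
    """Package feerate (sat/vB) over the post-deduplication transactions."""
    new = [t for t in txs if t["txid"] not in mempool_txids]
    if not new:
        return None  # everything deduplicated; nothing left to validate
    return sum(t["fee"] for t in new) / sum(t["vsize"] for t in new)

A = {"txid": "a", "fee": 500, "vsize": 150}
B = {"txid": "b", "fee": 5000, "vsize": 200}

# A' (same txid "a", different witness) already sits in the mempool, so A is
# deduplicated and only B's own feerate is evaluated:
assert package_feerate([A, B], mempool_txids={"a"}) == B["fee"] / B["vsize"]
# With an empty mempool the joint feerate applies instead:
assert package_feerate([A, B], mempool_txids=set()) == 5500 / 350
```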
I would say it's better to just fail package acceptance if a same-txid, different-wtxid transaction already exists in the mempool (same behavior as single-transaction acceptance in PreChecks).
I think this is worse than the bloated-witness problem acknowledged above. If A' is already in the mempool and:
- A and B are received separately as individual transactions -> A is rejected and B is accepted
- A+B is received as a package -> package fails, B is rejected.
This makes package acceptance more strict than individual transaction acceptance, which is something we want to avoid.
I agree with @glozow's proposed behavior to use the existing mempool transaction (for now) in case it has a different wtxid. From what I see, the worst that can happen is that the sender proposed a package with feerate F1, and it ends up having feerate F2 < F1.
A sender that doesn't see its transaction included should keep bumping the child, so it should eventually confirm; it will just cost more than expected, but I don't think it introduces a fundamental attack vector (at least I currently don't see one).
And we can later implement replacement to choose between A and A' the one with the smallest weight (which is incentive-compatible, as it pays a higher feerate).
As specified in the proposal: "We use the package feerate of the package after deduplication" which is intended to avoid an issue like this, and others. More rationale is included in the proposal.
Yes, I've gone through the gist, and it doesn't specify in detail how the deduplication operates, at least for the witness replacement case (I think?).
Given that the feerate of A' was sufficient to enter the mempool, this is made no worse or better with the introduction of package mempool accept, as we're treating A' vs A identically to how we treat individual transactions. We will deduplicate A from the package and assess B individually; its package feerate only includes the transactions that are not already in the mempool. Of course this means the ancestor score of B once everything is submitted is lower, but the danger has not been made worse by package mempool acceptance, as we would have the same result if B was received as an individual transaction.
Yes, I agree package mempool accept hasn't worsened that case compared to individual transaction processing.
Note, my comment was about the future, when package relay is supported, if we forget to implement incentive-compatible witness replacement in the case of an incoming A' smaller than an already-in-mempool A. If we relay the package with deduplicated transactions over p2p, that would constitute an attack vector, as A + B has a smaller feerate than A' + B.
Currently, this is the code behavior, though I think we all agree to correct that in the future. I think it would be nice to either update the gist or leave a more detailed comment in the code, to ease future review/implementation work.
Yes exactly. I still maintain that package RBF and witness replacements can be worked on in parallel
I don't fundamentally disagree with working on them in parallel, beyond the scarcity of review bandwidth.
This makes package acceptance more strict than individual transaction acceptance, which is something we want to avoid.
Actually, I think package acceptance is already stricter than individual transaction acceptance, at least at package limits evaluation (CheckPackageLimits). User-wise, I still think it's better to fail loudly than to silently accept a mutant, though if we can't agree here, maybe at least warn the user about the wtxid swap?
// Nothing to do if the entire package has already been submitted.
if (txns_new.empty()) return PackageMempoolAcceptResult(package_state, std::move(results));
// Validate the (deduplicated) transactions as a package.
auto submission_result = AcceptMultipleTransactions(txns_new, args);
As future package components should be evaluated on the union of their fees rather than on individual transaction fees to decide acceptance, careless deduplication could provoke unexpected package submission failures.
Let's say you have the package A+B+C, where A and B are parents of C; A and C are low-feerate, B is high-feerate. If B is already in the mempool, then when the package A+B+C is received, B is deduplicated, the union of A+C's fees is computed, it's too low to satisfy the mempool min fee, and the package is rejected.
To avoid this behavior, we could compute the union of fees pre-dedup in AcceptPackage, then pass the result down to AcceptMultipleTransactions. It's an open question whether a future testmempoolaccept of multiple transactions should also be based on the union of fees, or whether we think that's an acceptable divergence w.r.t. package submission. I think we should have a TODO pointing to that context.
Further, I think deduplication also introduces issues w.r.t. ancestor/descendant limits. If we decide to relay the package on a positive result from AcceptMultipleTransactions, a deduplicated package might have lower ancestor/descendant counts than the complete one we announce to our peers.
Let's say you have the package A+B+C, where A and B are parents of C; A and C are low-feerate, B is high-feerate. If B is already in the mempool, then when the package A+B+C is received, B is deduplicated, the union of A+C's fees is too low to satisfy the mempool min fee, and the package is rejected.
To avoid this behavior, we could compute the union of fees pre-dedup in AcceptPackage, then pass the result down to AcceptMultipleTransactions.
It is actually incentive-compatible to reject this package. B's fees should not supplement A's fees, because B does not depend on A; including B in the package feerate would be "sibling-pays-for-sibling" behavior. I've illustrated this in examples Q1 and Q2.
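The topology rule being applied can be shown with a tiny sketch (a hypothetical helper, not Bitcoin Core code): only a transaction that depends on another, i.e. has it as an ancestor, may contribute fees toward it, which is exactly what excludes sibling B from A's package feerate.

```python
# Hypothetical topology: C spends outputs of both A and B, so A and B are
# C's parents (and each other's siblings).
parents = {"A": set(), "B": set(), "C": {"A", "B"}}

def may_pay_for(payer, target):
    """CPFP-style rule: payer's fees may count toward target only if
    target is among payer's (transitive) ancestors."""
    ancestors = set(parents[payer])
    frontier = list(ancestors)
    while frontier:  # walk the ancestry transitively
        tx = frontier.pop()
        for p in parents[tx]:
            if p not in ancestors:
                ancestors.add(p)
                frontier.append(p)
    return target in ancestors

assert may_pay_for("C", "A")      # child C can bump parent A
assert not may_pay_for("B", "A")  # sibling B cannot: "sibling-pays-for-sibling"
```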
Further, I think deduplication also introduces issues w.r.t. ancestor/descendant limits. If we decide to relay the package on a positive result from AcceptMultipleTransactions, a deduplicated package might have lower ancestor/descendant counts than the complete one we announce to our peers.
No, we'll still enforce ancestor/descendant limits on the package in the same way. The package transactions that are already in the mempool will be included in ancestor counts. The nice thing about a child-with-parents package is that all package transactions are ancestors of the child transaction so we won't under/overestimate :).
Since we always have all of the unconfirmed parents in the package, mempool contents won't affect whether our package is within the limits specified by the protocol.
For example, if we can't have more than 25 in a package but somebody has a package where the child has 30 in-mempool ancestors, regardless of the size of the package after deduplication, the package is defined as child-with-all-unconfirmed-parents. Thus, since it has 30 unconfirmed ancestors, the defined package has a size of 31 and is too big.
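The arithmetic of that example, spelled out (the 25-transaction limit is the figure used in the comment above):

```python
PACKAGE_ANCESTOR_LIMIT = 25   # limit from the example above

unconfirmed_ancestors = 30    # the child's unconfirmed (in-mempool) ancestors
package_size = unconfirmed_ancestors + 1  # child-with-all-unconfirmed-parents

# Regardless of how many of those ancestors deduplicate away before
# validation, the *defined* package has 31 transactions and is too big:
assert package_size == 31
assert package_size > PACKAGE_ANCESTOR_LIMIT
```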
It is actually incentive-compatible to reject this package. B's fees should not supplement A's fees, because B does not depend on A; including B in the package feerate would be "sibling-pays-for-sibling" behavior.
I agree with that. @ariard if you also agree with that, is the rest of your comment resolved? Or is there a part I'm missing?
It is actually incentive-compatible to reject this package. B's fees should not supplement A's fees, because B does not depend on A; including B in the package feerate would be "sibling-pays-for-sibling" behavior. I've illustrated this in examples Q1 and Q2.
You're right, that wouldn't be incentive-compatible. It also points out that a naive implementation of the fee union would be flawed, and that package fees should be computed in topologically descending order.
No, we'll still enforce ancestor/descendant limits on the package in the same way. The package transactions that are already in the mempool will be included in ancestor counts. The nice thing about a child-with-parents package is that all package transactions are ancestors of the child transaction so we won't under/overestimate :).
Right, at the individual transaction level. At the package level, in CheckPackageLimits, which is processed after deduplication, a parent transaction is accounted both as a child ancestor and as a package member in the check against limitAncestorCount (L271 in src/txmempool.cpp). If we prune the package size at deduplication, we might pass that check, which would fail for a peer whose mempool lacks the deduplicated elements?
At the package level, in CheckPackageLimits, which is processed after deduplication, a parent transaction is accounted both as a child ancestor and as a package member in the check against limitAncestorCount (L271 in src/txmempool.cpp). If we prune the package size at deduplication, we might pass that check, which would fail for a peer whose mempool lacks the deduplicated elements?
Nope, this is a fair concern, but I made sure to make it impossible for a package transaction to be double-counted after de-duplication. See this assertion that transactions passed in to CheckPackageLimits() must not be in the mempool:
Lines 931 to 936 in 8c0bd87
// CheckPackageLimits expects the package transactions to not already be in the mempool.
assert(std::all_of(txns.cbegin(), txns.cend(), [this](const auto& tx)
{ return !m_pool.exists(GenTxid::Txid(tx->GetHash()));}));
std::string err_string;
if (!m_pool.CheckPackageLimits(txns, m_limit_ancestors, m_limit_ancestor_size, m_limit_descendants,
Yes, my concern is if we relay the deduplicated transactions as part of the p2p package we announce to our peers. A package is evaluated as valid by Alice because she deduplicates half of the package due to already-in-mempool transactions. The whole package is relayed to Bob, who has an empty mempool. The whole package is evaluated and fails against CheckPackageLimits.
That said, I'm making assumptions on the p2p package mechanism, which has not yet been formalized. I'm making a mental note, and I would say let's think about it more when p2p is introduced. We can still correct AcceptPackage back then.
Thanks @ariard for the prompt re-review!
(And re: IRC convos): I personally think e12fafd made sense for this PR and would have included it in the next one anyway, but I'm obviously biased as the author. I definitely welcome more review on it and can hold off on opening the next package mempool accept PR until it seems people are comfortable. If you have time @jnewbery @t-bast @stickies-v @achow101, we're requesting a look at e12fafd which was added between your ACKs and the merge. Thanks! :)
e12fafd looks good to me, I'm just unsure about the change in the tx results key set...
// Package testmempoolaccept doesn't allow transactions to already be in the mempool.
CHECK_NONFATAL(tx_result.m_result_type != MempoolAcceptResult::ResultType::MEMPOOL_ENTRY);
How is this enforced? More specifically, what happens if testmempoolaccept is called with multiple transactions where one or more is already in the mempool? It looks to me like the assert in PackageMempoolChecks() could be triggered.
testmempoolaccept doesn't allow already-in-mempool transactions, so it will be rejected.
3cd7f69 [unit test] package parents are a mix (glozow) de075a9 [validation] better handle errors in SubmitPackage (glozow) 9d88853 AcceptPackage fixups (glozow) 2db77cd [unit test] different witness in package submission (glozow) 9ad211c [doc] more detailed explanation for deduplication (glozow) 83d4fb7 [packages] return DIFFERENT_WITNESS for same-txid-different-witness tx (glozow) Pull request description: This addresses some comments from review on e12fafd from #22674. - Improve documentation about de-duplication: [comment](https://github.com/bitcoin/bitcoin/pull/22674/files#r770156708) - Fix code looking up same-txid-different-witness transaction in mempool: [comment](https://github.com/bitcoin/bitcoin/pull/22674/files#r770804029) - Improve the interface for when a same-txid-different-witness transaction is swapped: [comment](https://github.com/bitcoin/bitcoin/pull/22674/files#r770782822) - Add a test for witness swapping: [comment](https://github.com/bitcoin/bitcoin/pull/22674/files#r770804029) - Add a test for packages with a mix of duplicate/different witness/new parents: [comment](#22674 (comment)) - Fix issue with not notifying `CValidationInterface` when there's a partial submission due to fail-fast: [comment](#22674 (comment)) ACKs for top commit: achow101: ACK 3cd7f69 t-bast: LGTM, ACK 3cd7f69 instagibbs: ACK 3cd7f69 ariard: ACK 3cd7f69 Tree-SHA512: a5d86ca86edab80a5a05fcbb828901c058b3f2fa2552912ea52f2871e29c3cf4cc34020e7aac2217959c9c3a01856f4bd3d631d844635b98144f212f76c2f3ef
77202f0 [doc] package deduplication (glozow) d35a3cb [doc] clarify inaccurate comment about replacements paying higher feerate (glozow) 5ae187f [validation] look up transaction by txid (glozow) Pull request description: - Use txid, not wtxid, for `mempool.GetIter()`: bitcoin/bitcoin#22674 (comment) - Fix a historically inaccurate comment about RBF during the refactors: bitcoin/bitcoin#22855 (comment) - Add a section about package deduplication to policy/packages.md: bitcoin/bitcoin#24152 (comment) and bitcoin/bitcoin#24152 (comment) (I'm intending for this to be in v23 since it's fixups for things that are already merged, which is why I split it from #24152) ACKs for top commit: t-bast: LGTM, ACK bitcoin/bitcoin@77202f0 darosior: ACK 77202f0 LarryRuane: ACK 77202f0 Tree-SHA512: a428e791dfa59c359d3ccc67e8d3a4c1239815d2f6b29898e129700079271c00b3a45f091f70b65a6e54aa00a3d5b678b6da29d2a76b6cd6f946eaa7082ea696
Summary: This is a partial backport of [[bitcoin/bitcoin#22674 | core#22674]] bitcoin/bitcoin@9b2fdca bitcoin/bitcoin@ba26169 Test Plan: `ninja && ninja check` Reviewers: #bitcoin_abc, Fabien Reviewed By: #bitcoin_abc, Fabien Subscribers: Fabien Differential Revision: https://reviews.bitcoinabc.org/D12474
Summary: Central place for putting package-related info. This document or parts of it can also be easily ported to other places if deemed appropriate. This is a backport of [[bitcoin/bitcoin#22674 | core#22674]] bitcoin/bitcoin@d59ddc5 Test Plan: proofreading Reviewers: #bitcoin_abc, Fabien Reviewed By: #bitcoin_abc, Fabien Subscribers: Fabien Differential Revision: https://reviews.bitcoinabc.org/D12479
… be child-with-unconfirmed-parents Summary: > [policy] require submitted packages to be child-with-unconfirmed-parents bitcoin/bitcoin@144a290 ---- > [validation] full package accept + mempool submission bitcoin/bitcoin@be3ff15 ---- > [packages] add sanity checks for package vs mempool limits bitcoin/bitcoin@8310d94 ---- > [unit test] package submission bitcoin/bitcoin@046e8ff Note: the last test for Already-in-mempool transactions is skipped and will be with in the next commit. This is a partial backport of [[bitcoin/bitcoin#22674 | core#22674]] Test Plan: `ninja all check-all` Reviewers: #bitcoin_abc, Fabien Reviewed By: #bitcoin_abc, Fabien Subscribers: Fabien Differential Revision: https://reviews.bitcoinabc.org/D12480
Summary: As node operators are free to set their mempool policies however they please, it's possible for package transaction(s) to already be in the mempool. We definitely don't want to reject the entire package in that case (as that could be a censorship vector). We should still return the successful result to the caller, so add another result type to MempoolAcceptResult. This concludes backport of [[bitcoin/bitcoin#22674 | core#22674]] bitcoin/bitcoin@e12fafd bitcoin/bitcoin@046e8ff (partial) Depends on D12480 Notes: - because eCash does not have RBF, our constructor for `MempoolAcceptResult` (success case) would have the exact same signature as the new constructor (already-in-mempool). So in stead of adding a new constructor I made the existing one more generic to handle both cases. The rest of the codebase does not directly call this constructor, it always uses either `MempoolAcceptResult::MempoolTx` or `MempoolAcceptResult::Success`. - We do not need to handle the same-txid-diff-wtxid case, so the code in `AcceptPackage` is simplified for Bitcoin ABC. Test Plan: `ninja all check-all` Reviewers: #bitcoin_abc, Fabien Reviewed By: #bitcoin_abc, Fabien Subscribers: Fabien Differential Revision: https://reviews.bitcoinabc.org/D12481
This is 1 chunk of Package Mempool Accept; it restricts packages to 1 child with its parents, doesn't allow conflicts, and doesn't have CPFP (yet). Future PRs (see #22290) will add RBF and CPFP within packages.