validation: Move package acceptance size limit from KvB to WU #22097
Conversation
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Conflicts
Reviewers, this pull request conflicts with the following ones:
If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
Did you mean 4,002 bytes of witness fields?
Concept ACK
ping @rustyrussell
Wouldn't this cause the package acceptance code to accept packages that wouldn't be accepted as a series of successively-relayed transactions? E.g., Alice submits a 400,001 weight transaction (100,001 vbytes) followed by a 3,999 weight child transaction (1,000 vbytes). I don't think implementing this (with a bug fix for the above, if I'm correct) would be a disaster, but I think we should try to be consistent in the units we use in our external interfaces---so we either use vbytes everywhere or we use weight everywhere. When choosing among those two units, we eventually bump into the feerate interfaces, where we have a choice between trying to retrain a large number of users who currently think about fees in BTC (or sats) per "byte", or instead training a few dozen developers[1] that, to get vbytes from weight units, they need to put a division by four (rounded up) in the right place.

[1] Most wallets should probably be using a library that provides low-level functions for calculating sizes and feerates, so it's only the authors of those libraries who should need to read BIP141's simple and clear description of how to convert weight into vbytes.
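For reference, the BIP141 conversion the footnote alludes to is just a ceiling division by four. A minimal illustrative Python sketch (not Bitcoin Core code; the helper name is made up), checked against Alice's numbers above:

# BIP141: virtual size is weight divided by 4, rounded up.
def vbytes_from_weight(weight: int) -> int:
    return (weight + 3) // 4

parent_wu, child_wu = 400_001, 3_999   # Alice's parent and child weights
assert vbytes_from_weight(parent_wu) == 100_001
assert vbytes_from_weight(child_wu) == 1_000
# The pair fits a 404,000 WU package limit but not a 101,000 vbyte one,
# because each transaction's vsize is rounded up separately.
assert parent_wu + child_wu <= 404_000
assert vbytes_from_weight(parent_wu) + vbytes_from_weight(child_wu) > 101_000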
Good point, although I am wondering whether backend code should generally move towards weight units everywhere even if the feerate interfaces remain denominated in vbytes for the userbase.
Force-pushed from 2687e0c to 4fa72ee.
Yep, good catch, corrected in PR description.
I think the parent transaction would be rejected by our current check against MAX_STANDARD_TX_WEIGHT.
W.r.t. our mempool limits interface, I think we're already using weight in places (e.g., the MAX_STANDARD_TX_WEIGHT check).
But do we have the luxury of avoiding retraining users, or at least a subset of the userbase, to think about fees in terms of weight units? With the upcoming deployment of Taproot, you might have the choice among multiple tapscripts, offering differing feerate/interactivity trade-offs. Sticking to vbytes, such wallets would be misleading when branches differ in weight but are displayed as "equal" in vbytes. Further, LN operators and users are going to think in terms of weight, as it offers far more granularity for liquidity operations (e.g., participating in a dual-funded channel). For the rest of the users, I believe they're going to care more about the cost of basic operations, like spending from a P2WSH to one party, spending from a P2SH to two parties, or a 2-of-3 multisig, than about the unit in which it's displayed. That said, I'm not a wallet UX expert, and I think this change is motivated enough from a mempool acceptance interface viewpoint without wandering into a feerate interface discussion.
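To make the tapscript point concrete, a small illustrative Python check (the weights are made up for the example) showing two alternative spending paths that differ in weight yet display identically once rounded to vbytes:

from math import ceil

path_a_weight = 413   # hypothetical smaller script branch
path_b_weight = 416   # hypothetical branch with an extra pubkey

# Both paths show the same rounded vbyte size, even though path A is
# 3 WU cheaper; a weight-denominated display would preserve the difference.
assert ceil(path_a_weight / 4) == ceil(path_b_weight / 4) == 104
assert path_a_weight < path_b_weight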
Oh, of course. The example would then be: In [1]: from math import ceil
In [2]: x = 400000; y = 3001; z = 999
In [3]: x+y+z <= 404000
Out[3]: True
In [4]: ceil(x/4)+ceil(y/4)+ceil(z/4) <= 101000
Out[4]: False
Yeah, but it would be nice to follow the same rules when there's no significant benefit to changing them. That said, I don't really care about tiny rounding issues that don't impact our DoS protections at all; it just seems incongruous to me.
Yes, of course we do. Your argument in this paragraph creates an artificial dilemma between vbytes and weight units based on granularity, but it's possible to use vbytes with more granularity when making comparisons. Developers writing their own low-level functions need to know that vbytes at the transaction level is just the weight divided by four (rounded up).
The user-visible interface for mempool acceptance limits is currently defined in "kilobytes" (e.g., the -limitancestorsize and -limitdescendantsize options).
Using weight units in the code seems to create unnecessary confusion, e.g. the example at the top of this post. If you instead propose to change the user-facing size parameter for package size, that would imply also changing other size parameters to use weight, which brings us to the fee interface. I think there's a very simple rule that's previously been applied to the code, which is that weight units are for consensus checks and vbytes are for everything else. I'm not sure we should be changing that just because significant parts of the early LN protocol specification drafts were written by someone trying to make sipa an eponym for weight, and so now they're stuck using feerate units nobody else uses. Personally, my response to #13283 isn't "downstream developer had a problem, we should spend a bunch of time refactoring our code (and maybe our interfaces) to fix it" but rather "downstream developer did something weird, we should buy them a drink and tell them a story about a time we also did something weird and suffered for it."
Yes, your last example works.
Beyond the reduction of rounding-bug risks in critical paths of downstream codebases, I think there is a relevant benefit to changing them. To make my argument clear, I believe that using vbyte units for our mempool acceptance limits isn't economically optimal. Assume two packages A and B with equal absolute fees, respectively of sizes 100_999 vbytes and 101_001 vbytes. The first one is currently accepted by our package logic, the second one rejected. Scaled up to weight units, it turns out package A has a size of 300_999 WU and package B has a size of 104_004 WU. Package B is far more interesting than A for block inclusion; however, it won't be present in our mempool with the limits currently expressed in vbytes.
Of course, you can use vbytes with far more granularity when making comparisons if your wallet developer is aware of all the differing accounting scales to use among transaction components, i.e., when to display float(x/4). And this accounting complexity might extend in the future if we rely on the taproot annex to artificially increase the cost of new opcodes, or if we introduce new consensus fields. Beyond that, I would argue that reasoning on integers is far easier than reasoning on floats, which somewhat reduces the end-user cognitive burden from a UX standpoint.
If my reasoning above about economic optimality holds, then yes, I think that logically implies changing other size parameters to weight. W.r.t. mempool acceptance limits, that's a minor change, which can be done without overhauling the mempool's internal feerate interface from sat/kvB to sat/kWU. I agree that this latter change would imply a lot of careful refactoring, which might keep us from doing it in the short term. But we don't have this concern with newer interfaces.
At least we have one instance where this rule isn't consistently applied (e.g., MAX_STANDARD_TX_WEIGHT, a policy check expressed in weight units).
I hear you there, but please note that I think we're both advocating package relay to better support downstream projects, which is extensively time-consuming, as this discussion and other package-related PRs illustrate. If you want to abandon this series of work in the name of downstream weirdness, I think you'll have a lot of drinks to buy for the LN ecosystem :) Joking aside, if you have a strong opinion against this specific PR, please mark a clear Concept NACK.
@ariard thanks for your detailed response and sorry for the bikeshedding. I'm concept NACK on using different rules for packages of transactions than we would use for the same transactions submitted separately, unless there's a significant benefit motivating the difference. I don't think a less-than 0.001% difference in economic rationality (which only matters in rare edge cases) is anywhere near significant enough motivation. I don't think downstream developers should be performing relay-related calculations using weight units---they should be using vbytes---so I don't think there's any benefit at all to them from us using weight units here; indeed, it might be harmful in giving them the mistaken impression that Bitcoin Core is going to eventually switch to using weight units in public interfaces. If this PR were changed to eliminate the behavior change, I'd have no conceptual objection to it. As for the general idea of using weight units in internal code, I think that's a decision best made by the people most frequently working on that code; I'll try to keep my nose out of it. But I will object strongly to any later attempts to change the public interfaces to use weight units based on the argument that the internal code now uses weight units. If vbytes are to die and weight units are to become part of how everyday users measure feerates, please let that change be established in other popular software first before we consider it.
I think I already have quite a few obligations in that area from other messes I've made, some of which you've been instrumental in discovering. :-)
Code-wise, please note that the virtual size isn't computed as a simple /4 (rounded up) from the weight units but also factors in the signature-ops cost. As implemented, a specially-crafted transaction's size could largely diverge between the WU size here and the virtual size in the rest of the code.
Conceptually, I'm a big fan of moving to using WU in Bitcoin Core, but I think it would need to be an invasive change through the codebase to keep everything consistent. One approach I had considered to minimize the change was to overload CFeeRate to optionally take the transaction size in WUs.
FWIW, to respond to the previous conceptual concerns, most (all?) of the downstream projects I know of use WUs and then use some hacks to avoid the caveat of bitcoind's rounding due to its use of virtual bytes.
I don't think that's accurate. The number of sigops in a transaction has no direct effect on its weight or vsize; there's just a rule that says a transaction is invalid if the number of sigops per weight exceeds a certain budget, e.g. as defined in BIP342.
The transaction virtual size is increased if its number of sigops is abnormally high compared to its size (see lines 280 to 283 at da69d99):
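Roughly, in Python (a sketch of that adjustment, not the actual C++ helper; 20 is Bitcoin Core's default bytes-per-sigop policy value, and the example figures are made up):

WITNESS_SCALE_FACTOR = 4
DEFAULT_BYTES_PER_SIGOP = 20   # policy default, configurable via -bytespersigop

def sigop_adjusted_vsize(weight: int, sigop_cost: int) -> int:
    # The sigop term only matters when it exceeds the transaction's weight,
    # i.e. when the sigop count is abnormally high for the transaction's size.
    size = max(weight, sigop_cost * DEFAULT_BYTES_PER_SIGOP)
    return (size + WITNESS_SCALE_FACTOR - 1) // WITNESS_SCALE_FACTOR

# Ordinary transaction: weight dominates, so this is just ceil(weight / 4).
assert sigop_adjusted_vsize(weight=832, sigop_cost=8) == 208
# Sigop-stuffed transaction: the sigop term dominates and inflates the vsize.
assert sigop_adjusted_vsize(weight=832, sigop_cost=100) == 500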
This is how the size is computed in ATMP. However, you are right that for packages we give a number of sigops (and cost) of 0.
AFAIU, sigops and weight are two independent limits per consensus. Treating transactions with abnormally many sigops as if they had a larger virtual size is a simplification to avoid a multidimensional optimization problem and only applies to policy checks.
Thanks for looking up and linking me to the code. I was able to follow it back to #8356 where this was added and so I think I now understand the source of confusion: We allow users to exceed the policy limits on sigops by pretending their transactions are the same vsize as transactions that would have the same number of sigops but an appropriate amount of weight to contain those sigops. As long as developers are creating transactions in a normal way, they shouldn't run into this problem and can just use the standard vbytes formula from BIP141.
Indeed. I think we still need the above function for legacy and v0 segwit transactions, but it should always produce the same result as simple BIP141 vbytes for any valid taproot transaction.
Yes, of course. But my point there was just to be careful not to drop the DoS protection rule with a refactoring to WU (but this patch specifically doesn't drop it, as it was not present in the first place).
🐙 This pull request conflicts with the target branch and needs rebase.
There hasn't been much activity lately and the patch still needs rebase. What is the status here?
I'm closing this PR. While I still think moving numerous policy checks from KvB to WU would remove some non-propagation risk for second layers and also improve the economic efficiency of the mempool acceptance flow, it would be a lot of engineering and communication work across the ecosystem to make WU the standard unit of transaction size. I still think it could be an interesting research project to build WU-based mempool acceptance code and run simulations on historical data to observe whether the yielded feerates are better. If you're a miner in 2032 reading this and looking for tricks to improve the average fee reward of your mined blocks over your competitors, you might be interested. This can be tagged "up for grabs" if someone is interested in working on it, modulo the conceptual caveats.
Just remembered this: the same change has been picked up in #28471, if reviewers here want to review that one.
The class of second-layer applications interested in the package acceptance API mostly rely on trustless chains of transactions, which are necessarily made of segwit inputs. As such, most of them already make their internal propagation sanity checks and feerate computations in weight units (at least I can confirm for Lightning, where it's mandatory in the spec to announce feerate_per_kw to your counterparty). To reduce the odds of bugs in second-layer implementations, such as the one mentioned in #13283, this PR changes the package acceptance size limit from kilo-bytes to weight units.
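For context, feerate_per_kw is satoshis per 1,000 weight units, while bitcoind feerates are satoshis per 1,000 virtual bytes, and 1 vbyte = 4 WU. A minimal illustrative conversion sketch (function names are hypothetical, not from the LN spec or any implementation):

def feerate_kw_to_kvb(feerate_per_kw: int) -> int:
    # Exact: the two scales differ by the witness scale factor of 4.
    return feerate_per_kw * 4

def feerate_kvb_to_kw(feerate_per_kvb: int) -> int:
    # Round up, the conservative direction when converting a vbyte-based target.
    return (feerate_per_kvb + 3) // 4

assert feerate_kw_to_kvb(250) == 1_000   # 250 sat/kWU corresponds to 1 sat/vB
assert feerate_kvb_to_kw(1_000) == 250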
E.g., an internal Lightning onchain logic with an already-computed package weight of 404_002 WU (100_000 bytes of regular fields, 4_002 bytes of witness fields) might divide by WITNESS_SCALE_FACTOR, obtain 101_000 virtual bytes, check against MAX_PACKAGE_SIZE, and then wrongly evaluate the package as valid for mempool acceptance.
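To make the failure mode concrete, a minimal Python sketch (the 101,000 vbyte figure corresponds to the 101 kvB MAX_PACKAGE_SIZE limit; the rest mirrors the numbers above):

WITNESS_SCALE_FACTOR = 4
MAX_PACKAGE_VSIZE = 101_000        # 101 kvB package limit, in virtual bytes

package_weight = 404_002           # 100,000 regular bytes * 4 + 4,002 witness bytes

# Naive downstream check: floor division loses the rounding-up step.
naive_vsize = package_weight // WITNESS_SCALE_FACTOR                            # 101_000
assert naive_vsize <= MAX_PACKAGE_VSIZE                                         # looks acceptable

# Bitcoin Core-style check: virtual size rounds up.
core_vsize = (package_weight + WITNESS_SCALE_FACTOR - 1) // WITNESS_SCALE_FACTOR  # 101_001
assert core_vsize > MAX_PACKAGE_VSIZE                                           # actually rejected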
It also matches our max transaction size check, already expressed in weight units (MAX_STANDARD_TX_WEIGHT).