Policy: Lower default limits for tx chains #6771

Conversation
Concept ACK.
Are there any use cases other than excessive gambling and spam that require more than 3 unconfirmed ancestors?
So if you're only saving 2.6%, why bother? Why not increase the max block size to 1.026MB instead? @Mirobit Who are we to decide how bitcoin is used? What do you want to 'fight' next: porn? Donations to unsavoury nations? People we don't agree with?
@NanoAkron, to be clear, the goal here is not to restrict transactions. I wish the 2.6% number were 0.0%. Nor is the goal to prevent any specific type or kind of transaction, spam or otherwise. The goal is only to prevent particularly constructed chains of transactions which would serve to break critical functionality of the network. Such chains are likely only constructed for the particular purpose of attacking the entire Bitcoin system; however, the limits proposed to prevent these attacks may inadvertently restrict non-attack transactions. I'm just trying to be as transparent as possible by acknowledging that, had these limits already been in place, 2.6% of recent transactions would either have had to be delayed until some of their ancestors were included in blocks, or constructed from different UTXOs.
/** Default for -limitancestorsize, maximum kilobytes of tx + all in-mempool ancestors */
-static const unsigned int DEFAULT_ANCESTOR_SIZE_LIMIT = 900;
+static const unsigned int DEFAULT_ANCESTOR_SIZE_LIMIT = 100;
Note that this means a max-size tx (100KB) can't be respent. This could cause problems, e.g. with CPFP.
@petertodd I was assuming that if people cared about that they could limit their transactions to 99.5KB and still chain a child or two off of it to pay extra fees. I suppose alternatively we could raise the default to 101?
The size limits would only have restricted approx 0.01% of transactions that passed the count limits over the last several months.
I would agree with @petertodd here, we should set the size limit to something like 101 or 105.
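For concreteness, this is roughly how the four package-limit defaults would read with the 25-tx / 101-KB values the PR description below ultimately settled on. The constant names follow the DEFAULT_ANCESTOR_SIZE_LIMIT convention in the diff above, but this is a sketch, not the exact patch:

```cpp
// Sketch of the package-limit defaults under discussion (25 txs / 101 KB in
// each direction). 101 rather than 100 so that a max-size 100 KB transaction
// still leaves room for a small child, preserving CPFP fee bumping.

/** Default for -limitancestorcount, max number of in-mempool ancestors */
static const unsigned int DEFAULT_ANCESTOR_LIMIT = 25;
/** Default for -limitancestorsize, maximum kilobytes of tx + all in-mempool ancestors */
static const unsigned int DEFAULT_ANCESTOR_SIZE_LIMIT = 101;
/** Default for -limitdescendantcount, max number of in-mempool descendants */
static const unsigned int DEFAULT_DESCENDANT_LIMIT = 25;
/** Default for -limitdescendantsize, maximum kilobytes of in-mempool descendants */
static const unsigned int DEFAULT_DESCENDANT_SIZE_LIMIT = 101;
```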
Example use case - bitcoin business(es) that depend on tx chains: https://twitter.com/taariqlewis/status/651160362629230592 (this is notably not an endorsement of that activity, as you'll gather from reading the responses)
The largest number of unconfirmed transactions on a single address we've seen at BlockCypher was 164. That particular customer had a burst of activity from its end users during a period of full blocks and long block gaps. Those 164 transactions were unevenly distributed across 3 UTXOs, leading to a ~100-transaction chain. While I can't name them, a fair number of people here have used this service, so it is definitely a legitimate use case. Based on this I'd keep the ancestor package at current limits and reduce the descendant package only by a factor of ~10x instead of ~40x.

At a higher level, while I believe this is a useful metric to have in Core, I'd avoid clamping down too hard on it. I'm worried it's an over-optimization made without considering the pool side as a whole.
You stated that you saw a long transaction chain in production, but no reason was given as to why your customer had to chain transactions together instead of using other UTXOs. Are you suggesting that this customer did not otherwise have funds available to pay out when they needed to and thus had to chain their own transactions? Calling this a "useful metric" is a bit of an understatement. These limits, together, directly govern the complexity of mempool limiting algorithms and, to a slightly lesser extent, their effectiveness.
Ideally they'd have shredded their UTXOs a little better, but didn't. Funds were available, but people don't anticipate worst-case situations well. They could have been more diligent, but on the other hand, the more limits people can hit when transacting, the more complex developing around those limits becomes.
Hmm, so if they had available UTXOs, were they spending UTXOs which they received from others or just making their own chains? The Bitcoin Core tx selection of "first pick txns which are already confirmed" is generally a good idea for hygiene. If they were spending UTXOs others had sent them, then it's something we should actively discourage anyway.
We, at Serica and DigitalTangible, actively use unspent tx chains to let our customers speed up their bitcoin user experience without having to wait on blockchain confirmations. These transactions are usually sequential and must happen between our customers and our marketplace of merchants and other customers. For example, a user agrees to place an order to purchase bitcoin and then make a payment for a product or service with that bitcoin, should their desired price be met while they are unable to actively monitor their transaction. We currently do not need to chain more than 5 unspents for our current use cases onboarding new customers into the bitcoin ecosystem.

Given this PR, we agree with its principle, since the proposal aims to limit to MAX_ANCESTORS = 25 and MAX_DESCENDANTS = 25, which we think is reasonable. We have not yet seen a use case for more than 25 chained unconfirmed transactions in our ecosystem. However, we would like to publish our viewpoint that we wish to avoid a slippery slope of restrictions on unspents falling from 25 to 2 or even 0. The limits imposed should not prevent, at minimum, 5-step chains of transactions that are needed to give a customer temporary control over funds they would otherwise not have access to unless they waited for a confirmation before conducting another transaction.

In these situations, where an instant purchase must be made under customer control, that btc must be sent to a customer's address and then quickly relayed to a merchant or another party in a new transaction to create a seamless experience. All of this must happen within 0 confs because our customers will not wait for a full confirmation, and we do not wish to lose the opportunity to make Bitcoin more available and useful to a broader audience with higher performance demands. Zero confirmations, done properly, help increase adoption of Bitcoin and make it more competitive against other forms of payment. However, we do think it's good to prevent abuse of the system with reasonable constraints for the current ecosystem of applications and wallets.
@TheBlueMatt their own chain. And confirmed-first was used; there was just a faster stream of incoming payments than confirmations, with not enough UTXOs in that period of time.
@taariq yeah, I wouldn't be worried about that. No one wants to set a package size limit; it's only a requirement for the mempool limiting and tracking algorithms to have reasonable upper limits on runtime. Still, I think 5 is more than anyone else had seen a use case for, so thanks for the feedback :)
@matthieu Ahh, ok, thanks for the clarification. That is quite...annoying. I'm not sure I'd trust the code with much higher limits yet, but maybe it warrants more study instead of just a belt-and-suspenders limit, then.
@matthieu it sounds like they could have been using sendmany
@pstratem there are several things they could have done better. How much do you want people to have to know to be able to transact without bad surprises, though? Making it more complex is good for services like ours that swallow the complexity, but not for the ecosystem in general. Odd that I should be making that point.
@matthieu eh? the sendmany rpc call is right there...
I think the point was they're not using Bitcoin Core.
The point is also that they had no prior knowledge they would either need to batch their transactions or ever hit a transaction peak that high. Send-as-it-comes is what everyone starts with; batching can get tricky fast.
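The scenario described here (sends arriving faster than confirmations, with too few UTXOs) is easy to model. A minimal editor's sketch, not code from this PR, of why chain depth grows during a burst: once confirmed coins are exhausted, each new payment must spend the previous payment's unconfirmed change.

```cpp
// Toy model: a wallet that prefers confirmed coins but falls back to its own
// unconfirmed change when sends arrive faster than blocks. Each fallback send
// deepens one chain by one transaction.
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    const int confirmedUtxos = 3;    // assumption: 3 spendable confirmed outputs
    const int sendsBeforeBlock = 20; // assumption: 20 payment requests in one block gap
    std::vector<int> chainDepths(confirmedUtxos, 0); // unconfirmed chain depth per coin

    for (int i = 0; i < sendsBeforeBlock; ++i) {
        // Spend the shallowest chain available (confirmed-first selection);
        // its change output becomes the tip of a one-deeper chain.
        auto tip = std::min_element(chainDepths.begin(), chainDepths.end());
        ++*tip;
    }
    std::printf("deepest unconfirmed chain: %d\n",
                *std::max_element(chainDepths.begin(), chainDepths.end()));
    // With 3 coins and 20 sends this prints 7: even spreading load evenly
    // across UTXOs, a burst during a long block gap builds chains quickly.
}
```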
It would be helpful to know whether there are other use cases like this that would be hampered. To Matt's point about the reasoning behind this change: it's also an understatement to say it's just needed to govern the complexity and effectiveness of mempool limiting. There are attacks which can cause the mempools of full nodes to become almost permanently unusable, which would prevent transaction relay and basically bring the network to a halt. How low we have to make these limits to prevent that may not be an exact science. The good thing is that if we do decide to start with a higher limit than 25, and these attacks are seen in the wild, it's only a command-line argument to change it. But as we've seen with the minimum relay fee and the stress tests, it's still an issue if people have to bring down their nodes and restart them with different parameters to respond to the attack.
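For reference, those command-line knobs look something like the sketch below. The option names follow the `-limitancestorsize` flag referenced in the diff comment above; the values are arbitrary examples for illustration, not recommendations:

```
# bitcoin.conf sketch: run with looser package limits than the new defaults.
# Values here are arbitrary examples only.
limitancestorcount=50
limitancestorsize=202
limitdescendantcount=100
limitdescendantsize=404
```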
NACK based on my reply below.
You can't bundle customers' requests for transactions which have to be sent out immediately.
If you don't have the resources to run a node with the default settings, you can still use the [1]:
@MarcoFalke Just to re-emphasize this point: this is not only about resources. Very powerful nodes can still be attacked if they have a mempool with some size limit. I do agree that better wallet UI is needed, but that problem is not introduced by this pull. If we don't make the default small enough to be safe, then most people will have to configure their own policy. I suspect that will lead to a great many nodes limiting chains to much smaller counts.
@morcos Given this isn't a consensus change, any issue with lowering it gradually (over a few releases) rather than cranking it down all of a sudden? Try with 50 ancestors and 100 descendants for a bit and then go lower if no real difficulty has been found?
+1 to Taariq's use case. I also have a transaction system implemented which involves multi-step transactions spending from UTXOs that are known to us but may not yet be confirmed by the blockchain. Completely independent of taariq's use case, but similar in that chains of txs are created and submitted to the mempool together without waiting for sequential block confirmation of each tx in the chain.

Due to signature constraints, our multiple steps cannot be coalesced into a single transaction for publication. (Coalescing could be possible with improvements to sigHash capabilities.) Therefore, we end up with multiple transactions in a chain which are all submitted to the peer network together. In our current implementation, these chains can be 3 to 4 levels deep per "user transaction". However, if the blockchain is taking 20 minutes to produce the next block, a user initiating multiple "user transactions" (buy x, then buy y, then buy z) in that 20 minutes could build on that unspent chain. The additional operations are not strictly linear: subsequent operations spend the change outputs of the previous operation's root transaction, forming a tree of chained txs. A set of 3 operations would form a tree with a max depth of 7 and a total node count of 12.

The "sendmany" RPC function is of no use to me because it only works within the limits of the reference implementation wallet: single-signature outputs. Our system is built on multisignature wallets. We run off-the-shelf bitcoind full nodes, but we never use the wallet or address functions.
@dthorpe Can you explain more about what that application actually is? Are you sure you're not vulnerable to malleability?
Note that for the case where you can't use 'sendmany' because you need to send funds immediately, you could instead use replace-by-fee to rewrite the txout set, splitting the change txout into the new payee and change. This style of payment uses significantly less blockchain space than chained transactions.
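A minimal sketch of that rewrite pattern, with toy types and hypothetical names (signing and replacement feerate rules elided; not Bitcoin Core code): instead of chaining a second transaction off the first one's change, the first transaction itself is replaced by a version whose change output is split into the new payee plus smaller change.

```cpp
#include <cstdint>
#include <string>
#include <vector>

struct TxOut { std::string payee; int64_t value; };          // toy output
struct Tx { std::vector<TxOut> vout; int changeIndex; };     // toy transaction

// Build the replacement for `original`: pay `amount` to `payee`, funding it
// (plus `feeBump`, the extra fee needed for the replacement to propagate)
// out of the change output. Real code would re-sign the same inputs.
Tx ReplaceWithExtraPayment(Tx original, const std::string& payee,
                           int64_t amount, int64_t feeBump) {
    TxOut& change = original.vout[original.changeIndex];
    change.value -= amount + feeBump;   // assumption: change stays above dust
    original.vout.push_back({payee, amount});
    return original;                    // spends the same inputs as `original`
}

int main() {
    Tx tx{{{"merchant", 50'000}, {"change", 940'000}}, /*changeIndex=*/1};
    Tx replacement = ReplaceWithExtraPayment(tx, "second-payee", 200'000, 2'000);
    // `replacement` now pays merchant, second-payee, and 738'000 change from
    // the original inputs; it is broadcast in place of `tx`, not chained on it.
}
```

Because the replacement spends the same inputs as the original, only one transaction ultimately confirms, which is where the blockchain-space saving comes from.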
@petertodd I'd be happy to discuss in more detail offline. first.last@gmail.com
@matthieu I agree it would be good to give people time to adjust their applications to a lower limit, but that will happen naturally as more and more of the network upgrades to 0.12. But yes, if there is too much resistance to 25/25, we might have to start with higher limits. I think node performance should be reasonable with 50/100, but I worry about attacks. Maybe there is some middle ground where some nodes run with higher limits and some lower: highly chained txs would only be relayed through the nodes with higher limits, but in the event of an attack the lower-limit nodes would keep relay open for the vast majority of txs.
@dthorpe I can't take on any more consulting for the next few weeks, but you might find this post about replace-by-fee use-cases useful: http://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg07813.html
ACK. I don't like doing this because it does make some things more annoying for developers, but, honestly, as long as they retransmit transactions as their chains are confirmed, 25 txn/block isn't so bad.
utACK. #6871 gives high-volume applications a good way to avoid long chains.
concept ACK
@MarcoFalke this pull was primarily done to make #6722 have stricter CPU-usage bounds to prevent DoS, so, yes, this is still something we want after #6722.
Needs rebase.
Reduce the default limits on maximum number of transactions and the cumulative size of those transactions in both ancestor and descendant packages to 25 txs and 101kb total size.
sorry, rebased
I've generated a lot of historical data on blocks to look at the length of chains we see within blocks. For each block I looked at, I calculated for every transaction the number of in-block ancestors and in-block descendants, along with the size of those ancestors and the size of those descendants. Detailed data is available here: https://www.dropbox.com/s/kkh0g4kl9qwt1cx/alldata2.gz?dl=0 The top line of the file is a header identifying each column. (The blocks are not in any strict ordering; it's based on the order they're written to my blk*.dat files in the data dir on the machine I ran on, and this includes all blocks, including ones that may have been reorged out.)
Is there an estimate of the CPU usage as a function of the limits? Is it non-linear?
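The thread doesn't answer this directly. As a rough editor's back-of-the-envelope sketch, not an analysis from the PR: accepting a transaction walks its in-mempool ancestors to check the package limits, and removing one (eviction, expiry, or block inclusion) has to update package state in each of its descendants, each of which may in turn touch its own ancestors. So with ancestor limit $A$ and descendant limit $D$, a single removal is crudely bounded by

$$W_{\text{remove}} \;\lesssim\; A \cdot D \;=\; 25 \times 25 \;=\; 625 \text{ package-state updates},$$

i.e. the worst case grows with the product of the limits, and without limits both factors grow with mempool size.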
utACK |
@dthorpe Didn't see your above comment until now: "I'm counting on BIP62 to significantly reduce that risk." BIP62 cannot remove malleability from multisignature transactions, and I believe it is unlikely that BIP62 will ever be deployed on the Bitcoin network due to its invasiveness and necessary incompleteness. (Well, pedantically, BIP62 as-is will not be deployed since it's outdated; a revised version could be, but I'm doubtful of it due to the limitations in the approach.) Though I'm hopeful about more complete things being done in the future, they wouldn't be BIP62. I bring it up not to invite a debate over it, but just to level-set: you shouldn't be expecting anything to just happen there.
@gmaxwell understood, thanks.
Since you were struggling to find legitimate use cases, I'd like to contribute to the conversation: in Wasabi Wallet's coinjoin chains I'm starting to hit these limits in various ways.

Background

Before I discuss the limits individually I'd like to provide some background: Wasabi launched 3 months ago and the coinjoin volume has been growing exponentially since then. An average Wasabi CJ today is

Individual Limit Discussions
Question

Also, it is not clear to me whether the descendant limits apply to the sum of all descendants of all inputs' prevouts or just the individual descendants of each input's prevout. The error message seems to imply the latter ("too many descendants for tx ..."). Which one is it?
2105947 Implement helper class for CTxMemPoolEntry constructor (Alex Morcos)
1cef905 Make -checkmempool=1 not fail through int32 overflow (Pieter Wuille)
0f72ff2 Support -checkmempool=N, which runs checks on average once every N transactions (Pieter Wuille)
89483d0 [Bug] Make operator() a const function in CompareTxMemPoolEntryByX (random-zebra)
a50ad77 Lower default policy limits (random-zebra)
03f7152 fix locking issue with new mempool limiting (random-zebra)
1598961 Fix stale comment in CTxMemPool::TrimToSize. (random-zebra)
98d0d68 Undo GetMinFee-requires-extra-call-to-hit-0 (random-zebra)
6ad6ee6 Add reasonable test case for mempool trimming (random-zebra)
8dcbb7e Only call TrimToSize once per reorg/blocks disconnect (random-zebra)
c20cd38 Implement on-the-fly mempool size limitation. (random-zebra)
aee2e17 Print mempool size in KB when adding txn (random-zebra)
f7c85fd Add CFeeRate += operator (random-zebra)
5bd2a00 Track (and define) ::minRelayTxFee in CTxMemPool (random-zebra)
0b50f6c Add Mempool Expire function to remove old transactions (random-zebra)
d26f5e0 Fix calling mempool directly, instead of pool, in ATMP (random-zebra)
fc5eddb Reverse the sort on the mempool's feerate index (random-zebra)
0ce1df0 [BUG] Fix CTxMemPool::check excluding zerocoins from children checks (random-zebra)
1f7bd52 Track transaction packages in CTxMemPoolEntry (random-zebra)
1fd406b TxMemPool: Change mapTx to a boost::multi_index_container (random-zebra)

Pull request description:

built on top of
- [x] #1645

This PR pulls some updates from upstream in the mempool area, adding the required adjustments for legacy zerocoin txes and updating the functional test suite. Specifically, here we:
- track mempool descendants (in-mempool transactions that depend on other mempool transactions)
- turn `mapTx` into a `boost::multi_index_container` that sorts the mempool on 3 criteria:
  - transaction hash
  - fee rate
  - time in the mempool
- Add a max size for the mempool (throwing away the cheapest txs and bumping the min relay fee, when full)
- Implement on-the-fly mempool size limit with the flag `-maxmempool`
- Implement `-checkmempool=N` to customize the frequency of the mempool check
- Implement helper for `CTxMemPoolEntry` for the unit tests.

Backports:
- bitcoin#6654
- bitcoin#6722 [`*`]
- bitcoin#6889
- bitcoin#6771
- bitcoin#6776
- bitcoin#6896
- bitcoin#7020 [`*`] excluding bitcoin@9e93640 as our default minimum tx fee rate of 10k satoshis is only 0.00003 USD at the time of writing.

ACKs for top commit:
- Fuzzbawls: utACK 2105947
- furszy: Re utACK 2105947 and merging this nice upgrade :)

Tree-SHA512: 51a7d75bd52f7646d461252c78f0dd9d7e8b5c1c66c22944120bfe293b28f5d48135de339ebf3d8a5b4c61ca5452383ed1b10c417be06dc4a335ac645842ea14
Reduce the default limits on maximum number of transactions and the cumulative size of those transactions in both ancestor and descendant packages to 25 txs and 100kb total size.
Please see email to bitcoin-dev for background.
I looked at some recent data and 2.6% of transactions in my node's mempool in the first 5 days of October would have been restricted by these new lower limits.