
Conversation

Contributor

@ajtowns ajtowns commented May 10, 2023

Couple of performance improvements when draining the inventory-to-send queue:

  • drop txs that have already been evicted from the mempool (or included in a block) immediately, rather than at the end of processing
  • marginally increase outgoing trickle rate during spikes in tx volume

@DrahtBot
Contributor

DrahtBot commented May 10, 2023

The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Reviews

See the guideline for information on the review process.

Type Reviewers
ACK darosior, willcl-ark, instagibbs, glozow, dergoegge
Concept ACK 0xB10C, pinheadmz

If your review is incorrectly listed, please react with 👎 to this comment and the bot will ignore it on the next update.

ajtowns added 2 commits May 10, 2023 10:51
We use CompareDepthAndScore to choose an order of txs to inv. Rather
than sorting txs that have been evicted from the mempool at the end
of the list, sort them at the beginning so they are removed from
the queue immediately.
If transactions are being added to the mempool at a rate faster than 7tx/s
(INVENTORY_BROADCAST_PER_SECOND) then peers' inventory_to_send queue can
become relatively large. If this happens, increase the number of txids
we include in an INV message (normally capped at 35) by 5 for each 1000
txids in the queue.

This will tend to clear a temporary excess out reasonably quickly; an
excess of 4000 invs to send will be cleared down to 1000 in about 30
minutes, while an excess of 20000 invs would be cleared down to 1000 in
about 60 minutes.
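For illustration, a minimal standalone sketch of the scaling this commit describes (my own restatement of the diff quoted later in the thread; the helper name and the standalone constant are mine, not code from the PR):

#include <algorithm>
#include <cstddef>

// 7 tx/s (INVENTORY_BROADCAST_PER_SECOND) * 5 s inbound trickle interval = 35 txids per INV normally.
static constexpr size_t INVENTORY_BROADCAST_MAX{35};

// Per-message cap as a function of the peer's inventory_to_send backlog:
// +5 txids for every 1000 queued entries, never more than 1000 per message.
size_t BroadcastCap(size_t backlog)
{
    size_t broadcast_max{INVENTORY_BROADCAST_MAX + (backlog / 1000) * 5};
    return std::min<size_t>(1000, broadcast_max);
}

// e.g. BroadcastCap(4000) == 55 and BroadcastCap(20000) == 135 txids per INV message.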
Member

@darosior darosior left a comment

utACK 5b34060

@0xB10C
Contributor

0xB10C commented May 10, 2023

Concept ACK

@willcl-ark
Member

ACK 5b34060

Currently running this cherry-picked on top of v24.0.1 on mainnet and seeing reduced resource usage.

@instagibbs
Member

instagibbs commented May 10, 2023

ACK 5b34060

Significant reduction in CPU usage when the influx of transactions is high and sustained. Allows an additional inv entry per INV message for every additional 200 entries of backlog, capped at 1k per message.

@0xB10C
Contributor

0xB10C commented May 10, 2023

Compared where the time is spent in the b-msghand thread on a mainnet master node and a mainnet 5b34060 node that had both been running for a while. Followed eklitzke's flamegraph.md to create the flamegraphs below. This does indeed look like a nice performance improvement: the second flamegraph looks healthier.

master:

[flamegraph: master]

5b34060:

[flamegraph: 5b34060]

@@ -5666,7 +5666,9 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
 // especially since we have many peers and some will draw much shorter delays.
 unsigned int nRelayedTransactions = 0;
 LOCK(tx_relay->m_bloom_filter_mutex);
-while (!vInvTx.empty() && nRelayedTransactions < INVENTORY_BROADCAST_MAX) {
+size_t broadcast_max{INVENTORY_BROADCAST_MAX + (tx_relay->m_tx_inventory_to_send.size()/1000)*5};
Member

Could this increase the number of transactions we're announcing beyond INVENTORY_MAX_RECENT_RELAY = 3500 so things fall out of m_recently_announced_invs (essentially what you said in #27602 (comment))?

Maybe we want to increase INVENTORY_MAX_RECENT_RELAY, e.g. to equal MAX_PEER_TX_ANNOUNCEMENTS = 5000 (the maximum number of requests coming from the receiving end if they're Bitcoin Core - imo it could make sense for these to be symmetrical)?

Contributor Author

The 3500 figure is calculated based on outbound peers, which already have 2.5x the INV rate of inbound peers. An inbound peer would need to be at 65 or 70 txids per INV message to have the same 1-in-a-million chance of a problem, implying a tx rate of 13 or 14 tx/s. To get to a 1-in-100 chance of a problem, you'd need to hit 90-to-95 txids per INV (and sustain that for ~40 INVs, and hit the 1-in-100 chance of sending those 40 INVs in a 2 minute period). Those seem okay to me.
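For concreteness, a back-of-envelope check of those figures (my own sketch; the constants are the ones discussed in this thread):

// 2-minute UNCONDITIONAL_RELAY_DELAY window, 5 s average inbound trickle interval:
static_assert(120 / 5 == 24);                 // expected inbound INVs per window
static_assert(65 / 5 == 13 && 70 / 5 == 14);  // 65-70 txids per INV ~= 13-14 tx/s
static_assert((3500 + 89) / 90 == 39);        // ~40 INVs of 90 txids exceed INVENTORY_MAX_RECENT_RELAY = 3500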

Member

@Sjors Sjors May 15, 2023

Note to others: we may send outbound peers 7 * 2.5 * 60 * 2 ~= 2100 transaction ids per 2 minutes. Depending on random variation it can be more. UNCONDITIONAL_RELAY_DELAY = 2min is the period of time that we have to remember what we sent to a specific peer. For privacy reasons we don't want to reveal these transactions to other (spy) peers yet. We do this tracking with a bloom filter, m_recently_announced_invs, which has a 1 in a million chance of messing up. I forget whether it's a false positive (we give the spy a transaction we shouldn't have) or a false negative (we deny them a transaction we just announced).

Now my question:
This change increases the max rate for outbound peers too, so I'm still confused why we don't have to increase the 3500. This PR increases the maximum rate to outbound peers from 7 * 2.5 = 17.5 tx/s to 1000 / 2 = 500 tx/s (in theory, presumably not in practice), so a 4 second burst would fill it up.

From below:

(Could make sense to limit it to 90 for inbounds and 45 for outbounds as (if sustained for two minutes) those values would hit roughly a 1-in-100 chance of overflowing INVENTORY_MAX_RECENT_RELAY)

That would make more sense to me as well, plus:

static asserts or something would be even better

Member

@pinheadmz pinheadmz left a comment

Concept ACK

A few questions below.

I have a non-debug mainnet node with a 100% b-msghand thread. I'm going to deploy this branch today and monitor.

@@ -5666,7 +5666,9 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
 // especially since we have many peers and some will draw much shorter delays.
 unsigned int nRelayedTransactions = 0;
 LOCK(tx_relay->m_bloom_filter_mutex);
-while (!vInvTx.empty() && nRelayedTransactions < INVENTORY_BROADCAST_MAX) {
+size_t broadcast_max{INVENTORY_BROADCAST_MAX + (tx_relay->m_tx_inventory_to_send.size()/1000)*5};
+broadcast_max = std::min<size_t>(1000, broadcast_max);
Member

Why stop at 1000? If it's the same rationale could you use this constant?

/** Limit to avoid sending big packets. Not used in processing incoming GETDATA for compatibility */
static const unsigned int MAX_GETDATA_SZ = 1000;

Contributor Author

It's more related to MAX_PEER_TX_ANNOUNCEMENTS=5000 (we don't want to send more than we would be willing to receive) and INVENTORY_MAX_RECENT_RELAY=3500 (we don't want to advertise txids and then refuse to relay them because we've forgotten that we advertised them).

(Could make sense to limit it to 90 for inbounds and 45 for outbounds as (if sustained for two minutes) those values would hit roughly a 1-in-100 chance of overflowing INVENTORY_MAX_RECENT_RELAY)

Member

Explaining that in a comment would be nice

Member

static asserts or something would be even better
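For what it's worth, a hypothetical sketch of what such compile-time checks could look like (my own illustration, not code from this PR; BROADCAST_MAX_HARD_CAP is an invented name for the literal 1000 used in the diff):

// Values as they stand in the code discussed above.
static constexpr unsigned MAX_PEER_TX_ANNOUNCEMENTS{5000};
static constexpr unsigned INVENTORY_MAX_RECENT_RELAY{3500};
static constexpr unsigned BROADCAST_MAX_HARD_CAP{1000};

static_assert(BROADCAST_MAX_HARD_CAP <= MAX_PEER_TX_ANNOUNCEMENTS,
              "do not announce more per INV than a peer is willing to track");
static_assert(BROADCAST_MAX_HARD_CAP <= INVENTORY_MAX_RECENT_RELAY,
              "announced txids should fit in m_recently_announced_invs");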

Member

@glozow glozow left a comment

code review ACK 5b34060

Member

@dergoegge dergoegge left a comment

utACK 5b34060

The code and approach looks good to me.

(my comments can be addressed in a follow-up)

@@ -5666,7 +5666,9 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
 // especially since we have many peers and some will draw much shorter delays.
 unsigned int nRelayedTransactions = 0;
 LOCK(tx_relay->m_bloom_filter_mutex);
-while (!vInvTx.empty() && nRelayedTransactions < INVENTORY_BROADCAST_MAX) {
+size_t broadcast_max{INVENTORY_BROADCAST_MAX + (tx_relay->m_tx_inventory_to_send.size()/1000)*5};
Member

  • INVENTORY_BROADCAST_MAX should be renamed or have its comment amended
  • clang-format?
  • Maybe add the commit description as a comment here

Member

@dergoegge INVENTORY_BROADCAST_MAX has been poorly named for a while. It applies to inbound peers, but to outbound peers we send 5/2x as many.

@@ -5666,7 +5666,9 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
 // especially since we have many peers and some will draw much shorter delays.
 unsigned int nRelayedTransactions = 0;
 LOCK(tx_relay->m_bloom_filter_mutex);
-while (!vInvTx.empty() && nRelayedTransactions < INVENTORY_BROADCAST_MAX) {
+size_t broadcast_max{INVENTORY_BROADCAST_MAX + (tx_relay->m_tx_inventory_to_send.size()/1000)*5};
+broadcast_max = std::min<size_t>(1000, broadcast_max);
Member

Explaining that in a comment would be nice

@@ -5666,7 +5666,9 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
 // especially since we have many peers and some will draw much shorter delays.
 unsigned int nRelayedTransactions = 0;
 LOCK(tx_relay->m_bloom_filter_mutex);
-while (!vInvTx.empty() && nRelayedTransactions < INVENTORY_BROADCAST_MAX) {
+size_t broadcast_max{INVENTORY_BROADCAST_MAX + (tx_relay->m_tx_inventory_to_send.size()/1000)*5};
Member

Suggested change
-size_t broadcast_max{INVENTORY_BROADCAST_MAX + (tx_relay->m_tx_inventory_to_send.size()/1000)*5};
+size_t broadcast_max{INVENTORY_BROADCAST_MAX + (tx_relay->m_tx_inventory_to_send.size() / 1000) * count_seconds(INBOUND_INVENTORY_BROADCAST_INTERVAL)};

@fanquake fanquake merged commit 137a98c into bitcoin:master May 11, 2023
Contributor

@brunoerg brunoerg left a comment

post-merge crACK 5b34060

@@ -755,11 +755,16 @@ void CTxMemPool::check(const CCoinsViewCache& active_coins_tip, int64_t spendhei

 bool CTxMemPool::CompareDepthAndScore(const uint256& hasha, const uint256& hashb, bool wtxid)
 {
+    /* Return `true` if hasha should be considered sooner than hashb. Namely when:
+     *   a is not in the mempool, but b is
Contributor

Could we have these cases covered by a unit test?

Member

Contributor

Cool, @glozow.

fanquake pushed a commit to fanquake/bitcoin that referenced this pull request May 11, 2023
We use CompareDepthAndScore to choose an order of txs to inv. Rather
than sorting txs that have been evicted from the mempool at the end
of the list, sort them at the beginning so they are removed from
the queue immediately.

Github-Pull: bitcoin#27610
Rebased-From: 228e920
fanquake pushed a commit to fanquake/bitcoin that referenced this pull request May 11, 2023
If transactions are being added to the mempool at a rate faster than 7tx/s
(INVENTORY_BROADCAST_PER_SECOND) then peers' inventory_to_send queue can
become relatively large. If this happens, increase the number of txids
we include in an INV message (normally capped at 35) by 5 for each 1000
txids in the queue.

This will tend to clear a temporary excess out reasonably quickly; an
excess of 4000 invs to send will be cleared down to 1000 in about 30
minutes, while an excess of 20000 invs would be cleared down to 1000 in
about 60 minutes.

Github-Pull: bitcoin#27610
Rebased-From: 5b34060
fanquake pushed a commit to fanquake/bitcoin that referenced this pull request May 11, 2023
We use CompareDepthAndScore to choose an order of txs to inv. Rather
than sorting txs that have been evicted from the mempool at the end
of the list, sort them at the beginning so they are removed from
the queue immediately.

Github-Pull: bitcoin#27610
Rebased-From: 228e920
fanquake pushed a commit to fanquake/bitcoin that referenced this pull request May 11, 2023
If transactions are being added to the mempool at a rate faster than 7tx/s
(INVENTORY_BROADCAST_PER_SECOND) then peers' inventory_to_send queue can
become relatively large. If this happens, increase the number of txids
we include in an INV message (normally capped at 35) by 5 for each 1000
txids in the queue.

This will tend to clear a temporary excess out reasonably quickly; an
excess of 4000 invs to send will be cleared down to 1000 in about 30
minutes, while an excess of 20000 invs would be cleared down to 1000 in
about 60 minutes.

Github-Pull: bitcoin#27610
Rebased-From: 5b34060
Member

@maflcko maflcko left a comment

first commit lgtm. Left one question. Didn't look too long at the second commit.

 indexed_transaction_set::const_iterator j = wtxid ? get_iter_from_wtxid(hashb) : mapTx.find(hashb);
-if (j == mapTx.end()) return true;
+if (j == mapTx.end()) return false;
+indexed_transaction_set::const_iterator i = wtxid ? get_iter_from_wtxid(hasha) : mapTx.find(hasha);
Member

question (feel free to ignore): I tried to figure out if there was a reason why this was re-ordered, but couldn't find one. An equivalent patch of simply changing the literals false to true and true to false in this function should have achieved the same, right?

Contributor Author

The CompareTxMemPoolEntryByScore comparator returns false on equality (ie, it's "less than" not "less than or equal"), so this preserves that behaviour.
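A minimal illustration of that point (my own sketch, not the mempool code): when both transactions are missing, checking b first keeps comp(a, b) and comp(b, a) both false, as a strict weak ordering requires, whereas merely flipping the literals would make both calls return true.

#include <cassert>

// Simplified stand-in for CompareDepthAndScore: "a should be considered sooner than b".
bool CompareSooner(bool a_in_mempool, bool b_in_mempool)
{
    if (!b_in_mempool) return false; // b missing: a is never strictly sooner, even if a is missing too
    if (!a_in_mempool) return true;  // a missing, b present: a sorts sooner
    return false;                    // both present: fall through to the score comparison (elided here)
}

int main()
{
    assert(!CompareSooner(false, false));                              // both missing: compares false both ways
    assert(CompareSooner(false, true) && !CompareSooner(true, false)); // missing sorts before present
}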

Member

Thanks, I should have just read the documentation: https://en.cppreference.com/w/cpp/named_req/Compare , which would have answered this.

achow101 added a commit that referenced this pull request May 11, 2023
97f5e28 doc: update release notes for 24.1rc3 (fanquake)
7e9c7ae doc: update manual pages for v24.1rc3 (fanquake)
abb9fa0 build: bump version to v24.1rc3 (fanquake)
128da6e net_processing: Boost inv trickle rate (Anthony Towns)
a9a861a txmempool: have CompareDepthAndScore sort missing txs first (Anthony Towns)
ec7cd33 p2p: Avoid prematurely clearing download state for other peers (Suhas Daftuar)

Pull request description:

  Backports for rc3. Currently:
  * #27608
  * #27610

ACKs for top commit:
  josibake:
    ACK 97f5e28
  dergoegge:
    ACK 97f5e28
  achow101:
    ACK 97f5e28
  glozow:
    ACK 97f5e28
  brunoerg:
    ACK 97f5e28
  hebasto:
    ACK 97f5e28, commits were backported locally, got zero diff.

Tree-SHA512: 09572285ed1e8169d7e77d12ec438586dab54c86064de85d0e743564e601686f884bf74f2bf8ed1be73bddcd7db6da4277c6dd6b9732e7eca383e108f8f37d58
Contributor

@mzumsande mzumsande left a comment

Post-Merge ACK.

I've been running this patch (plus added reporting in getpeerinfo, mzumsande@6932997) for a few days, and have seen neither extremely high backlogs (> 5000) in m_tx_inventory_to_send nor elevated CPU usage.

In order to better monitor the efficacy of this, it might make sense to expose the size of m_tx_inventory_to_send via RPC permanently?

fanquake pushed a commit to fanquake/bitcoin that referenced this pull request May 11, 2023
We use CompareDepthAndScore to choose an order of txs to inv. Rather
than sorting txs that have been evicted from the mempool at the end
of the list, sort them at the beginning so they are removed from
the queue immediately.

Github-Pull: bitcoin#27610
Rebased-From: 228e920
fanquake pushed a commit to fanquake/bitcoin that referenced this pull request May 11, 2023
If transactions are being added to the mempool at a rate faster than 7tx/s
(INVENTORY_BROADCAST_PER_SECOND) then peers' inventory_to_send queue can
become relatively large. If this happens, increase the number of txids
we include in an INV message (normally capped at 35) by 5 for each 1000
txids in the queue.

This will tend to clear a temporary excess out reasonably quickly; an
excess of 4000 invs to send will be cleared down to 1000 in about 30
minutes, while an excess of 20000 invs would be cleared down to 1000 in
about 60 minutes.

Github-Pull: bitcoin#27610
Rebased-From: 5b34060
fanquake added a commit that referenced this pull request May 11, 2023
49a2d66 doc: update manual pages for v25.0rc2 (fanquake)
3ea4a11 build: bump version to v25.0rc2 (fanquake)
7ef71e3 net_processing: Boost inv trickle rate (Anthony Towns)
1adbcd3 txmempool: have CompareDepthAndScore sort missing txs first (Anthony Towns)
9a23079 p2p: Avoid prematurely clearing download state for other peers (Suhas Daftuar)

Pull request description:

  Backports for rc2. Currently:
  * #27608
  * #27610

ACKs for top commit:
  achow101:
    ACK 49a2d66

Tree-SHA512: a1a7678e16136636ec8a232d12630529639bae3b577769b5a5fd204dda234a5e588f3d4dfebf4d7abe7111d13cc0714f9ccdea0a858fe821a7146e6a697308d3
sidhujag pushed a commit to syscoin/syscoin that referenced this pull request May 11, 2023
5b34060 net_processing: Boost inv trickle rate (Anthony Towns)
228e920 txmempool: have CompareDepthAndScore sort missing txs first (Anthony Towns)

Pull request description:

  Couple of performance improvements when draining the inventory-to-send queue:

   * drop txs that have already been evicted from the mempool (or included in a block) immediately, rather than at the end of processing
   * marginally increase outgoing trickle rate during spikes in tx volume

ACKs for top commit:
  willcl-ark:
    ACK 5b34060
  instagibbs:
    ACK bitcoin@5b34060
  darosior:
    utACK 5b34060
  glozow:
    code review ACK 5b34060
  dergoegge:
    utACK 5b34060

Tree-SHA512: 155cd3b5d150ba3417c1cd126f2be734497742e85358a19c9d365f4f97c555ff9e846405bbeada13c3575b3713c3a7eb2f780879a828cbbf032ad9a6e5416b30
@@ -755,11 +755,16 @@ void CTxMemPool::check(const CCoinsViewCache& active_coins_tip, int64_t spendhei

 bool CTxMemPool::CompareDepthAndScore(const uint256& hasha, const uint256& hashb, bool wtxid)
 {
+    /* Return `true` if hasha should be considered sooner than hashb. Namely when:
Member

For a followup it would be good to clarify what "should be considered" is for. It's somewhat confusing to reason about because the call site reverses a and b.

The first case in the comment is "a is not in the mempool, but b is", but the line of code checks if b is not in the mempool.

Member

The first case in the comment is "a is not in the mempool, but b is", but the line of code checks if b is not in the mempool.

If b is not in the mempool, the function must return false, see #27610 (comment)

If b is in the mempool, the function will continue over the early return false, and checks to see if a is missing, in which case it should be considered sooner.

Member

@Sjors Sjors May 12, 2023

"should be considered" for what

For processing, which begins by dropping a transaction from m_tx_inventory_to_send before deciding whether to send it. Since we stop processing after having sent broadcast_max transactions, it makes sense to do the cleaning first.

A return value of true means hasha is lower, i.e. would be at the bottom of the (max)heap and processed last. But because we swap a and b when calling this, it works. That was introduced in the second commit of #7840.

We could prune non-mempool transactions even before making a heap, but that would double the mempool lookups, and the flame graph suggests that make_heap is already plenty fast.
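A tiny sketch of the swap-at-the-call-site pattern described above (my own simplified example, not the net_processing code): the heap comparator passes its arguments to the "sooner" comparison in reverse, so the entry ranked sooner ends up on top of the max-heap and is popped first.

#include <algorithm>
#include <cassert>
#include <vector>

// Stand-in for CompareDepthAndScore: "a should be considered sooner than b".
bool CompareSooner(int a, int b) { return a < b; }

int main()
{
    std::vector<int> to_send{5, 1, 3};
    // Note the swapped arguments, mirroring the comparator used with the inv-to-send heap.
    auto heap_cmp = [](int a, int b) { return CompareSooner(b, a); };

    std::make_heap(to_send.begin(), to_send.end(), heap_cmp);
    std::pop_heap(to_send.begin(), to_send.end(), heap_cmp);

    // The element ranked "sooner" (here the smallest) comes off the heap first.
    assert(to_send.back() == 1);
}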

fanquake added a commit that referenced this pull request May 12, 2023
a26ff20 doc: add initial release notes for v23.2 (fanquake)
60edfd5 doc: update manual pages for v23.2rc1 (fanquake)
b93814b doc: update version in bips.md to v23.2 (fanquake)
67bbe6d build: bump version to v23.2rc1 (fanquake)
06731d1 net_processing: Boost inv trickle rate (Anthony Towns)
d0a2c87 txmempool: have CompareDepthAndScore sort missing txs first (Anthony Towns)
ce8f812 p2p: Avoid prematurely clearing download state for other peers (Suhas Daftuar)

Pull request description:

  Final backports for `rc1`. Currently:
  * #27608 (not a clean cherry-pick)
  * #27610 (second commit is not clean)

ACKs for top commit:
  achow101:
    ACK a26ff20
  dergoegge:
    ACK a26ff20
  ajtowns:
    utACK a26ff20

Tree-SHA512: 59e43ec4d5004b3543d5c0366c9dc8c5f8a6a777b147628ebc0c03aeb0846312a7780376ebf40f389e3403e4501ba2b70bb97925479670bee13c89e5b6925137
@Sjors
Member

Sjors commented May 12, 2023

Partial utACK, for 228e920 (i.e. the changes in CompareDepthAndScore). Took me a while to wrap my head around it.

@pinheadmz
Member

[screenshot: b-msghand CPU usage]

I've been running this branch for 7 days on a VPS and noticed this morning the CPU on b-msghand is back up to 100%. It was about that high running v24 release but dropped to 30% or so when I first switched to this branch and restarted.

@willcl-ark
Member

@pinheadmz I am also still running this patch, but I still see pretty stable utilisation in the range of ~2-12%, currently with 88 inbound peers.

[screenshot: CPU usage over time]

@0xB10C
Contributor

0xB10C commented May 18, 2023

I've been running this branch for 7 days on a VPS and noticed this morning the CPU on b-msghand is back up to 100%. It was about that high running v24 release but dropped to 30% or so when I first switched to this branch and restarted.

@pinheadmz I've been running the patch on multiple nodes for a week now and haven't seen 100% CPU usage in the b-msghand thread again. If you haven't restarted or if it happens again, it would be helpful to see which functions are slow. perf top -p $(pidof bitcoind) should do the trick.

@maflcko
Member

maflcko commented May 18, 2023

Could also make sense to double check the debug.log to ensure you restarted bitcoind after compiling? 😅

@pinheadmz
Member

Could also make sense to double check the debug.log to ensure you restarted bitcoind after compiling? 😅

Phew! That would've been embarrassing.

2023-05-10T15:39:40Z Bitcoin Core version v25.99.0-5b3406094f26 (debug build)

perf top -p $(pidof bitcoind) should do the trick.

Not familiar with this tool but it looks cool! This is at the top. Every other line in the output is < 0.10%

  99.56%  bitcoind             [.] boost::multi_index::detail::safe_iterator_base::detach

@maflcko
Member

maflcko commented May 18, 2023

(debug build)

See #27700 (comment) ?

@pinheadmz
Member

ah thanks I forgot I configured that way, will follow the other threads

losh11 added a commit to litecoin-project/litecoin that referenced this pull request Mar 3, 2024
losh11 added a commit to litecoin-project/litecoin that referenced this pull request Mar 6, 2024
losh11 added a commit to litecoin-project/litecoin that referenced this pull request Mar 6, 2024
arnout pushed a commit to buildroot/buildroot that referenced this pull request Mar 11, 2024
For all release notes since 0.21.2, see [1].

This commit also:
- removes the package patches, all included in this release,
- removes the patch entry in ".checkpackageignore",
- adds a comment about pgp signature check in the hash file,
- updates the license hash, due to year update,
- adds the requirement on gcc >= 9, see [2],
- removes the dependency on !BR2_TOOLCHAIN_HAS_GCC_BUG_64735 which
  is no longer needed, due to the gcc version requirement.

Fixes:
- [3]

Note: this CVE-2023-33297 is not explicitly mentioned in the Bitcoin
core release notes. It is not either appearing in CVE list at [4].
The change corresponding to this vulnerability is "#27610 Improve
performance of p2p inv to send queues" in the version 24.1 release
note. See [5] and [6].

[1] https://github.com/bitcoin/bitcoin/tree/v26.0/doc/release-notes
[2] https://github.com/bitcoin/bitcoin/blob/v26.0/doc/dependencies.md
[3] https://nvd.nist.gov/vuln/detail/CVE-2023-33297
[4] https://en.bitcoin.it/wiki/Common_Vulnerabilities_and_Exposures
[5] https://github.com/bitcoin/bitcoin/blob/v26.0/doc/release-notes/release-notes-24.1.md?plain=1#L45
[6] bitcoin/bitcoin#27610

Signed-off-by: Julien Olivain <ju.o@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
arnout pushed a commit to buildroot/buildroot that referenced this pull request Mar 21, 2024
For all release notes since 0.21.2, see [1].

This commit also:
- removes the package patches, all included in this release,
- removes the patch entry in ".checkpackageignore",
- adds a comment about pgp signature check in the hash file,
- updates the license hash, due to year update,
- adds the requirement on gcc >= 9, see [2],
- removes the dependency on !BR2_TOOLCHAIN_HAS_GCC_BUG_64735 which
  is no longer needed, due to the gcc version requirement.

Fixes:
- [3]

Note: this CVE-2023-33297 is not explicitly mentioned in the Bitcoin
core release notes. It is not either appearing in CVE list at [4].
The change corresponding to this vulnerability is "#27610 Improve
performance of p2p inv to send queues" in the version 24.1 release
note. See [5] and [6].

[1] https://github.com/bitcoin/bitcoin/tree/v26.0/doc/release-notes
[2] https://github.com/bitcoin/bitcoin/blob/v26.0/doc/dependencies.md
[3] https://nvd.nist.gov/vuln/detail/CVE-2023-33297
[4] https://en.bitcoin.it/wiki/Common_Vulnerabilities_and_Exposures
[5] https://github.com/bitcoin/bitcoin/blob/v26.0/doc/release-notes/release-notes-24.1.md?plain=1#L45
[6] bitcoin/bitcoin#27610

Signed-off-by: Julien Olivain <ju.o@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
(cherry picked from commit a3a88ff)
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
arnout pushed a commit to buildroot/buildroot that referenced this pull request Mar 21, 2024
For all release notes since 0.21.2, see [1].

This commit also:
- removes the package patches, all included in this release,
- removes the patch entry in ".checkpackageignore",
- adds a comment about pgp signature check in the hash file,
- updates the license hash, due to year update,
- adds the requirement on gcc >= 9, see [2],
- removes the dependency on !BR2_TOOLCHAIN_HAS_GCC_BUG_64735 which
  is no longer needed, due to the gcc version requirement.

Fixes:
- [3]

Note: this CVE-2023-33297 is not explicitly mentioned in the Bitcoin
core release notes. It is not either appearing in CVE list at [4].
The change corresponding to this vulnerability is "#27610 Improve
performance of p2p inv to send queues" in the version 24.1 release
note. See [5] and [6].

[1] https://github.com/bitcoin/bitcoin/tree/v26.0/doc/release-notes
[2] https://github.com/bitcoin/bitcoin/blob/v26.0/doc/dependencies.md
[3] https://nvd.nist.gov/vuln/detail/CVE-2023-33297
[4] https://en.bitcoin.it/wiki/Common_Vulnerabilities_and_Exposures
[5] https://github.com/bitcoin/bitcoin/blob/v26.0/doc/release-notes/release-notes-24.1.md?plain=1#L45
[6] bitcoin/bitcoin#27610

Signed-off-by: Julien Olivain <ju.o@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
(cherry picked from commit a3a88ff)
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
arnout pushed a commit to buildroot/buildroot that referenced this pull request Mar 21, 2024
For all release notes since 0.21.2, see [1].

This commit also:
- removes the package patches, all included in this release,
- removes the patch entry in ".checkpackageignore",
- adds a comment about pgp signature check in the hash file,
- updates the license hash, due to year update,
- adds the requirement on gcc >= 9, see [2],
- removes the dependency on !BR2_TOOLCHAIN_HAS_GCC_BUG_64735 which
  is no longer needed, due to the gcc version requirement.

Fixes:
- [3]

Note: this CVE-2023-33297 is not explicitly mentioned in the Bitcoin
core release notes. It is not either appearing in CVE list at [4].
The change corresponding to this vulnerability is "#27610 Improve
performance of p2p inv to send queues" in the version 24.1 release
note. See [5] and [6].

[1] https://github.com/bitcoin/bitcoin/tree/v26.0/doc/release-notes
[2] https://github.com/bitcoin/bitcoin/blob/v26.0/doc/dependencies.md
[3] https://nvd.nist.gov/vuln/detail/CVE-2023-33297
[4] https://en.bitcoin.it/wiki/Common_Vulnerabilities_and_Exposures
[5] https://github.com/bitcoin/bitcoin/blob/v26.0/doc/release-notes/release-notes-24.1.md?plain=1#L45
[6] bitcoin/bitcoin#27610

Signed-off-by: Julien Olivain <ju.o@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
(cherry picked from commit a3a88ff)
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
digirayc added a commit to digirayc/cyberyen that referenced this pull request Apr 1, 2024
@bitcoin bitcoin locked and limited conversation to collaborators May 17, 2024