headerssync: Correct unrealistic unit test behavior #32579
base: master
Conversation
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.
Code Coverage & Benchmarks: for details see https://corecheck.dev/bitcoin/bitcoin/pulls/32579.
Reviews: see the guideline for information on the review process. If your review is incorrectly listed, please react with 👎 to this comment and the bot will ignore it on the next update.
Conflicts: Reviewers, this pull request conflicts with the following ones. If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
Force-pushed from 954971a to 315736d (compare).
🚧 At least one of the CI tasks failed.
Hints: Try to run the tests locally, according to the documentation. However, a CI failure may still …
Leave a comment here if you need help tracking down a confusing failure.
Concept ACK – this PR replaces the imminently outdated hard-coded test target of 15'000 headers with a value derived from the live REDOWNLOAD_BUFFER_SIZE constant.
Moving both header-sync tuning constants into a shared header, and making the unit test reference them, keeps the test exercising the 'buffer-full' path as the parameters grow each release.
Recommendations:
- split the series into more commits, separating low-risk refactors from ones that change behaviour: strict move-only; refactor/cleanup with no behavioural change; extra assertions; wiring the test to the new parameter header;
- in the test file, wrap each independent ProcessNextHeaders call inside {} blocks and use structured bindings so temporary variables aren't reused accidentally (see the sketch after this list);
- rename the new file to headerssync_params.h (underscore instead of -) for consistency with other similar headers;
- update contrib/devtools/README.md to reference the header instead of the .cpp;
- tweak a few comments whose wording is unclear or slightly misleading.
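A toy, self-contained sketch of the scoping and structured-binding pattern suggested above -- the ProcessingResult/ProcessNextHeaders stand-ins are illustrative placeholders, not the actual test or HeadersSyncState code:

```cpp
// Illustrative only: each call's results live in their own block, so a later
// check cannot accidentally reuse an earlier temporary.
#include <cassert>
#include <vector>

struct ProcessingResult {            // stand-in for HeadersSyncState::ProcessingResult
    std::vector<int> pow_validated_headers;
    bool success{false};
    bool request_more{false};
};

ProcessingResult ProcessNextHeaders(bool full_headers_message) // stand-in for the real call
{
    return {.pow_validated_headers = {}, .success = true, .request_more = full_headers_message};
}

int main()
{
    {
        const auto& [headers, success, request_more]{ProcessNextHeaders(/*full_headers_message=*/true)};
        assert(success && request_more && headers.empty());
    }
    {
        const auto& [headers, success, request_more]{ProcessNextHeaders(/*full_headers_message=*/false)};
        assert(success && !request_more && headers.empty());
    }
}
```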
Concept ACK
Force-pushed from 315736d to 486e8ca (compare).
Thanks for your feedback so far @l0rinc! Took most of your suggestions (#32579 (review)).
Force-pushed from 486e8ca to bbe5609 (compare).
Concept ACK
Force-pushed from bbe5609 to 42e7460 (compare).
Latest push (42e7460, compare):
- Relaxes the hardcoding of the header commitment period and the redownload buffer size in HeadersSyncState. This makes things more consistent with the constructor already taking Consensus::Params (implying different chains are supported).
- Moves specification of the above constants from the top of headerssync.cpp into kernel/chainparams.cpp (similar to many other constants).
- Returns the chainwork test suite back to only using an integer literal for TARGET_BLOCKS (instead of depending on an #included constant). Also now uses test-local values for commitment period and redownload buffer size.
- Adds a warning in HeadersSyncState::PopHeadersReadyForAcceptance() for the case when we reach minimum PoW before exceeding the redownload buffer size, together with tests.

(Updated PR description).
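For orientation, a minimal sketch of the kind of parameters struct this introduces, inferred from how HeadersSyncParams is initialized elsewhere in this thread; the field types and comments are assumptions rather than the actual header:

```cpp
// Sketch only -- inferred from initializations like
//   HeadersSyncParams{.commitment_period = 624, .redownload_buffer_size = 14827}
// seen in the patch below; the real definition (location and field types) may differ.
#include <cstddef>

struct HeadersSyncParams {
    std::size_t commitment_period;       // how often a commitment bit is stored during PRESYNC
    std::size_t redownload_buffer_size;  // headers buffered during REDOWNLOAD before acceptance begins
};
```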
I might have gone a bit overboard with the test review, but I think we can make it a lot less noisy - here's the patch of many of my suggestions for convenience:
Patch
diff --git a/src/kernel/chainparams.cpp b/src/kernel/chainparams.cpp
index d64dd43aba..658ea7f2c3 100644
--- a/src/kernel/chainparams.cpp
+++ b/src/kernel/chainparams.cpp
@@ -193,8 +193,7 @@ public:
// Generated by headerssync-params.py on 2025-03-04.
m_headers_sync_params = HeadersSyncParams{
.commitment_period = 624,
- .redownload_buffer_size = 14827, // 14827/624 = ~23.8 commitments
- };
+ .redownload_buffer_size = 14827}; // ~24 commitments
}
};
diff --git a/src/test/headers_sync_chainwork_tests.cpp b/src/test/headers_sync_chainwork_tests.cpp
index 1f85c69838..ccbbc1be50 100644
--- a/src/test/headers_sync_chainwork_tests.cpp
+++ b/src/test/headers_sync_chainwork_tests.cpp
@@ -13,10 +13,13 @@
#include <validation.h>
#include <cstddef>
+#include <optional>
#include <vector>
#include <boost/test/unit_test.hpp>
+using State = HeadersSyncState::State;
+
constexpr size_t TARGET_BLOCKS{15'000};
constexpr arith_uint256 CHAIN_WORK{TARGET_BLOCKS * 2};
constexpr size_t REDOWNLOAD_BUFFER_SIZE{TARGET_BLOCKS - (MAX_HEADERS_RESULTS + 123)};
@@ -37,18 +40,18 @@ static void FindProofOfWork(CBlockHeader& starting_header)
* prev_time, and with a fixed merkle root hash.
*/
static void GenerateHeaders(std::vector<CBlockHeader>& headers,
- size_t count, const uint256& starting_hash, const int nVersion, int prev_time,
- const uint256& merkle_root, const uint32_t nBits)
+ size_t count, const uint256& starting_hash, int nVersion, int prev_time,
+ const uint256& merkle_root, uint32_t nBits)
{
- uint256 prev_hash = starting_hash;
+ uint256 prev_hash{starting_hash};
while (headers.size() < count) {
headers.emplace_back();
- CBlockHeader& next_header = headers.back();
+ CBlockHeader& next_header{headers.back()};
next_header.nVersion = nVersion;
next_header.hashPrevBlock = prev_hash;
next_header.hashMerkleRoot = merkle_root;
- next_header.nTime = prev_time+1;
+ next_header.nTime = prev_time + 1;
next_header.nBits = nBits;
FindProofOfWork(next_header);
@@ -57,220 +60,167 @@ static void GenerateHeaders(std::vector<CBlockHeader>& headers,
}
}
-static HeadersSyncState CreateState(const CBlockIndex* chain_start, const CChainParams& chain_params)
+static HeadersSyncState CreateState(const CChainParams& chain_params, const HeadersSyncParams& sync_params, const CBlockIndex* chain_start)
{
return {/*id=*/0,
chain_params.GetConsensus(),
- HeadersSyncParams{
- .commitment_period = COMMITMENT_PERIOD,
- .redownload_buffer_size = REDOWNLOAD_BUFFER_SIZE,
- },
+ sync_params,
chain_start,
- /*minimum_required_work=*/CHAIN_WORK};
+ /*minimum_required_work=*/CHAIN_WORK};
}
-BOOST_FIXTURE_TEST_SUITE(headers_sync_chainwork_tests, RegTestingSetup)
-
-// In this test, we construct two sets of headers from genesis, one with
-// sufficient proof of work and one without.
-// 1. We deliver the first set of headers and verify that the headers sync state
-// updates to the REDOWNLOAD phase successfully.
-// 2. Then we deliver the second set of headers and verify that they fail
-// processing (presumably due to commitments not matching).
-static void SneakyRedownload(const CBlockIndex* chain_start,
- const std::vector<CBlockHeader>& first_chain,
- const std::vector<CBlockHeader>& second_chain);
-// 3. Verify that repeating with the first set of headers in both phases is
-// successful.
-static void HappyPath(const CBlockIndex* chain_start,
- const std::vector<CBlockHeader>& first_chain);
-// 4. Finally, repeat the second set of headers in both phases to demonstrate
-// behavior when the chain a peer provides has too little work.
-static void TooLittleWork(const CBlockIndex* chain_start,
- const std::vector<CBlockHeader>& second_chain);
-
-static void TooBigBuffer(const CBlockIndex* chain_start,
- const std::vector<CBlockHeader>& first_chain);
-
-BOOST_AUTO_TEST_CASE(headers_sync_state)
+// Standard set of checks common to all scenarios; macro keeps failure lines at the call-site.
+#define CHECK_RESULT(res, state, exp_state, exp_success, exp_request_more, exp_headers_size) \
+ do { \
+ const auto& [res_headers, res_success, res_request_more]{res}; \
+ BOOST_REQUIRE_EQUAL(state.GetState(), exp_state); \
+ BOOST_CHECK_EQUAL(res_headers.size(), exp_headers_size); \
+ BOOST_CHECK_EQUAL(res_success, exp_success); \
+ BOOST_CHECK_EQUAL(res_request_more, exp_request_more); \
+ } while (false)
+
+// Deliver the first set of headers from genesis to reach REDOWNLOAD,
+// then deliver the second set and verify that commitments don’t match.
+static void SneakyRedownload(const CBlockIndex* chain_start, const std::vector<CBlockHeader>& chain1, const std::vector<CBlockHeader>& chain2)
{
- std::vector<CBlockHeader> first_chain;
- std::vector<CBlockHeader> second_chain;
-
- const auto genesis{Params().GenesisBlock()};
-
- // Generate headers for two different chains (using differing merkle roots
- // to ensure the headers are different).
- GenerateHeaders(first_chain, TARGET_BLOCKS - 1, genesis.GetHash(), genesis.nVersion,
- genesis.nTime, /*merkle_root=*/uint256::ZERO, genesis.nBits);
- GenerateHeaders(second_chain, TARGET_BLOCKS - 2, genesis.GetHash(), genesis.nVersion,
- genesis.nTime, /*merkle_root=*/uint256::ONE, genesis.nBits);
-
- const CBlockIndex* chain_start = WITH_LOCK(::cs_main, return m_node.chainman->m_blockman.LookupBlockIndex(genesis.GetHash()));
+ HeadersSyncState state{CreateState(Params(), HeadersSyncParams{COMMITMENT_PERIOD, REDOWNLOAD_BUFFER_SIZE}, chain_start)};
+ const auto genesis_hash{Params().GenesisBlock().GetHash()};
- SneakyRedownload(chain_start, first_chain, second_chain);
- HappyPath(chain_start, first_chain);
- TooLittleWork(chain_start, second_chain);
- TooBigBuffer(chain_start, first_chain);
+ const std::span headers{chain1};
+ CHECK_RESULT(state.ProcessNextHeaders(headers.first(1), /*full_headers_message=*/true),
+ state, State::PRESYNC,
+ /*exp_success=*/true,
+ /*exp_request_more=*/true,
+ /*exp_headers_size=*/0);
+ BOOST_CHECK_EQUAL(state.NextHeadersRequestLocator().vHave.front(), chain1.front().GetHash());
+
+ // Pretend the message is still "full", so we don't abort.
+ // This chain should look valid, and we should have met the proof-of-work
+ // requirement during PRESYNC and transitioned to REDOWNLOAD.
+ CHECK_RESULT(state.ProcessNextHeaders(headers.last(headers.size() - 1), /*full_headers_message=*/true),
+ state, State::REDOWNLOAD,
+ /*exp_success=*/true,
+ /*exp_request_more=*/true,
+ /*exp_headers_size=*/0);
+ BOOST_CHECK_EQUAL(state.NextHeadersRequestLocator().vHave.front(), genesis_hash);
+
+ // Try to sneakily feed back the second chain during REDOWNLOAD.
+ CHECK_RESULT(state.ProcessNextHeaders(chain2, /*full_headers_message=*/true),
+ state, State::FINAL,
+ /*exp_success=*/false,
+ /*exp_request_more=*/false,
+ /*exp_headers_size=*/0);
}
-static void SneakyRedownload(const CBlockIndex* chain_start,
- const std::vector<CBlockHeader>& first_chain,
- const std::vector<CBlockHeader>& second_chain)
+// Verify that repeating with the first set of headers in both phases is successful.
+static void HappyPath(const CBlockIndex* chain_start, const std::vector<CBlockHeader>& chain1)
{
- // Feed the first chain to HeadersSyncState, by delivering 1 header
- // initially and then the rest.
- HeadersSyncState hss{CreateState(chain_start, Params())};
- {
- // Just feed one header and check state.
- auto result{hss.ProcessNextHeaders(std::span{first_chain.begin(), 1}, true)};
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::PRESYNC);
- BOOST_CHECK(result.success);
- BOOST_CHECK(result.request_more);
- BOOST_CHECK_EQUAL(result.pow_validated_headers.size(), 0);
- BOOST_CHECK_EQUAL(hss.NextHeadersRequestLocator().vHave.front(), first_chain.front().GetHash());
- }
-
- {
- // Pretend the message is still "full", so we don't abort.
- auto result{hss.ProcessNextHeaders(std::span{first_chain.begin() + 1, first_chain.end()}, true)};
- // This chain should look valid, and we should have met the proof-of-work
- // requirement during PRESYNC and transitioned to REDOWNLOAD.
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::REDOWNLOAD);
- BOOST_CHECK(result.success);
- BOOST_CHECK(result.request_more);
- BOOST_CHECK_EQUAL(result.pow_validated_headers.size(), 0);
- // The locator should reset to genesis.
- BOOST_CHECK_EQUAL(hss.NextHeadersRequestLocator().vHave.front(), Params().GenesisBlock().GetHash());
- }
+ // Feed the first chain twice.
+ HeadersSyncState state{CreateState(Params(), HeadersSyncParams{COMMITMENT_PERIOD, REDOWNLOAD_BUFFER_SIZE}, chain_start)};
+ const auto genesis_hash{Params().GenesisBlock().GetHash()};
+ const std::span headers{chain1};
- {
- // Try to sneakily feed back the second chain during REDOWNLOAD.
- auto result{hss.ProcessNextHeaders(second_chain, true)};
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::FINAL);
- BOOST_CHECK(!result.success); // Foiled! We detected mismatching headers.
- BOOST_CHECK(!result.request_more);
- BOOST_CHECK_EQUAL(result.pow_validated_headers.size(), 0);
- }
+ // During normal operation we shouldn't get the redownload buffer size warning.
+ ASSERT_NO_DEBUG_LOG(BUFFER_SIZE_WARNING);
+
+ // Switched from PRESYNC to REDOWNLOAD after reaching sufficient work:
+ CHECK_RESULT(state.ProcessNextHeaders(headers, /*full_headers_message=*/true),
+ state, State::REDOWNLOAD,
+ /*exp_success=*/true,
+ /*exp_request_more=*/true,
+ /*exp_headers_size=*/0);
+ BOOST_CHECK_EQUAL(state.NextHeadersRequestLocator().vHave.front(), genesis_hash);
+
+ // Process only so that the internal threshold isn't met, meaning validated headers shouldn't be returned yet:
+ CHECK_RESULT(state.ProcessNextHeaders(headers.first(REDOWNLOAD_BUFFER_SIZE), /*full_headers_message=*/true),
+ state, State::REDOWNLOAD,
+ /*exp_success=*/true,
+ /*exp_request_more=*/true,
+ /*exp_headers_size=*/0);
+ BOOST_CHECK_EQUAL(state.NextHeadersRequestLocator().vHave.front(), chain1[REDOWNLOAD_BUFFER_SIZE - 1].GetHash());
+
+ // Next header should make us exceed the threshold, but still not be done:
+ const auto res_threshold{state.ProcessNextHeaders(headers.subspan(REDOWNLOAD_BUFFER_SIZE, 1), /*full_headers_message=*/true)};
+ CHECK_RESULT(res_threshold,
+ state, State::REDOWNLOAD,
+ /*exp_success=*/true,
+ /*exp_request_more=*/true,
+ /*exp_headers_size=*/1);
+ BOOST_CHECK_EQUAL(state.NextHeadersRequestLocator().vHave.front(), chain1[REDOWNLOAD_BUFFER_SIZE].GetHash());
+
+ // Feed in remaining headers, meeting the work threshold again and completing the REDOWNLOAD phase:
+ CHECK_RESULT(state.ProcessNextHeaders(headers.subspan(REDOWNLOAD_BUFFER_SIZE + 1), /*full_headers_message=*/true),
+ state, State::FINAL,
+ /*exp_success=*/true,
+ /*exp_request_more=*/false,
+ /*exp_headers_size=*/chain1.size() - 1);
+ BOOST_CHECK_EQUAL(res_threshold.pow_validated_headers.front().hashPrevBlock, genesis_hash);
}
-static void HappyPath(const CBlockIndex* chain_start,
- const std::vector<CBlockHeader>& first_chain)
+// Repeat the second set of headers in both phases to demonstrate
+// behavior when the chain a peer provides has too little work.
+static void TooLittleWork(const CBlockIndex* chain_start, const std::vector<CBlockHeader>& chain2)
{
- // This time we feed the first chain twice.
- HeadersSyncState hss{CreateState(chain_start, Params())};
-
- // During normal operation we shouldn't get the redownload buffer size warning.
- DebugLogHelper forbidden{BUFFER_SIZE_WARNING, [](const std::string* str) {
- if (str == nullptr) {
- return false; // Disable exception for not finding a match.
- } else {
- throw std::runtime_error{strprintf("'%s' should not exist in debug log\n", str)};
- }
- }};
-
- const auto genesis_hash{Params().GenesisBlock().GetHash()};
- {
- auto result{hss.ProcessNextHeaders(first_chain, true)};
- // Switched from PRESYNC to REDOWNLOAD after reaching sufficient work:
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::REDOWNLOAD);
- BOOST_CHECK(result.success);
- BOOST_CHECK(result.request_more);
- BOOST_CHECK_EQUAL(result.pow_validated_headers.size(), 0);
- // The locator should reset to genesis.
- BOOST_CHECK_EQUAL(hss.NextHeadersRequestLocator().vHave.front(), genesis_hash);
- }
-
- {
- // Process only so that the internal threshold isn't met, meaning validated
- // headers shouldn't be returned yet:
- auto result{hss.ProcessNextHeaders(std::span{first_chain.begin(), REDOWNLOAD_BUFFER_SIZE}, true)};
- // Not done:
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::REDOWNLOAD);
- BOOST_CHECK(result.success);
- BOOST_CHECK(result.request_more);
- BOOST_CHECK_EQUAL(result.pow_validated_headers.size(), 0);
- BOOST_CHECK_EQUAL(hss.NextHeadersRequestLocator().vHave.front(), first_chain[REDOWNLOAD_BUFFER_SIZE - 1].GetHash());
- }
-
- CBlockHeader first_after_genesis;
- {
- // Next header should make us exceed the threshold, but still not be done:
- auto result{hss.ProcessNextHeaders(std::span{first_chain.begin() + REDOWNLOAD_BUFFER_SIZE, 1}, true)};
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::REDOWNLOAD);
- BOOST_CHECK(result.success);
- BOOST_CHECK(result.request_more);
- BOOST_REQUIRE_EQUAL(result.pow_validated_headers.size(), 1);
- BOOST_CHECK_EQUAL(hss.NextHeadersRequestLocator().vHave.front(), first_chain[REDOWNLOAD_BUFFER_SIZE].GetHash());
- first_after_genesis = result.pow_validated_headers.front();
- BOOST_CHECK_EQUAL(first_after_genesis.hashPrevBlock, genesis_hash);
- }
-
- {
- // Feed in remaining headers, meeting the work threshold again and
- // completing the REDOWNLOAD phase.
- auto result{hss.ProcessNextHeaders(std::span{first_chain.begin() + REDOWNLOAD_BUFFER_SIZE + 1, first_chain.end()}, true)};
- // Nothing left for the sync logic to do:
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::FINAL);
- BOOST_CHECK(result.success);
- BOOST_CHECK(!result.request_more);
- // All headers except the one already returned above:
- BOOST_REQUIRE_EQUAL(result.pow_validated_headers.size(), first_chain.size() - 1);
- BOOST_CHECK_EQUAL(result.pow_validated_headers.front().hashPrevBlock, first_after_genesis.GetHash());
- }
+ // Verify that just trying to process the second chain would not succeed (too little work).
+ HeadersSyncState state{CreateState(Params(), HeadersSyncParams{COMMITMENT_PERIOD, REDOWNLOAD_BUFFER_SIZE}, chain_start)};
+ const std::span headers{chain2};
+
+ // Pretend just the first message is "full", so we don't abort.
+ CHECK_RESULT(state.ProcessNextHeaders(headers.first(1), /*full_headers_message=*/true),
+ state, State::PRESYNC,
+ /*exp_success=*/true,
+ /*exp_request_more=*/true,
+ /*exp_headers_size=*/0);
+
+ // Non-full message causes sync to end with no headers accepted.
+ CHECK_RESULT(state.ProcessNextHeaders(headers.subspan(1), /*full_headers_message=*/false),
+ state, State::FINAL,
+ /*exp_success=*/true,
+ /*exp_request_more=*/false,
+ /*exp_headers_size=*/0);
}
-static void TooLittleWork(const CBlockIndex* chain_start,
- const std::vector<CBlockHeader>& second_chain)
+// Scenario that intentionally uses an oversized buffer to trigger the warning.
+static void TooBigBuffer(const CBlockIndex* chain_start, const std::vector<CBlockHeader>& chain1)
{
- // Verify that just trying to process the second chain would not succeed
- // (too little work).
- HeadersSyncState hss{CreateState(chain_start, Params())};
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::PRESYNC);
- {
- // Pretend just the first message is "full", so we don't abort.
- auto result{hss.ProcessNextHeaders(std::span{second_chain.begin(), 1}, true)};
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::PRESYNC);
- BOOST_CHECK(result.success);
- BOOST_CHECK(result.request_more);
- BOOST_CHECK_EQUAL(result.pow_validated_headers.size(), 0);
- }
-
- {
- // Tell the sync logic that the headers message was not full, implying no
- // more headers can be requested. For a low-work-chain, this should cause
- // the sync to end with no headers for acceptance.
- auto result{hss.ProcessNextHeaders(std::span{second_chain.begin() + 1, second_chain.end()}, false)};
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::FINAL);
- BOOST_CHECK(!result.request_more);
- BOOST_CHECK_EQUAL(result.pow_validated_headers.size(), 0);
- // Nevertheless, no validation errors should have been detected with the
- // chain:
- BOOST_CHECK(result.success);
- }
+ // Intentionally too big redownload buffer in order to trigger warning.
+ HeadersSyncState state{CreateState(Params(), HeadersSyncParams{COMMITMENT_PERIOD, chain1.size()}, chain_start)};
+
+ CHECK_RESULT(state.ProcessNextHeaders(chain1, /*full_headers_message=*/true),
+ state, State::REDOWNLOAD,
+ /*exp_success=*/true,
+ /*exp_request_more=*/true,
+ /*exp_headers_size=*/0);
+
+ ASSERT_DEBUG_LOG(BUFFER_SIZE_WARNING);
+ CHECK_RESULT(state.ProcessNextHeaders(chain1, /*full_headers_message=*/true),
+ state, State::FINAL,
+ /*exp_success=*/true,
+ /*exp_request_more=*/false,
+ /*exp_headers_size=*/chain1.size());
}
-static void TooBigBuffer(const CBlockIndex* chain_start,
- const std::vector<CBlockHeader>& first_chain)
+BOOST_FIXTURE_TEST_SUITE(headers_sync_chainwork_tests, RegTestingSetup)
+
+BOOST_AUTO_TEST_CASE(headers_sync_state)
{
- // Intentionally too big redownload buffer in order to trigger warning.
- HeadersSyncState hss{/*id=*/0,
- Params().GetConsensus(),
- HeadersSyncParams{
- .commitment_period = COMMITMENT_PERIOD,
- .redownload_buffer_size = first_chain.size(),
- },
- chain_start,
- /*minimum_required_work=*/CHAIN_WORK};
- (void)hss.ProcessNextHeaders(first_chain, true);
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::REDOWNLOAD);
-
- {
- ASSERT_DEBUG_LOG(BUFFER_SIZE_WARNING);
- auto result{hss.ProcessNextHeaders(first_chain, true)};
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::FINAL);
- BOOST_CHECK(result.success);
- }
+ const auto& genesis{Params().GenesisBlock()};
+ const uint256 genesis_hash{genesis.GetHash()};
+
+ // Generate headers for two different chains (using differing merkle roots
+ // to ensure the headers are different).
+ std::vector<CBlockHeader> chain1{};
+ GenerateHeaders(chain1, TARGET_BLOCKS - 1, genesis_hash, genesis.nVersion, genesis.nTime,
+ /*merkle_root=*/uint256::ZERO, genesis.nBits);
+ std::vector<CBlockHeader> chain2{};
+ GenerateHeaders(chain2, TARGET_BLOCKS - 2, genesis_hash, genesis.nVersion, genesis.nTime,
+ /*merkle_root=*/uint256::ONE, genesis.nBits);
+
+ const CBlockIndex* chain_start{WITH_LOCK(::cs_main, return m_node.chainman->m_blockman.LookupBlockIndex(genesis_hash))};
+ SneakyRedownload(chain_start, chain1, chain2);
+ HappyPath(chain_start, chain1);
+ TooLittleWork(chain_start, chain2);
+ TooBigBuffer(chain_start, chain1);
}
BOOST_AUTO_TEST_SUITE_END()
diff --git a/src/test/util/logging.h b/src/test/util/logging.h
index 73ac23825f..622ff58938 100644
--- a/src/test/util/logging.h
+++ b/src/test/util/logging.h
@@ -36,6 +36,16 @@ public:
~DebugLogHelper() { check_found(); }
};
+// Test fails if the pattern *DOES NOT* show up.
#define ASSERT_DEBUG_LOG(message) DebugLogHelper UNIQUE_NAME(debugloghelper)(message)
+// Test fails if the pattern *DOES* show up.
+#define ASSERT_NO_DEBUG_LOG(message) \
+ DebugLogHelper UNIQUE_NAME(nologhelper){ \
+ message, \
+ [](const std::string* line) { \
+ if (line) throw std::runtime_error("unexpected log line: " + *line); \
+ return false; /* suppress default 'not found' failure */ \
+ } \
+ }
#endif // BITCOIN_TEST_UTIL_LOGGING_H
Force-pushed from b006f4b to 9c4a045 (compare).
Thanks for the extensive review @l0rinc! Implemented many of the suggestions and answered the other points.
Removed the commit breaking out scopes in favor of the suggested CHECK_RESULT macro (#32579 (comment)). Broke out the comment improvements into their own commit ahead of that, since it gives a "skeleton" for the next commit's diff to hang off of.
Force-pushed from 9c4a045 to 14e933a (compare).
I find the current layout of tiny, focused commits a lot easier to follow, thanks!
Left a few comments; after that I will test it again locally and I can ACK.
Force-pushed from 7fe7341 to c3bc47c (compare).
Code Review ACK 53341ea
I reviewed the code and checked that tests were passing locally, but didn't do any extensive testing.
This PR makes HEADER_COMMITMENT_PERIOD and REDOWNLOAD_BUFFER_SIZE configurable by the tests, instead of using a hardcoded value, to avoid ending up testing unrealistic behavior once REDOWNLOAD_BUFFER_SIZE surpasses 15_000 in 0.30. (I like how we can test what could happen using the first commit and investigating the weird behavior!)
Additionally, the PR makes it easier to follow headers_sync_chainwork_test, by cleaning up code and expanding comments. It also makes the test more extensive, by making sure to check every parameter of ProcessingResult using the CHECK_RESULT macro.

crACK 53341ea — The test changes here look good to me. I left some nits that can be addressed when following up or rebasing. I sanity-checked that a fresh node doing headers syncing for mainnet, testnet, and signet works on my machine with this branch. I think it would be good to have input from one of the authors/reviewers of #25717 on the non-test changes to headers sync in this PR (4aea5fa and 1771e59) before merging.
@Sjors or @sdaftuar: would you mind doing some light review of 1771e59? It represents the meat of this PR, making the headers sync state params that affect memory usage configurable for tests/other chains instead of hard-coded to mainnet conditions. (davidgumberg also suggested (#32579 (comment)) reviewing 4aea5fa, which also touches the …) (Made a list of author+reviewers from #25717 and used a RNG to select 2 to ping.)

I generally disagree with the approach in this PR. It's doing a lot of refactoring that feels too much like refactoring for the sake of refactoring. I think this can be dropped from the milestone.
Commit order
I understand that given the current order of commits, the test refactorings seem less motivated. The order used to be refactorings + main commit + illustrative test + additional log test (3740523~9...3740523). Having all the refactoring first got the tests in better shape so that later behavior changes could be expressed more clearly. @fanquake wrote on IRC 2 weeks ago (https://bitcoin-irc.chaincode.com/bitcoin-core-dev/2025-08-07#1142169;):
That made me think it would be better to have an independent commit at the beginning of the PR which breaks once the …

Context of headers sync changes
I also must admit that the refactoring began in the context of my other WIP changes for …

Refactoring direction
If you think the test refactorings go in arbitrary directions, I would be curious to hear which parts you still disagree with now that I've used multiple test cases as you suggested (#32579 (comment)).

Finishing thoughts
If you think that the changes in the main commit 1771e59 are also refactoring for the sake of refactoring, rather than parameters ending up where they should be to enable better testing, that's a different conversation.

(As I said on IRC, I respect the decision to drop it from the milestone. There are enough moving parts in the release process. It would've been neat to fix ahead of bumping …)

Given that the current PR already demonstrates the production code is still correct, I tend to agree that we don't have to rush. It would have been cleaner, but it's not urgent. I do think, however, that the refactorings are necessary - but I'd keep the commits before the final fix, and if other reviewers think it's too risky, maybe we can do it in a separate PR - unifying it with @danielabrozzoni's change and your other related proposal, either through a tracking issue or pushing the rest as drafts. I am fine with doing the refactorings here. Edit: note that …
…unctions Made arith_uint256 constexpr-constructible so it can be used for compile time constants. Co-authored-by: Lőrinc <pap.lorinc@gmail.com>
More lightweight than vectors which needed to be copied in tests. Also good to get rid of headers_batch-vector before breaking up test.
Helps logically separate the scenarios being tested. Also adds missing comment for part 4. (unique_ptrs and ProcessingResults will be cleaned up in next commit).
+ new assert helping explain why CHAIN_WORK == TARGET_BLOCKS * 2.
Introduces CHECK_RESULT for consistently validating ProcessingResult.
* Verifies HeadersSyncState::State directly after ProcessNextHeaders().
* Uses BOOST_REQUIRE_EQUAL for HeadersSyncState::State - nicer failure output and prevents continuing the test in a nonsensical state.
* Encourages checking Locator and result.pow_validated_headers.
Changes happy_path to test both full & non-full headers messages.
Adding these checks necessitates increasing the length of the generated test chains so that we can properly exceed the REDOWNLOAD_BUFFER_SIZE during the test. One can check out this commit and locally revert the TARGET_BLOCKS value change to prove the need for tests being able to control the buffer size, as is done by the next commit. Beyond the current REDOWNLOAD_BUFFER_SIZE of 15'009 we need 3 extra - 15'012 TARGET_BLOCKS: * 1 for the genesis block. * 1 for the test wanting to check that we start receiving headers for permanent storage *before* the final header (first_chain.back()). * 1 to exceed REDOWNLOAD_BUFFER_SIZE in HeadersSyncState::PopHeadersReadyForAcceptance(). (The release process includes an occasional increase of the REDOWNLOAD_BUFFER_SIZE value, see release-process.md and history of headerssync.cpp).
Force-pushed from 53341ea to eb0f41e (compare).
Changes since last push:
- Rebased to resolve conflict with #33274.
- Moved checks for buffering behavior during redownload from the first commit to after the refactoring commits.
  - Since #33274 updated the REDOWNLOAD_BUFFER_SIZE constant to surpass TARGET_BLOCKS (as predicted), we now have to temporarily increase TARGET_BLOCKS for these new checks to pass.
- Switched from temporarily introducing g_latest_result and having a commit at the end to remove it (53341ea), to introducing a local copy of the REDOWNLOAD_BUFFER_SIZE constant. This means that there is less churn on the lines.
- Added comment about the probability of spurious test failure in the sneaky_redownload case (#32579 (comment)).
- Moved the comment "Pretend the message is still "full", so we don't abort." within the sneaky_redownload case (#32579 (comment)).
- Added coverage for non-full headers messages in the happy_path case (#32579 (comment)).
- Changed the max_seconds_since_start constant to use initialization braces instead of assignment (reducing column width by 2 chars).
- Updated HeadersSyncParams for the various chains based upon #33274 and re-running headerssync-params.py.
A sense of how the resulting end state of the PR has changed (excluding the rebase) can be viewed by something like:
meld <(git diff 53341ea10dc2f7df371b416060863bbc094b8773~9..53341ea10dc2f7df371b416060863bbc094b8773) <(git diff eb0f41ee1ed9ea54019fcaa4e3ce33481da4459d~8..eb0f41ee1ed9ea54019fcaa4e3ce33481da4459d)
ACK eb0f41e
I have checked the diffs after rebase, regenerated and verified the mainchain params, looked for leftovers of decommissioned constants and went over the code again quickly to make sure I still agree with everything after the new changes.
@@ -53,7 +53,7 @@ Release Process
 - Set `MINCHAINWORK_HEADERS` to the height used for the `nMinimumChainWork` calculation above.
 - Check that the other variables still look reasonable.
 - Run the script. It works fine in CPython, but PyPy is much faster (seconds instead of minutes): `pypy3 contrib/devtools/headerssync-params.py`.
-- Paste the output defining `HEADER_COMMITMENT_PERIOD` and `REDOWNLOAD_BUFFER_SIZE` into the top of [`src/headerssync.cpp`](/src/headerssync.cpp).
+- Paste the output defining the header `commitment_period` and `redownload_buffer_size` into the mainnet section of [`src/kernel/chainparams.cpp`](/src/kernel/chainparams.cpp).
Does …

Line 183 in eb0f41e:
* (h % HEADER_COMMITMENT_PERIOD) == m_commit_offset. */
Thanks for catching this! Pushed fix.
@@ -137,7 +137,7 @@ BUILDDIR=$PWD/my-build-dir contrib/devtools/gen-manpages.py
 headerssync-params.py
 =====================

-A script to generate optimal parameters for the headerssync module (src/headerssync.cpp). It takes no command-line
+A script to generate optimal parameters for the headerssync module (stored in src/kernel/chainparams.cpp). It takes no command-line
 options, as all its configuration is set at the top of the file. It runs many times faster inside PyPy. Invocation:

 ```bash
The new value after the rebase is:
- Initial: period=626, buffer=14854, mem=710.549 KiB
- New best: period=634, buffer=15061, mem=706.160 KiB
- New best: period=632, buffer=15009, mem=703.804 KiB
Given current min chainwork headers of 912683, the optimal parameters for low
memory usage on mainchain for release until 2028-04-02 is:
// Generated by headerssync-params.py on 2025-09-04.
m_headers_sync_params = HeadersSyncParams{
.commitment_period = 632,
.redownload_buffer_size = 15009, // 15009/632 = ~23.7 commitments
};
Properties:
- Per-peer memory for mainchain sync: 703.723 KiB
- Per-peer memory for timewarp attack: 703.804 KiB
- Attack rate: 30460.2 attacks for 1 header of memory growth
(where each attack costs 70.503 MiB bandwidth)
which matches bitcoin/src/kernel/chainparams.cpp, lines 200 to 203 in eb0f41e:
m_headers_sync_params = HeadersSyncParams{
    .commitment_period = 632,
    .redownload_buffer_size = 15009, // 15009/632 = ~23.7 commitments
};
(I haven't checked the other chains)
m_commit_offset(FastRandomContext().randrange<unsigned>(HEADER_COMMITMENT_PERIOD)),
const HeadersSyncParams& params, const CBlockIndex* chain_start,
const arith_uint256& minimum_required_work) :
m_commit_offset((assert(params.commitment_period > 0), // HeadersSyncParams field must be initialized to non-zero. |
I'm still not comfortable with this assert here; can we assign it in the body or do an immediately-invoked lambda instead?
-m_commit_offset((assert(params.commitment_period > 0), // HeadersSyncParams field must be initialized to non-zero.
+m_commit_offset{[&]() {
+    assert(params.commitment_period > 0);
+    return FastRandomContext().randrange(params.commitment_period);
+}()},
Note that in this case the comment can also be eliminated
What makes a 2-statement lambda better than the built-in comma operator (https://en.cppreference.com/w/cpp/language/operator_other.html#Built-in_comma_operator) in this case?
Also not sure why it changes the need for the comment.
You can resolve it if you feel strongly about it, it's not a blocker from my side
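For readers following the exchange above, a small self-contained toy (not Bitcoin Core code) contrasting the two initializer forms being compared, the built-in comma operator and an immediately-invoked lambda:

```cpp
// Toy comparison: the comma operator evaluates the assert and discards its
// result, using the second operand to initialize the member; the lambda wraps
// the same two statements in a callable that is invoked immediately.
#include <cassert>
#include <cstdint>

struct Example {
    uint32_t via_comma;
    uint32_t via_lambda;

    explicit Example(uint32_t period)
        : via_comma((assert(period > 0), period / 2)),                      // comma-operator form
          via_lambda([&] { assert(period > 0); return period / 2; }())      // IIFE form
    {}
};

int main()
{
    Example e{624};
    assert(e.via_comma == e.via_lambda);
}
```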
Move calculated constants from the top of src/headerssync.cpp into src/kernel/chainparams.cpp. Instead of being hardcoded to mainnet parameters, HeadersSyncState can now vary depending on chain or test. (This means we can reset TARGET_BLOCKS back to the nice round number of 15'000.)
Signet and testnets got new HeadersSyncParams constants through temporarily altering headerssync-params.py with corresponding GENESIS_TIME and MINCHAINWORK_HEADERS (based off defaultAssumeValid block height comments, corresponding to nMinimumChainWork). Regtest doesn't have a default assume valid block height, so the values are copied from Testnet 4. Since the constants only affect memory usage, and have very low impact unless dealing with a largely malicious chain, it's not that critical to keep updating them for non-mainnet chains.
GENESIS_TIMEs (UTC):
Testnet3: 1296688602 = datetime(2011, 2, 2)
Testnet4: 1714777860 = datetime(2024, 5, 3)
Signet: 1598918400 = datetime(2020, 9, 1)
Force-pushed from eb0f41e to 33d550d (compare).
Background
As part of the release process we often run contrib/devtools/headerssync-params.py and increase the values of the constants HEADER_COMMITMENT_PERIOD and REDOWNLOAD_BUFFER_SIZE in src/headerssync.cpp as per doc/release-process.md (example: 11a2d3a). This helps fine-tune the memory consumption per HeadersSyncState instance in the face of malicious peers. (The REDOWNLOAD_BUFFER_SIZE / HEADER_COMMITMENT_PERIOD ratio determines how many Headers Sync commitment bits must match between the PRESYNC and REDOWNLOAD phases before we start permanently storing headers from a peer; for example, the regenerated mainnet values later in this thread give 15009 / 632 ≈ 23.7 commitments. For more details see the comments in src/headerssync.h and contrib/devtools/headerssync-params.py.)

Problem: Not feeding back headers until completing sync
During the v30 release process, #33274 made REDOWNLOAD_BUFFER_SIZE exceed the target_blocks constant used to control the length of chains generated for testing Headers Sync (15000, headers_sync_chainwork_tests.cpp). The HeadersSyncState::m_redownloaded_headers buffer now does not reach the REDOWNLOAD_BUFFER_SIZE threshold during those unit tests. As a consequence, HeadersSyncState::PopHeadersReadyForAcceptance() will not start feeding back headers until the PoW threshold has been met. While this will not cause the unit test to start failing on master, it means we have gone from testing behavior that resembles mainnet (where way more than REDOWNLOAD_BUFFER_SIZE headers are needed to reach the PoW limit) to behavior that is not possible/expected there.

Solution
Avoid testing this unrealistic condition of completing Headers Sync before reaching REDOWNLOAD_BUFFER_SIZE by making tests able to define their own values through the new HeadersSyncParams, instead of having them hard-coded for all chains & tests.

Commits
- … the REDOWNLOAD_BUFFER_SIZE threshold.
- HeadersSyncState no longer hard-coded to mainnet.

Notes
This PR used to be called "headerssync: Preempt unrealistic unit test behavior".
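To make the Problem section concrete, here is a rough, self-contained sketch (not the actual src/headerssync.cpp implementation; the process_all_remaining_headers flag name is illustrative) of the gating it describes: redownloaded headers are only released for permanent acceptance once the buffer exceeds redownload_buffer_size, or once enough work has been redownloaded that everything remaining may be processed.

```cpp
// Rough sketch only -- names partly illustrative, not Bitcoin Core code.
#include <cstddef>
#include <deque>
#include <vector>

struct Header {};

std::vector<Header> PopHeadersReadyForAcceptanceSketch(
    std::deque<Header>& redownloaded_headers,
    std::size_t redownload_buffer_size,
    bool process_all_remaining_headers)
{
    std::vector<Header> ret;
    while (!redownloaded_headers.empty() &&
           (redownloaded_headers.size() > redownload_buffer_size || // buffer-full path exercised on mainnet
            process_all_remaining_headers)) {                       // PoW threshold already met
        ret.push_back(redownloaded_headers.front());
        redownloaded_headers.pop_front();
    }
    return ret;
}
```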