headerssync: Preempt unrealistic unit test behavior #32579
Conversation
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Code Coverage & Benchmarks: For details see: https://corecheck.dev/bitcoin/bitcoin/pulls/32579.

Reviews: See the guideline for information on the review process. If your review is incorrectly listed, please react with 👎 to this comment and the bot will ignore it on the next update.

Conflicts: Reviewers, this pull request conflicts with the following ones. If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
Force-pushed from 954971a to 315736d (compare)
🚧 At least one of the CI tasks failed.

Hints: Try to run the tests locally, according to the documentation. However, a CI failure may still happen. Leave a comment here, if you need help tracking down a confusing failure.
Concept ACK – this PR replaces the imminently outdated hard-coded test target of 15'000 headers with a value derived from the live REDOWNLOAD_BUFFER_SIZE constant. Moving both header-sync tuning constants into a shared header, and making the unit test reference them, keeps the test exercising the 'buffer-full' path as the parameters grow each release.

Recommendations:
- split the series into more commits, separating low-risk refactors from ones that change behaviour: strict move-only; refactor/cleanup with no behavioural change; extra assertions; wiring the test to the new parameter header;
- in the test file, wrap each independent `ProcessNextHeaders` call inside `{}` blocks and use structured bindings so temporary variables aren't reused accidentally;
- rename the new file to `headerssync_params.h` (underscore instead of `-`) for consistency with other similar headers;
- update `contrib/devtools/README.md` to reference the header instead of the .cpp;
- tweak a few comments whose wording is unclear or slightly misleading.
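To illustrate the idea of deriving the test target from the live constant, here is a minimal sketch. The struct field names follow the diff later in this thread; the helper function and the margin value are assumptions for illustration, not the PR's actual code.

```cpp
#include <cassert>
#include <cstddef>

// Sketch of a shared parameter struct like the one proposed above.
struct HeadersSyncParams {
    std::size_t commitment_period;
    std::size_t redownload_buffer_size;
};

// Mainnet values from the diff in this thread.
constexpr HeadersSyncParams MAINNET_PARAMS{
    .commitment_period = 624,
    .redownload_buffer_size = 14827,
};

// Deriving the unit test's target chain length from the live constant keeps
// the test exercising the "buffer full" path even as the parameter grows
// each release. The margin of 173 here is arbitrary.
constexpr std::size_t TestTargetBlocks(const HeadersSyncParams& params)
{
    return params.redownload_buffer_size + 173;
}
```

With the hard-coded 15'000 target, the test would silently stop covering the buffer-full path once the tuned buffer size overtakes it; a derived target cannot fall behind.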
Concept ACK
Force-pushed from 315736d to 486e8ca (compare)
Thanks for your feedback so far @l0rinc! Took most of your suggestions (#32579 (review)).
Force-pushed from 486e8ca to bbe5609 (compare)
Concept ACK
Force-pushed from bbe5609 to 42e7460 (compare)
Latest push (42e7460, compare):
- Relaxes the hardcoding of the header commitment period and the redownload buffer size in `HeadersSyncState`. This makes things more consistent with the constructor already taking `Consensus::Params` (implying different chains are supported).
- Moves specification of the above constants from the top of headerssync.cpp into kernel/chainparams.cpp (similar to many other constants).
- Returns the chainwork test suite back to only using an integer literal for `TARGET_BLOCKS` (instead of depending on an #included constant). Also now uses test-local values for commitment period and redownload buffer size.
- Adds a warning in `HeadersSyncState::PopHeadersReadyForAcceptance()` for the case when we reach minimum PoW before exceeding the redownload buffer size, together with tests.

(Updated PR description.)
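A minimal sketch of the warning condition described in the last bullet. Member and function names here are assumptions; the real `PopHeadersReadyForAcceptance()` operates on the internal redownload buffer of compressed headers.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

// Stand-in for the redownload-phase state; the flag is set once minimum
// PoW has been reached during REDOWNLOAD.
struct RedownloadState {
    std::deque<int> m_redownloaded_headers; // placeholder element type
    std::size_t m_redownload_buffer_size{14827};
    bool m_process_all_remaining_headers{false};

    // Returns true (and would log a warning) when minimum PoW was reached
    // before the buffer ever filled up -- the situation the new test covers,
    // which would indicate the configured buffer is larger than needed.
    bool BufferNeverFilled() const
    {
        return m_process_all_remaining_headers &&
               m_redownloaded_headers.size() < m_redownload_buffer_size;
    }
};
```

In normal operation the buffer fills and drains repeatedly before minimum PoW is met, so the condition never fires; the new test exercises both sides.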
I might have gone a bit overboard with the test review, but I think we can make it a lot less noisy - here's the patch of many of my suggestions for convenience:
Patch
diff --git a/src/kernel/chainparams.cpp b/src/kernel/chainparams.cpp
index d64dd43aba..658ea7f2c3 100644
--- a/src/kernel/chainparams.cpp
+++ b/src/kernel/chainparams.cpp
@@ -193,8 +193,7 @@ public:
// Generated by headerssync-params.py on 2025-03-04.
m_headers_sync_params = HeadersSyncParams{
.commitment_period = 624,
- .redownload_buffer_size = 14827, // 14827/624 = ~23.8 commitments
- };
+ .redownload_buffer_size = 14827}; // ~24 commitments
}
};
diff --git a/src/test/headers_sync_chainwork_tests.cpp b/src/test/headers_sync_chainwork_tests.cpp
index 1f85c69838..ccbbc1be50 100644
--- a/src/test/headers_sync_chainwork_tests.cpp
+++ b/src/test/headers_sync_chainwork_tests.cpp
@@ -13,10 +13,13 @@
#include <validation.h>
#include <cstddef>
+#include <optional>
#include <vector>
#include <boost/test/unit_test.hpp>
+using State = HeadersSyncState::State;
+
constexpr size_t TARGET_BLOCKS{15'000};
constexpr arith_uint256 CHAIN_WORK{TARGET_BLOCKS * 2};
constexpr size_t REDOWNLOAD_BUFFER_SIZE{TARGET_BLOCKS - (MAX_HEADERS_RESULTS + 123)};
@@ -37,18 +40,18 @@ static void FindProofOfWork(CBlockHeader& starting_header)
* prev_time, and with a fixed merkle root hash.
*/
static void GenerateHeaders(std::vector<CBlockHeader>& headers,
- size_t count, const uint256& starting_hash, const int nVersion, int prev_time,
- const uint256& merkle_root, const uint32_t nBits)
+ size_t count, const uint256& starting_hash, int nVersion, int prev_time,
+ const uint256& merkle_root, uint32_t nBits)
{
- uint256 prev_hash = starting_hash;
+ uint256 prev_hash{starting_hash};
while (headers.size() < count) {
headers.emplace_back();
- CBlockHeader& next_header = headers.back();
+ CBlockHeader& next_header{headers.back()};
next_header.nVersion = nVersion;
next_header.hashPrevBlock = prev_hash;
next_header.hashMerkleRoot = merkle_root;
- next_header.nTime = prev_time+1;
+ next_header.nTime = prev_time + 1;
next_header.nBits = nBits;
FindProofOfWork(next_header);
@@ -57,220 +60,167 @@ static void GenerateHeaders(std::vector<CBlockHeader>& headers,
}
}
-static HeadersSyncState CreateState(const CBlockIndex* chain_start, const CChainParams& chain_params)
+static HeadersSyncState CreateState(const CChainParams& chain_params, const HeadersSyncParams& sync_params, const CBlockIndex* chain_start)
{
return {/*id=*/0,
chain_params.GetConsensus(),
- HeadersSyncParams{
- .commitment_period = COMMITMENT_PERIOD,
- .redownload_buffer_size = REDOWNLOAD_BUFFER_SIZE,
- },
+ sync_params,
chain_start,
- /*minimum_required_work=*/CHAIN_WORK};
+ /*minimum_required_work=*/CHAIN_WORK};
}
-BOOST_FIXTURE_TEST_SUITE(headers_sync_chainwork_tests, RegTestingSetup)
-
-// In this test, we construct two sets of headers from genesis, one with
-// sufficient proof of work and one without.
-// 1. We deliver the first set of headers and verify that the headers sync state
-// updates to the REDOWNLOAD phase successfully.
-// 2. Then we deliver the second set of headers and verify that they fail
-// processing (presumably due to commitments not matching).
-static void SneakyRedownload(const CBlockIndex* chain_start,
- const std::vector<CBlockHeader>& first_chain,
- const std::vector<CBlockHeader>& second_chain);
-// 3. Verify that repeating with the first set of headers in both phases is
-// successful.
-static void HappyPath(const CBlockIndex* chain_start,
- const std::vector<CBlockHeader>& first_chain);
-// 4. Finally, repeat the second set of headers in both phases to demonstrate
-// behavior when the chain a peer provides has too little work.
-static void TooLittleWork(const CBlockIndex* chain_start,
- const std::vector<CBlockHeader>& second_chain);
-
-static void TooBigBuffer(const CBlockIndex* chain_start,
- const std::vector<CBlockHeader>& first_chain);
-
-BOOST_AUTO_TEST_CASE(headers_sync_state)
+// Standard set of checks common to all scenarios; macro keeps failure lines at the call-site.
+#define CHECK_RESULT(res, state, exp_state, exp_success, exp_request_more, exp_headers_size) \
+ do { \
+ const auto& [res_headers, res_success, res_request_more]{res}; \
+ BOOST_REQUIRE_EQUAL(state.GetState(), exp_state); \
+ BOOST_CHECK_EQUAL(res_headers.size(), exp_headers_size); \
+ BOOST_CHECK_EQUAL(res_success, exp_success); \
+ BOOST_CHECK_EQUAL(res_request_more, exp_request_more); \
+ } while (false)
+
+// Deliver the first set of headers from genesis to reach REDOWNLOAD,
+// then deliver the second set and verify that commitments don’t match.
+static void SneakyRedownload(const CBlockIndex* chain_start, const std::vector<CBlockHeader>& chain1, const std::vector<CBlockHeader>& chain2)
{
- std::vector<CBlockHeader> first_chain;
- std::vector<CBlockHeader> second_chain;
-
- const auto genesis{Params().GenesisBlock()};
-
- // Generate headers for two different chains (using differing merkle roots
- // to ensure the headers are different).
- GenerateHeaders(first_chain, TARGET_BLOCKS - 1, genesis.GetHash(), genesis.nVersion,
- genesis.nTime, /*merkle_root=*/uint256::ZERO, genesis.nBits);
- GenerateHeaders(second_chain, TARGET_BLOCKS - 2, genesis.GetHash(), genesis.nVersion,
- genesis.nTime, /*merkle_root=*/uint256::ONE, genesis.nBits);
-
- const CBlockIndex* chain_start = WITH_LOCK(::cs_main, return m_node.chainman->m_blockman.LookupBlockIndex(genesis.GetHash()));
+ HeadersSyncState state{CreateState(Params(), HeadersSyncParams{COMMITMENT_PERIOD, REDOWNLOAD_BUFFER_SIZE}, chain_start)};
+ const auto genesis_hash{Params().GenesisBlock().GetHash()};
- SneakyRedownload(chain_start, first_chain, second_chain);
- HappyPath(chain_start, first_chain);
- TooLittleWork(chain_start, second_chain);
- TooBigBuffer(chain_start, first_chain);
+ const std::span headers{chain1};
+ CHECK_RESULT(state.ProcessNextHeaders(headers.first(1), /*full_headers_message=*/true),
+ state, State::PRESYNC,
+ /*exp_success=*/true,
+ /*exp_request_more=*/true,
+ /*exp_headers_size=*/0);
+ BOOST_CHECK_EQUAL(state.NextHeadersRequestLocator().vHave.front(), chain1.front().GetHash());
+
+ // Pretend the message is still "full", so we don't abort.
+ // This chain should look valid, and we should have met the proof-of-work
+ // requirement during PRESYNC and transitioned to REDOWNLOAD.
+ CHECK_RESULT(state.ProcessNextHeaders(headers.last(headers.size() - 1), /*full_headers_message=*/true),
+ state, State::REDOWNLOAD,
+ /*exp_success=*/true,
+ /*exp_request_more=*/true,
+ /*exp_headers_size=*/0);
+ BOOST_CHECK_EQUAL(state.NextHeadersRequestLocator().vHave.front(), genesis_hash);
+
+ // Try to sneakily feed back the second chain during REDOWNLOAD.
+ CHECK_RESULT(state.ProcessNextHeaders(chain2, /*full_headers_message=*/true),
+ state, State::FINAL,
+ /*exp_success=*/false,
+ /*exp_request_more=*/false,
+ /*exp_headers_size=*/0);
}
-static void SneakyRedownload(const CBlockIndex* chain_start,
- const std::vector<CBlockHeader>& first_chain,
- const std::vector<CBlockHeader>& second_chain)
+// Verify that repeating with the first set of headers in both phases is successful.
+static void HappyPath(const CBlockIndex* chain_start, const std::vector<CBlockHeader>& chain1)
{
- // Feed the first chain to HeadersSyncState, by delivering 1 header
- // initially and then the rest.
- HeadersSyncState hss{CreateState(chain_start, Params())};
- {
- // Just feed one header and check state.
- auto result{hss.ProcessNextHeaders(std::span{first_chain.begin(), 1}, true)};
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::PRESYNC);
- BOOST_CHECK(result.success);
- BOOST_CHECK(result.request_more);
- BOOST_CHECK_EQUAL(result.pow_validated_headers.size(), 0);
- BOOST_CHECK_EQUAL(hss.NextHeadersRequestLocator().vHave.front(), first_chain.front().GetHash());
- }
-
- {
- // Pretend the message is still "full", so we don't abort.
- auto result{hss.ProcessNextHeaders(std::span{first_chain.begin() + 1, first_chain.end()}, true)};
- // This chain should look valid, and we should have met the proof-of-work
- // requirement during PRESYNC and transitioned to REDOWNLOAD.
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::REDOWNLOAD);
- BOOST_CHECK(result.success);
- BOOST_CHECK(result.request_more);
- BOOST_CHECK_EQUAL(result.pow_validated_headers.size(), 0);
- // The locator should reset to genesis.
- BOOST_CHECK_EQUAL(hss.NextHeadersRequestLocator().vHave.front(), Params().GenesisBlock().GetHash());
- }
+ // Feed the first chain twice.
+ HeadersSyncState state{CreateState(Params(), HeadersSyncParams{COMMITMENT_PERIOD, REDOWNLOAD_BUFFER_SIZE}, chain_start)};
+ const auto genesis_hash{Params().GenesisBlock().GetHash()};
+ const std::span headers{chain1};
- {
- // Try to sneakily feed back the second chain during REDOWNLOAD.
- auto result{hss.ProcessNextHeaders(second_chain, true)};
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::FINAL);
- BOOST_CHECK(!result.success); // Foiled! We detected mismatching headers.
- BOOST_CHECK(!result.request_more);
- BOOST_CHECK_EQUAL(result.pow_validated_headers.size(), 0);
- }
+ // During normal operation we shouldn't get the redownload buffer size warning.
+ ASSERT_NO_DEBUG_LOG(BUFFER_SIZE_WARNING);
+
+ // Switched from PRESYNC to REDOWNLOAD after reaching sufficient work:
+ CHECK_RESULT(state.ProcessNextHeaders(headers, /*full_headers_message=*/true),
+ state, State::REDOWNLOAD,
+ /*exp_success=*/true,
+ /*exp_request_more=*/true,
+ /*exp_headers_size=*/0);
+ BOOST_CHECK_EQUAL(state.NextHeadersRequestLocator().vHave.front(), genesis_hash);
+
+ // Process only so that the internal threshold isn't met, meaning validated headers shouldn't be returned yet:
+ CHECK_RESULT(state.ProcessNextHeaders(headers.first(REDOWNLOAD_BUFFER_SIZE), /*full_headers_message=*/true),
+ state, State::REDOWNLOAD,
+ /*exp_success=*/true,
+ /*exp_request_more=*/true,
+ /*exp_headers_size=*/0);
+ BOOST_CHECK_EQUAL(state.NextHeadersRequestLocator().vHave.front(), chain1[REDOWNLOAD_BUFFER_SIZE - 1].GetHash());
+
+ // Next header should make us exceed the threshold, but still not be done:
+ const auto res_threshold{state.ProcessNextHeaders(headers.subspan(REDOWNLOAD_BUFFER_SIZE, 1), /*full_headers_message=*/true)};
+ CHECK_RESULT(res_threshold,
+ state, State::REDOWNLOAD,
+ /*exp_success=*/true,
+ /*exp_request_more=*/true,
+ /*exp_headers_size=*/1);
+ BOOST_CHECK_EQUAL(state.NextHeadersRequestLocator().vHave.front(), chain1[REDOWNLOAD_BUFFER_SIZE].GetHash());
+
+ // Feed in remaining headers, meeting the work threshold again and completing the REDOWNLOAD phase:
+ CHECK_RESULT(state.ProcessNextHeaders(headers.subspan(REDOWNLOAD_BUFFER_SIZE + 1), /*full_headers_message=*/true),
+ state, State::FINAL,
+ /*exp_success=*/true,
+ /*exp_request_more=*/false,
+ /*exp_headers_size=*/chain1.size() - 1);
+ BOOST_CHECK_EQUAL(res_threshold.pow_validated_headers.front().hashPrevBlock, genesis_hash);
}
-static void HappyPath(const CBlockIndex* chain_start,
- const std::vector<CBlockHeader>& first_chain)
+// Repeat the second set of headers in both phases to demonstrate
+// behavior when the chain a peer provides has too little work.
+static void TooLittleWork(const CBlockIndex* chain_start, const std::vector<CBlockHeader>& chain2)
{
- // This time we feed the first chain twice.
- HeadersSyncState hss{CreateState(chain_start, Params())};
-
- // During normal operation we shouldn't get the redownload buffer size warning.
- DebugLogHelper forbidden{BUFFER_SIZE_WARNING, [](const std::string* str) {
- if (str == nullptr) {
- return false; // Disable exception for not finding a match.
- } else {
- throw std::runtime_error{strprintf("'%s' should not exist in debug log\n", str)};
- }
- }};
-
- const auto genesis_hash{Params().GenesisBlock().GetHash()};
- {
- auto result{hss.ProcessNextHeaders(first_chain, true)};
- // Switched from PRESYNC to REDOWNLOAD after reaching sufficient work:
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::REDOWNLOAD);
- BOOST_CHECK(result.success);
- BOOST_CHECK(result.request_more);
- BOOST_CHECK_EQUAL(result.pow_validated_headers.size(), 0);
- // The locator should reset to genesis.
- BOOST_CHECK_EQUAL(hss.NextHeadersRequestLocator().vHave.front(), genesis_hash);
- }
-
- {
- // Process only so that the internal threshold isn't met, meaning validated
- // headers shouldn't be returned yet:
- auto result{hss.ProcessNextHeaders(std::span{first_chain.begin(), REDOWNLOAD_BUFFER_SIZE}, true)};
- // Not done:
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::REDOWNLOAD);
- BOOST_CHECK(result.success);
- BOOST_CHECK(result.request_more);
- BOOST_CHECK_EQUAL(result.pow_validated_headers.size(), 0);
- BOOST_CHECK_EQUAL(hss.NextHeadersRequestLocator().vHave.front(), first_chain[REDOWNLOAD_BUFFER_SIZE - 1].GetHash());
- }
-
- CBlockHeader first_after_genesis;
- {
- // Next header should make us exceed the threshold, but still not be done:
- auto result{hss.ProcessNextHeaders(std::span{first_chain.begin() + REDOWNLOAD_BUFFER_SIZE, 1}, true)};
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::REDOWNLOAD);
- BOOST_CHECK(result.success);
- BOOST_CHECK(result.request_more);
- BOOST_REQUIRE_EQUAL(result.pow_validated_headers.size(), 1);
- BOOST_CHECK_EQUAL(hss.NextHeadersRequestLocator().vHave.front(), first_chain[REDOWNLOAD_BUFFER_SIZE].GetHash());
- first_after_genesis = result.pow_validated_headers.front();
- BOOST_CHECK_EQUAL(first_after_genesis.hashPrevBlock, genesis_hash);
- }
-
- {
- // Feed in remaining headers, meeting the work threshold again and
- // completing the REDOWNLOAD phase.
- auto result{hss.ProcessNextHeaders(std::span{first_chain.begin() + REDOWNLOAD_BUFFER_SIZE + 1, first_chain.end()}, true)};
- // Nothing left for the sync logic to do:
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::FINAL);
- BOOST_CHECK(result.success);
- BOOST_CHECK(!result.request_more);
- // All headers except the one already returned above:
- BOOST_REQUIRE_EQUAL(result.pow_validated_headers.size(), first_chain.size() - 1);
- BOOST_CHECK_EQUAL(result.pow_validated_headers.front().hashPrevBlock, first_after_genesis.GetHash());
- }
+ // Verify that just trying to process the second chain would not succeed (too little work).
+ HeadersSyncState state{CreateState(Params(), HeadersSyncParams{COMMITMENT_PERIOD, REDOWNLOAD_BUFFER_SIZE}, chain_start)};
+ const std::span headers{chain2};
+
+ // Pretend just the first message is "full", so we don't abort.
+ CHECK_RESULT(state.ProcessNextHeaders(headers.first(1), /*full_headers_message=*/true),
+ state, State::PRESYNC,
+ /*exp_success=*/true,
+ /*exp_request_more=*/true,
+ /*exp_headers_size=*/0);
+
+ // Non-full message causes sync to end with no headers accepted.
+ CHECK_RESULT(state.ProcessNextHeaders(headers.subspan(1), /*full_headers_message=*/false),
+ state, State::FINAL,
+ /*exp_success=*/true,
+ /*exp_request_more=*/false,
+ /*exp_headers_size=*/0);
}
-static void TooLittleWork(const CBlockIndex* chain_start,
- const std::vector<CBlockHeader>& second_chain)
+// Scenario that intentionally uses an oversized buffer to trigger the warning.
+static void TooBigBuffer(const CBlockIndex* chain_start, const std::vector<CBlockHeader>& chain1)
{
- // Verify that just trying to process the second chain would not succeed
- // (too little work).
- HeadersSyncState hss{CreateState(chain_start, Params())};
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::PRESYNC);
- {
- // Pretend just the first message is "full", so we don't abort.
- auto result{hss.ProcessNextHeaders(std::span{second_chain.begin(), 1}, true)};
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::PRESYNC);
- BOOST_CHECK(result.success);
- BOOST_CHECK(result.request_more);
- BOOST_CHECK_EQUAL(result.pow_validated_headers.size(), 0);
- }
-
- {
- // Tell the sync logic that the headers message was not full, implying no
- // more headers can be requested. For a low-work-chain, this should cause
- // the sync to end with no headers for acceptance.
- auto result{hss.ProcessNextHeaders(std::span{second_chain.begin() + 1, second_chain.end()}, false)};
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::FINAL);
- BOOST_CHECK(!result.request_more);
- BOOST_CHECK_EQUAL(result.pow_validated_headers.size(), 0);
- // Nevertheless, no validation errors should have been detected with the
- // chain:
- BOOST_CHECK(result.success);
- }
+ // Intentionally too big redownload buffer in order to trigger warning.
+ HeadersSyncState state{CreateState(Params(), HeadersSyncParams{COMMITMENT_PERIOD, chain1.size()}, chain_start)};
+
+ CHECK_RESULT(state.ProcessNextHeaders(chain1, /*full_headers_message=*/true),
+ state, State::REDOWNLOAD,
+ /*exp_success=*/true,
+ /*exp_request_more=*/true,
+ /*exp_headers_size=*/0);
+
+ ASSERT_DEBUG_LOG(BUFFER_SIZE_WARNING);
+ CHECK_RESULT(state.ProcessNextHeaders(chain1, /*full_headers_message=*/true),
+ state, State::FINAL,
+ /*exp_success=*/true,
+ /*exp_request_more=*/false,
+ /*exp_headers_size=*/chain1.size());
}
-static void TooBigBuffer(const CBlockIndex* chain_start,
- const std::vector<CBlockHeader>& first_chain)
+BOOST_FIXTURE_TEST_SUITE(headers_sync_chainwork_tests, RegTestingSetup)
+
+BOOST_AUTO_TEST_CASE(headers_sync_state)
{
- // Intentionally too big redownload buffer in order to trigger warning.
- HeadersSyncState hss{/*id=*/0,
- Params().GetConsensus(),
- HeadersSyncParams{
- .commitment_period = COMMITMENT_PERIOD,
- .redownload_buffer_size = first_chain.size(),
- },
- chain_start,
- /*minimum_required_work=*/CHAIN_WORK};
- (void)hss.ProcessNextHeaders(first_chain, true);
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::REDOWNLOAD);
-
- {
- ASSERT_DEBUG_LOG(BUFFER_SIZE_WARNING);
- auto result{hss.ProcessNextHeaders(first_chain, true)};
- BOOST_REQUIRE_EQUAL(hss.GetState(), HeadersSyncState::State::FINAL);
- BOOST_CHECK(result.success);
- }
+ const auto& genesis{Params().GenesisBlock()};
+ const uint256 genesis_hash{genesis.GetHash()};
+
+ // Generate headers for two different chains (using differing merkle roots
+ // to ensure the headers are different).
+ std::vector<CBlockHeader> chain1{};
+ GenerateHeaders(chain1, TARGET_BLOCKS - 1, genesis_hash, genesis.nVersion, genesis.nTime,
+ /*merkle_root=*/uint256::ZERO, genesis.nBits);
+ std::vector<CBlockHeader> chain2{};
+ GenerateHeaders(chain2, TARGET_BLOCKS - 2, genesis_hash, genesis.nVersion, genesis.nTime,
+ /*merkle_root=*/uint256::ONE, genesis.nBits);
+
+ const CBlockIndex* chain_start{WITH_LOCK(::cs_main, return m_node.chainman->m_blockman.LookupBlockIndex(genesis_hash))};
+ SneakyRedownload(chain_start, chain1, chain2);
+ HappyPath(chain_start, chain1);
+ TooLittleWork(chain_start, chain2);
+ TooBigBuffer(chain_start, chain1);
}
BOOST_AUTO_TEST_SUITE_END()
diff --git a/src/test/util/logging.h b/src/test/util/logging.h
index 73ac23825f..622ff58938 100644
--- a/src/test/util/logging.h
+++ b/src/test/util/logging.h
@@ -36,6 +36,16 @@ public:
~DebugLogHelper() { check_found(); }
};
+// Test fails if the pattern *DOES NOT* show up.
#define ASSERT_DEBUG_LOG(message) DebugLogHelper UNIQUE_NAME(debugloghelper)(message)
+// Test fails if the pattern *DOES* show up.
+#define ASSERT_NO_DEBUG_LOG(message) \
+ DebugLogHelper UNIQUE_NAME(nologhelper){ \
+ message, \
+ [](const std::string* line) { \
+ if (line) throw std::runtime_error("unexpected log line: " + *line); \
+ return false; /* suppress default 'not found' failure */ \
+ } \
+ }
#endif // BITCOIN_TEST_UTIL_LOGGING_H
Force-pushed from b006f4b to 9c4a045 (compare)
Thanks for the extensive review @l0rinc! Implemented many of the suggestions and answered the other points.
Removed the commit breaking out scopes in favor of the suggested `CHECK_RESULT` macro (#32579 (comment)). Broke out the comment improvements into their own commit ahead of that, since it gives a "skeleton" for the next commit's diff to hang off of.
Force-pushed from 9c4a045 to 14e933a (compare)
I find the current layout of tiny, focused commits a lot easier to follow, thanks!
Left a few comments, after that I will test it again locally and I can ACK.
Force-pushed from 7fe7341 to c3bc47c (compare)
Sorry for invalidating the ACKs. Implemented suggestion in #32579 (comment):
- Fixes fuzzing of `commitment_offset` to have a min value of 0 instead of 1 as on master, and sets the max depending on the fuzzed `commitment_period`.
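The range fix can be sketched as follows — a simplified stand-in for how the fuzz target might map raw fuzzer input into the valid offset range (the function name and modulo mapping are assumptions, not the target's actual code):

```cpp
#include <cassert>
#include <cstddef>

// Map a raw fuzzer-chosen value into the valid commitment-offset range
// [0, commitment_period - 1]. Previously the minimum was 1; a fixed upper
// bound could also exceed a small fuzzed period, so the max must depend
// on the fuzzed commitment_period.
constexpr std::size_t ChooseCommitmentOffset(std::size_t raw, std::size_t commitment_period)
{
    return commitment_period == 0 ? 0 : raw % commitment_period;
}
```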
Re-ACK 342359f Nice, everything lgtm. Just some basic cleanups and changing the headers sync fuzz target to choose the params since my original review.
Commits:
- …unctions: Made arith_uint256 constexpr-constructible. (Co-authored-by: Lőrinc <pap.lorinc@gmail.com>)
- More lightweight than vectors, which needed to be copied in tests. Also good to get rid of the headers_batch vector before breaking up the test.
- Helps logically separate the scenarios being tested. Also adds a missing comment for part 4. (unique_ptrs and ProcessingResults will be cleaned up in the next commit.)
- Adds a new assert helping explain why CHAIN_WORK == TARGET_BLOCKS * 2.
- Introduces CHECK_RESULT for consistently validating ProcessingResult: verifies HeadersSyncState::State directly after ProcessNextHeaders(); uses BOOST_REQUIRE_EQUAL for HeadersSyncState::State (nicer failure output, and prevents continuing the test in a nonsensical state); encourages checking the locator and result.pow_validated_headers.
- Moves calculated constants from the top of src/headerssync.cpp into src/kernel/chainparams.cpp. Instead of being hardcoded to mainnet parameters, HeadersSyncState can now vary depending on chain or test. Signet and the testnets got new HeadersSyncParams constants through temporarily altering headerssync-params.py with corresponding GENESIS_TIME and MINCHAINWORK_HEADERS (based off the defaultAssumeValid block-height comments, corresponding to nMinimumChainWork). Regtest doesn't have a default assume-valid block height, so the values are copied from Testnet 4. Since the constants only affect memory usage, and have very low impact unless dealing with a largely malicious chain, it's not that critical to keep updating them for non-mainnet chains.
- Since headers_sync_chainwork_tests.cpp now has full control over its own REDOWNLOAD_BUFFER_SIZE, it can know exactly how many pow_validated_headers to expect, meaning we can test for exp_headers_size (expected) instead of min_headers_size (minimum).
Force-pushed from 342359f to 53341ea (compare)
Latest push:
- Breaks up the test into multiple Boost test cases instead of regular functions (#32579 (comment)). In doing so, I changed back to using a custom fixture as on master. Diffing with dimmed-zebra helps show what was just moved and what actually changed: `git diff 342359f 53341ea`.
- Removed redundant `assert()` after bringing that block closer to the constants (#32579 (comment)).
- Minor fixups to comments and local variables so that they are changed less and in more proper commits.

Re-ACK 53341ea Breaking up the three scenarios into their own test cases makes sense. I checked to make sure the test logic is unchanged.
std::vector<CBlockHeader> first_chain;
std::vector<CBlockHeader> second_chain;
const auto& first_chain{FirstChain()};
const auto& second_chain{SecondChain()};
nit: The sneaky redownload and insufficient work cases should be differentiated by making the second chain have enough work, e.g.: davidgumberg@1b1b7f2
That might be a cool higher level change, although hopefully a commitment will mismatch before we reach the final header.

Hm... thinking through the current code: given 15'000 headers and a period of 600, we have 38 commitments. So there's a 1 in 2^38 chance that all commitments will match. There's a 1/600 chance that `HeadersSyncState::commit_offset` will be 599, in which case `second_chain` would be too short to check the last commitment, making it a 1 in 2^37 chance of test failure for that offset. (In that case, we would not reach the PoW limit from processing the second chain and the `CHECK_RESULT()` below will explode.)

bitcoin/src/test/headers_sync_chainwork_tests.cpp
Lines 168 to 172 in 53341ea

// Try to sneakily feed back the second chain during REDOWNLOAD.
CHECK_RESULT(hss.ProcessNextHeaders(second_chain, true),
             hss, /*exp_state=*/State::FINAL,
             /*exp_success*/false, // Foiled! We detected mismatching headers.
             /*exp_request_more=*/false,

Although this aspect isn't materially changed by this PR, I agree it would be good to at least document these probabilities in the test. Will do if I re-push.

(Also note that the chain generation functions use `static` internally in order to avoid increasing the runtime of the test.)
I don't think we should be that worried about a
GenerateHeaders(first_chain, target_blocks-1, Params().GenesisBlock().GetHash(),
                Params().GenesisBlock().nVersion, Params().GenesisBlock().nTime,
                ArithToUint256(0), Params().GenesisBlock().nBits);
// Pretend the message is still "full", so we don't abort.
nit: I think this comment was incorrectly placed in the original; it is the `ProcessNextHeaders` above that needs `/*full_headers_message=*/true`.
Seems related to your previous GH comment. The current PR version has the comment:

CHECK_RESULT(hss.ProcessNextHeaders({{first_chain.front()}}, /*full_headers_message=*/true),
My bad, what I mean is that this comment:

// Pretend the message is still "full", so we don't abort.

belongs to the `ProcessNextHeaders` above.
Base version for this PR:

bitcoin/src/test/headers_sync_chainwork_tests.cpp
Lines 95 to 98 in fa05a72

hss.reset(new HeadersSyncState(0, Params().GetConsensus(), chain_start, chain_work));
(void)hss->ProcessNextHeaders({first_chain.front()}, true);
// Pretend the first header is still "full", so we don't abort.
auto result = hss->ProcessNextHeaders(headers_batch, true);

That comment would indeed be more useful before the first `ProcessNextHeaders` call. Will correct if I push again.
// Feed in remaining headers, meeting the work threshold again and
// completing the REDOWNLOAD phase:
CHECK_RESULT(hss.ProcessNextHeaders({first_chain.begin() + REDOWNLOAD_BUFFER_SIZE + 1, first_chain.end()}, true),
nit:
- CHECK_RESULT(hss.ProcessNextHeaders({first_chain.begin() + REDOWNLOAD_BUFFER_SIZE + 1, first_chain.end()}, true),
+ CHECK_RESULT(hss.ProcessNextHeaders({first_chain.begin() + REDOWNLOAD_BUFFER_SIZE + 1, first_chain.end()}, false),
Agree here; it seems a more taxing test not to pretend there are any more headers, so that HeadersSyncState has to make do. Will consider changing this if I push, although it is the same on master. Making it random, or doing a for x in {true, false} loop over it, might be even better.
Code Review ACK 53341ea
I reviewed the code and checked that the tests pass locally, but didn't do any extensive testing.
This PR makes HEADER_COMMITMENT_PERIOD and REDOWNLOAD_BUFFER_SIZE configurable by the tests, instead of using a hardcoded value, to avoid ending up testing unrealistic behavior once REDOWNLOAD_BUFFER_SIZE surpasses 15_000 in 0.30. (I like how we can test what could happen using the first commit and investigating the weird behavior!)
Additionally, the PR makes headers_sync_chainwork_test easier to follow by cleaning up code and expanding comments. It also makes the test more thorough by checking every parameter of ProcessingResult via the CHECK_RESULT macro.
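(For context, the check-every-field idea behind such a macro can be modeled as follows. This is a Python analog only, with illustrative field names; it is not the actual C++ CHECK_RESULT macro or the real ProcessingResult definition.)

```python
from dataclasses import dataclass, fields

@dataclass
class ProcessingResult:
    # Illustrative fields standing in for the real struct's members.
    success: bool
    request_more: bool
    pow_validated_headers: int

def check_result(actual: ProcessingResult, expected: ProcessingResult) -> None:
    # Compare every field explicitly, so a test cannot silently
    # ignore one of them.
    for f in fields(ProcessingResult):
        a, e = getattr(actual, f.name), getattr(expected, f.name)
        assert a == e, f"{f.name}: {a!r} != {e!r}"

check_result(ProcessingResult(True, False, 3), ProcessingResult(True, False, 3))
```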
crACK 53341ea. The test changes here look good to me. I left some nits that can be addressed in a follow-up or rebase. I sanity-checked that a fresh node doing headers sync for mainnet, testnet, and signet works on my machine with this branch. I think it would be good to have input from one of the authors/reviewers of #25717 on the non-test changes to headers sync in this PR (4aea5fa and 1771e59) before merging.
@Sjors or @sdaftuar: would you mind doing some light review of 1771e59? It represents the meat of this PR: making the headers sync state params that affect memory usage configurable for tests/other chains instead of hard-coded to mainnet conditions. (davidgumberg also suggested (#32579 (comment)) reviewing 4aea5fa, which also touches the (Made a list of author+reviewers from #25717 and used a RNG to select 2 to ping).
I generally disagree with the approach in this PR. It's doing a lot of refactoring that feels too much like refactoring for the sake of refactoring. I think this can be dropped from the milestone.
Commit order
I understand that given the current order of commits, the test refactorings seem less motivated. The order used to be refactorings + main commit + illustrative test + additional log test (3740523~9...3740523). Having all the refactoring first got the tests in better shape, so that later behavior changes could be expressed more clearly. @fanquake wrote on IRC 2 weeks ago (https://bitcoin-irc.chaincode.com/bitcoin-core-dev/2025-08-07#1142169;):
That made me think it would be better to have an independent commit at the beginning of the PR which breaks once the
Context of headers sync changes
I also must admit that the refactoring began in the context of my other WIP changes for
Refactoring direction
If you think the test refactorings go in arbitrary directions, I would be curious to hear which parts you still disagree with now that I've used multiple test cases as you suggested (#32579 (comment)).
Finishing thoughts
If you think that the changes in the main commit 1771e59 are also refactoring for the sake of refactoring, rather than parameters ending up where they should be to enable better testing, that's a different conversation.
(As I said on IRC, I respect the decision to drop it from the milestone. There are enough moving parts in the release process. It would've been neat to fix ahead of bumping
Given that the current PR already demonstrates the production code is still correct, I tend to agree that we don't have to rush. Would have been cleaner, but it's not urgent. I do think however that the refactorings are necessary - but I'd keep the commits before the final fix, and if other reviewers think it's too risky, maybe we can do it in a separate PR - unifying it with @danielabrozzoni's change and your other related proposal either through a tracking issue or pushing the rest as drafts. I am fine with doing the refactorings here.
Background
As part of the release process we often run contrib/devtools/headerssync-params.py and increase the values of the constants HEADER_COMMITMENT_PERIOD and REDOWNLOAD_BUFFER_SIZE in src/headerssync.cpp, as per doc/release-process.md (example: 11a2d3a). This helps fine-tune the memory consumption per HeadersSyncState instance in the face of malicious peers. (The REDOWNLOAD_BUFFER_SIZE / HEADER_COMMITMENT_PERIOD ratio determines how many Headers Sync commitment bits must match between the PRESYNC & REDOWNLOAD phases before we start permanently storing headers from a peer. For more details, see the comments in src/headerssync.h and contrib/devtools/headerssync-params.py.)

Problem: Not feeding back headers until completing sync
In the next (v30) or a following release, it is very likely that REDOWNLOAD_BUFFER_SIZE (14827 as of v29) will exceed the target_blocks constant used to control the length of chains generated for testing Headers Sync (15000, headers_sync_chainwork_tests.cpp).

Calculation
Date at time of calculation: 2025-05-21
Block height: 897'662
Avg block time: ~9.8min (https://mempool.space).
Added 6 months to TIME constant in headerssync-params.py: datetime(2028, 4, 6)
Increased block height in headerssync-params.py (MINCHAINWORK_HEADERS) to: 920'000
Block height difference: 22'338
Days: 22'338 * 9.8min / (60min * 24h) = ~152
Estimated date for block height 920'000: 2025-10-20
REDOWNLOAD_BUFFER_SIZE value calculated by headerssync-params.py: 15002
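To sanity-check the projection, the arithmetic above can be replayed, and a toy model of the redownload buffer (a simplification for illustration, not Bitcoin Core's actual implementation) shows why a 15'000-header test chain stops exercising the buffer-full path once the tuned constant passes 15'000:

```python
from collections import deque

# Replay the extrapolation; all inputs are taken from the text above.
block_height = 897_662       # height on 2025-05-21
target_height = 920_000      # MINCHAINWORK_HEADERS used for the projection
avg_block_minutes = 9.8      # cited average block time

height_diff = target_height - block_height
days = height_diff * avg_block_minutes / (60 * 24)
print(height_diff, round(days))  # 22338 blocks, ~152 days

# Toy model: headers are handed back incrementally only while the
# buffer holds more than `buffer_size` entries.
def incremental_pops(chain_length: int, buffer_size: int) -> int:
    buf: deque[int] = deque()
    popped = 0
    for header in range(chain_length):
        buf.append(header)
        while len(buf) > buffer_size:
            buf.popleft()
            popped += 1
    return popped

# v29 value: the 15'000-header test chain still overflows the buffer.
assert incremental_pops(15_000, 14_827) == 173
# Projected v30 value: the buffer would never fill during the test.
assert incremental_pops(15_000, 15_002) == 0
```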
The HeadersSyncState::m_redownloaded_headers buffer would not reach the REDOWNLOAD_BUFFER_SIZE threshold during those unit tests. As a consequence, HeadersSyncState::PopHeadersReadyForAcceptance() will not start feeding back headers until the PoW threshold has been met. While this will not cause the unit test to start failing on master, it means it would go from testing behavior that resembles mainnet (way more than 15'000 blocks to reach the PoW limit) to behavior that is not possible/expected there.

Solution
Preempt the tests from only testing this unrealistic condition of completing Headers Sync before reaching REDOWNLOAD_BUFFER_SIZE by making tests able to define their own values through the new HeadersSyncParams, instead of having them hard-coded for all chains & tests.

Commits
- REDOWNLOAD_BUFFER_SIZE threshold to illustrate the need for the rest of the PR (see commit message).
- HeadersSyncState less hard-coded.
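The direction of that last point can be pictured with a parameters object roughly like the following. This is a Python sketch of the idea only; the real HeadersSyncParams is C++, and the field names and the commitment-period value below are assumptions, not the actual members or constants.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HeadersSyncParams:
    # Field names are illustrative, not the actual C++ members.
    header_commitment_period: int
    redownload_buffer_size: int

# Chain-tuned values: the buffer size is the v29 figure from the PR
# text; the commitment period here is a placeholder, not the real one.
MAINNET_LIKE = HeadersSyncParams(header_commitment_period=600,
                                 redownload_buffer_size=14_827)

# Tests can pick tiny values so that short test chains still exercise
# the buffer-full path, instead of inheriting mainnet-sized constants.
TEST_SIZED = HeadersSyncParams(header_commitment_period=8,
                               redownload_buffer_size=64)

assert TEST_SIZED.redownload_buffer_size < MAINNET_LIKE.redownload_buffer_size
```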