
Conversation

jnewbery
Contributor

Cache block filter headers at heights of multiples of 1000 in memory.

Block filter headers at height 1000x are checkpointed, and will be the most frequently requested. Cache them in memory to avoid costly disk reads.

@jnewbery
Contributor Author

This takes a different approach from the caching in #16442. I prefer it because it isolates all of the caching logic inside the indexer rather than in net processing. I'll put the other approach up as a different branch for comparison.

Contributor

@theStack theStack left a comment


Concept ACK -- I didn't look very deeply into the proposed solution from the original PR (#16442, commit be5f7c5), but this solution here seems to be way simpler.

@DrahtBot
Contributor

DrahtBot commented May 13, 2020

The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Conflicts

Reviewers, this pull request conflicts with the following ones:

If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

@theStack
Contributor

theStack commented May 13, 2020

I'm almost ready to ACK this PR; I just have some minor suggestion/discussion points:

  • Would it make sense to also only check the cache if we have checkpoints, i.e. a method structure like
bool is_checkpoint = (block_index->nHeight % CFCHECKPT_INTERVAL == 0); 
if (is_checkpoint)
    // Check the headers cache

// Regular lookup

if (is_checkpoint)
    // Add to the headers cache

? Of course, the drawback of this approach is that the condition for checking would need to be removed if the cache is ever used for filter headers other than the checkpoint ones. I'm also not sure if it's really worth it from a performance point of view, but 999 times out of 1000 the lookup in the map could be skipped. (See the fleshed-out sketch after this list.)

  • In the regular case the cache should never get full, since by design that would only happen in the far future (block 2000000, >25 years from now, with the current maximum size constant of 2000). If it does get full, however, should we take some action like emitting a debug log message or even an assert, so we get notified that we either have a memory exhaustion bug or that the maximum size needs to be increased?
  • Any idea how to effectively test performance improvements like this? I don't know anything better than putting some LogPrintfs in the cache-hit / cache-add cases, running the (maybe slightly modified) functional test and analyzing the log output to check that it behaves as expected.
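For illustration, here is a fleshed-out sketch of the structure suggested in the first point. The member and constant names (m_headers_cache, m_cs_headers_cache, CFCHECKPT_INTERVAL, CF_HEADERS_CACHE_MAX_SZ) are taken from the PR and discussion; header_out and ReadFilterHeaderFromDisk() are placeholders standing in for the regular lookup path, not the actual code:

bool BlockFilterIndex::LookupFilterHeader(const CBlockIndex* block_index, uint256& header_out)
{
    // Only checkpoint headers (height is a multiple of CFCHECKPT_INTERVAL) are cached.
    const bool is_checkpoint{block_index->nHeight % CFCHECKPT_INTERVAL == 0};

    if (is_checkpoint) {
        // Check the headers cache
        LOCK(m_cs_headers_cache);
        auto it = m_headers_cache.find(block_index->GetBlockHash());
        if (it != m_headers_cache.end()) {
            header_out = it->second;
            return true;
        }
    }

    // Regular lookup (placeholder for the existing disk-backed path)
    if (!ReadFilterHeaderFromDisk(block_index, header_out)) return false;

    if (is_checkpoint) {
        // Add to the headers cache, bounded by CF_HEADERS_CACHE_MAX_SZ
        LOCK(m_cs_headers_cache);
        if (m_headers_cache.size() < CF_HEADERS_CACHE_MAX_SZ) {
            m_headers_cache.emplace(block_index->GetBlockHash(), header_out);
        }
    }
    return true;
}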

Member

@maflcko maflcko left a comment


Agree with @theStack that performance improvements should be benchmarked. This could turn into another improvement that is actually a slow-down, see #18352.

Contributor

@fjahr fjahr left a comment


Agree with previous comments: I prefer this to the previous approach. I think it would be good to have some benchmarks. Otherwise, code looks very good to me.

@jonasschnelli
Contributor

Concept ACK

@jnewbery jnewbery force-pushed the 2020-05-cfcheckpts-cache branch 2 times, most recently from 23be13d to 7ad83ed Compare May 13, 2020 22:43
@jnewbery
Contributor Author

@theStack

Would it make sense to also only check the cache if we have checkpoints, i.e. a method structure like

Done

@MarcoFalke

a non-recursive mutex should be sufficient.

Changed RecursiveMutex -> Mutex

I'm still working on adding a microbench, although it's not clear to me that it'll show any improvement since the disk reads may be cached by the OS for the microbench.

Contributor

@jkczyz jkczyz left a comment


Concept ACK.

Kudos on the simplicity of this approach!

theStack added a commit to theStack/bitcoin that referenced this pull request May 14, 2020
theStack added a commit to theStack/bitcoin that referenced this pull request May 14, 2020
@theStack
Contributor

theStack commented May 14, 2020

Using jnewbery's nice node-shell module (a modification of test-shell that allows connecting to a running full node via the functional test framework) I put together a quick-and-dirty test script (see branch https://github.com/theStack/bitcoin/tree/pr18960_test) that sends getcfcheckpt requests with the stop-hash of the halving block 630000 and measures the processing time.

Freshly started bitcoind from master branch (w/o checkpoint headers cache implementation), after mempool imports are done:

getcfcheckpt request processing times
=====================================
-> first:     53.75ms
-> following: 50.70ms (average)
=====================================

Freshly started bitcoind from PR branch (with checkpoint headers cache implementation), after mempool imports are done:

getcfcheckpt request processing times
=====================================
-> first:     50.92ms
-> following: 50.80ms (average)
=====================================

There doesn't seem to be a noticeable difference. I don't know if my methodology is flawed though. Maybe all the surrounding processing takes so much more time that the actual part of fetching the header (from the cache or from disk) is negligible. If anyone has an idea how to improve the benchmark script, happy to hear suggestions!

// EDIT: As jnewbery found out in the course of today's PR review club meeting, there is a good explanation of why the values are always close to 50ms:

19:36 < jnewbery> There are 100ms sleeps in the messagehandler thread between looping through all peers:
                  https://github.com/jnewbery/bitcoin/blob/7ad83ed252914d6d5b8ed81f103aecf052c68fb7/src/net.cpp#L2061
19:37 < jnewbery> on a non-busy node, 50ms is the average time you have to wait for the message handler thread to wake up

@michaelfolkson

Approach ACK

@maflcko
Member

maflcko commented May 14, 2020

@theStack I believe in between the requests you can drop the cache to simulate eviction (which may occasionally happen during normal operation). See https://unix.stackexchange.com/questions/17936/setting-proc-sys-vm-drop-caches-to-clear-cache

@jnewbery
Contributor Author

@narula suggested in today's PR Review Club meeting that it'd be less cognitive overhead to simply take the m_cs_headers_cache lock for the entirety of the LookupFilterHeader() function, rather than taking, releasing and taking it again. That seems like a reasonable request to me. We can't ever block important work with this lock, so holding it a bit longer than strictly necessary is not an issue.

@jnewbery
Contributor Author

@theStack - thank you for doing some benchmarking of this. Your approach inspired me to do something similar myself, using debug logs to find the exact time to construct a cfcheckpt response. I've added a debug_log_delta context wrapper to the node_shell, which allows me to provide two debug log messages, then do actions on the node and extract the time delta between those two log lines. I applied that patch to:

  • without a cache
  • with this cache
  • with @narula's suggestion of taking the lock only once (I was interested to see if there were any performance differences).

And then ran the following node-shell script:

# Fill in these placeholders before running.
REPO_PATH = "<path_to_repo>"
DATADIR_PATH = "<path_to_datadir>"

import sys
sys.path.insert(0, f"{REPO_PATH}/test/functional")
from test_framework.node_shell import NodeShell

# Attach the functional test framework to the already-running node.
test = NodeShell()
test.setup(datadir=DATADIR_PATH)
# <test_framework.node_shell.NodeShell.__TestShell object at 0x7f7704820490>
node = test.nodes[0]
bb = node.getbestblockhash()

# Build a getcfcheckpt request for the current tip and send it repeatedly,
# logging the time between the two debug log messages for each request.
from test_framework.messages import FILTER_TYPE_BASIC, msg_getcfcheckpt
request = msg_getcfcheckpt(filter_type=FILTER_TYPE_BASIC, stop_hash=int(bb, 16))
for i in range(15):
    with node.debug_log_delta("getcfcheckpt request received", "cfcheckpt response constructed"):
        node.p2ps[0].send_and_ping(request)
test.shutdown()

no cache

getcfcheckpt request received to cfcheckpt response constructed: 7.824ms
getcfcheckpt request received to cfcheckpt response constructed: 7.054ms
getcfcheckpt request received to cfcheckpt response constructed: 7.021ms
getcfcheckpt request received to cfcheckpt response constructed: 7.303ms
getcfcheckpt request received to cfcheckpt response constructed: 7.355ms
getcfcheckpt request received to cfcheckpt response constructed: 7.26ms
getcfcheckpt request received to cfcheckpt response constructed: 7.169ms
getcfcheckpt request received to cfcheckpt response constructed: 7.166ms
getcfcheckpt request received to cfcheckpt response constructed: 7.194ms
getcfcheckpt request received to cfcheckpt response constructed: 8.574ms
getcfcheckpt request received to cfcheckpt response constructed: 7.119ms
getcfcheckpt request received to cfcheckpt response constructed: 7.156ms
getcfcheckpt request received to cfcheckpt response constructed: 4.153ms
getcfcheckpt request received to cfcheckpt response constructed: 8.162ms
getcfcheckpt request received to cfcheckpt response constructed: 6.485ms

There does seem to be a very slight performance improvement after the first request. Maybe around 5-10% from the mean, but variance is quite high, so it might be noise.

cache (taking locks twice)

getcfcheckpt request received to cfcheckpt response constructed: 12.2ms
getcfcheckpt request received to cfcheckpt response constructed: 2.855ms
getcfcheckpt request received to cfcheckpt response constructed: 2.992ms
getcfcheckpt request received to cfcheckpt response constructed: 2.869ms
getcfcheckpt request received to cfcheckpt response constructed: 3.141ms
getcfcheckpt request received to cfcheckpt response constructed: 3.692ms
getcfcheckpt request received to cfcheckpt response constructed: 3.452ms
getcfcheckpt request received to cfcheckpt response constructed: 2.916ms
getcfcheckpt request received to cfcheckpt response constructed: 3.413ms
getcfcheckpt request received to cfcheckpt response constructed: 3.648ms
getcfcheckpt request received to cfcheckpt response constructed: 3.816ms
getcfcheckpt request received to cfcheckpt response constructed: 3.84ms
getcfcheckpt request received to cfcheckpt response constructed: 3.812ms
getcfcheckpt request received to cfcheckpt response constructed: 3.869ms
getcfcheckpt request received to cfcheckpt response constructed: 3.445ms

The initial request is slower (presumably because the cache needs to be warmed), but subsequent requests are on average >50% faster than with no cache.

cache (taking lock once)

getcfcheckpt request received to cfcheckpt response constructed: 11.785ms
getcfcheckpt request received to cfcheckpt response constructed: 3.35ms
getcfcheckpt request received to cfcheckpt response constructed: 3.19ms
getcfcheckpt request received to cfcheckpt response constructed: 3.475ms
getcfcheckpt request received to cfcheckpt response constructed: 3.175ms
getcfcheckpt request received to cfcheckpt response constructed: 3.345ms
getcfcheckpt request received to cfcheckpt response constructed: 3.274ms
getcfcheckpt request received to cfcheckpt response constructed: 3.161ms
getcfcheckpt request received to cfcheckpt response constructed: 3.259ms
getcfcheckpt request received to cfcheckpt response constructed: 3.217ms
getcfcheckpt request received to cfcheckpt response constructed: 3.238ms
getcfcheckpt request received to cfcheckpt response constructed: 3.253ms
getcfcheckpt request received to cfcheckpt response constructed: 3.33ms
getcfcheckpt request received to cfcheckpt response constructed: 3.32ms
getcfcheckpt request received to cfcheckpt response constructed: 3.36ms

The first request is slightly faster than when taking locks twice (although that might be noise). Subsequent requests are the same speed as the 'taking locks twice' branch, which is what we'd expect because locks are only taken once in that branch if the header is found in the cache.

@MarcoFalke - I think these full node manual benches are more meaningful than microbenches, so I'm not planning to add any automated benches to /src/bench.

@jnewbery
Contributor Author

@narula suggested in today's PR Review Club meeting that it'd be less cognitive overhead to simply take the m_cs_headers_cache for the entirety of the LookupFilterHeader() function, rather than taking, releasing and taking it again.

I'll make this change tomorrow.

@theStack
Contributor

theStack commented May 15, 2020

In yesterday's review club meeting on this PR, it was suggested to put LogPrintf() calls directly into the net processing code to measure the execution time of getcfcheckpt requests. (My previous approach of measuring the request response time from the test framework was not precise enough, as the time also included Python overhead and the time for the message handler to wake up.) I did some testing with this refined approach and can roughly confirm jnewbery's results (see commit theStack@48cff1b):

without cache:

2020-05-14T20:15:11Z getcfcheckpt processing took 10247us
2020-05-14T20:15:11Z getcfcheckpt processing took 6105us
2020-05-14T20:15:11Z getcfcheckpt processing took 6437us
2020-05-14T20:15:11Z getcfcheckpt processing took 5330us
2020-05-14T20:15:11Z getcfcheckpt processing took 6964us
2020-05-14T20:15:11Z getcfcheckpt processing took 5523us
2020-05-14T20:15:12Z getcfcheckpt processing took 5413us
2020-05-14T20:15:12Z getcfcheckpt processing took 5843us
2020-05-14T20:15:12Z getcfcheckpt processing took 7060us
2020-05-14T20:15:12Z getcfcheckpt processing took 6165us
...

with cache:

2020-05-14T20:21:26Z getcfcheckpt processing took 9801us
2020-05-14T20:21:26Z getcfcheckpt processing took 2619us
2020-05-14T20:21:26Z getcfcheckpt processing took 3338us
2020-05-14T20:21:26Z getcfcheckpt processing took 2446us
2020-05-14T20:21:26Z getcfcheckpt processing took 2560us
2020-05-14T20:21:27Z getcfcheckpt processing took 2331us
2020-05-14T20:21:27Z getcfcheckpt processing took 2703us
2020-05-14T20:21:27Z getcfcheckpt processing took 2614us
2020-05-14T20:21:27Z getcfcheckpt processing took 2598us
2020-05-14T20:21:27Z getcfcheckpt processing took 2444us
...

One can see that there's a speedup of at least 2x with the cache.

Note that my approach to get these numbers was slightly different, as I directly measured the time in bitcoind via GetTimeMicros() (commit theStack@49f5f3d):

diff --git a/src/net_processing.cpp b/src/net_processing.cpp
index 1df1fab..6e620d9 100644
--- a/src/net_processing.cpp
+++ b/src/net_processing.cpp
@@ -3380,7 +3380,10 @@ bool ProcessMessage(CNode* pfrom, const std::string& msg_type, CDataStream& vRec
     }

     if (msg_type == NetMsgType::GETCFCHECKPT) {
+        int64_t start_time = GetTimeMicros();
         ProcessGetCFCheckPt(pfrom, vRecv, chainparams, connman);
+        int64_t processing_duration = GetTimeMicros() - start_time;
+        LogPrintf("getcfcheckpt processing took %lldus\n", processing_duration);
         return true;
     }

but the idea is the same.

Now I'm very curious whether the change from std::map to std::unordered_map shows a noticeable difference in performance.

@MarcoFalke: Nice idea to drop the cache between the requests. Unfortunately, on my current development box I don't have the permissions (root rights) to do this :/ If anyone has the ability to try it, the results would be very interesting.

@jnewbery jnewbery force-pushed the 2020-05-cfcheckpts-cache branch from 7ad83ed to 4018953 Compare May 15, 2020 16:00
@jnewbery
Contributor Author

it'd be less cognitive overhead to simply take the m_cs_headers_cache for the entirety of the LookupFilterHeader() function

@narula

Done.

@@ -31,6 +31,8 @@ constexpr char DB_FILTER_POS = 'P';
constexpr unsigned int MAX_FLTR_FILE_SIZE = 0x1000000; // 16 MiB
/** The pre-allocation chunk size for fltr?????.dat files */
constexpr unsigned int FLTR_FILE_CHUNK_SIZE = 0x100000; // 1 MiB
/** Maximum size of the cfheaders cache */
Contributor


Nit: give justification in comments for magic numbers so future readers have some idea how this was chosen and if/when it will need to be adjusted.

Contributor Author


done
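For context, the kind of justification being asked for might read roughly like this. It's a sketch based on the figures already discussed above (2000 entries, block 2,000,000, >25 years), not necessarily the exact wording that was added:

/** Maximum size of the cfheaders cache.
 *  We have a limit to prevent a bug from filling this cache and blowing up memory.
 *  At 2000 entries the cache covers checkpoints up to block 2,000,000, i.e. more
 *  than 25 years of headroom at the current checkpoint interval. */
constexpr size_t CF_HEADERS_CACHE_MAX_SZ{2000};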

Comment on lines 406 to 421
// Add to the headers cache
if (m_headers_cache.size() < CF_HEADERS_CACHE_MAX_SZ) {
    m_headers_cache.emplace(block_index->GetBlockHash(), entry.header);
}

Contributor


You'll still want to condition this on block_index->nHeight % CFCHECKPT_INTERVAL == 0, otherwise the most recent 2000 requested headers will be cached rather than the desired ones.

Contributor Author


yikes. Thanks

Contributor Author


done

@narula
Contributor

narula commented May 15, 2020

utACK

@jnewbery
Contributor Author

I've force-pushed a branch that changes this to use an unordered_map and only lock once.

Comparing the unordered map implementation to map:

unordered_map

Log
getcfcheckpt request received to cfcheckpt response constructed: 12.808ms
getcfcheckpt request received to cfcheckpt response constructed: 2.783ms
getcfcheckpt request received to cfcheckpt response constructed: 2.712ms
getcfcheckpt request received to cfcheckpt response constructed: 2.656ms
getcfcheckpt request received to cfcheckpt response constructed: 2.672ms
getcfcheckpt request received to cfcheckpt response constructed: 2.82ms
getcfcheckpt request received to cfcheckpt response constructed: 2.994ms
getcfcheckpt request received to cfcheckpt response constructed: 3.068ms
getcfcheckpt request received to cfcheckpt response constructed: 3.335ms
getcfcheckpt request received to cfcheckpt response constructed: 3.444ms
getcfcheckpt request received to cfcheckpt response constructed: 3.159ms
getcfcheckpt request received to cfcheckpt response constructed: 3.194ms
getcfcheckpt request received to cfcheckpt response constructed: 2.601ms
getcfcheckpt request received to cfcheckpt response constructed: 2.692ms
getcfcheckpt request received to cfcheckpt response constructed: 2.982ms
getcfcheckpt request received to cfcheckpt response constructed: 3.209ms
getcfcheckpt request received to cfcheckpt response constructed: 3.116ms
getcfcheckpt request received to cfcheckpt response constructed: 3.136ms
getcfcheckpt request received to cfcheckpt response constructed: 2.666ms
getcfcheckpt request received to cfcheckpt response constructed: 3.091ms
getcfcheckpt request received to cfcheckpt response constructed: 3.444ms
getcfcheckpt request received to cfcheckpt response constructed: 3.385ms
getcfcheckpt request received to cfcheckpt response constructed: 3.147ms
getcfcheckpt request received to cfcheckpt response constructed: 3.145ms
getcfcheckpt request received to cfcheckpt response constructed: 3.053ms

First lookup: 12.8ms, average subsequent lookup: 3.02ms

map

Log
getcfcheckpt request received to cfcheckpt response constructed: 12.959ms
getcfcheckpt request received to cfcheckpt response constructed: 3.001ms
getcfcheckpt request received to cfcheckpt response constructed: 3.656ms
getcfcheckpt request received to cfcheckpt response constructed: 2.07ms
getcfcheckpt request received to cfcheckpt response constructed: 2.537ms
getcfcheckpt request received to cfcheckpt response constructed: 2.838ms
getcfcheckpt request received to cfcheckpt response constructed: 2.994ms
getcfcheckpt request received to cfcheckpt response constructed: 3.143ms
getcfcheckpt request received to cfcheckpt response constructed: 3.383ms
getcfcheckpt request received to cfcheckpt response constructed: 3.666ms
getcfcheckpt request received to cfcheckpt response constructed: 3.889ms
getcfcheckpt request received to cfcheckpt response constructed: 3.553ms
getcfcheckpt request received to cfcheckpt response constructed: 3.394ms
getcfcheckpt request received to cfcheckpt response constructed: 3.3ms
getcfcheckpt request received to cfcheckpt response constructed: 3.5ms
getcfcheckpt request received to cfcheckpt response constructed: 3.372ms
getcfcheckpt request received to cfcheckpt response constructed: 3.329ms
getcfcheckpt request received to cfcheckpt response constructed: 3.308ms
getcfcheckpt request received to cfcheckpt response constructed: 3.564ms
getcfcheckpt request received to cfcheckpt response constructed: 3.746ms
getcfcheckpt request received to cfcheckpt response constructed: 3.714ms
getcfcheckpt request received to cfcheckpt response constructed: 3.423ms
getcfcheckpt request received to cfcheckpt response constructed: 3.517ms
getcfcheckpt request received to cfcheckpt response constructed: 3.636ms
getcfcheckpt request received to cfcheckpt response constructed: 3.595ms
First lookup: 13.0ms, average subsequent lookup: 3.60ms

So from this single test, it appears that an unordered_map is indeed faster. That's what we'd expect: a single lookup is O(1) and the number of lookups required for a single getcfcheckpt request is O(n), so the total is O(n). For a map, a single lookup is O(log n), so the total is O(n log n).

The original implementation is faster still, since it's storing the checkpts in a single vector, but the aim of this PR is to avoid disk reads for getcfcheckpt messages, which the simple unordered_map implementation does. Further performance improvements could be done as a follow-up if required.
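For reference, the single-vector layout mentioned above might look roughly like this. This is purely an illustrative sketch of the idea, not the actual code from #16442; reorg handling and cache population are elided, and LookupCachedCheckpointHeader is a hypothetical name:

// Hypothetical class members: entry i holds the filter header at height i * CFCHECKPT_INTERVAL.
std::vector<uint256> m_checkpoint_headers GUARDED_BY(m_cs_headers_cache);

bool LookupCachedCheckpointHeader(int height, uint256& header_out)
{
    LOCK(m_cs_headers_cache);
    // Direct array access by checkpoint number: no hashing and no tree traversal.
    const size_t idx = height / CFCHECKPT_INTERVAL;
    if (idx >= m_checkpoint_headers.size()) return false;
    header_out = m_checkpoint_headers[idx];
    return true;
}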

@jnewbery jnewbery force-pushed the 2020-05-cfcheckpts-cache branch from 4018953 to b6eba9e Compare May 15, 2020 16:48
@jnewbery
Contributor Author

Feedback from @narula and @jkczyz addressed. Thanks!

@jkczyz
Contributor

jkczyz commented May 15, 2020

ACK b6eba9e

Ran all tests.

@fjahr
Contributor

fjahr commented May 15, 2020

utACK b6eba9e

{
    LOCK(m_cs_headers_cache);
    auto header = m_headers_cache.find(block_index->GetBlockHash());
Contributor


Before the last change, the cache lookup only happened for checkpoint blocks (i.e. block number modulo 1000 is zero). Is this removal of the condition intended?

Contributor Author


No. Not intentional. I just messed up splitting out the LOCK(m_cs_headers_cache). Thanks for catching this.

@jnewbery jnewbery force-pushed the 2020-05-cfcheckpts-cache branch from b6eba9e to c63e934 Compare May 18, 2020 16:53
Cache block filter headers at heights of multiples of 1000 in memory.

Block filter headers at height 1000x are checkpointed, and will be the
most frequently requested. Cache them in memory to avoid costly disk
reads.
@jnewbery jnewbery force-pushed the 2020-05-cfcheckpts-cache branch from c63e934 to 0187d4c Compare May 18, 2020 16:54
@jnewbery
Contributor Author

Thanks for the re-review @jkczyz and @theStack, and for catching the problems with checking checkpoint heights. Should be fixed now.

@jkczyz
Contributor

jkczyz commented May 18, 2020

ACK 0187d4c

@jnewbery jnewbery changed the title [indexes] Add compact block filter headers cache indexes: Add compact block filter headers cache May 18, 2020
@theStack
Contributor

ACK 0187d4c 🎉

By the way, I'm neutral on the question of whether to use std::map or std::unordered_map for the cache -- std::unordered_map does indeed seem to lead to some performance improvement (my tests show a rough 10% improvement, see numbers below; jnewbery's tests show an even higher improvement of close to 20%), but on the other hand it takes both more space (as pointed out by MarcoFalke) and more code, due to the need to provide a hash function (a minimal sketch of such a hasher follows the numbers below). Maybe someone has a clearer opinion on the choice; personally, I'm okay with ACKing either version.

Test with std::map:

2020-05-19T09:09:01Z getcfcheckpt processing took 13817us
2020-05-19T09:09:01Z getcfcheckpt processing took 2855us
2020-05-19T09:09:01Z getcfcheckpt processing took 3573us
2020-05-19T09:09:01Z getcfcheckpt processing took 3444us
2020-05-19T09:09:01Z getcfcheckpt processing took 3236us
2020-05-19T09:09:01Z getcfcheckpt processing took 3385us
2020-05-19T09:09:01Z getcfcheckpt processing took 3543us
2020-05-19T09:09:01Z getcfcheckpt processing took 3378us
2020-05-19T09:09:01Z getcfcheckpt processing took 3450us
2020-05-19T09:09:01Z getcfcheckpt processing took 3041us
2020-05-19T09:09:01Z getcfcheckpt processing took 3561us
...

➡️ Average subsequent lookup: 3.346ms

Test with std::unordered_map:

2020-05-19T08:59:53Z getcfcheckpt processing took 12205us
2020-05-19T08:59:53Z getcfcheckpt processing took 3650us
2020-05-19T08:59:53Z getcfcheckpt processing took 4660us
2020-05-19T08:59:53Z getcfcheckpt processing took 3393us
2020-05-19T08:59:53Z getcfcheckpt processing took 3669us
2020-05-19T08:59:53Z getcfcheckpt processing took 2919us
2020-05-19T08:59:53Z getcfcheckpt processing took 2565us
2020-05-19T08:59:53Z getcfcheckpt processing took 2804us
2020-05-19T08:59:53Z getcfcheckpt processing took 2348us
2020-05-19T08:59:53Z getcfcheckpt processing took 2511us
2020-05-19T08:59:53Z getcfcheckpt processing took 2264us
...

➡️ Average subsequent lookup: 3.078ms
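For context, the extra code needed for the std::unordered_map variant is a hasher for the uint256 key. A minimal sketch, assuming ReadLE64 from crypto/common.h (the FilterHeaderHasher actually used in the PR may differ), looks something like:

struct FilterHeaderHasher
{
    // Block hashes are already uniformly distributed, so simply reading the first
    // 8 bytes of the hash is a cheap, adequate hash function for this cache.
    size_t operator()(const uint256& hash) const { return ReadLE64(hash.begin()); }
};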

@fjahr
Contributor

fjahr commented May 19, 2020

re-utACK 0187d4c

Compared to my last review, the is_checkpoint variable is introduced for better readability and the if statement that was removed in error is added back: https://github.com/bitcoin/bitcoin/compare/b6eba9e492e1dfb38baec894bfda1cfb4d709e5d..0187d4c118ab4c0f5c2d4fb180c2a8dea8ac53cf


@ariard ariard left a comment


Code Review ACK 0187d4c.

I prefer this approach of caching inside the index instead of in net_processing compared to #16442; it reduces cognitive load. I also agree with ignoring reorgs as argued: the probability of the event is rare enough, and the overhead cheap enough, to justify having some junk in the cache.

@@ -30,6 +38,9 @@ class BlockFilterIndex final : public BaseIndex
bool ReadFilterFromDisk(const FlatFilePos& pos, BlockFilter& filter) const;
size_t WriteFilterToDisk(FlatFilePos& pos, const BlockFilter& filter);

Mutex m_cs_headers_cache;
std::unordered_map<uint256, uint256, FilterHeaderHasher> m_headers_cache GUARDED_BY(m_cs_headers_cache);


If you have to change the PR again, you may want to add a comment that the key is the block hash and the mapped value the filter header; since both have the same type, it may be confusing at first look.

Contributor Author


Good point. I'll add a comment if I need to retouch the branch.
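The kind of comment being suggested might look roughly like this; the declaration itself is from the diff above, while the comment wording is only a sketch:

/** Cache of block hash -> filter header, to avoid disk access when responding to
 *  getcfcheckpt requests. Since key and value are both uint256, the comment makes
 *  explicit which is which. */
std::unordered_map<uint256, uint256, FilterHeaderHasher> m_headers_cache GUARDED_BY(m_cs_headers_cache);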

@laanwj
Member

laanwj commented May 21, 2020

code review ACK 0187d4c

@laanwj laanwj merged commit 4479eb0 into bitcoin:master May 21, 2020
@jnewbery jnewbery deleted the 2020-05-cfcheckpts-cache branch May 21, 2020 18:30
sidhujag pushed a commit to syscoin/syscoin that referenced this pull request May 21, 2020
0187d4c [indexes] Add compact block filter headers cache (John Newbery)

Pull request description:

  Cache block filter headers at heights of multiples of 1000 in memory.

  Block filter headers at height 1000x are checkpointed, and will be the most frequently requested. Cache them in memory to avoid costly disk reads.

ACKs for top commit:
  jkczyz:
    ACK 0187d4c
  theStack:
    ACK 0187d4c 🎉
  fjahr:
    re-utACK 0187d4c
  laanwj:
    code review ACK 0187d4c
  ariard:
    Code Review ACK 0187d4c.

Tree-SHA512: 2075ae36901ebcdc4a217eae5203ebc8582181a0831fb7a53a119f031c46bca960a610a38a3d0636a9a405f713efcf4200c85f10c8559fd80139036d89473c56
jasonbcox pushed a commit to Bitcoin-ABC/bitcoin-abc that referenced this pull request Nov 19, 2020
Summary:
```
Cache block filter headers at heights of multiples of 1000 in memory.

Block filter headers at height 1000x are checkpointed, and will be the
most frequently requested. Cache them in memory to avoid costly disk
reads.
```

Backport of core [[bitcoin/bitcoin#18960 | PR18960]].

Depends on D8463.

Test Plan:
  ninja all check-all

Reviewers: #bitcoin_abc, deadalnix

Reviewed By: #bitcoin_abc, deadalnix

Differential Revision: https://reviews.bitcoinabc.org/D8464
kwvg added a commit to kwvg/dash that referenced this pull request Aug 22, 2021
kwvg added a commit to kwvg/dash that referenced this pull request Aug 22, 2021
kwvg added a commit to kwvg/dash that referenced this pull request Sep 16, 2021
kwvg added a commit to kwvg/dash that referenced this pull request Sep 18, 2021
kwvg added a commit to kwvg/dash that referenced this pull request Sep 19, 2021
thelazier pushed a commit to thelazier/dash that referenced this pull request Sep 25, 2021
kwvg added a commit to kwvg/dash that referenced this pull request Oct 12, 2021
@bitcoin bitcoin locked as resolved and limited conversation to collaborators Feb 15, 2022