Conversation

sipa
Member

@sipa sipa commented Nov 18, 2014

Currently we only inv the hash of the new tip, so several round trips are needed before a peer discovers the hashes of all blocks in between. We can simply announce all new block hashes.

Just a minor improvement, not intended for 0.10.

@sipa sipa force-pushed the announceall branch 2 times, most recently from f505115 to 3ed7f8e Compare November 25, 2014 11:25
@laanwj laanwj added the P2P label Nov 28, 2014
```cpp
CBlockIndex *pindexMostWork = NULL;
do {
    boost::this_thread::interruption_point();

    CBlockIndex *pindexNewTip;
```
Member
In the moved line, this variable was initialized to NULL, but it no longer is. It may be better to keep the initialization, so that potential bugs result in more predictable outcomes (same for pindexFork on the line below).

@sipa
Member Author

I'm not sure I agree this is best practice (initializing variables can also hide bugs that uninitialized memory, and e.g. valgrind, would reveal), but: done.

@sipa
Member Author

sipa commented Apr 8, 2015

Updated.

@sdaftuar
Member

Did some basic testing, and looks good to me.

@sipa
Member Author

sipa commented Apr 11, 2015

Mental note: perhaps this should limit itself to 8-16 new entries or so. If it's more, rely on peer's incremental synchronization to catch up (inspired by a comment by @sdaftuar in #5982).

EDIT: implemented

@sdaftuar
Member

I guess one side effect of this is that when announcing a reorg to an 0.10 peer, this increases the overhead. Say there are N blocks announced in the inv generated here. Then an 0.10 peer will generate N getheaders messages (with the same locator in each, but with a different hashstop), and the 0.10 peer will therefore receive N headers responses, with {0,1,...,(N-1)} headers in each message.

The total overhead is still not that large in absolute terms I think, but with N^2 behavior here, maybe this argues for the cap being more like 4 or 6, rather than 16?

Also, in the case where we are truncating the invs, it may be better to revert to inv'ing only the tip, rather than announcing up to our cap (whatever value we pick), since 0.10 nodes would otherwise request the same long chain of headers multiple times.

@sipa
Member Author

sipa commented Apr 17, 2015

@sdaftuar Hmm, maybe that's a reason to only send all hashes when using 'headers' announcements, not 'inv' announcements (#5982)?

@sdaftuar
Member

@sipa Yes I think that's reasonable, and that's the approach I'm trying for #5982 -- when inv-ing, only announce the tip; but when announcing headers, be willing to announce everything back to the last fork point (with some limits, as discussed in that issue).

@jgarzik
Contributor

jgarzik commented Jul 23, 2015

Concept and light review ACK.

Main comment: If many nodes on the network are re-org'ing at once, doesn't this cause traffic to spike?

I'll see if I can queue up some worst case local testing in a week or so, if nobody beats me to it.

@sipa
Member Author

sipa commented Sep 22, 2015

Superseded by #6494.

@sipa sipa closed this Sep 22, 2015
@bitcoin bitcoin locked as resolved and limited conversation to collaborators Sep 8, 2021
4 participants