
Conversation

@sdaftuar (Member) commented Jul 6, 2022

Eliminate the unnecessary atomic guarding m_last_getheaders_timestamp, which is only accessed in a single thread (thanks to MarcoFalke for pointing this out).

Also address a nit that came up in #25454.

Comment on lines 3977 to 3981
-    peer->m_last_getheaders_timestamp.store(NodeSeconds{});
+    WITH_LOCK(peer->m_last_getheaders_mutex, peer->m_last_getheaders_timestamp = {});
Contributor

Why change NodeSeconds{} to {}?

m_last_getheaders_timestamp is of type
NodeClock::time_point aka
std::chrono::time_point<NodeClock> aka
std::chrono::time_point<std::chrono::system_clock>

while NodeSeconds is of type
std::chrono::time_point<NodeClock, std::chrono::seconds> aka
std::chrono::time_point<std::chrono::system_clock, std::chrono::seconds>.

So the type of m_last_getheaders_timestamp omits the second template parameter, which defaults to std::chrono::system_clock::duration. According to the documentation that is std::chrono::duration<rep, period>, but I can't find where it is specified that std::chrono::system_clock::period is std::ratio<1>. It only says:

period -- a std::ratio type representing the tick period of the clock, in seconds

Even if that is ok, there is now an inconsistency in how m_last_getheaders_timestamp is set: its declaration uses NodeSeconds{}, while here {} is used.
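A small compile-time sketch of the type relationships described above, assuming NodeClock is an alias for std::chrono::system_clock as in Bitcoin Core; this is illustrative only, not code from the PR:

    #include <chrono>
    #include <type_traits>

    using NodeClock = std::chrono::system_clock;  // assumption: NodeClock aliases system_clock
    using NodeSeconds = std::chrono::time_point<NodeClock, std::chrono::seconds>;

    // Omitting the duration parameter defaults it to the clock's own duration type,
    // whose tick period is implementation-defined (commonly nanoseconds or microseconds).
    static_assert(std::is_same_v<NodeClock::time_point,
                                 std::chrono::time_point<NodeClock, NodeClock::duration>>);

    int main() {}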

Member

NodeSeconds{}, as well as std::chrono::time_point<NodeClock>{}, will construct a time point representing the clock's epoch; see https://en.cppreference.com/w/cpp/chrono/time_point/time_point. The duration (time) since the epoch is always zero, regardless of the duration's period. So this should be fine as is.
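A minimal runnable sketch of that point (illustrative only, again assuming NodeClock aliases system_clock): default-constructed time points of the same clock sit at the epoch and compare equal regardless of their duration type.

    #include <cassert>
    #include <chrono>

    using NodeClock = std::chrono::system_clock;  // assumption: NodeClock aliases system_clock
    using NodeSeconds = std::chrono::time_point<NodeClock, std::chrono::seconds>;

    int main()
    {
        NodeClock::time_point default_duration{};  // period: system_clock's native duration
        NodeSeconds whole_seconds{};               // period: whole seconds

        // Both represent zero time since the clock's epoch; the comparison converts
        // them to a common duration type, so they compare equal.
        assert(default_duration == whole_seconds);
        assert(default_duration.time_since_epoch().count() == 0);
    }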

Contributor

Alright, zero is always zero even if the period is std::ratio<1> for one and something else for the other.

This means that it should be possible to use just {} in the declaration for consistency, right?

-NodeClock::time_point m_last_getheaders_timestamp GUARDED_BY(m_last_getheaders_mutex) {NodeSeconds{}};
+NodeClock::time_point m_last_getheaders_timestamp GUARDED_BY(m_last_getheaders_mutex) {};

Member

This means that it should be possible to use just {} in the declaration for consistency, right?

For the atomic that isn't possible due to compiler bugs, but without the atomic it may be possible.
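A minimal sketch of the two declaration styles under discussion, with a hypothetical Peer struct and member names (illustrative only): per the compiler-bug caveat above, the atomic form keeps an explicit NodeSeconds{} initializer, while a plain member can rely on {} value-initialization.

    #include <atomic>
    #include <chrono>

    using NodeClock = std::chrono::system_clock;  // assumption: NodeClock aliases system_clock
    using NodeSeconds = std::chrono::time_point<NodeClock, std::chrono::seconds>;

    struct Peer {
        // Atomic member: initialized explicitly from NodeSeconds{} (the clock's epoch),
        // since bare {} initialization of the atomic is the case flagged as problematic above.
        std::atomic<NodeClock::time_point> m_atomic_timestamp{NodeSeconds{}};

        // Plain member: {} value-initializes the time_point to the epoch, so the
        // NodeSeconds{} spelling is no longer needed once the atomic is dropped.
        NodeClock::time_point m_plain_timestamp{};
    };

    int main()
    {
        Peer peer;
        (void)peer;
    }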

Comment on lines -2279 to 2281
-    if (current_time - peer.m_last_getheaders_timestamp.load() > HEADERS_RESPONSE_TIME) {
+    if (current_time - peer.m_last_getheaders_timestamp > HEADERS_RESPONSE_TIME) {
         m_connman.PushMessage(&pfrom, msgMaker.Make(NetMsgType::GETHEADERS, locator, uint256()));
         peer.m_last_getheaders_timestamp = current_time;
@vasild (Contributor) Jul 11, 2022

A similar result can be achieved without a mutex, in a lock-free manner:

     // Only allow a new getheaders message to go out if we don't have a recent
     // one already in-flight
-    if (current_time - peer.m_last_getheaders_timestamp.load() > HEADERS_RESPONSE_TIME) {
+    auto last = peer.m_last_getheaders_timestamp.load();
+    if (current_time - last > HEADERS_RESPONSE_TIME &&
+        peer.m_last_getheaders_timestamp.compare_exchange_strong(last, current_time)) {
+
         m_connman.PushMessage(&pfrom, msgMaker.Make(NetMsgType::GETHEADERS, locator, uint256()));
-        peer.m_last_getheaders_timestamp = current_time;
         return true;
     }

Anyway, feel free to ignore. I think in this case using a mutex is also ok.
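A self-contained sketch of the lock-free pattern suggested above, illustrative only: the made-up MaybeSendGetHeaders helper and the file-scope g_last_getheaders_timestamp stand in for the real peer member, and the HEADERS_RESPONSE_TIME value here is just for demonstration.

    #include <atomic>
    #include <chrono>
    #include <iostream>

    using NodeClock = std::chrono::system_clock;  // assumption: NodeClock aliases system_clock
    constexpr auto HEADERS_RESPONSE_TIME = std::chrono::minutes{2};  // illustrative value

    std::atomic<NodeClock::time_point> g_last_getheaders_timestamp{NodeClock::time_point{}};

    // Returns true if the caller won the race to send a new getheaders message now.
    // The compare_exchange_strong ensures that, even with concurrent callers, at most
    // one of them updates the timestamp (and thus sends) within the window.
    bool MaybeSendGetHeaders(NodeClock::time_point current_time)
    {
        auto last = g_last_getheaders_timestamp.load();
        return current_time - last > HEADERS_RESPONSE_TIME &&
               g_last_getheaders_timestamp.compare_exchange_strong(last, current_time);
    }

    int main()
    {
        const auto now = NodeClock::now();
        std::cout << MaybeSendGetHeaders(now) << '\n';  // 1: first call wins the slot
        std::cout << MaybeSendGetHeaders(now) << '\n';  // 0: second call is rate-limited
    }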

Comment on lines 2277 to 2279
const auto current_time = NodeClock::now();

LOCK(peer.m_last_getheaders_mutex);
Contributor

Is it not better to first lock the mutex and then acquire the current time? Locking the mutex can take a "long" time, and we would then be operating on an outdated current_time.
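A minimal sketch of the ordering the question suggests, using std::mutex and std::chrono directly rather than Bitcoin Core's LOCK macro and NodeClock, with a hypothetical UpdateTimestamp helper (illustrative only; the point became moot once the mutex was dropped).

    #include <chrono>
    #include <mutex>

    std::mutex g_last_getheaders_mutex;
    std::chrono::system_clock::time_point g_last_getheaders_timestamp{};

    void UpdateTimestamp()
    {
        // Take the lock first, then sample the clock, so any time spent blocked on
        // the mutex is not counted against the freshness of current_time.
        std::lock_guard<std::mutex> lock{g_last_getheaders_mutex};
        const auto current_time = std::chrono::system_clock::now();
        g_last_getheaders_timestamp = current_time;
    }

    int main()
    {
        UpdateTimestamp();
    }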

@maflcko (Member) left a comment

Concept ACK removing atomic (see my rationale: #25454 (comment))

Though, I don't think a Mutex is needed either? #25454 (comment)

sdaftuar added 2 commits July 12, 2022 13:38
This variable is only used in a single thread, so no atomic or mutex is necessary to guard it.
@sdaftuar sdaftuar force-pushed the 2022-07-getheaders-fixups branch from 8b09242 to 613e221 Compare July 12, 2022 17:39
@sdaftuar (Member Author)

Updated the first commit to remove the mutex as well, since I was mistaken in thinking this variable was being accessed from multiple threads.

@sdaftuar sdaftuar changed the title p2p: Replace atomic with a mutex for m_last_getheaders_timestamp p2p: Eliminate atomic for m_last_getheaders_timestamp Jul 12, 2022
@vasild (Contributor) left a comment

ACK 613e221

@maflcko (Member) commented Jul 13, 2022

review ACK 613e221

This brings it in line with other members that are only used in one thread, such as m_addr_token_bucket.

@maflcko maflcko merged commit 8efa73e into bitcoin:master Jul 14, 2022
sidhujag pushed a commit to syscoin/syscoin that referenced this pull request Jul 14, 2022
@bitcoin bitcoin locked and limited conversation to collaborators Jul 14, 2023