
Conversation

@theuni theuni commented Feb 6, 2017

Note that we still need a fix for hSocket for 0.14. Still discussing the optimal approach.

These are (afaik) all long-standing races or concurrent accesses. Going forward (post 0.14), we can clean these up so that they're not all individual atomic accesses.

  • Reintroduce cs_vRecv to guard receive-specific vars
  • Lock vRecv/vSend for CNodeStats
  • Make some vars atomic.
  • Only set the connection time in CNode's constructor so that it doesn't change

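The list above boils down to three patterns: a mutex reintroduced around receive-specific state, atomics for simple counters, and a connection time that is set once in the constructor. A minimal standalone sketch of those patterns (the names echo `cs_vRecv`/`CNode` but this is an illustrative class, not Bitcoin Core's actual layout):

```cpp
#include <atomic>
#include <cstdint>
#include <mutex>

class Node {
public:
    // Connection time is set once here and never mutated afterwards,
    // so concurrent readers need no synchronization for it.
    explicit Node(int64_t now) : nTimeConnected(now) {}

    void OnBytesReceived(uint64_t n) {
        // cs_vRecv guards receive-specific vars on the receive path.
        std::lock_guard<std::mutex> lock(cs_vRecv);
        nRecvBytes += n;
    }

    uint64_t CopyStats() {
        // The stats copy takes the same lock, so it only ever contends
        // with the receive path.
        std::lock_guard<std::mutex> lock(cs_vRecv);
        return nRecvBytes.load();
    }

    const int64_t nTimeConnected; // immutable after construction

private:
    std::mutex cs_vRecv; // reintroduced to guard receive-specific vars
    std::atomic<uint64_t> nRecvBytes{0}; // simple counter made atomic
};
```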
@fanquake fanquake added the P2P label Feb 6, 2017
@@ -389,7 +389,6 @@ CNode* CConnman::ConnectNode(CAddress addrConnect, const char *pszDest, bool fCo
     uint64_t nonce = GetDeterministicRandomizer(RANDOMIZER_ID_LOCALHOSTNONCE).Write(id).Finalize();
     CNode* pnode = new CNode(id, nLocalServices, GetBestHeight(), hSocket, addrConnect, CalculateKeyedNetGroup(addrConnect), nonce, pszDest ? pszDest : "", false);
     pnode->nServicesExpected = ServiceFlags(addrConnect.nServices & nRelevantServices);
-    pnode->nTimeConnected = GetSystemTimeInSeconds();
Member
Did this move?

Member Author

It was redundant. It's additionally set in CNode's ctor.

@@ -642,6 +647,7 @@ bool CNode::ReceiveMsgBytes(const char *pch, unsigned int nBytes, bool& complete
 {
     complete = false;
     int64_t nTimeMicros = GetTimeMicros();
+    LOCK(cs_vRecv);
Member
Did you mean to protect the entire receive loop below with this lock?

Member Author
Just trying to use a big hammer here for 0.14. The lock only contends with copyStats.

@TheBlueMatt
Contributor

Hmm, this actually needs a lot more to make copyStats non-racy, and I think we really need those fixes: we're copying std::strings concurrently right now.

@theuni
Copy link
Member Author

theuni commented Feb 6, 2017

Pushed a few more at @TheBlueMatt's request. This is essentially his PR now, just re-using this one rather than opening another.

Implied ACK, btw.

@TheBlueMatt
Contributor

The above set of commits fixes all the races I'm aware of with the exception of CNode::addrLocal and CNode::addrName. Both of which I proposed fixing in a separate PR since my preferred fix is a (tiny bit) more involved (just putting accessors around them).
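The accessor fix proposed here can be sketched as follows. This is a hypothetical standalone `NodeAddr` class, not the actual CNode change: the raw `std::string` member becomes private, and reads return a copy taken under the same mutex that guards writes, so no thread ever copies the string while another thread is mutating it.

```cpp
#include <mutex>
#include <string>

class NodeAddr {
public:
    void SetAddrName(const std::string& name) {
        std::lock_guard<std::mutex> lock(cs_addrName);
        addrName = name;
    }
    std::string GetAddrName() const {
        std::lock_guard<std::mutex> lock(cs_addrName);
        return addrName; // the copy is made while the lock is held
    }

private:
    mutable std::mutex cs_addrName; // mutable so the const getter can lock
    std::string addrName;           // never touched outside the accessors
};
```

The point of the accessor (rather than an atomic) is that `std::string` has no internal synchronization, so an unguarded concurrent copy-while-assign is undefined behavior; wrapping both sides in the same lock is the small extra machinery the comment alludes to.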

@gmaxwell gmaxwell left a comment
Contributor

ACK

@theuni
Member Author

theuni commented Feb 10, 2017

#9695 is a superset of this, and those who have acked here have also acked there. Closing in favor.

@theuni theuni closed this Feb 10, 2017
@dcousens
Contributor

dcousens commented Feb 10, 2017

@theuni you just self-referenced this issue?

Did you mean #9708?

@theuni
Member Author

theuni commented Feb 10, 2017

@dcousens haha, let's say I was trying to bring github down with a recursive link attack :p

Busted c/p. Yes, 9708. Thanks.

@bitcoin bitcoin locked as resolved and limited conversation to collaborators Sep 8, 2021