txnprovider/shutter: integration tests and fixes #13983
Used in #13983; takes a cohesive unit of logic out of the bigger PR for ease of reviewing. Adds a JSON-RPC client for interacting with the Engine API.

The gist is:
- we have an integration test which starts up Erigon as an EL only
- we drive Erigon with a MockCl
- in order to drive Erigon we need access to its Engine API JSON-RPC (for this we need an Engine API client)
- analogously, we need a JSON-RPC client for the publicly exposed rpcdaemon APIs (e.g. the "eth_", "debug_", etc. namespaces)
- this is a black-box test where we feed Erigon inputs via the public API surface and check the end result again via the public API surface
}
iterations++
if iterations > 1024 {
	return 0, nil, fmt.Errorf("failed to find a free port after %d iterations", iterations)
Will this ever hit? If it does, doesn't it mean it's being used in the wrong way? I think the statefulness of this method might not be a good thing here
I've never hit it as of now; it assumes it won't take us more than 1024 tries to find an unconsumed port (so say 1 test uses 10 ports, then 100 tests running at the same time could potentially cause us to hit this).
You are right though. Maybe I can switch to an approach where we have 1 global atomic port counter which we increment: check if the port is free; if yes, give it to the port consumer; if not, increment and try the next port, and so on. It can be circular, so if it ever reaches port 65535 it starts from 1024 again. It still needs a global atomic counter, but it is a bit less stateful and less likely to be exhausted, I guess. Let me try.
My concern is that the "consumed" ports are not actually consumed, just reserved in the map. Maybe the ports were closed but the map wasn't updated, or the ports were never opened. But that's probably an edge case not applicable here. One way would be to just grab a free port, just in time, and if it didn't get opened, try again (just in time).
The code to get a free port (stateless) and the management of the map (the global test counter within the test(s)) that you are using for convenience so tests don't overlap each other could be kept segregated. Because your random returned port might just be 8080, and it could be in use by another service.
OK, what about this: 98c2671
It is close to what you are suggesting as "request on demand" and still caters for the problem I mentioned here: #13983 (comment). (We just need to make sure all big tests like this that open ports in our repo use this approach. Right now there are only a few tests which open ports, so we should be fine. I may also move this func into a top-level package in a follow-up PR so it is visible.)
Looks good. And yes, I think that will fix a few of those tests that fail because of insistence on specific ports. I made a mental note to tackle one on the next failure.
…xtending fork… (#14578) … (#14192)

Found in #13983: we weren't getting any new log notifications (RPC log subscriptions API) when the EL was going via the `e.forkValidator.FlushExtendingFork` code path, which does an in-memory flush of the previously cached execution of `EthereumExecutionModule.ValidateChain`.

@taratorio you forgot to cherry-pick this

---------

Co-authored-by: milen <94537774+taratorio@users.noreply.github.com>
Integration tests caught several bugs with the initial implementation.
Gist of testing approach:
Test scenarios:
Bug fixes caught and fixed by this: