doc: Add internal design section #111
Conversation
Thanks, this all seems helpful and correct reading it at a first pass. I think I would just tweak a few things and probably add a few additional details. I think it's especially helpful that this notes that libmultiprocess is just acting as a wrapper to make it easier to write and call Cap'n Proto interfaces from C++. But it is completely possible to call or implement the same interfaces without using libmultiprocess (especially from other languages like Rust, Python, and JavaScript).

One thing that is a little off in 86415b4 is that it makes it sound like `ProxyClient` and `ProxyServer` objects wrap the ends of a socket, when it would be more accurate just to say that they communicate over sockets. Specifically, when there is a socket connection there is not just one `ProxyClient` and one `ProxyServer` object sitting at either end of the socket, but one `ProxyClient` and one `ProxyServer` for each instantiated interface, and probably dozens or hundreds of `ProxyClient` and `ProxyServer` objects all using the same socket. If you want to mention the class that does sit at either end of a socket, that would just be the `Connection` class.

Similarly, in the diagram from the PR description it could give the wrong impression to show `AsyncioContext` containing `AsyncIoStream`, because the same context can be used to communicate across many streams. For example, if there is a bitcoin-node process, it may spawn a bitcoin-wallet process and communicate across one stream with it, but also accept bitcoin-mine connections and use other streams to communicate with them. All these streams will use the same `EventLoop` and `AsyncioContext`.

It is true in both of these 1:N relationships, though, that not all of the N are equal. Even though there will be many `ProxyClient`/`ProxyServer` objects using the same socket, and many `AsyncIoStream`s using the same `AsyncioContext` and `EventLoop`, the users are not undifferentiated and identical. While the `EventLoop` uses many `AsyncIoStream`s, it does have one special stream of its own that it uses internally to wake up and process posted events.

Anyway, these changes look good, and I'm happy to merge them anytime you want.
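To make the first 1:N relationship concrete, here is a rough sketch (the `makeNode`/`makeWallet` methods are invented for illustration, and signatures are simplified; `loop` and `fd` are assumed to be an `EventLoop` and a connected socket descriptor):

```cpp
// One socket, one Connection object at each end, but a separate
// ProxyClient/ProxyServer pair for every instantiated interface.
// The makeNode/makeWallet methods are hypothetical examples.
auto init = mp::ConnectStream<InitInterface>(loop, fd); // ProxyClient #1
auto node = init->makeNode();     // ProxyClient #2, same socket
auto wallet = init->makeWallet(); // ProxyClient #3, same socket
// The serving process holds a matching ProxyServer for each of these,
// all multiplexed over the single underlying Connection.
```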
Force-pushed from 86415b4 to 4b8a128.
Thank you for the feedback, I tweaked the docs a bit to make the relationship between connections, proxy objects, and capnp objects a bit clearer. One thing I have been wondering about is the naming and usage of …
ACK 86415b4
I had some ideas reading this so I left a bunch of suggestions, but I'm happy to merge it as is, just let me know. I think everything you wrote is accurate and this is definitely an improvement.
doc/design.md (Outdated)

### Internals

The `ProxyClient` and `ProxyServer` generated classes are not directly exposed to the user, as described in [usage.md](usage.md). Instead, they wrap C++ interfaces and appear to the user as pointers to an interface. They are first instantiated when calling `ConnectStream` and `ServeStream` respectively for creating the `InitInterface`. These methods set up a `Connection` through a socket and wrap a `capnp::RpcSystem` configured for client and server mode respectively. Once more interfaces are instantiated from the `InitInterface`, they communicate over the same connection. Both `ConnectStream` and `ServeStream` also require an instantiation of the `EventLoop`. The `EventLoop` owns pending requests, notifies on request dispatch, allows clients from multiple threads to make synchronous calls, and handles some cleanup routines on exit. It must be run in a separate thread to allow calls to be issued from multiple distinct threads.
In commit "doc: Add internal design section" (86415b4)
This looks good as is, but a few things come to mind reading this:
- "These methods set up a `Connection`" could be clearer since technically there are 2 `Connection` objects for each connection (`ConnectStream` creates one when it is called on the connecting side, and `ServeStream` creates one each time there is an incoming connection on the side receiving connections). Would maybe change to "These methods establish connections, internally creating `Connection` objects, with each `Connection` wrapping a `capnp::RpcSystem`."
- May clarify "Once more interfaces are instantiated from the `InitInterface`" as "The `InitInterface` interface will typically have methods which return other interfaces, giving the connecting process the ability to call other functions in the serving process. Interfaces can also have methods accepting other interfaces as parameters, giving serving processes the ability to call back and invoke functions in connecting processes. Creating new interfaces does not create new connections, and typically many interface objects will share the same connection."
- Would maybe change "It must be run in a separate thread to allow calls..." to "It must run in a separate thread so it is always active and can process incoming requests from local clients and remote connections."
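Putting these pieces together, the client-side setup might look roughly like this (a minimal sketch only: `InitInterface` stands for a generated interface class, `fd` for an already-connected socket, and constructor/function signatures and lifetimes are simplified relative to the real `mp/proxy-io.h` API):

```cpp
#include <mp/proxy-io.h> // EventLoop, ConnectStream (libmultiprocess header)
#include <thread>

// Sketch of client-side setup under the assumptions stated above.
void ExampleClientSetup(int fd)
{
    // The EventLoop must run on its own dedicated thread so it stays
    // active while client threads block on synchronous calls.
    mp::EventLoop loop{"example-client", /*log_fn=*/nullptr};
    std::thread loop_thread{[&] { loop.loop(); }};

    // ConnectStream internally creates a Connection wrapping a
    // capnp::RpcSystem in client mode and returns the initial proxy.
    auto init = mp::ConnectStream<InitInterface>(loop, fd);

    // Methods on `init` now look like ordinary C++ virtual calls. In the
    // other process, ServeStream plays the server role, creating one
    // Connection per incoming connection on the same EventLoop.
    loop_thread.join();
}
```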
doc/design.md (Outdated)
When a generated method on the `ProxyClient` is called, it calls `clientInvoke` with the capnp-translated types. `clientInvoke` creates a self-executing promise (`kj::TaskSet`) that drives the execution of the request and gives ownership of it to the `EventLoop`.
In commit "doc: Add internal design section" (86415b4)
Might be worth adding that after `clientInvoke` sends the request, it blocks waiting until a response is received, or until there is a call from the server that needs to run on the same client thread, using a `Waiter` object.
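For intuition, the blocking described here resembles the following generic pattern (an illustrative condition-variable sketch only, not the actual `Waiter` implementation in libmultiprocess):

```cpp
// Generic sketch: the client thread sends a request, then sleeps until
// either the response arrives or the event loop hands it a callback
// that must run on this same client thread.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <optional>

struct WaiterSketch {
    std::mutex m;
    std::condition_variable cv;
    bool response_ready = false;
    std::optional<std::function<void()>> pending_callback;

    void WaitForResponse()
    {
        std::unique_lock<std::mutex> lock(m);
        while (!response_ready) {
            cv.wait(lock, [&] { return response_ready || pending_callback; });
            if (pending_callback) {
                auto fn = std::move(*pending_callback);
                pending_callback.reset();
                lock.unlock();
                fn(); // run server-initiated call on this client thread
                lock.lock();
            }
        }
    }
};
```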
doc/design.md (Outdated)
On the server side, the `capnp::RpcSystem` receives the capnp request and invokes the corresponding C++ method through the matching `ProxyServer`, with the heavily templated `serverInvoke` triggering a `ServerCall`. Its result is then converted back by invoking `ServerRet`. The two are connected through `ServerField`. The main method driving execution of a request is `PassField`, which is invoked through `ServerField`. Instantiated interfaces, or capabilities in capnp speak, are tracked and owned by the server's `capnp::RpcSystem`.
In commit "doc: Add internal design section" (86415b4)
Could expand "Its result is then converted back by invoking `ServerRet`." Maybe say "Return values from the C++ methods are copied into Cap'n Proto responses by `ServerRet` and exceptions are caught and copied by `ServerExcept`".
Also, from this description it sounds like `ServerField` should probably be renamed to `ServerArg` or `ServerParam`. But none of these names are very good; they should probably be named more like `ServerMethodArgumentHandler`, `ServerMethodReturnValueHandler`, and `ServerMethodExceptionHandler`.
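Conceptually, the layering being discussed looks something like the following (an illustrative sketch only, not the actual libmultiprocess templates; the single-argument shape and the `ReadParam`/`BuildResult`/`BuildException` helpers are hypothetical stand-ins):

```cpp
// Sketch of the server-side dispatch layering described above:
// ServerField/PassField handle each argument, ServerCall invokes the
// wrapped C++ method, ServerRet copies the return value into the
// response, and ServerExcept translates thrown exceptions.
#include <exception>

template <typename Object, typename Method, typename Params, typename Response>
void ServerDispatchSketch(Object& obj, Method method, const Params& params, Response& response)
{
    try {
        // ServerField / PassField: convert the capnp parameter to C++.
        auto arg = ReadParam(params);
        // ServerCall: invoke the actual C++ method on the wrapped object.
        auto result = (obj.*method)(arg);
        // ServerRet: copy the C++ return value into the capnp response.
        BuildResult(response, result);
    } catch (const std::exception& e) {
        // ServerExcept: copy the exception into the response instead.
        BuildException(response, e);
    }
}
```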
These suggested names sound good to me. More verbose, but they convey the meaning more clearly.
Thanks for the comments, will pore over this some more and hopefully come up with more improvements.
Force-pushed from 4b8a128 to 1fa2ca7.
Took your improvements @ryanofsky, and added some minor corrections.
ACK 1fa2ca7. Looks good, and nice job incorporating the extra information, especially about the waiter object. I'll go ahead and merge this now but feel free to make more changes or suggest improvements.
This update brings in the following changes:
- bitcoin-core/libmultiprocess#111: doc: Add internal design section
- bitcoin-core/libmultiprocess#113: Add missing include to util.h
90b4055 Update libmultiprocess library (Ryan Ofsky)

Pull request description: Add recent changes and fixes for shutdown bugs.
- bitcoin-core/libmultiprocess#111: doc: Add internal design section
- bitcoin-core/libmultiprocess#113: Add missing include to util.h
- bitcoin-core/libmultiprocess#116: shutdown bugfix: destroy RPC system before running cleanup callbacks
- bitcoin-core/libmultiprocess#118: shutdown bugfix: Prevent segfault in server if connection is broken during long function call
- bitcoin-core/libmultiprocess#119: cmake: avoid libatomic not found error on debian

ACKs for top commit: fanquake: ACK 90b4055; TheCharlatan: ACK 90b4055

Tree-SHA512: 2c256667f0c16e00bb5a81b2c6d3db103fae211844e32b111bbed673ab2612ad1478e6b3ecd3a867a4e425cfa6e778b67388343626597a8fac800a15cea5e53a
I am not sure if the descriptions here are correct, or even all that useful, but I think some of the finer points are worth documenting to serve as an anchor for future developers reading the code. Maybe I also missed some documentation in the headers and the other docs that already describe these things better.
What could potentially also be useful are some diagrams to show the wrapping hierarchy as done below (again, not sure if actually correct, and given the polymorphism and templating going on it is also questionable whether such a hierarchy can accurately be depicted as a diagram). Given the points made by ryanofsky below, I no longer think an object diagram will be particularly helpful.
A sequence diagram that follows a call from its C++ call site through the client invocation, the server receiving and processing it, and the client resolving it and finally returning a C++ type again would probably be more useful.
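As a starting point, such a sequence might run roughly as follows (reconstructed only from the descriptions above, so treat it as unverified):

1. The caller invokes a generated `ProxyClient` method with ordinary C++ types.
2. `clientInvoke` translates the arguments into capnp types, creates a self-executing promise owned by the `EventLoop`, and sends the request over the `Connection`.
3. The calling thread blocks on a `Waiter` until a response arrives, meanwhile running any server-initiated callbacks that must execute on that same thread.
4. The server's `capnp::RpcSystem` receives the request and `serverInvoke` dispatches it: `PassField`/`ServerField` convert the parameters, `ServerCall` runs the C++ method, and `ServerRet` (or `ServerExcept` on error) copies the result into the response.
5. The response travels back over the socket, the client's promise resolves, the `Waiter` wakes, and the result is converted back into C++ types and returned to the caller.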