Summary of today's GR Architecture Meeting 2023-10-24
Today we discussed how `gr::work::Status::DONE`, as defined for GR 4.0 as
```cpp
namespace gr::work {
enum class Status {
    ERROR                     = -100, /// error occurred in the work function
    INSUFFICIENT_OUTPUT_ITEMS = -3,   /// work requires a larger output buffer to produce output
    INSUFFICIENT_INPUT_ITEMS  = -2,   /// work requires a larger input buffer to produce output
    DONE                      = -1,   /// this block has completed its processing and the flowgraph should be done
    OK                        = 0,    /// work call was successful and return values in i/o structs are valid
};
}
```
should be used to shut down the processing of the flow graph. We recognise and distinguish between the following two different shut-down cases:
- triggered externally via the scheduler (continuously running until some external factor, UI/user says otherwise)
- triggered internally, via the block states -> the focus of our `DONE` discussion (see the sketch below).
Regarding the latter, there are many corner cases, opinionated choices, and ill-formed graph-topologies and block constraints that -- if implemented in their fullness -- would make the core design overcomplicated. Some of these remaining cases need to be left open to the users. However, the core will provide hooks and ports to allow for user extensions and graph-based configuration to allow for much more complex shut-down policies.
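To make the internally-triggered case concrete, here is a minimal sketch of a finite source whose own state decides when to report `DONE`. The simplified `work()` signature and the `CountingSource` name are illustrative assumptions, not the actual GR 4.0 Block API:

```cpp
#include <algorithm>
#include <cstddef>
#include <span>

namespace gr::work {
enum class Status { ERROR = -100, INSUFFICIENT_OUTPUT_ITEMS = -3, INSUFFICIENT_INPUT_ITEMS = -2, DONE = -1, OK = 0 };
}

// illustrative finite source: block-internal state (a remaining sample budget)
// decides when the block signals DONE to the scheduler
struct CountingSource {
    std::size_t samplesRemaining = 1000;

    gr::work::Status work(std::span<float> output, std::size_t& produced) {
        if (samplesRemaining == 0) {
            produced = 0;
            return gr::work::Status::DONE; // internal state triggers the shut-down
        }
        const std::size_t n = std::min(samplesRemaining, output.size());
        std::fill_n(output.begin(), n, 1.0f);
        produced = n;
        samplesRemaining -= n;
        return gr::work::Status::OK;
    }
};
```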
As discussed, we agreed on the following default policies when `DONE` is used to shut down the execution of a flow graph:
- default: synchronous 'flow graph is done when all of the sinks are `DONE`' with 'EOF' tag propagation down-stream -- this is easy and already implemented. N.B. tags are synchronous and -- while usually attached to a sample -- can also be emitted w/o producing data.
- optional: those (assumed few) specific blocks that are neither sinks nor sources may choose to emit an (async) 'please terminate graph' message with a preferred policy that the given scheduler respects (or not) -- similar to the exception handling discussed earlier. Important: the propagation is in the direction 'Block -> Graph -> Scheduler -> User' and not in the inverse direction of the directed flow graph; the latter would require complicated logic and an enforced implementation in the low-level Block and Buffer implementations. The scheduler would provide a dedicated 'shut-down' or control port that, if connected (user choice, same connection API as streaming ports) and if such a message is received, would request the scheduler to shut down with the provided policy (e.g. shut down after 100 ms, injecting an 'EOF' tag on each source block output, ...).
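As a rough sketch of the optional 'please terminate graph' path, the snippet below shows a block posting a termination request with a preferred policy to a scheduler-side control port. All type and member names (`TerminationRequest`, `SchedulerControlPort`, the policy values) are assumptions for illustration, not the actual GR 4.0 message API:

```cpp
#include <chrono>
#include <functional>
#include <iostream>

// hypothetical message a non-sink/non-source block may emit;
// propagation direction is Block -> Graph -> Scheduler -> User
struct TerminationRequest {
    enum class Policy { IMMEDIATE, DRAIN_THEN_STOP, INJECT_EOF_ON_SOURCES };
    Policy                    policy = Policy::DRAIN_THEN_STOP;
    std::chrono::milliseconds gracePeriod{100};
};

// hypothetical scheduler control port: connected explicitly by the user
// (same connection API as streaming ports); the scheduler may respect
// or ignore the requested policy
struct SchedulerControlPort {
    std::function<void(const TerminationRequest&)> onRequest;
    void post(const TerminationRequest& request) {
        if (onRequest) onRequest(request);
    }
};

int main() {
    SchedulerControlPort ctrl;
    ctrl.onRequest = [](const TerminationRequest& request) {
        // e.g. inject an 'EOF' tag on each source output, then stop after the grace period
        std::cout << "scheduler: shut-down requested, grace period "
                  << request.gracePeriod.count() << " ms\n";
    };
    ctrl.post(TerminationRequest{}); // emitted asynchronously by some mid-graph block
}
```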
The open point concerning the first (default) policy: once a sink (or Block) has indicated `DONE`, should the default behaviour automatically disconnect its input buffers (N.B. while keeping the graph edge definition) to remove back-pressure and thus allow the rest of the graph to continue to run -- or not? The connection would be re-established once the scheduler/graph is reinitialised/restarted.
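Purely to illustrate what 'disconnect the input buffers while keeping the graph edge definition' could look like, here is a sketch with hypothetical `Edge`/`BufferReader` types (not GR 4.0 code): the static topology survives, only the runtime buffer attachment is dropped so back-pressure disappears.

```cpp
#include <memory>
#include <utility>

struct BufferReader { /* runtime reader side of a stream buffer */ };

struct Edge {
    // static graph definition, kept so the scheduler can reconnect on restart
    int srcBlock, srcPort, dstBlock, dstPort;
    // runtime attachment, dropped when the downstream block reports DONE
    std::shared_ptr<BufferReader> reader;

    void detachOnDone() { reader.reset(); }                                    // removes back-pressure
    void reattach(std::shared_ptr<BufferReader> r) { reader = std::move(r); }  // on (re-)start
};
```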
I hope this helps explain the essence of our discussion. Please feel free to comment if I missed something and indicate with a 👍️ or 👎️ if you agree or disagree with the above or ❓️ if you have further questions and/or this should be discussed further.
Thanks again to everyone involved for your time and constructive discussion.
Action:
- add `EOF` propagation and a default Block implementation, i.e. an 'if `EOF`' latch that always returns `DONE` until `reset()` is issued.
- scheduler monitors all sinks and, if all are `DONE`, terminates the flow graph execution.
- extension: optional flag that disconnects all input ports when `DONE` -> removes back-pressure and allows other parts of the flow graph to continue. N.B. the scheduler needs to reconnect/ensure that these ports are connected on (re-)start. Implementation (whether in Block or Scheduler) tbd.
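A rough sketch of the first two action items follows; the member and function names (`eofLatched`, `reset()`, `allSinksDone()`) are assumptions for illustration rather than the final Block/Scheduler API:

```cpp
#include <vector>

enum class Status { DONE = -1, OK = 0 }; // subset of gr::work::Status, for illustration only

// default Block behaviour sketch: latch on an incoming 'EOF' tag and keep
// returning DONE until reset() is issued
struct BlockDoneLatch {
    bool eofLatched = false;

    Status work(bool eofTagSeenThisCall) {
        if (eofTagSeenThisCall) eofLatched = true;
        return eofLatched ? Status::DONE : Status::OK;
    }
    void reset() { eofLatched = false; } // re-arm on scheduler/graph restart
};

// scheduler-side sketch: terminate flow-graph execution once every sink is DONE
struct SinkState { bool done = false; };

bool allSinksDone(const std::vector<SinkState>& sinks) {
    for (const auto& sink : sinks) {
        if (!sink.done) return false;
    }
    return true; // the scheduler may now stop the execution loop
}
```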