Description
The current implementation of the block parser synchronously evaluates a content string and returns an array of parsed blocks. This behavior prevents us from leveraging techniques to optimize the initial rendering of the editor. For example, there could be a perceived performance improvement if we parsed the first ~10 blocks, rendered the block editor, then continued to parse the remainder (or parsed the remainder only when the user scrolls the block list, using an infinite-scroll or virtualized list rendering technique). Those efforts should be tracked as separate issues, but they are currently blocked by the parse behavior, which this issue aims to explore for enhancement.
See related concepts: "Time to First Paint" and "First Meaningful Paint"
Possible alternatives to this approach could include ideas around:
- Manually segmenting a string of blocks content into smaller chunks before passing it to `parse`. There's some redundancy here in what would need to be implemented as a "partial parse", and it is likely to be difficult to achieve with nested blocks.
- Asynchronous callbacks for `parse`, to avoid any blocking and to enable parsing to occur over the network or in separate worker threads.
Additional challenges:
- What does incremental parsing look like in the context of nested blocks?
Proposal: Introduce a new interface to the blocks parser which would allow blocks to be parsed in chunks. This could be a new block parser package, or an addition to the existing block parser which could then be used in the `@wordpress/blocks` `parse` implementation.
Today:

```js
const blocks = parse( contentString );
```
Option A (Generator): (*personal recommended approach*)

```js
const blocks = Array.from( generateParse( contentString ) );
```
Pros:
- Aligns well to the idea of an iterable set of parsed blocks data
- Allows very granular control over how iteration proceeds
- Concept native to the JavaScript language (unlike streams, which are mostly Node-specific)
Cons:
- Perception of generators as being difficult to work with
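To make the generator option concrete, here is a minimal sketch of what a `generateParse` could look like. The tokenizing logic is a naive stand-in (a regular expression over top-level block comment delimiters), not the real `@wordpress/blocks` grammar, and the yielded block shape is illustrative only:

```javascript
// Hypothetical sketch: yields one parsed block at a time, so the caller can
// pause between blocks (e.g. to render the first page of the editor).
// NOTE: this naive regex handles only flat, well-formed block comments and
// ignores nested blocks, attributes, and freeform content.
function* generateParse( contentString ) {
	const pattern = /<!-- wp:([a-z][a-z0-9-]*) -->([\s\S]*?)<!-- \/wp:\1 -->/g;
	let match;
	while ( ( match = pattern.exec( contentString ) ) !== null ) {
		yield { blockName: match[ 1 ], innerHTML: match[ 2 ].trim() };
	}
}

// The caller controls how much work happens before yielding back to the UI:
const content =
	'<!-- wp:paragraph -->Hello<!-- /wp:paragraph -->' +
	'<!-- wp:heading -->World<!-- /wp:heading -->';
const iterator = generateParse( content );
const first = iterator.next().value; // parse just the first block…
const blocks = [ first, ...iterator ]; // …then drain the rest when convenient
```

The key property is that iteration is pull-based: nothing past the last `next()` call has been parsed yet, which is exactly the granular control the pros above describe.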
Option B (Streams):
```js
const blocks = Array.from( await streamParse( contentString ) );
// Relies on experimental async iteration of streams:
// https://2ality.com/2019/11/nodejs-streams-async-iteration.html
```
Pros:
- Aligns well to the idea of a streaming flow of parsed blocks data
Cons:
- Streams are a relatively Node-exclusive paradigm, though there appear to be new experimental DOM-native implementations, as well as user-land implementations
Option C (Chunked Results):
This could go a few different ways:
- Treating it as some poppable stack
- A generator-like abstraction of `{ done: boolean, next: function }`
- An "instance" of a parser which produces X results at a time
- An "instance" of a parser which produces X results at a time
```js
const blocks = [];
const parser = createParser( contentString );
blocks.push( ...parser.take( 10 ) );
blocks.push( ...parser.take( 10 ) );
```
Pros:
- More familiar language syntax (i.e. not a generator or a stream 😄 )
- If it could be done, use the same `parse` function, adding a new second argument to specify the effective "per-page" count of parsed blocks
Cons:
- Custom implementation is redundant with existing language constructs
- Unclear whether it can be done in a way that reuses the existing `parse` function
cc @dmsnell