Description
When this project was started, the initial intention was to handle only short text. But now that we have added Google News and Crawlers, there is a need to handle longer text as well.
As we know, most BERT-based models support at most 512 tokens (with a few exceptions like BigBird). Currently Analyzer ignores the excessive text (#113).
The idea now is to introduce a TextSplitter to split longer text and feed the chunks to Analyzer. But this introduces another complexity on the Analyzer side: how do we combine the inferences from multiple chunks into a final prediction? Right now no proper solution exists for this scenario, except trying a few approaches like voting, averaging, or the one suggested here: https://discuss.huggingface.co/t/new-pipeline-for-zero-shot-text-classification/681/84.
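To make the trade-off concrete, here is a minimal sketch of the two simplest strategies, assuming each chunk comes back from the classifier as a `{label: score}` dict; the function names are made up for illustration:

```python
from collections import Counter
from typing import Dict, List

def aggregate_by_average(chunk_scores: List[Dict[str, float]]) -> str:
    """Sum each label's score across chunks and pick the best label.

    Dividing by the chunk count would not change the argmax, so the
    plain sums are enough here.
    """
    totals: Dict[str, float] = {}
    for scores in chunk_scores:
        for label, score in scores.items():
            totals[label] = totals.get(label, 0.0) + score
    return max(totals, key=totals.get)

def aggregate_by_voting(chunk_scores: List[Dict[str, float]]) -> str:
    """Each chunk votes for its top label; the majority wins."""
    votes = Counter(max(scores, key=scores.get) for scores in chunk_scores)
    return votes.most_common(1)[0][0]
```

Averaging keeps the per-chunk confidences, while voting keeps only each chunk's top label; which behaves better likely depends on the data, which is why this part can be deferred.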
For the sake of simplicity, let's first implement TextSplitter. For this purpose, let's take inspiration from the Haystack splitter, while also adding context like chunk_id, passage_id, etc. into the metadata.
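A minimal sketch of what such a splitter could look like; the class and field names, the word-based budget, and the overlap parameter are assumptions rather than the final API (a real implementation would count tokenizer tokens, not words):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class TextChunk:
    text: str
    meta: Dict[str, Any] = field(default_factory=dict)

class TextSplitter:
    """Split long text into overlapping word-based chunks, Haystack-style.

    `max_words` approximates the model's 512-token budget.
    """

    def __init__(self, max_words: int = 300, overlap: int = 50):
        self.max_words = max_words
        self.overlap = overlap

    def split(self, text: str, passage_id: str) -> List[TextChunk]:
        words = text.split()
        step = self.max_words - self.overlap
        chunks = []
        for chunk_id, start in enumerate(range(0, len(words), step)):
            piece = " ".join(words[start : start + self.max_words])
            chunks.append(
                TextChunk(
                    text=piece,
                    meta={"chunk_id": chunk_id, "passage_id": passage_id},
                )
            )
        return chunks
```

Each chunk then carries enough metadata for a downstream node to stitch per-chunk predictions back together.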
For inference aggregation, later we can add another node, which we may call InferenceAggregator. It will aggregate Analyzer results on text chunks to compute the final inference.
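A rough skeleton of that node, again only a sketch under assumed names: it groups chunk-level results by the passage_id that TextSplitter put into the metadata and delegates the reduction to a pluggable strategy, such as the averaging/voting helpers sketched above:

```python
from collections import defaultdict
from typing import Callable, Dict, List

Scores = Dict[str, float]  # assumed Analyzer output per chunk: {label: score}

class InferenceAggregator:
    """Reduce chunk-level Analyzer results into one label per passage.

    Assumes each result dict carries the TextSplitter metadata under
    "meta" and the Analyzer's scores under "scores" (names made up here).
    """

    def __init__(self, strategy: Callable[[List[Scores]], str]):
        # e.g. strategy=aggregate_by_average or aggregate_by_voting
        self.strategy = strategy

    def aggregate(self, chunk_results: List[Dict]) -> Dict[str, str]:
        by_passage: Dict[str, List[Scores]] = defaultdict(list)
        for result in chunk_results:
            by_passage[result["meta"]["passage_id"]].append(result["scores"])
        return {pid: self.strategy(scores) for pid, scores in by_passage.items()}
```

Keeping the strategy pluggable means the open question from above (voting vs. averaging vs. something smarter) does not block the TextSplitter work.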