Description
version: 4.5
language: java
The problem is caused by a synchronization block in LexerATNSimulator.addDFAState(...). The block prevents the ANTLR lexer from scaling on a multi-core machine.
```java
synchronized (dfa.states) {
    ...
}
```
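For context, all instances of a generated lexer share a static `_decisionToDFA` cache, so every thread funnels through the same lock in `addDFAState()`. Here is a rough, illustrative sketch of the kind of setup that runs into the contention; `MysqlLexer` stands for the lexer generated from my grammar, and the chunking of the input is simplified:

```java
import org.antlr.v4.runtime.*;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.*;

// Illustrative driver: each task lexes its own chunk with its own lexer
// instance, yet adding threads barely helps, because every instance shares
// the static DFA cache guarded by synchronized (dfa.states).
public class ParallelLexing {
    public static void main(String[] args) throws Exception {
        List<String> chunks = Arrays.asList(/* pieces of the dump file */);
        ExecutorService pool = Executors.newFixedThreadPool(16);
        for (String chunk : chunks) {
            pool.submit(() -> {
                MysqlLexer lexer = new MysqlLexer(new ANTLRInputStream(chunk));
                for (Token t = lexer.nextToken();
                        t.getType() != Token.EOF;
                        t = lexer.nextToken()) {
                    // consume tokens
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}
```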
My syntax file can be found at https://github.com/waterguo/antsdb/blob/master/fish-server/src/main/antlr4/com/antsdb/saltedfish/lexer/Mysql.g4
The input file is a database dump generated by mysqldump. It contains many BLOB and CLOB fields, so some tokens produced by the lexer can be as large as 16 MB.
My current workaround is to give each lexer instance its own interpreter, like below:
```java
setInterpreter(new LexerATNSimulator(this, _ATN, deepClone(_decisionToDFA), new PredictionContextCache()));
```
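For clarity, here is a sketch of how the whole workaround might be wired up: a subclass of the generated lexer whose every instance gets a private DFA array. The `deepClone` shown here just creates fresh, empty DFAs per instance, which is the simplest variant (it trades the shared cache and its lock for a per-instance warm-up cost); the class and helper names are illustrative:

```java
import org.antlr.v4.runtime.*;
import org.antlr.v4.runtime.atn.LexerATNSimulator;
import org.antlr.v4.runtime.atn.PredictionContextCache;
import org.antlr.v4.runtime.dfa.DFA;

// Illustrative subclass of the generated lexer: each instance gets a private
// copy of the DFA cache, so addDFAState() never contends across threads.
public class UnsharedMysqlLexer extends MysqlLexer {
    public UnsharedMysqlLexer(CharStream input) {
        super(input);
        setInterpreter(new LexerATNSimulator(
                this, _ATN, deepClone(_decisionToDFA), new PredictionContextCache()));
    }

    // Fresh, empty DFAs per instance: each lexer warms up its own cache
    // instead of sharing the static, lock-guarded one.
    private static DFA[] deepClone(DFA[] shared) {
        DFA[] copy = new DFA[shared.length];
        for (int i = 0; i < shared.length; i++) {
            copy[i] = new DFA(shared[i].atnStartState, i);
        }
        return copy;
    }
}
```

Each instance then populates its own DFA as it lexes, so no two threads ever touch the same `dfa.states` map.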
The workaround makes parsing close to 10 times faster on a 16-core machine.
Although my workaround works, I think this is something that should be addressed by ANTLR itself, or at least considered for the next version of ANTLR.
I am not able to provide code to reproduce the problem right now due to the sheer size of the input data (100 GB) and the complexity of my project. I will try to come up with a simplified code example that demonstrates the problem later.