I've been using Irony for a while, and I just started using it on a new project. The project is basically to parse pipe-delimited text files of 2MB to 12MB. Each line begins with an identifier term and is followed by that identifier type's data.
The next line may be a sub-identifier (i.e. a child node) or another identifier, depending on what its first term is.
A | ID1 | data1 | 1 | 12-31-2010
A1 | ID1 | subAdata1 | 23.00
B | ID1 | subBdata1 | text
A2 | ID1 | subAdata2 | 50.00
C | ID2 | 5 | text
Note that A and C are the Identifiers and A1, A2, and B are sub-identifiers of A.
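To make the intended structure concrete, here is a minimal sketch (in Python, not Irony) of the tree shape I'm after. It assumes the set of top-level identifiers is known in advance (here "A" and "C"); every other first term is treated as a child of the most recent top-level record:

```python
# Hedged sketch: build a simple tree from the pipe-delimited lines.
# Assumption: top-level identifiers are known up front; anything else
# is a sub-identifier attached to the last top-level record seen.

TOP_LEVEL = {"A", "C"}  # assumed set of top-level identifiers

def parse(text):
    records = []
    for line in text.strip().splitlines():
        fields = [f.strip() for f in line.split("|")]
        node = {"id": fields[0], "data": fields[1:], "children": []}
        if fields[0] in TOP_LEVEL:
            records.append(node)
        elif records:
            records[-1]["children"].append(node)
    return records

sample = """\
A | ID1 | data1 | 1 | 12-31-2010
A1 | ID1 | subAdata1 | 23.00
B | ID1 | subBdata1 | text
A2 | ID1 | subAdata2 | 50.00
C | ID2 | 5 | text
"""
tree = parse(sample)
```

For the sample above, `tree` holds two top-level records: the "A" record with children "A1", "B", and "A2", and the "C" record with no children.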
My problem is that I have created a grammar that parses this, but on an 11MB file the Grammar Explorer hangs and never seems to produce a resulting parse tree. My grammar has around 170 states. Any ideas on what may be causing my system to hang? Are there performance issues with Irony when the input is a very large text file?