Rename compiler passes as follows:
reportLeftRecursion -> reportInfiniteRecursion
reportInfiniteLoops -> reportInfiniteRepetition
This reflects the fact that both passes detect different ways of causing
the same problem (a possible infinite loop when parsing).
The new terminology is more precise and in line with terminology
commonly used in programming languages.
The change mainly involves renaming the affected compiler passes and
the files associated with them.
The generated parser API specs are mostly extracted from
generated-parser.spec.js, which got renamed to
generated-parser-behavior.spec.js to better reflect its purpose.
Unit specs are unit tests of internal stuff. API specs are tests of the
user-visible APIs and behavior.
I think it makes sense to make this distinction because it makes the
public API boundary more clearly visible, e.g. when using the specs as
documentation.
This is a complete rewrite of the PEG.js code generator. Its goals are:
1. Allow optimizing the generated parser code for code size as well as
for parsing speed.
2. Prepare ground for future optimizations and big features (like
incremental parsing).
3. Replace the old template-based code-generation system with
something more lightweight and flexible.
4. General code cleanup (structure, style, variable names, ...).
New Architecture
----------------
The new code generator consists of two steps:
* Bytecode generator -- produces bytecode for an abstract virtual
machine
* JavaScript generator -- produces JavaScript code based on the
bytecode
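Schematically, the two steps chain together like this (a minimal
sketch; the function names are illustrative, not necessarily the actual
pass names):

    var bytecode = generateBytecode(ast);        // step 1: AST -> bytecode
    var code     = generateJavascript(bytecode); // step 2: bytecode -> JS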
The abstract virtual machine is stack-based. Originally I wanted to make
it register-based, but it turned out that all the code related to it
would be more complex and the bytecode itself would be longer (because
of explicit register specifications in instructions). The only downsides
of the stack-based approach seem to be a few small inefficiencies (see
e.g. the |NIP| instruction), which appear to be insignificant.
The new generator allows optimizing for parsing speed or code size (you
can choose using the |optimize| option of the |PEG.buildParser| method
or the --optimize/-o option on the command-line).
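For example (a minimal sketch; the exact option values are an
assumption based on the description above):

    // Build a parser from a trivial grammar, optimizing for code size;
    // "speed" would be the other choice. Module name assumed to be
    // "pegjs".
    var PEG = require("pegjs");
    var parser = PEG.buildParser('start = "a"+', { optimize: "size" });
    console.log(parser.parse("aaa"));   // [ "a", "a", "a" ]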
When optimizing for size, the JavaScript generator emits the bytecode
together with its constant table and a generic bytecode interpreter.
Because the interpreter is small and the bytecode and constant table
grow only slowly with the size of the grammar, the resulting parser is
also small.
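The following sketch illustrates what such a generic interpreter loop
can look like (the opcodes and their encoding are invented for
illustration, not PEG.js's actual instruction set):

    function interpret(bytecode, consts) {
      var stack = [], ip = 0;
      while (ip < bytecode.length) {
        switch (bytecode[ip]) {
          case 0:                                 // PUSH c -- push constant c
            stack.push(consts[bytecode[ip + 1]]);
            ip += 2;
            break;
          case 1:                                 // POP -- discard the top
            stack.pop();
            ip += 1;
            break;
          case 2:                                 // NIP -- discard the element
            stack.splice(-2, 1);                  // just below the top
            ip += 1;
            break;
          default:
            throw new Error("Invalid opcode: " + bytecode[ip]);
        }
      }
      return stack[stack.length - 1];
    }

    interpret([0, 0, 0, 1, 2], ["a", "b"]);   // PUSH c0; PUSH c1; NIP => "b"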
When optimizing for speed, the JavaScript generator just compiles the
bytecode into JavaScript. The generated code is relatively efficient, so
the resulting parser is fast.
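Roughly speaking, each instruction becomes straight-line JavaScript,
and because stack depths are known at compile time, stack slots can be
turned into plain local variables (again an illustrative sketch, not
actual generated output):

    // The bytecode PUSH c0; PUSH c1; NIP compiles to:
    var s0, s1;
    s0 = consts[0];
    s1 = consts[1];
    s0 = s1;   // NIP: the value below the top is discarded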
Internal Identifiers
--------------------
As a small bonus, all internal identifiers visible to user code in the
initializer, actions and predicates are prefixed by |peg$|. This lowers
the chance that identifiers in user code will conflict with the ones
from PEG.js. It also makes using any internals in user code ugly, which
is a good thing. This solves GH-92.
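For illustration, the identifiers below follow the convention (they are
examples, not an exhaustive list of the parser's internals):

    // Internal parser state uses the peg$ prefix:
    var peg$currPos = 0;
    var peg$reportedPos = 0;

    // User code in actions can therefore use unprefixed names freely:
    //
    //   start = digits:[0-9]+ { var currPos = 42; /* no conflict */ }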
Performance
-----------
The new code generator improved parsing speed and reduced parser code
size significantly. The generated parsers are now:
* 39% faster when optimizing for speed
* 69% smaller when optimizing for size (without minification)
* 31% smaller when optimizing for size (with minification)
(Parsing speed was measured using the |benchmark/run| script. Code size
was measured by generating parsers for examples in the |examples|
directory and adding up the file sizes. Minification was done by |uglify
--ascii| in version 1.3.4.)
Final Note
----------
This is just a beginning! The new code generator lays a foundation upon
which many optimizations and improvements can (and will) be made.
Stay tuned :-)
Includes:
* Moving the source code from /src to /lib.
* Adding an explicit file list to package.json.
* Updating the Makefile.
* Updating the spec and benchmark suites and their READMEs.
Part of a fix for GH-32.
The purpose of this change is to avoid the need to index register
variables storing match results of sequences whose elements are labeled.
The indexing happened when match results of labeled elements were passed
to action/predicate functions.
In order to avoid indexing, the register allocator needs to ensure that
registers storing match results of any labeled sequence elements are
still "alive" after finishing parsing of the sequence. They should not
be used to store anything else, at least until the code of all actions
and predicates that can see the labels has executed. This requires that
the |allocateRegisters| pass has knowledge of scoping. Because that
knowledge was already implicitly embedded in the |computeParams| pass,
the logical step to prevent duplication was to merge it with the
|allocateRegisters| pass. This is what this commit does.
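A before/after sketch of the generated code this enables (the register
assignments are hypothetical, not actual PEG.js output):

    // Before: the sequence result is an array in a single register, so
    // labeled elements reach the action by indexing:
    r0 = [r1, r2];
    r3 = action(r0[0], r0[1]);

    // After: r1 and r2 are kept alive until the action runs, so the
    // labeled elements are passed directly:
    r3 = action(r1, r2);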
As a part of the merge, the tests of both passes were largely
refactored. This is both to accommodate the merge and to bring the
tests back in sync with the code (they had drifted out of sync a bit
during the last few commits -- they tested more than was needed).
The speed/size impact is slightly positive:
Speed impact
------------
Before: 849.86 kB/s
After: 858.16 kB/s
Difference: 0.97%
Size impact
-----------
Before: 876618 b
After: 875602 b
Difference: -0.12%
(Measured by /tools/impact with Node.js v0.6.18 on x86_64 GNU/Linux.)
This commit changes the model underlying parser variables used to store
match results and parse positions. Until now they were treated as a
stack; now they are thought of as registers. The actual behavior does
not change (yet), only the terminology.
More specifically, this commit:
* Changes parser variable names from |result0|, |result1|, etc. to
|r0|, |r1|, etc.
* Changes various internal names and comments to match the new model.
* Renames the |computeVarIndices| pass to |allocateRegisters|.
This commit replaces all variable name computations in the
|computeVarNames| and |computeParams| passes with computations of
indices. The actual names are computed later, in the |generateCode|
pass.
This change makes the code generator the only place that deals with the
actual variable names, which makes them easier to change, for example.
The code generator seems a bit more complicated after the change, but
this complexity will pay off (and mostly disappear) later.
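The idea in a nutshell (a sketch; the helper names and naming scheme
are hypothetical): the passes deal only in numeric indices, and a
single place in the code generator maps an index to a name:

    function resultVar(index) { return "result" + index; }
    function posVar(index)    { return "pos" + index; }

    // Changing the variable naming scheme now means editing these
    // helpers only, instead of touching several passes.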
This is the first of many commits that gradually convert PEG.js's test
suite from QUnit to Jasmine, cleaning it up on the way.
The main reason for the change is that Jasmine allows nested contexts,
which make it possible to structure the tests better than QUnit does.
Moreover, the tests needed a bit of cleanup.
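For example, Jasmine's nested |describe| blocks make specs like this
possible (an illustrative spec, not one taken from the suite):

    describe("generated parser", function() {
      var parser = PEG.buildParser('start = "a"');

      describe("when parsing succeeds", function() {
        it("returns the match result", function() {
          expect(parser.parse("a")).toBe("a");
        });
      });
    });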