After the previous commit made the |?| operator return |null| instead
of an empty string, predicates still returned empty strings. This
didn't make much sense.
The return value of a predicate is unimportant (if you have it in hand,
you already know the predicate succeeded), and one could even argue that
predicates shouldn't return any value at all. The closest thing to
"returning no value" in JavaScript is returning |undefined|, so I
decided to make predicates return exactly that.
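For example, in a hypothetical grammar rule (assume |number| is defined
elsewhere):

  start = !"-" number

the |!"-"| predicate previously contributed an empty string to the
sequence's match result; after this commit it contributes |undefined|.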
Implements part of #198.
Before this commit, the |?| operator returned an empty string upon
unsuccessful match. This commit changes the returned value to |null|. It
also updates the PEG.js grammar and the example grammars, which used the
value returned by |?| quite often.
Returning |null| is possible because it no longer indicates a match
failure.
I expect that this change will simplify many real-world grammars, as an
empty string is almost never desirable as a return value (except in
some lexer-level rules) and it is often translated into |null| or some
other value in action code.
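For example, given a hypothetical rule

  integer = sign:"-"? digits:[0-9]+ { ... }

the |sign| label was previously bound to "" when the minus sign was
absent; it is now bound to |null|, which action code can test directly
(|sign === null|).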
Implements part of #198.
Using a special value instead of |null| to indicate match failure
allows actions to return |null| as a regular value. This simplifies,
for example, the JSON parser.
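For example, a rule along these lines (a sketch, not the exact rule
from the JSON example):

  null_literal = "null" { return null; }

now works as expected; previously the |null| returned from the action
would have been indistinguishable from a match failure.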
Note that the special value is internal and intentionally undocumented.
This means that there is currently no official way to trigger a match
failure from an action. This is a temporary state which will be fixed
soon.
The negative performance impact (see below) is probably caused by
changing a lot of comparisons against |null| (which likely check the
value against a fixed constant representing |null| in the interpreter)
to comparisons against the special value (which likely check the value
against another value in the interpreter).
Implements part of #198.
Speed impact
------------
Before: 1146.82 kB/s
After: 1031.25 kB/s
Difference: -10.08%
Size impact
-----------
Before: 950817 b
After: 973269 b
Difference: 2.36%
(Measured by /tools/impact with Node.js v0.6.18 on x86_64 GNU/Linux.)
Before this commit, the |expected| property of an exception object
thrown when a generated parser encountered an error contained
expectations as strings. These strings were in a human-readable format
suitable for displaying in the UI but not suitable for machine
processing. For example, expected string literals included quotes, and
the string "any character" was used when any character was expected.
This commit makes expectations structured objects. This makes machine
processing easier, while still allowing a human-readable representation
to be generated if needed.
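As a sketch (the property names here are illustrative assumptions, not
necessarily the exact shape), an expectation for the literal "a",
previously the string "\"a\"", might now be represented as

  { type: "literal", value: "a", description: "\"a\"" }

and the "any character" expectation as

  { type: "any", description: "any character" }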
Implements part of #198.
Speed impact
------------
Before: 1180.41 kB/s
After: 1165.31 kB/s
Difference: -1.28%
Size impact
-----------
Before: 863523 b
After: 950817 b
Difference: 10.10%
(Measured by /tools/impact with Node.js v0.6.18 on x86_64 GNU/Linux.)
The compiler passes are now split into three stages:
* check -- passes that check for various error conditions
* transform -- passes that transform the AST (e.g. to perform
optimizations)
* generate -- passes that are related to code generation
Splitting the passes into stages is important for plugins. For example,
if a plugin wants to add a new optimization pass, it can add it at the
end of the "transform" stage without any knowledge of the other passes
that stage contains. Similarly, if it wants to generate something other
than what the default code generator produces from the AST, it can
simply replace all passes in the "generate" stage with its own.
More generally, the stages make it possible to write plugins that do not
depend on the names and actions of specific passes (which I consider
internal and subject to change), just on the definition of the stages
(which I consider a public API to which semver rules apply).
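A minimal sketch of such a plugin (the |use| API shape shown here is an
assumption based on the stage definitions; the pass body is
illustrative):

  module.exports = {
    use: function(config, options) {
      // Append a new pass at the end of the "transform" stage.
      config.passes.transform.push(function(ast) {
        // Walk the AST and apply an optimization here.
      });
    }
  };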
Implements part of GH-106.
This is a complete rewrite of the PEG.js code generator. Its goals are:
1. Allow optimizing the generated parser code for code size as well as
for parsing speed.
2. Prepare ground for future optimizations and big features (like
incremental parsing).
3. Replace the old template-based code-generation system with
something more lightweight and flexible.
4. General code cleanup (structure, style, variable names, ...).
New Architecture
----------------
The new code generator consists of two steps:
* Bytecode generator -- produces bytecode for an abstract virtual
machine
* JavaScript generator -- produces JavaScript code based on the
bytecode
The abstract virtual machine is stack-based. Originally I wanted to make
it register-based, but it turned out that all the code related to it
would be more complex and the bytecode itself would be longer (because
of explicit register specifications in instructions). The only downsides
of the stack-based approach seem to be a few small inefficiencies (see
e.g. the |NIP| instruction), which appear to be insignificant.
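(For reference, |NIP| -- a name borrowed from Forth -- drops the value
just below the top of the stack:

  before: ... a b        after: ... b

A register machine would not need such an instruction, but its cost is
small.)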
The new generator allows optimizing for parsing speed or for code size
(you can choose via the |optimize| option of the |PEG.buildParser|
method or the --optimize/-o option on the command line).
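For example:

  PEG.buildParser(grammar, { optimize: "size" });

or, on the command line:

  pegjs --optimize size grammar.pegjs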
When optimizing for size, the JavaScript generator emits the bytecode
together with its constant table and a generic bytecode interpreter.
Because the interpreter is small and the bytecode and constant table
grow only slowly with the size of the grammar, the resulting parser is
also small.
When optimizing for speed, the JavaScript generator just compiles the
bytecode into JavaScript. The generated code is relatively efficient, so
the resulting parser is fast.
Internal Identifiers
--------------------
As a small bonus, all internal identifiers visible to user code in the
initializer, actions and predicates are prefixed by |peg$|. This lowers
the chance that identifiers in user code will conflict with the ones
from PEG.js. It also makes using any internals in user code ugly, which
is a good thing. This solves GH-92.
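For example, user code that (unwisely) pokes at the current parse
position now has to spell it as something like |peg$currPos| (the exact
internal names are, of course, subject to change), which makes the
dependency on internals obvious at a glance.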
Performance
-----------
The new code generator significantly improves parsing speed and reduces
parser code size. The generated parsers are now:
* 39% faster when optimizing for speed
* 69% smaller when optimizing for size (without minification)
* 31% smaller when optimizing for size (with minification)
(Parsing speed was measured using the |benchmark/run| script. Code size
was measured by generating parsers for examples in the |examples|
directory and adding up the file sizes. Minification was done by |uglify
--ascii| in version 1.3.4.)
Final Note
----------
This is just a beginning! The new code generator lays a foundation upon
which many optimizations and improvements can (and will) be made.
Stay tuned :-)
Implement a new syntax to extract matched strings from expressions. For
example, instead of:
identifier = first:[a-zA-Z_] rest:[a-zA-Z0-9_]* { return first + rest.join(""); }
you can now just write:
identifier = $([a-zA-Z_] [a-zA-Z0-9_]*)
This is mostly useful for "lexical" rules at the bottom of many
grammars.
Note that structured match results are still built for expressions
prefixed by "$"; they are just ignored. I plan to optimize this later
(sometime after the code generator rewrite).
The purpose of this change is to avoid the need to index register
variables storing match results of sequences whose elements are labeled.
The indexing happened when match results of labeled elements were passed
to action/predicate functions.
In order to avoid indexing, the register allocator needs to ensure that
registers storing the match results of any labeled sequence elements
stay "alive" after parsing of the sequence finishes. They must not be
used to store anything else at least until the code of all actions and
predicates that can see the labels has executed. This requires that the
|allocateRegisters| pass has knowledge of scoping. Because that
knowledge was already implicitly embedded in the |computeParams| pass,
the logical step to prevent duplication was to merge the two passes.
This is what this commit does.
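For example, in a hypothetical rule

  sum = left:number "+" right:number { return left + right; }

the registers holding the match results of |left| and |right| must stay
untouched until the action runs, because the action function receives
their values directly from those registers instead of indexing into the
sequence's match result.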
As a part of the merge, the tests of both passes were largely
refactored. This is both to accommodate the merge and to bring the
tests back in sync with the code (the tests became a bit out-of-sync
during the last few commits -- they tested more than was needed).
The speed/size impact is slightly positive:
Speed impact
------------
Before: 849.86 kB/s
After: 858.16 kB/s
Difference: 0.97%
Size impact
-----------
Before: 876618 b
After: 875602 b
Difference: -0.12%
(Measured by /tools/impact with Node.js v0.6.18 on x86_64 GNU/Linux.)
This commit changes the model underlying the parser variables used to
store match results and parse positions. Until now they were treated as
a stack; now they are thought of as registers. The actual behavior does
not change (yet), only the terminology.
More specifically, this commit:
* Changes parser variable names from |result0|, |result1|, etc. to
|r0|, |r1|, etc.
* Changes various internal names and comments to match the new model.
* Renames the |computeVarIndices| pass to |allocateRegisters|.
One stack is conceptually simpler, requires less code and will make a
transition to a register-based machine easier.
Note that the stack variables are now named a bit incorrectly
(|result0|, |result1|, etc. even though they also store parse
positions). I didn't bother renaming them because a transition to a
register-based machine will follow soon and the names will change
anyway.
The speed/size impact is insignificant.
Speed impact
------------
Before: 839.05 kB/s
After: 839.67 kB/s
Difference: 0.07%
Size impact
-----------
Before: 949783 b
After: 961578 b
Difference: 1.24%
(Measured by /tools/impact with Node.js v0.6.18 on x86_64 GNU/Linux.)
This commit replaces all variable name computations in the
|computeVarNames| and |computeParams| passes with computations of
indices. The actual names are computed later, in the |generateCode|
pass.
This change makes the code generator the only place that deals with the
actual variable names, making them easier to change, for example.
The code generator code seems a bit more complicated after the change,
but this complexity will pay off (and mostly disappear) later.
Places all code that deals with "action" AST nodes below the code
handling "choice" nodes.
This ordering is logical because all the node handling code now matches
the sequence in which the various node types usually appear when
descending through the AST.
Changes all code that deals with "literal", "class" or "any" AST nodes
so that it handles these in the following order:
1. literal
2. class
3. any
Previously the code used this ordering:
1. literal
2. any
3. class
The new ordering is more logical because the nodes are handled from the
most specific to the most generic.
PEG.js grammar rules are represented by |rule| nodes in the AST. Until
now, all such nodes had a |displayName| property which was either |null|
or stored the rule's human-readable name. This commit gets rid of the
|displayName| property and starts representing rules with a
human-readable name using a new |named| node (a child of the |rule|
node).
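For example, a rule written as

  start "start rule" = "a"

was previously represented roughly as

  { type: "rule", name: "start", displayName: "start rule", expression: ... }

and is now represented as (a sketch; the property layout is
illustrative)

  { type: "rule", name: "start",
    expression: { type: "named", name: "start rule", expression: ... } }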
This change simplifies the code generation a bit, as the tests for
|displayName| can be removed (see the changes in generate-code.js). It
also separates different concerns nicely.