Until now, expectations were constructed using object literals. This
commit changes the construction to use factory functions.
This change makes generated parsers slightly smaller because property
names don't have to be repeated many times and factory function calls
are more amenable to minification.
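A sketch of the pattern (the factory name follows the peg$ prefixing
convention used in generated parsers; the exact generated code may
differ):

    // Before: an object literal repeated at every use site.
    var expected = { type: "literal", text: "a", ignoreCase: false };

    // After: the property names appear only once, in the factory, and
    // the short call sites minify well.
    function peg$literalExpectation(text, ignoreCase) {
      return { type: "literal", text: text, ignoreCase: ignoreCase };
    }

    var expectation = peg$literalExpectation("a", false);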
Some numbers based on the aggregate size of parsers generated from
examples/*.pegjs:
    Optimization  Minified?  Size before  Size after  Saving
    ---------------------------------------------------------
    speed         no              719066      716063   0.42%
    speed         yes             188998      180202   4.65%
    size          no              197813      194810   1.52%
    size          yes             108782       99947   8.12%
(Minification was done using "uglify --mangle --compress" with
uglify-js 2.4.24.)
The "buildMessage" utility function, which was previously internal, is
now exposed as SyntaxError.buildMessage in generated parsers.
The motivation behind this is two-fold:
1. Building a syntax error message is the responsibility of the
SyntaxError class, so the code should live there.
2. By exposing the message building code, parser users can use it to
generate customized error messages without duplicating PEG.js's code.
Note that helper functions inside "buildMessage" ("describeExpected",
"describeFound", etc.) currently aren't exposed. They may become exposed
in the future if there is enough demand.
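For example, a parser user can catch a syntax error and rebuild (or
customize) its message from the structured error data (a sketch;
assumes |parser| holds the generated parser and |input| the text being
parsed):

    try {
      parser.parse(input);
    } catch (e) {
      if (e instanceof parser.SyntaxError) {
        // Same message the parser would have produced by default.
        var message = parser.SyntaxError.buildMessage(e.expected, e.found);
        console.error(message);
      }
    }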
Instead of pre-generating expectation descriptions when generating
parsers, generate them dynamically from structured information contained
in the expectations.
This change makes descriptions a presentation-only concept. It also
makes generated parsers smaller.
Before this commit, expectations were sorted and de-duplicated before
they were passed to "buildMessage" and exposed in the "expected"
property of syntax errors. This commit moves this processing into
"buildMessage" and rewrites it to process only expectation descriptions.
This means expectations exposed in the "expected" property are "raw"
(not sorted and de-duplicated).
This change will allow us to get rid of the "description" property of
expectations and compute descriptions dynamically from structured
information in the expectations. This will make descriptions a
presentation-only concept. It will also make generated parsers smaller.
Note that to keep expectations in the "expected" property sorted even
without the "description" property, some sorting scheme based on
structured information in the expectations would have to be devised,
which would complicate things with only a little benefit. Therefore I
chose to keep the expectations there "raw".
Changes:
* Remove the "value" property (it is replaced by other properties).
* Add the "parts", "inverted", and "ignoreCase" properties (which
allow more structured access to expectation data).
Changes:
* Rename the "value" property to "text" (because it doesn't contain
the whole value, which also includes the case sensitivity flag).
* Add the "ignoreCase" property (which was missing).
If the described class is case-sensitive, nothing changes.
If the described class is case-insensitive, its description doesn't
indicate that anymore. The indication was awkward and it was meaningful
only for parser users familiar with PEG.js grammar syntax (typically a
minority). For cases where case insensitivity indication is vital, named
rules can be used to customize the reporting.
Note that literal descriptions already ignore the case-sensitivity flag;
this commit only makes things consistent.
Simplify regexps that specify ranges of characters to escape with "\xXX"
and "\uXXXX" in various escaping functions. Until now, these regexps
were (mostly) mutually exclusive with more selective regexps applied
before them, but this became a maintenance headache. I decided to
abandon the exclusivity, which made it possible to simplify these
regexps (at the cost of introducing an ordering dependency).
Change how found strings are escaped when building syntax error
messages:
* Do not escape non-ASCII characters (U+0100-U+FFFF). They are
typically more readable in their raw form.
* Escape DEL (U+007F). It is a control character.
* Escape NUL (U+0000) as "\0", not "\x00".
* Do not use the lesser-known shortcut escape sequences ("\b", "\f"),
only the well-known ones ("\0", "\t", "\n", "\r").
These changes mirror expectation escaping changes done in
4fe682794d.
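A sketch of the resulting escaping chain (not the exact generated
code). The shortcut escapes must run before the generic "\xXX" ranges,
which is the ordering dependency mentioned in the previous commit:

    function hex(ch) { return ch.charCodeAt(0).toString(16).toUpperCase(); }

    function escapeFound(s) {
      return s
        .replace(/\\/g, "\\\\")
        .replace(/\0/g, "\\0")
        .replace(/\t/g, "\\t")
        .replace(/\n/g, "\\n")
        .replace(/\r/g, "\\r")
        .replace(/[\x00-\x0F]/g,          function(ch) { return "\\x0" + hex(ch); })
        .replace(/[\x10-\x1F\x7F-\x9F]/g, function(ch) { return "\\x"  + hex(ch); });
    }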
Part of work on #428.
Before this commit, descriptions of literals used in error messages were
built by applying JavaScript string escaping to their values, making the
descriptions look like JavaScript strings. Descriptions of character
classes were built using their raw text. These approaches were mutually
inconsistent and led to descriptions which were over-escaped and not
necessarily human-friendly (in case of literals) or coupled with details
of the grammar (in case of character classes).
This commit changes description building code in both cases and unifies
it. The intent is to generate human-friendly descriptions of matched
expressions which are clean, unambiguous, and which don't escape too
many characters, while handling special characters such as newlines
well.
Fixes #127.
I no longer think that using raw literal texts in error messages is the
right thing to do. The main reason is that it couples error messages
with details of the grammar such as use of single or double quotes in
literals. A better solution is coming in the next commit.
This reverts commit 69a0f769fc.
In most places, we talk about "generating a parser", not "building a
parser", which the function name should reflect. Also, mentioning a
parser in the name is not necessary as in case of a parser generator
it's pretty clear what is generated.
So far, PEG.js was exported in a "PEG" global variable when no module
loader was detected. The same variable name was also conventionally used
when requiring it in Node.js or otherwise referring to it. This was
reflected in various places in the code, documentation, examples, etc.
This commit changes the variable name to "peg" and fixes all relevant
occurrences. The main reason for the change is that in Node.js, modules
are generally referred to by lower-case variable names, so "PEG" was
sticking out when used in Node.js projects.
The wrapping functions are also generated by PEG.js, so the comment
should be above them to mark them as such. This shouldn't cause any
problems technically.
Introduce two ways of specifying parser dependencies: the "dependencies"
option of PEG.buildParser and the -d/--dependency CLI option. Specified
dependencies are translated into AMD dependencies and Node.js's
"require" calls when generating an UMD parser.
Part of work on #362.
Extract "generateWrapper" code which generates code of the intro and the
returned parser object into helper functions. This is pure refactoring,
generated parser code is exactly the same as before.
This change will make it easier to modifiy "generateWrapper" to produce
UMD modules.
Part of work on #362.
Code which was at the toplevel of the "generateJS" function in the code
generator is now split into "generateToplevel" (which generates parser
toplevel code) and "generateWrapper" (which generates a wrapper around
it). This is pure refactoring, generated parser code is exactly the same
as before.
This change will make it easier to modify the code generator to produce
UMD modules.
Part of work on #362.
Instead of testing arguments.length to see whether an optional parameter
was passed to a function, compare its value to "undefined". This
approach has two advantages:
* It is in line with handling of default parameters in ES6.
* Optional parameters are actually spelled out in the parameter
list.
There is also one important disadvantage, namely that it's impossible to
pass "undefined" as an optional parameter value. This required a small
change in two tests.
Additional notes:
* Default parameter values are set in assignments immediately
after the function header. This reflects the fact that these
assignments really belong to the parameter list (which is where they
are in ES6).
* Parameter values are checked against "void 0" in places where
"undefined" can potentially be redefiend.
Labels in expressions like "(a:"a")" or "(a:"a" b:"b" c:"c")" were
visible to the outside despite being wrapped in parens. This commit
makes them invisible, as they should be.
Note this required introduction of a new "group" AST node, whose purpose
is purely to provide label scope isolation. This was necessary because
"label" and "sequence" nodes don't (and can't!) provide this isolation
themselves.
Part of a fix of #396.
Before this commit, generated parsers considered the following character
sequences as newlines:
    Sequence  Description
    ------------------------------
    "\n"      Unix
    "\r"      Old Mac
    "\r\n"    Windows
    "\u2028"  line separator
    "\u2029"  paragraph separator
This commit limits the sequences only to "\n" and "\r\n". The reason is
that nobody uses Unicode newlines or "\r" in practice.
A positive side effect of the change is that newline-handling code
became simpler (and likely faster).
Instead of setting ESLint environment to "node" globally, set it on
per-directory basis using separate .eslintrc.json files:
    Directory  Environment
    -----------------------
    bin        node
    lib        commonjs
    spec       jasmine
It was impossible to use this approach for the "benchmark" directory
which contains a mix of files used in various environments. For
benchmark/run, the environment is set inline. For the other files, as
well as spec/helpers.js, the globals are declared manually (it is
impossible to express how these files are used just by a list of
environments).
Fixes #408.
Fix the following errors:
      31:9  error  "parser" is defined but never used    no-unused-vars
    406:14  error  "expected" is defined but never used  no-unused-vars
   1304:15  error  "s1" is defined but never used        no-unused-vars
   1386:15  error  "s1" is defined but never used        no-unused-vars
   1442:15  error  "s1" is defined but never used        no-unused-vars
The expectation deduplication algorithm called |Array.prototype.splice|
to eliminate each individual duplication, which was slow. This caused
problems with grammar/input combinations that generated a lot of
expectations (see #377 for an example).
This commit replaces the algorithm with much faster one, eliminating the
problem.
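A sketch of the rewrite, assuming a sorted array and simple equality
(the real code compares expectation contents):

    // Slow: every duplicate shifts the array tail via splice, e.g.
    //
    //   if (expected[i - 1] === expected[i]) { expected.splice(i, 1); }
    //
    // Fast: one pass which copies unique items forward, then truncates.
    function dedupe(expected) {
      if (expected.length === 0) { return; }

      var j = 1;
      for (var i = 1; i < expected.length; i++) {
        if (expected[i] !== expected[i - 1]) {
          expected[j] = expected[i];
          j++;
        }
      }
      expected.length = j;
    }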
In the past year I worked on various grammars where first/rest or
head/tail were used as labels for parts of lists. I found I associate
head/tail with a list immediately, while in case of first/rest I have to
"parse" grammar rules for a while before understanding their structure.
Moreover, I tend to assume that rest is a list of the same things as
first, but I don't have such assumption in case of head/tail. This
assumption was in conflict with the grammar structure.
I'm not sure how much these observations are applicable to others, but I
decided to act on them and switch from first/rest to head/tail.
The |found| property wasn't very useful as it mostly contained just one
character or |null| (the exception being syntax errors triggered by
|error| or |expected|). Similarly, the "but XXX found" part of the error
message (based on the |found| property) wasn't very useful either and
was redundant in the presence of location info.
For these reasons, this commit removes the |found| property and
corresponding part of the error message from syntax errors. It also
modifies error location info slightly to cover a range of 0 characters,
not 1 character (except when the error is triggered by |error| or
|expected|). This corresponds more precisely to the actual situation.
Fixes #372.
Report left recursion also in cases where the recursive rule invocation
is not a direct element of a sequence, but is wrapped inside an
expression.
Fixes #359.
Before this commit, the |reportLeftRecursion| pass was written in
functional style, passing the |visitedRules| array around as a parameter
and making a new copy each time a rule was visited. This apparently
caused performance problems in some deeply recursive grammars.
This commit makes it so that there is just one array which is shared
across all the visitor functions via a closure and modified as rules are
visited.
I don't like losing the functional style (it was elegant) but
performance is more important.
Fixes #203.
Add missing |named| case to the visitor in lib/compiler/asts.js, which
makes the infinite loop and left recursion detectors work correctly with
named rules.
The missing case caused |make parser| to fail with:
140:34: Infinite loop detected.
make: *** [parser] Error 1
* In strict mode code, functions can only be declared at top level or
immediately within another function. This means functions defined in
the initializer would throw.
Before this commit, position details (line and column) weren't computed
efficiently from the current parse position. There was a cache but it
held only one item and it was rarely hit in practice. This resulted in
frequent rescanning of the whole input when the |location| function was
used in various places in a grammar.
This commit extends the cache to remember position details for any
position they were ever computed for. In case of a cache miss, the cache
is searched for a value corresponding to the nearest lower position,
which is then used to compute position info for the desired position
(which is then cached). The whole input never needs to be rescanned.
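A simplified sketch of the lookup (assumes |input| holds the parsed
string and, for brevity, treats only "\n" as a newline):

    var posDetailsCache = [{ line: 1, column: 1 }];

    function computePosDetails(pos) {
      if (posDetailsCache[pos]) { return posDetailsCache[pos]; }

      // Cache miss: find the nearest lower cached position...
      var p = pos - 1;
      while (!posDetailsCache[p]) { p--; }

      // ...and scan forward from it, caching the result.
      var details = {
        line:   posDetailsCache[p].line,
        column: posDetailsCache[p].column
      };
      while (p < pos) {
        if (input.charCodeAt(p) === 10) { details.line++; details.column = 1; }
        else { details.column++; }
        p++;
      }

      posDetailsCache[pos] = details;
      return details;
    }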
No items are ever evicted from the cache. I think this is fine as the
max number of entries is the length of the input. If this becomes a
problem I can introduce some eviction logic later.
The performance impact of this change is significant. As the benchmark
suite doesn't contain any grammar with |location| calls I just used a
little ad-hoc benchmark script which measured time to parse the grammar
of PEG.js itself (which contains |location| calls):
    var fs     = require("fs"),
        parser = require("./lib/parser");

    var grammar = fs.readFileSync("./src/parser.pegjs", "utf-8"),
        startTime, endTime;

    startTime = (new Date()).getTime();
    parser.parse(grammar);
    endTime = (new Date()).getTime();

    console.log(endTime - startTime);
The measured time went from ~293 ms to ~54 ms on my machine.
Fixes #337.
Replace |line|, |column|, and |offset| properties of tracing events with
the |location| property. It contains an object similar to the one
returned by the |location| function available in action code:
    {
      start: { offset: 23, line: 5, column: 6 },
      end:   { offset: 25, line: 5, column: 8 }
    }
For the |rule.match| event, |start| refers to the position at the
beginning of the matched input and |end| refers to the position after
the end of the matched input.
For |rule.enter| and |rule.fail| events, both |start| and |end| refer to
the current position at the time the rule was entered.
Replace |line|, |column|, and |offset| properties of |SyntaxError| with
the |location| property. It contains an object similar to the one
returned by the |location| function available in action code:
    {
      start: { offset: 23, line: 5, column: 6 },
      end:   { offset: 25, line: 5, column: 8 }
    }
For syntax errors produced in the middle of the input, |start| refers to
the first unparsed character and |end| refers to the character behind it
(meaning the span is 1 character). This corresponds to the portion of
the input in the |found| property.
For syntax errors produced at the end of the input, both |start| and |end|
refer to a character past the end of the input (meaning the span is 0
characters).
For syntax errors produced by calling |expected| or |error| functions in
action code the location info is the same as the |location| function
would return.
Perform the following renames:
* |reportedPos| -> |savedPos| (abstract machine variable)
* |peg$reportedPos| -> |peg$savedPos| (variable in generated code)
* |REPORT_SAVED_POS| -> |LOAD_SAVED_POS| (instruction)
* |REPORT_CURR_POS| -> |UPDATE_SAVED_POS| (instruction)
The idea is that the name |reportedPos| is no longer accurate after the
|location| change (see the previous commit) because now both
|reportedPos| and |currPos| are reported to user code. Renaming to
|savedPos| resolves this inaccuracy.
There is probably some better name for the concept than quite generic
|savedPos|, but it doesn't come to me.
Replace |line|, |column|, and |offset| functions with the |location|
function. It returns an object like this:
    {
      start: { offset: 23, line: 5, column: 6 },
      end:   { offset: 25, line: 5, column: 8 }
    }
In actions, |start| refers to the position at the beginning of action's
expression and |end| refers to the position after the end of action's
expression. This allows one to easily add location info e.g. to AST
nodes created in actions.
In predicates, both |start| and |end| refer to the current position.
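For example, an action can attach location info to an AST node it
builds (a sketch):

    integer = digits:[0-9]+ {
        return {
          type:     "integer",
          value:    parseInt(digits.join(""), 10),
          location: location()
        };
      }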
Fixes #246.
Besides the recursion detector, the visitor will also be used by the
infinite loop detector.
Note the newly created |asts.matchesEmpty| function re-creates the
visitor each time it is called, which makes it slower than necessary.
This could have been worked around in various ways but I chose to defer
that optimization because real-world performance impact is small.
So far, the left recursion detector assumed that left recursion occurs only
when the recursive rule is at the very left-hand side of rule's
expression:
start = start
This didn't catch cases like this:
start = "a"? start
In general, if a rule reference can be reached without consuming any
input, it can lead to left recursion. This commit fixes the detector to
consider that.
Fixes #190.
Parsers can now be generated with support for tracing using the --trace
CLI option or a boolean |trace| option to |PEG.buildParser|. This makes
them trace their progress, which can be useful for debugging. Parsers
generated with tracing support are called "tracing parsers".
When a tracing parser executes, by default it traces the rules it enters
and exits by writing messages to the console. For example, a parser
built from this grammar:
start = a / b
a = "a"
b = "b"
will write this to the console when parsing input "b":
    1:1 rule.enter start
    1:1 rule.enter a
    1:1 rule.fail  a
    1:1 rule.enter b
    1:2 rule.match b
    1:2 rule.match start
You can customize tracing by passing a custom *tracer* to parser's
|parse| method using the |tracer| option:
parser.parse(input, { tracer: tracer });
This will replace the built-in default tracer (which writes to the
console) by the tracer you supplied.
The tracer must be an object with a |trace| method. This method is
called each time a tracing event happens. It takes one argument which is
an object describing the tracing event.
Currently, three events are supported:
* rule.enter -- triggered when a rule is entered
* rule.match -- triggered when a rule matches successfully
* rule.fail -- triggered when a rule fails to match
These events are triggered in nested pairs -- for each rule.enter event
there is a matching rule.match or rule.fail event.
The event object passed as an argument to |trace| contains these
properties:
* type -- event type
* rule -- name of the rule the event is related to
* offset -- parse position at the time of the event
* line -- line at the time of the event
* column -- column at the time of the event
* result -- rule's match result (only for rule.match event)
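For example, a custom tracer which logs only failures could look like
this (a sketch; assumes |parser| and |input| variables):

    parser.parse(input, {
      tracer: {
        trace: function(event) {
          if (event.type === "rule.fail") {
            console.log(event.line + ":" + event.column
              + " failed to match " + event.rule);
          }
        }
      }
    });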
The whole tracing API is somewhat experimental (which is why it isn't
documented properly yet) and I expect it will evolve over time as
experience is gained.
The default tracer is also somewhat bare-bones. I hope that PEG.js user
community will develop more sophisticated tracers over time and I'll be
able to integrate their best ideas into the default tracer.
Rename |generateCache{Header,Footer}| to |generateRule{Header,Footer}|
and change their responsibility to generate the overall header/footer of
a rule function (when optimizing for speed) or of the |peg$parseRule|
function (when optimizing for size). This creates a natural place to
generate tracing code (coming soon).
Action and predicate code can now see variables defined in expressions
"above" them.
Based on a pull request by Bryon Vandiver (@asterick):
https://github.com/pegjs/pegjs/pull/180
Fixes #316.
The |visitor.build| function now supplies default visit functions for
visitors it builds. These functions don't do anything besides traversing
the tree and passing arguments around to child visit functions.
Having the default visit functions made it possible to simplify several
visitors.
The TEXT instruction now replaces position at the top of the stack with
the input from that position until the current position. This is simpler
and cleaner semantics than the previous one, where TEXT also popped an
additional value from the stack and kept the position there.
Implement the following bytecode instructions:
* PUSH_UNDEFINED
* PUSH_NULL
* PUSH_FAILED
* PUSH_EMPTY_ARRAY
These instructions push simple JavaScript values to the stack directly,
without going through constants. This makes the bytecode slightly
shorter and the bytecode generator somewhat simpler.
Also note that PUSH_EMPTY_ARRAY allows us to avoid a hack which protects
the [] constant from modification.
The |stringEscape| function both in lib/compiler/javascript.js and in
generated parsers didn't escape characters in the U+0100..U+0FFF and
U+1000..U+107F ranges.
Split lib/utils.js into multiple files. Some of the functions were
generic; these were moved into files in lib/utils. Other functions were
specific to the compiler; these were moved to files in lib/compiler.
This commit only moves functions around -- there is no renaming and
cleanup performed. Both will come later.
Modules now generally store the exported object in a named variable or
function first and only assign |module.exports| at the very end. This
is a departure from the style used until now, where most modules
started with a |module.exports| assignment.
I think the explicit name helps readability and understandability.
Initializer code is usually indented and this indentation is carried
over to generated code. This resulted in a piece of indented code in the
middle of the parser.
This commit wraps initializer code in |{...}|, which makes indentation
in generated parsers look a bit more natural.
The action/predicate code didn't have access to the parser object. This
was mostly a side effect of actions/predicates being implemented as nested
functions, in which |this| is a reference to the global object (an ugly
JavaScript quirk). The initializer, being implemented differently, had
access to the parser object via |this|, but this was not documented.
Because having access to the parser object can be useful, this commit
introduces a new |parser| variable which holds a reference to it, is
visible in action/predicate/initializer code, and is properly
documented.
See also:
https://groups.google.com/forum/#!topic/pegjs/Na7YWnz6Bmg
This is mostly done for consistency with the JavaScript example grammar,
from which the |Identifier| rule is taken. See the previous commit
for details.
Instead of matching segments between blocks character by character,
match them as a whole. Also align the style with other similar rules
(e.g. the comment ones).
Before this commit, line continuations in character classes contributed
an empty string to the list of characters and character ranges matched
by a class. While this didn't lead to a buggy behavior with the current
code generator, the AST was wrong and the problem could have caused bugs
later.
This commit fixes the problem.
Semantic predicates are a kind of |PrimaryExpression|, not a kind of
|PrefixedExpression|. Therefore I extracted a rule for them and
referenced it from the |PrimaryExpression|.
Initializer and rules are now separated in a similar way as JavaScript
statements -- either by a semicolon or a line terminator, possibly with
whitespace and comments mixed in.
One consequence is that the grammars like this are now illegal:
foo = "a" bar = "b"
A semicolon needs to be inserted between the rules:
foo = "a";bar = "b"
I consider this a good change as the now-illegal syntax was somewhat
confusing.
This makes the |Primary| rule a bit more tidy. Also, matching the |.|
character really belongs to the lexical part of the grammar, next to
literals and character classes.
* Rename the |Action| rule to |CodeBlock| (it better describes what
the rule matches).
* Implement the rule in a simpler way and move it after more basic
lexical elements.
This change has two side effects:
* Label names can no longer be JavaScript reserved words.
* |$| is allowed again in label names. However, because of the
preference rules, names starting with it will be usually parsed as a
text operator followed by another identifier (denoting a rule
reference or label name).
Before this commit, whitespace was handled at the lexical level by
making tokens consume any whitespace coming after them. This was
accomplished by appending |__| to every token rule.
This commit changes whitespace handling to be more explicit. Tokens no
longer consume whitespace coming after them and syntactic rules have to
cope with it. While this slightly complicates the syntactic grammar, I
think it's a cleaner way. Moreover, it is what JavaScript example
grammar does.
One small side effect of this change is that the grammar is now
stand-alone (it doesn't require utils.js anymore).
When rule names are capitalized, it's easier to visually distinguish
them from non-capitalized label names. Moreover, I use capitalized rule
names in all my grammars these days.
Before this commit, a line continuation (backslash followed by a line
terminator character) contributed a character to a string or a character
class it was used in. In JavaScript and many other languages, line
continuation doesn't contribute anything.
This commit aligns PEG.js line continuation behavior with JavaScript.
Before this commit, the value of the |rawText| property of "class" AST
nodes was created in a hackish way from processed input and it didn't
always exactly represent the actual input text.
This commit changes the code so that the value of the |rawText| property
is created using the |text| function. This is a clean way which also
resolves the exact representation problem.
Also added a few missing |hasOwnProperty| calls that JSHint didn't detect
because it only looks whether there is an |if| statement wrapping the
loop body.
Fixes the following JSHint error:
lib/compiler/passes/generate-bytecode.js: line 334, col 71, Expected an assignment or function call and instead saw an expression.
The one-parameter |Array.prototype.splice| call is a SpiderMonkey
extension. Apparently, IE doesn't implement it (unlike other supported
browsers), so we need to replace it with the two-parameter version.
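Concretely (illustrative; |array| and |i| stand for the affected array
and index):

    // SpiderMonkey extension: removes everything from index i onward.
    array.splice(i);

    // Portable two-parameter equivalent used instead.
    array.splice(i, array.length - i);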
In case the generated parser parsed successfully part of input and left
some input unparsed (trailing input), the error message produced was
sometimes wrong. The code worked correctly only if there were no match
failures in the successfully parsed part (highly unlikely).
This commit fixes things by explicitly triggering a match failure with the
following expectation at the end of the successfully parsed part of the
input:
peg$fail({ type: "end", description: "end of input" });
This change also made it possible to simplify the |buildMessage|
function, which can now ignore the case of no expectations.
Fixes #119.
There are two invariants in generated bytecode related to the stack:
1. Branches of a condition must move the stack pointer in the same way.
2. Body of a loop can't move the stack pointer.
These invariants were always true, but they were not checked. Now we
check them at least when compiling with optimization for speed, because
there we analyze the stack pointer movements statically.
The error check was useful when actions could have returned |null| to
trigger a match failure. This is no longer supported so the check isn't
needed anymore.
Speed impact
------------
Before: 1022.70 kB/s
After: 1035.45 kB/s
Difference: 1.24%
Size impact
-----------
Before: 975434 b
After: 931540 b
Difference: -4.50%
(Measured by /tools/impact with Node.js v0.6.18 on x86_64 GNU/Linux.)
Before this commit, the |expected| and |error| functions didn't halt the
parsing immediately, but triggered a regular match failure. After they
were called, the parser could backtrack, try other branches, and only
if no other branch succeeded, it triggered an exception with information
possibly based on parameters passed to the |expected| or |error|
function (this depended on positions where failures in other branches
have occurred).
While nice in theory, this solution didn't work well in practice. There
were at least two problems:
1. An action expression could easily trigger a match failure later
in the input than the action itself. This resulted in the
action-triggered failure being shadowed by the expression-triggered
one.
Consider the following example:
    integer = digits:[0-9]+ {
      var result = parseInt(digits.join(""), 10);

      if (result % 2 === 0) {
        error("The number must be an odd integer.");
        return;
      }

      return result;
    }
Given input "2", the |[0-9]+| expression would record a match
failure at position 1 (an unsuccessful attempt to parse yet another
digit after "2"). However, a failure triggered by the |error| call
would occur at position 0.
This problem could have been solved by silencing match failures in
action expressions, but that would lead to severe performance
problems (yes, I tried and measured). Other possible solutions are
hacks which I didn't want to introduce into PEG.js.
2. Triggering a match failure in action code could have led to
unexpected backtracking.
Consider the following example:
class = "[" (charRange / char)* "]"
charRange = begin:char "-" end:char {
if (begin.data.charCodeAt(0) > end.data.charCodeAt(0)) {
error("Invalid character range: " + begin + "-" + end + ".");
}
// ...
}
char = [a-zA-Z0-9_\-]
Given input "[b-a]", the |charRange| rule would fail, but the
parser would try the |char| rule and succeed repeatedly, resulting
in "b-a" being parsed as a sequence of three |char|'s, which it is
not.
This problem could have been solved by using negative predicates,
but that would complicate the grammar and still wouldn't get rid of
unintuitive behavior.
Given these problems I decided to change the semantics of the |expected|
and |error| functions. They don't interact with regular match failure
mechanism anymore, but cause an immediate parse failure by
throwing an exception. I think this is more intuitive behavior with less
harmful side effects.
The disadvantage of the new approach is that one can't backtrack from an
action-triggered error. I don't see this as a big deal as I think this
will be rarely needed and one can always use a semantic predicate as a
workaround.
Speed impact
------------
Before: 993.84 kB/s
After: 998.05 kB/s
Difference: 0.42%
Size impact
-----------
Before: 1019968 b
After: 975434 b
Difference: -4.37%
(Measured by /tools/impact with Node.js v0.6.18 on x86_64 GNU/Linux.)
The |error| function allows users to report custom match failures inside
actions.
If the |error| function is called, and the reported match failure turns
out to be the cause of a parse error, the error message reported by the
parser will be exactly the one specified in the |error| call.
Implements part of #198.
Speed impact
------------
Before: 999.83 kB/s
After: 1000.84 kB/s
Difference: 0.10%
Size impact
-----------
Before: 1017212 b
After: 1019968 b
Difference: 0.27%
(Measured by /tools/impact with Node.js v0.6.18 on x86_64 GNU/Linux.)
This is in anticipation of |peg$error|. The |peg$expected| and
|peg$error| internal functions will nicely mirror the |expected| and
|error| functions available to user code in actions.
Implements part of #198.
The |expected| function allows users to report regular match failures
inside actions.
If the |expected| function is called, and the reported match failure
turns out to be the cause of a parse error, the error message reported
by the parser will be in the usual "Expected ... but found ..." format
with the description specified in the |expected| call used as part of
the message.
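For example (a sketch, mirroring the |error| example above):

    integer = digits:[0-9]+ {
        var result = parseInt(digits.join(""), 10);

        if (result % 2 === 0) { expected("an odd integer"); }

        return result;
      }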
Implements part of #198.
Speed impact
------------
Before: 1146.82 kB/s
After: 1031.25 kB/s
Difference: -10.08%
Size impact
-----------
Before: 950817 b
After: 973269 b
Difference: 2.36%
(Measured by /tools/impact with Node.js v0.6.18 on x86_64 GNU/Linux.)
After making the |?| operator return |null| instead of an empty string
in the previous commit, empty strings were still returned from
predicates. This didn't make much sense.
The return value of a predicate is unimportant (if you have one in hand, you
already know the predicate succeeded) and one could even argue that
predicates shouldn't return any value at all. The closest thing to
"return no value" in JavaScript is returning |undefined|, so I decided
to make predicates return exactly that.
Implements part of #198.
Before this commit, the |?| operator returned an empty string upon
unsuccessful match. This commit changes the returned value to |null|. It
also updates the PEG.js grammar and the example grammars, which used the
value returned by |?| quite often.
Returning |null| is possible because it no longer indicates a match
failure.
I expect that this change will simplify many real-world grammars, as an
empty string is almost never desirable as a return value (except some
lexer-level rules) and it is often translated into |null| or some other
value in action code.
Implements part of #198.
Using a special value to indicate match failure instead of |null| allows
actions to return |null| as a regular value. This simplifies e.g. the
JSON parser.
Note the special value is internal and intentionally undocumented. This
means that there is currently no official way to trigger a match
failure from an action. This is a temporary state which will be fixed
soon.
The negative performance impact (see below) is probably caused by
changing a lot of comparisons against |null| (which likely check the value
against a fixed constant representing |null| in the interpreter) to
comparisons against the special value (which likely check the value
against another value in the interpreter).
Implements part of #198.
Speed impact
------------
Before: 1146.82 kB/s
After: 1031.25 kB/s
Difference: -10.08%
Size impact
-----------
Before: 950817 b
After: 973269 b
Difference: 2.36%
(Measured by /tools/impact with Node.js v0.6.18 on x86_64 GNU/Linux.)
Before this commit, the |expected| property of an exception object
thrown when a generated parser encountered an error contained
expectations as strings. These strings were in a human-readable format
suitable for displaying in the UI but not suitable for machine
processing. For example, expected string literals included quotes and a
string "any character" was used when any character was expected.
This commit makes expectations structured objects. This makes the
machine processing easier, while still making it possible to generate a
human-readable representation if needed.
Implements part of #198.
Speed impact
------------
Before: 1180.41 kB/s
After: 1165.31 kB/s
Difference: -1.28%
Size impact
-----------
Before: 863523 b
After: 950817 b
Difference: 10.10%
(Measured by /tools/impact with Node.js v0.6.18 on x86_64 GNU/Linux.)
In the bytecode generator, the |context.action| property wasn't
correctly reset when generating bytecode for sequence elements. As a
result, when a sequence was wrapped in an action and it contained
another sequence as an element, the generator thought that the inner
sequence was wrapped in an action too.
For example, the following grammar:
start = ("a" "b") "c" { return "x"; }
was compiled as if it looked like this:
start = ("a" "b" { return "x"; }) "c" { return "x"; }
This commit fixes the problem by resetting |context.action| correctly.
Fixes GH-168.
Code that calculated which part of the input to match against a literal
was wrong in case of case-insensitive literals when generating
speed-optimized parsers. As a result, matching of case-insensitive
literals worked only at the end of the input (where a too-big length
passed to the |substr| method didn't matter).
Fixes GH-153.
The deduplication skipped over an expected string right after the one
that was removed because the index variable was incorrectly incremented
in that case.
Based on a patch by @fresheneesz:
https://github.com/dmajda/pegjs/pull/146
The compiler passes are now split into three stages:
* check -- passes that check for various error conditions
* transform -- passes that transform the AST (e.g. to perform
optimizations)
* generate -- passes that are related to code generation
Splitting the passes into stages is important for plugins. For example,
if a plugin wants to add a new optimization pass, it can add it at the
end of the "transform" stage without any knowledge about other passes it
contains. Similarly, if it wants to generate something else than the
default code generator does from the AST, it can just replace all passes
in the "generate" stage by its own one(s).
More generally, the stages make it possible to write plugins that do not
depend on names and actions of specific passes (which I consider
internal and subject to change), just on the definition of stages (which
I consider a public API to which semver rules apply).
Implements part of GH-106.
The |plugins| option allows users to use plugins that change how PEG.js
operates.
A plugin is any JavaScript object with a |use| method. After the user
calls |PEG.buildParser|, this method is called for each plugin with the
following two parameters:
* the PEG.js config that describes the grammar parser and compiler
  passes used to generate the parser
* the options passed by the user to |PEG.buildParser|
The plugin is expected to change the config as needed, possibly based on
the options passed by the user. It can e.g. change the grammar parser used,
change the compiler passes (including adding its own), etc. This way it
can extend PEG.js in a flexible way.
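A minimal plugin might look like this (a sketch; the pass layout
follows the stages commit described above, and |grammar| is assumed to
hold the grammar text):

    var myPlugin = {
      use: function(config, options) {
        // Append a custom pass, possibly based on |options|.
        config.passes.transform.push(function(ast) {
          // Transform the AST here.
        });
      }
    };

    PEG.buildParser(grammar, { plugins: [myPlugin] });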
Implements part of GH-106.
The |passes| parameter will make it possible to pass the list of passes from
|PEG.buildParser|. This will be used by plugins. The old way via setting
the |appliedPassNames| property is removed.
Implements part of GH-106.
This is a complete rewrite of the PEG.js code generator. Its goals are:
1. Allow optimizing the generated parser code for code size as well as
for parsing speed.
2. Prepare ground for future optimizations and big features (like
incremental parsing).
3. Replace the old template-based code-generation system with
something more lightweight and flexible.
4. General code cleanup (structure, style, variable names, ...).
New Architecture
----------------
The new code generator consists of two steps:
* Bytecode generator -- produces bytecode for an abstract virtual
machine
* JavaScript generator -- produces JavaScript code based on the
bytecode
The abstract virtual machine is stack-based. Originally I wanted to make
it register-based, but it turned out that all the code related to it
would be more complex and the bytecode itself would be longer (because
of explicit register specifications in instructions). The only downsides
of the stack-based approach seem to be a few small inefficiencies (see
e.g. the |NIP| instruction), which seem to be insignificant.
The new generator allows optimizing for parsing speed or code size (you
can choose using the |optimize| option of the |PEG.buildParser| method
or the --optimize/-o option on the command-line).
When optimizing for size, the JavaScript generator emits the bytecode
together with its constant table and a generic bytecode interpreter.
Because the interpreter is small and the bytecode and constant table
grow only slowly with size of the grammar, the resulting parser is also
small.
When optimizing for speed, the JavaScript generator just compiles the
bytecode into JavaScript. The generated code is relatively efficient, so
the resulting parser is fast.
Internal Identifiers
--------------------
As a small bonus, all internal identifiers visible to user code in the
initializer, actions and predicates are prefixed by |peg$|. This lowers
the chance that identifiers in user code will conflict with the ones
from PEG.js. It also makes using any internals in user code ugly, which
is a good thing. This solves GH-92.
Performance
-----------
The new code generator improved parsing speed and parser code size
significantly. The generated parsers are now:
* 39% faster when optimizing for speed
* 69% smaller when optimizing for size (without minification)
* 31% smaller when optimizing for size (with minification)
(Parsing speed was measured using the |benchmark/run| script. Code size
was measured by generating parsers for examples in the |examples|
directory and adding up the file sizes. Minification was done by |uglify
--ascii| in version 1.3.4.)
Final Note
----------
This is just a beginning! The new code generator lays a foundation upon
which many optimizations and improvements can (and will) be made.
Stay tuned :-)
Previously, the report-left-recursion and report-missing-rules passes
used PEG.GrammarError without requiring it, causing a ReferenceError.
Since requiring lib/peg.js would cause circular requirements, this
commit imports lib/grammar-error.js as GrammarError.
The bug was introduced in commit
4cda79951a.
Fixes GH-135.
When called inside an action, the |text| function returns the text
matched by action's expression. It can be also called inside an
initializer or a predicate where it returns an empty string.
The |text| function will be useful mainly in cases where one needs a
structured representation of the input and simultaneously the raw text.
Until now, the only way to get the raw text in these cases was to
painfully build it from the structured representation.
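For example (a sketch):

    identifier = [a-zA-Z_] [a-zA-Z0-9_]* {
        return { type: "identifier", name: text() };
      }

Here the action returns the raw matched text via |text()| instead of
rebuilding it from the structured match results.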
Fixes GH-131.
Implement a new syntax to extract matched strings from expressions. For
example, instead of:
identifier = first:[a-zA-Z_] rest:[a-zA-Z0-9_]* { return first + rest.join(""); }
you can now just write:
identifier = $([a-zA-Z_] [a-zA-Z0-9_]*)
This is useful mostly for "lexical" rules at the bottom of many
grammars.
Note that structured match results are still built for the expressions
prefixed by "$", they are just ignored. I plan to optimize this later
(sometime after the code generator rewrite).
Cache the last reported position info. If the position advances, the
code uses the cache and only computes the difference. If the position
goes back, the cache is simply dropped.
Getting rid of the |trackLineAndColumn| simplifies the code generator
(by unifying two paths in the code).
The |line| and |column| functions currently always compute all the
position info from scratch, which is horribly inefficient. This will be
improved in later commit(s).
This will make it possible to compute position data lazily and get rid of the
|trackLineAndColumn| option without affecting performance of generated
parsers that don't use position data.
Before this commit, incorrect regexps were produced for classes starting
with "\^". For example, this grammar:
start = [\^a]
didn't match "a" because the generated regexp inside the parser was
/^[^a]/, not /^[\^a]/ as it should be.
This commit fixes the issue by escaping "^" in |quoteForRegexpClass|.
Fixes GH-125.
Before this commit, |PEG.buildParser| always returned a parser object.
The only way to get its source code was to call the |toSource| method on
it. While this method worked for parsers produced by |PEG.buildParser|
directly, it didn't work for parsers instantiated by executing their
source code. In other words, it was unreliable.
This commit removes the |toSource| method on generated parsers and
introduces a new |output| option to |PEG.buildParser|. It allows callers
to specify whether they want to get back the parser object
(|options.output === "parser"|) or its source code (|options.output ===
"source"|). This is much better and more reliable API.
Includes:
* Moving the source code from /src to /lib.
* Adding an explicit file list to package.json.
* Updating the Makefile.
* Updating the spec and benchmark suites and their READMEs.
Part of a fix for GH-32.
The source code is now in the src directory. The library needs to be
built using "rake", which creates the lib/peg.js file by combining the
source files.
1. |PEG.Compiler| -> |PEG.compiler|
2. |PEG.grammarParser| -> |PEG.parser|
This brings us closer to the desired structure of the PEG object, which
is:
    +- PEG
       |- parser
       +- compiler
          |- checks
          |- passes
          +- emitter
These are the only things (together with the |PEG.buildParser| function
and exceptions) that I want to be publicly accessible -- as extension
points and also for easy testing of PEG.js's components.