Instead of testing arguments.length to see whether an optional parameter
was passed to a function, compare its value to "undefined". This
approach has two advantages:
* It is in line with handling of default parameters in ES6.
* Optional parameters are actually spelled out in the parameter
list.
There is also one important disadvantage, namely that it's impossible to
pass "undefined" as an optional parameter value. This required a small
change in two tests.
Additional notes:
* Default parameter values are set in assignments immediately
after the function header. This reflects the fact that these
assignments really belong to the parameter list (which is where they
are in ES6).
* Parameter values are checked against "void 0" in places where
"undefined" can potentially be redefiend.
The "toParse" matcher in generated-parser-behavior.spec.js effectively
had these signatures:
  toParse(input)
  toParse(input, expected)
  toParse(input, options, expected)
This commit regularizes them to:
  toParse(input)
  toParse(input, expected)
  toParse(input, expected, options)
Similarly, the "toFailToParse" matcher in
generated-parser-behavior.spec.js effectively had these signatures:
  toFailToParse(input)
  toFailToParse(input, details)
  toFailToParse(input, options, details)
This commit regularizes them to:
  toFailToParse(input)
  toFailToParse(input, details)
  toFailToParse(input, details, options)
Finally, the "toChangeAST" matcher in helpers.js effectively had these
signatures:
  toChangeAST(grammar, details)
  toChangeAST(grammar, options, details)
This commit regularizes them to:
  toChangeAST(grammar, details)
  toChangeAST(grammar, details, options)
The overall purpose of these changes is to avoid different parameters
appearing at the same position, which is hard to manage without using
"arguments".
Labels in expressions like "(a:"a")" or "(a:"a" b:"b" c:"c")" were
visible to the outside despite being wrapped in parens. This commit
makes them invisible, as they should be.
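For example (illustrative grammar), the label |a| below is now scoped to
the parenthesized expression and is no longer visible in the outer action:

  start = (a:"a") "b" { /* |a| is not in scope here */ }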
Note this required introduction of a new "group" AST node, whose purpose
is purely to provide label scope isolation. This was necessary because
"label" and "sequence" nodes don't (and can't!) provide this isolation
themselves.
Part of a fix of #396.
Semantic predicate and action specs which verified label scope didn't
exercise labels in subexpressions. This commit adds cases exercising
them, including a few commented-out cases which reveal #396 (these will be
uncommented when the bug gets fixed).
Note that the added specs exercise all relevant expression types. This is
needed because the code that makes subexpression labels invisible to the
outside is separate for each expression type, so one generic test wouldn't
provide enough coverage.
Part of a fix of #396.
So far, semantic predicate and action specs which verified scope of
labels from containing or outer sequences exercised only cases where
label variables were defined. This commit also adds some negative cases.
The idea comes from @Mingun.
Semantic predicate and action specs which verified scope of labels from
outer sequences exercised all relevant expression types. This is not
needed as the behavior is common for all expression types and no extra
code needs to be written to make it work for each of them.
This commit changes the specs to verify scope of labels from outer
sequences using only one expression type.
Semantic predicate specs which verified scope of labels from containing
sequences used 3 elements where 1 is enough.
This commit removes the redundant elements.
Semantic predicate and action specs which verified label scope used
repetitive "it" blocks. This commit rewrites them to use just one "it"
block and a list of testcases, which makes them more concise.
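A sketch of the resulting pattern (hypothetical test cases, not the actual
spec contents):

  it("can access labels of the sequence", function() {
    var testcases = [
      { grammar: 'start = a:"a" &{ return a === "a"; }',        input: "a"  },
      { grammar: 'start = a:"a" b:"b" &{ return b === "b"; }',  input: "ab" }
    ];

    testcases.forEach(function(testcase) {
      expect(testcase.grammar).toParse(testcase.input);
    });
  });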
Semantic predicate specs which verified label scope also checked parser
results. This is not necessary because for the purpose of these specs it
is enough to verify that label variables have correct values, which is
done in predicate code already.
This commit removes parser result checks from these specs.
Before this commit, generated parsers considered the following character
sequences as newlines:
  Sequence   Description
  ------------------------------
  "\n"       Unix
  "\r"       Old Mac
  "\r\n"     Windows
  "\u2028"   line separator
  "\u2029"   paragraph separator
This commit limits the recognized sequences to "\n" and "\r\n" only. The
reason is that nobody uses Unicode newlines or a standalone "\r" in
practice.
A positive side effect of the change is that newline-handling code
became simpler (and likely faster).
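A sketch of the simplified logic (an assumed helper, not the actual
generated code):

  // Only "\n" and "\r\n" count as newlines now.
  function newlineLength(input, pos) {
    if (input.charAt(pos) === "\n")      { return 1; }
    if (input.substr(pos, 2) === "\r\n") { return 2; }
    return 0;
  }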
Instead of setting ESLint environment to "node" globally, set it on
per-directory basis using separate .eslintrc.json files:
  Directory   Environment
  -----------------------
  bin         node
  lib         commonjs
  spec        jasmine
It was impossible to use this approach for the "benchmark" directory
which contains a mix of files used in various environments. For
benchmark/run, the environment is set inline. For the other files, as
well as spec/helpers.js, the globals are declared manually (it is
impossible to express how these files are used just by a list of
environments).
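For illustration, the per-directory configuration for "spec" would look
something like this (the exact file contents may differ):

  {
    "env": {
      "jasmine": true
    }
  }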
Fixes #408.
Fix the following errors:
   59:35  error  "options" is defined but never used  no-unused-vars
   91:11  error  "plugin" is defined but never used   no-unused-vars
  102:35  error  "options" is defined but never used  no-unused-vars
  128:35  error  "options" is defined but never used  no-unused-vars
Note that ESLint revealed a real problem: the test that supposedly
verified that a plugin receives options didn't actually verify anything.
Implement the swap and change various directives in the source code. The
"make hint" target becomes "make lint".
The change leads to quite a few errors being reported by ESLint. These
will be fixed in subsequent commits.
Note the configuration enables just the recommended rules. Later I plan
to enable more rules to enforce the coding standard. The configuration
also sets the environment to "node", which is far from ideal as the
codebase contains a mix of CommonJS, Node.js and browser code. I hope to
clean this up at some point.
The |found| property wasn't very useful as it mostly contained just one
character or |null| (the exception being syntax errors triggered by
|error| or |expected|). Similarly, the "but XXX found" part of the error
message (based on the |found| property) wasn't very useful either and was
redundant in the presence of location info.
For these reasons, this commit removes the |found| property and
corresponding part of the error message from syntax errors. It also
modifies error location info slightly to cover a range of 0 characters,
not 1 character (except when the error is triggered by |error| or
|expected|). This corresponds more precisely to the actual situation.
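For example, a syntax error produced in the middle of the input now
carries location info along these lines (illustrative values):

  {
    start: { offset: 4, line: 1, column: 5 },
    end:   { offset: 4, line: 1, column: 5 }
  }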
Fixes #372.
Report left recursion also in cases where the recursive rule invocation
is not a direct element of a sequence, but is wrapped inside an
expression.
Fixes #359.
Add missing |named| case to the visitor in lib/compiler/asts.js, which
makes the infinite loop and left recursion detectors work correctly with
named rules.
The missing case caused |make parser| to fail with:
  140:34: Infinite loop detected.
  make: *** [parser] Error 1
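Since a |named| node simply wraps its expression, the added case
presumably just delegates to that expression, roughly like this
(illustrative only; |check| stands for the detector's per-node function
and is not the exact code from lib/compiler/asts.js):

  named: function(node) { return check(node.expression); }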
Replace |line|, |column|, and |offset| properties of tracing events with
the |location| property. It contains an object similar to the one
returned by the |location| function available in action code:
  {
    start: { offset: 23, line: 5, column: 6 },
    end:   { offset: 25, line: 5, column: 8 }
  }
For the |rule.match| event, |start| refers to the position at the
beginning of the matched input and |end| refers to the position after
the end of the matched input.
For |rule.enter| and |rule.fail| events, both |start| and |end| refer to
the current position at the time the rule was entered.
Replace |line|, |column|, and |offset| properties of |SyntaxError| with
the |location| property. It contains an object similar to the one
returned by the |location| function available in action code:
  {
    start: { offset: 23, line: 5, column: 6 },
    end:   { offset: 25, line: 5, column: 8 }
  }
For syntax errors produced in the middle of the input, |start| refers to
the first unparsed character and |end| refers to the character after it
(meaning the span is 1 character). This corresponds to the portion of
the input in the |found| property.
For syntax errors produced at the end of the input, both |start| and |end|
refer to a character past the end of the input (meaning the span is 0
characters).
For syntax errors produced by calling |expected| or |error| functions in
action code the location info is the same as the |location| function
would return.
Perform the following renames:
* |reportedPos| -> |savedPos| (abstract machine variable)
* |peg$reportedPos| -> |peg$savedPos| (variable in generated code)
* |REPORT_SAVED_POS| -> |LOAD_SAVED_POS| (instruction)
* |REPORT_CURR_POS| -> |UPDATE_SAVED_POS| (instruction)
The idea is that the name |reportedPos| is no longer accurate after the
|location| change (see the previous commit) because now both
|reportedPos| and |currPos| are reported to user code. Renaming to
|savedPos| resolves this inaccuracy.
There is probably a better name for the concept than the rather generic
|savedPos|, but it doesn't come to me.
Replace |line|, |column|, and |offset| functions with the |location|
function. It returns an object like this:
  {
    start: { offset: 23, line: 5, column: 6 },
    end:   { offset: 25, line: 5, column: 8 }
  }
In actions, |start| refers to the position at the beginning of the
action's expression and |end| refers to the position after the end of the
action's expression. This allows one to easily add location info e.g. to AST
nodes created in actions.
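For example (illustrative grammar), an action can attach location info to
the node it builds:

  integer = digits:[0-9]+ {
    return { type: "integer", value: digits.join(""), location: location() };
  }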
In predicates, both |start| and |end| refer to the current position.
Fixes #246.
So far, the left recursion detector assumed that left recursion occurs
only when the recursive rule is at the very left-hand side of a rule's
expression:
  start = start
This didn't catch cases like this:
  start = "a"? start
In general, if a rule reference can be reached without consuming any
input, it can lead to left recursion. This commit fixes the detector to
consider that.
Fixes #190.
Parsers can now be generated with support for tracing using the --trace
CLI option or a boolean |trace| option to |PEG.buildParser|. This makes
them trace their progress, which can be useful for debugging. Parsers
generated with tracing support are called "tracing parsers".
When a tracing parser executes, by default it traces the rules it enters
and exits by writing messages to the console. For example, a parser
built from this grammar:
  start = a / b
  a = "a"
  b = "b"
will write this to the console when parsing input "b":
  1:1 rule.enter start
  1:1 rule.enter a
  1:1 rule.fail a
  1:1 rule.enter b
  1:2 rule.match b
  1:2 rule.match start
You can customize tracing by passing a custom *tracer* to the parser's
|parse| method using the |tracer| option:
  parser.parse(input, { tracer: tracer });
This will replace the built-in default tracer (which writes to the
console) with the tracer you supplied.
The tracer must be an object with a |trace| method. This method is
called each time a tracing event happens. It takes one argument which is
an object describing the tracing event.
Currently, three events are supported:
* rule.enter -- triggered when a rule is entered
* rule.match -- triggered when a rule matches successfully
* rule.fail -- triggered when a rule fails to match
These events are triggered in nested pairs -- for each rule.enter event
there is a matching rule.match or rule.fail event.
The event object passed as an argument to |trace| contains these
properties:
* type -- event type
* rule -- name of the rule the event is related to
* offset -- parse position at the time of the event
* line -- line at the time of the event
* column -- column at the time of the event
* result -- rule's match result (only for rule.match event)
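For example, a minimal custom tracer could look like this (a sketch based
on the event properties above, not the built-in default tracer):

  var tracer = {
    trace: function(event) {
      console.log(event.line + ":" + event.column + " " + event.type + " " + event.rule);
    }
  };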
The whole tracing API is somewhat experimental (which is why it isn't
documented properly yet) and I expect it will evolve over time as
experience is gained.
The default tracer is also somewhat bare-bones. I hope that the PEG.js user
community will develop more sophisticated tracers over time and I'll be
able to integrate their best ideas into the default tracer.
Action and predicate code can now see variables defined in expressions
"above" them.
Based on a pull request by Bryon Vandiver (@asterick):
https://github.com/pegjs/pegjs/pull/180
Fixes #316.
The generated parser API specs are mostly extracted from
generated-parser.spec.js, which got renamed to
generated-parser-behavior.spec.js to better reflect its purpose.
Unit specs are unit tests of internal stuff. API specs are tests of the
user-visible APIs and behavior.
I think it makes sense to make this distinction because the boundary of
the public API is then more clearly visible, e.g. when using the specs as
documentation.
The TEXT instruction now replaces the position at the top of the stack
with the input from that position up to the current position. These
semantics are simpler and cleaner than the previous ones, where TEXT also
popped an additional value from the stack and kept the position there.
Implement the following bytecode instructions:
* PUSH_UNDEFINED
* PUSH_NULL
* PUSH_FAILED
* PUSH_EMPTY_ARRAY
These instructions push simple JavaScript values to the stack directly,
without going through constants. This makes the bytecode slightly
shorter and the bytecode generator somewhat simpler.
Also note that PUSH_EMPTY_ARRAY allows us to avoid a hack which protects
the [] constant from modification.
The action/predicate code didn't have access to the parser object. This
was mostly a side effect of actions/predicates being implemented as nested
functions, in which |this| is a reference to the global object (an ugly
JavaScript quirk). The initializer, being implemented differently, had
access to the parser object via |this|, but this was not documented.
Because having access to the parser object can be useful, this commit
introduces a new |parser| variable which holds a reference to it, is
visible in action/predicate/initializer code, and is properly
documented.
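For example (illustrative grammar), action code can now refer to the
parser object through the new variable:

  start = "a" { return parser; }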
See also:
https://groups.google.com/forum/#!topic/pegjs/Na7YWnz6Bmg
This is mostly done for consistency with the JavaScript example grammar,
from which the |Identifier| rule is taken. See the previous commit
for details.
Instead of matching segments between blocks character by character,
match them as a whole. Also align the style with other similar rules
(e.g. the comment ones).