| Commit message | Author | Age | Files | Lines |
|
You can now specify relationships directly in the language:
File[/foo] -> Service[bar]
Specifies a normal dependency while:
File[/foo] ~> Service[bar]
Specifies a subscription.
You can also do relationship chaining, specifying multiple
relationships on a single line:
File[/foo] -> Package[baz] -> Service[bar]
Note that, while it can be confusing, the arrows don't all have to
point in the same direction:
File[/foo] -> Service[bar] <~ Package[baz]
This can provide some succinctness at the cost of readability.
You can also specify full resources, rather than just
resource refs:
file { "/foo": ensure => present } -> package { bar: ensure => installed }
But wait! There's more! You can also specify a collection on either side
of the relationship marker:
yumrepo { foo: .... }
package { bar: provider => yum, ... }
Yumrepo <| |> -> Package <| provider == yum |>
This finally provides easy many-to-many relationships in Puppet, but it also
opens the door to massive dependency cycles. This last feature is a very
powerful stick, and you can hurt yourself with it considerably.
Signed-off-by: Luke Kanies <luke@puppetlabs.com>
|
It's about 10x faster to read the whole file than to read each line and
concatenate them (actually, it's O(n) vs. O(n^2), so the exact speedup
depends on the file size).
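For illustration, a minimal Ruby sketch of the two approaches (the method
names are made up; this is not the actual code). Appending line by line
copies the accumulated buffer on every append, while a single read is linear:
  # Quadratic: each += copies everything accumulated so far.
  def read_by_lines(path)
    contents = ""
    File.open(path) { |f| f.each_line { |line| contents += line } }
    contents
  end

  # Linear: one read of the whole file.
  def read_whole_file(path)
    File.read(path)
  end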
|
From email:
Some of the errors I needed to track down were actually coming from my
string interpolation branch:
* I wasn't handling "Foo ${1} bar" as a regexp back reference (and I don't like it, but hey)
* I wasn't warning about & passing on the "unneeded" backslash in strings like 'foo\"bar'
* I fumbled part of the conflict resolution with Brice's hash patch.
|
This patch moves the syntactic aspects of string interpolation up
into the lexer/parser phase, preparatory to moving the semantic
portions down to the as yet unnamed futures resolution phase.
This is an enabling move, designed to allow:
* Futures resolution in and between interpolated strings
* Interpolation of hash elements into strings
* Removal of certain order-dependent paths
* Further modularization of the lexer/parser
The key change is switching from viewing strings with interpolation
as single lexical entities (which await later special case processing)
to viewing them as formulas for constructing strings, with the internal
structure of the string exposed by the parser.
Thus a string like:
"Hello $name, are you enjoying ${language_feature}?"
internally becomes something like:
concat("Hello ",$name,", are you enjoying ",$language_feature,"?")
where "concat" is an internal string concatenation function.
A few test cases to show the user observable effects of this change:
notice("string with ${'a nested single quoted string'} inside it.")
$v2 = 3+4
notice("string with ${['an array ',3,'+',4,'=',$v2]} in it.")
notice("string with ${(3+5)/4} nested math ops in it.")
...and so forth.
The key changes in the internals are:
* Unification of SQTEXT and DQTEXT into a new token type STRING (since
nothing past the lexer cares about the distinction).
* Creation of several new token types to represent the components of
an interpolated string:
DQPRE The initial portion of an interpolated string
DQMID The portion of a string betwixt two interpolations
DQPOST The final portion of an interpolated string
DQCONT The as-yet-unlexed portion after an interpolation
Thus, in the example above (phantom curly braces added for clarity),
DQPRE "Hello ${
DQMID }, are you enjoying ${
DQPOST }?"
DQCONT is a bookkeeping token and is never generated.
* Creation of a DOLLAR_VAR token to strip the "$" off of variables
with explicit dollar signs, so that the VARIABLEs produced from
things like "Test ${x}" (where the "$" has already been consumed)
do not fail for want of a "$"
* Reworking the grammar rules in the obvious way
* Introduction of a "concatenation" AST node type (which will be going
away in a subsequent refactor).
Note finally that this is a component of a set of interrelated refactors,
and some of the changes around the edges of the above will only make
sense in the context of the other parts.
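As a rough illustration, the example string might lex into a token stream
along these lines (the token names follow this message, but the exact shapes
and values are a sketch, not the lexer's literal output):
  # Hypothetical token stream for:
  #   "Hello $name, are you enjoying ${language_feature}?"
  tokens = [
    [:DQPRE,      "Hello "],               # opening segment, up to the first interpolation
    [:DOLLAR_VAR, "name"],                 # the "$" has already been consumed
    [:DQMID,      ", are you enjoying "],  # segment between two interpolations
    [:DOLLAR_VAR, "language_feature"],
    [:DQPOST,     "?"],                    # closing segment
  ]
  # The parser then reduces this to the equivalent of:
  #   concat("Hello ", $name, ", are you enjoying ", $language_feature, "?")
  p tokens.map { |name, _| name }  # => [:DQPRE, :DOLLAR_VAR, :DQMID, :DOLLAR_VAR, :DQPOST]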
|
This is my proposed attack on the lexing problem, with a few minor
cleanups to simplify its integration. The strategy:
* Annotate tokens with a method "acceptable?" that determines whether
they can be generated in a given context. Have this default
to true.
* Give the lexer the notion of a context; initialize it and
update it as needed. The present context records the name of
the last significant token generated and a start_of_line flag.
* When a token is found to match, check if it is acceptable in
the present context before generating it.
These changes don't result in any change in behaviour by themselves, but they
enable:
* Give the REGEX token an acceptable? rule that only permits a
regular expression in specific contexts.
The other changes were a fix for the scan bug Brice reported, an
adjusted test, and some cleanup of cluttered conditions in the
context collection path.
Added tests and subsumed the change restricting REGEX to one line.
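For illustration, a minimal Ruby sketch of the idea, assuming a simplified
token structure (this is not Puppet's actual Token class, and the list of
allowed contexts is made up):
  # A token that can refuse to match in a given lexing context.
  Token = Struct.new(:name, :regex, :acceptable) do
    def acceptable?(context)
      acceptable ? acceptable.call(context) : true   # default: always acceptable
    end
  end

  # The context records the last significant token and a start-of-line flag.
  context = { :last_token => :MATCH, :start_of_line => false }

  # Illustrative rule: only allow a REGEX right after tokens where a regex can
  # legally appear, never after e.g. a NUMBER, where '/' must mean division.
  REGEX_ALLOWED_AFTER = [:MATCH, :NOMATCH, :NODE, :LBRACE, :COMMA]
  regex_token = Token.new(:REGEX, %r{/[^/\n]*/}, lambda { |ctx|
    ctx[:start_of_line] || REGEX_ALLOWED_AFTER.include?(ctx[:last_token])
  })

  puts regex_token.acceptable?(context)                                      # => true
  puts regex_token.acceptable?({ :last_token => :NUMBER, :start_of_line => false })  # => false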
|
This is not the real fix; it is just a hot-fix to limit
the issue.
The issue is that the lexer's regex token takes precedence over a
simple '/' (divide).
In the following expression:
$var = 4096 / 4
$var2 = "/tmp/file"
the "/ 4..." part is mis-lexed as a regex instead of a mathematical
expression.
The current fix limits regexes to a single line.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
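For illustration, a small Ruby sketch of the ambiguity (the patterns are
illustrative, not the lexer's actual regexes):
  require 'strscan'

  source  = "$var = 4096 / 4\n$var2 = \"/tmp/file\"\n"
  scanner = StringScanner.new(source)
  scanner.skip(/[^\/]+/)                       # advance to the first '/'

  # A regex-literal pattern that may span lines swallows everything up to the
  # '/' inside "/tmp/file" on the next line:
  puts scanner.check(%r{/[^/]*/}).inspect      # => "/ 4\n$var2 = \"/"
  # The hot-fix pattern forbids newlines inside a regex literal, so it fails
  # to match here and the '/' can be lexed as division instead:
  puts scanner.check(%r{/[^/\n]*/}).inspect    # => nil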
|
Signed-off-by: Luke Kanies <luke@madstop.com>
|
The lexer recognizes regexes delimited by '/', as in:
/^$/
The match operator is =~.
The negated match operator is !~.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
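For illustration, a minimal StringScanner-based sketch of recognising the
three new tokens (the token names and patterns here are illustrative, not
the actual lexer table):
  require 'strscan'

  # Illustrative token patterns; not Puppet's actual lexer table.
  TOKENS = {
    :NOMATCH => /!~/,
    :MATCH   => /=~/,
    :REGEX   => %r{/[^/\n]*/},    # a regex literal such as /^$/
  }

  def lex_one(scanner)
    TOKENS.each do |name, pattern|
      value = scanner.scan(pattern)
      return [name, value] if value
    end
    nil
  end

  s = StringScanner.new("=~ /^www\\d+$/")
  p lex_one(s)          # => [:MATCH, "=~"]
  s.skip(/\s+/)
  p lex_one(s)          # => [:REGEX, "/^www\\d+$/"]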
|
Because we associate documentation with statements in the lexer and
not in the parser (doing it in the parser would be too complex and
unmaintainable), and because the parser reads new tokens before
reducing the current statement (and thus before creating its AST
node), we could sometimes associate comments that appear after a
statement with that statement.
Ex:
1. $foo = 1
2. # doc of next class
3. class test {
When we parse the first line, the parser can reduce it to the
correct VarDef only after it has lexed the CLASS token.
But lexing that token means we have already pushed the
"doc of next class" comment onto the comment stack.
So at the time we create the AST VarDef node, the parser thinks
it should associate this documentation with it, which is incorrect.
Now that the parser uses token line numbers, we can enhance the lexer
to associate comments with the current AST node only if the
statement's line number is greater than or equal to the last
comment's line number.
This makes it impossible to associate a comment with a statement
that appears earlier in the source.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
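For illustration, a rough Ruby sketch of that rule (the class and method
names are hypothetical, not the lexer's real interface):
  # A stashed comment is only attached to a statement whose line number is
  # >= the comment's line number.
  class CommentStash
    def initialize
      @comments = []                 # [line_number, text] pairs, in source order
    end

    def push(line, text)
      @comments << [line, text]
    end

    # Called when an AST statement node is created, with the line number of
    # the token that really starts the statement.
    def doc_for(statement_line)
      usable, rest = @comments.partition { |line, _| line <= statement_line }
      @comments = rest
      usable.map { |_, text| text }.join("\n")
    end
  end

  stash = CommentStash.new
  stash.push(2, "doc of next class")
  # The VarDef on line 1 asks for its doc: the comment on line 2 came later,
  # so it is kept for the class on line 3 instead.
  puts stash.doc_for(1).inspect   # => ""
  puts stash.doc_for(3).inspect   # => "doc of next class"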
|
Careful inspection of the parser code shows that when we
associate a source line number with an AST node, we use the
line number of the currently lexed token.
In many cases this is correct, but in some cases it is not.
Unfortunately, due to how LALR parsers work, the AST node for a
statement can be created _after_ we have already lexed a token beyond
that statement:
1. $foo = 1
2.
3. class test
When the parser asks for the class token, it can reduce the
assignment statement to the AST VarDef node, because no other
grammar rule matches. Unfortunately we have already lexed the class
token, so we assign line number 3 to the VarDef node instead of 1.
This is not a real issue for error reporting, but it becomes a real
concern when we associate documentation comments with AST nodes for
puppetdoc.
The solution is to enhance the tokens lexed and returned to the parser
so that they carry their declaration line number.
A token value thus becomes a hash: { :value => tokenvalue, :line => line }
Then, each time we create an AST node, we use the line number of
the correct token (i.e. the line number of foo in the previous example).
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
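For illustration, a minimal sketch of the change with a simplified token
stream (the token names and shapes are illustrative, not the parser's real
data structures):
  # Before: the lexer yields bare values, so by the time "$foo = 1" is
  # reduced, the current line is already 3 (the just-lexed 'class' token).
  # After: each token carries the line it was lexed on.
  tokens = [
    [:VARIABLE, { :value => "foo",   :line => 1 }],
    [:EQUALS,   { :value => "=",     :line => 1 }],
    [:NAME,     { :value => "1",     :line => 1 }],
    [:CLASS,    { :value => "class", :line => 3 }],
  ]

  # When the parser reduces the assignment, it builds the VarDef from the
  # first token of the statement, not from the lexer's current position.
  name_token = tokens[0][1]
  vardef = { :type => :VarDef, :name => name_token[:value], :line => name_token[:line] }
  p vardef   # => {:type=>:VarDef, :name=>"foo", :line=>1}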
|
Comments and multi-line comments produce no token per se during
lexing, so the lexer loops to find another token.
The issue was that we were not skipping whitespace after finding
such a non-token.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
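For illustration, a small StringScanner-based sketch of the fix (this is a
simplified stand-in, not the actual lexer loop): after consuming a comment,
which yields no token, the loop must also skip whitespace before trying the
next token.
  require 'strscan'

  def next_token(scanner)
    loop do
      scanner.skip(/[ \t\r\n]+/)              # the whitespace skip that was missing
      return nil if scanner.eos?
      next if scanner.skip(/#[^\n]*/)         # single-line comment: no token
      next if scanner.skip(%r{/\*.*?\*/}m)    # multi-line comment: no token
      return [:NAME, scanner.scan(/\w+/)] if scanner.check(/\w/)
      return [:OTHER, scanner.getch]
    end
  end

  s = StringScanner.new("/* banner */   foo # trailing\n  bar")
  p next_token(s)   # => [:NAME, "foo"]
  p next_token(s)   # => [:NAME, "bar"]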
|
This involves lexing '::class' tokens along with correctly
looking them up from the Resource::Reference class.
Signed-off-by: Luke Kanies <luke@madstop.com>
|
The lexer maintains a stack of the last seen comments.
On blank lines, the lexer flushes these comments.
On each opening brace, the lexer enters a new stack level.
On each block AST node, the stack is popped.
Each AST node has a doc property that is filled with the
last seen comments at node creation (in fact only for the important
nodes that represent statements).
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
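For illustration, a condensed Ruby sketch of that bookkeeping (a simplified
stand-in, not the real lexer class):
  class DocStack
    def initialize
      @stack = [[]]               # one list of pending comments per nesting level
    end

    def comment(text); @stack.last << text;    end  # a '# ...' line was seen
    def blank_line;    @stack.last.clear;      end  # blank lines flush pending comments
    def open_brace;    @stack.push([]);        end  # '{' opens a new stack level
    def close_block;   @stack.pop;             end  # popped when the block's AST node is built
    def doc;           @stack.last.join("\n"); end  # becomes the new node's doc property
  end

  docs = DocStack.new
  docs.comment("doc for the backup class")
  docs.open_brace                          # entering 'class backup {'
  docs.comment("doc for an inner resource")
  puts docs.doc                            # => "doc for an inner resource"
  docs.close_block
  puts docs.doc                            # => "doc for the backup class"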
|
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
|
Signed-off-by: Luke Kanies <luke@madstop.com>
|
Expressions can be used in 'if' tests and on the right-hand side
of assignments.
They can contain any number of sub-expressions combined with
arithmetic, comparison, or boolean operators.
Random Usage Examples:
$result = ((( $two + 2) / $one) + 4 * 5.45) - (6 << 7) + (0x800 + -9)
or
if ($a < 10) and ($a + 10 != 200) {
...
}
|
The append variable operator can be used to append something to
a variable defined in a parent scope that contains either a string
or an array.
The main use is to append array elements, within classes, to a
variable defined globally in a node.
Example:
$ssh_users = ['brice', 'admin1']
class backup {
$ssh_users += ['backup_operator']
...
}
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
|
including not compiling the configurations, and also storeconfigs
is no longer required during parse-testing.
|
classes for managing how the tokens work.
I also moved the tests to RSpec, but I didn't rewrite all of them.
|
lexer. Updated CLASSREF token regex in the lexer.
|
parameters
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@2670 980ebf18-57e1-0310-9a29-db15c13687c0
|
notification of what was expected in most cases
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@2531 980ebf18-57e1-0310-9a29-db15c13687c0
|
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@2522 980ebf18-57e1-0310-9a29-db15c13687c0
|
http://mail.madstop.com/pipermail/puppet-users/2007-April/002398.html .
You can now retrieve qualified variables by specifying the full class path.
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@2393 980ebf18-57e1-0310-9a29-db15c13687c0
|
iterative evaluation, with collections being evaluated first. This way collections can find resources that either are inside defined types or are the types themselves.
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@1967 980ebf18-57e1-0310-9a29-db15c13687c0
|
significant rewrite of the parser, but it has little effect on the rest of the code tree.
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@1726 980ebf18-57e1-0310-9a29-db15c13687c0
|
#271.
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@1610 980ebf18-57e1-0310-9a29-db15c13687c0
|
worth the priority I suddenly placed on them).
First, it adds search paths as I originally requested in #114. There is
now a 'lib' setting, which can be used to tell Puppet where to find
manifests. Any file you tell Puppet to parse will have its directory
automatically added to the lib path. Also, Puppet will check the
PUPPETLIB environment variable for further directories to search.
Second, it converts the 'import' mechanism into a normal function, which
means that you can now use variables and what-have-you in it. Of
course, this function uses the lib mechanism. This is something that's
always bothered me about the language, and having it fixed means you can
do simple things like have custom code in the top scope for each
operating system and then do "import os/$operatingsystem" to evaluate
that code. Without this, you would either need a huge case statement or
the code would need to be in a class, which often isn't sufficient.
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@1605 980ebf18-57e1-0310-9a29-db15c13687c0
|
start, anyway.
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@1483 980ebf18-57e1-0310-9a29-db15c13687c0
|
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@1245 980ebf18-57e1-0310-9a29-db15c13687c0
|
the collection from the database up to adding the objects to the current scope, which is what sends it to the client.
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@1190 980ebf18-57e1-0310-9a29-db15c13687c0
|
recognizes it, the AST objects retain the settings, the scopes do the right conversion, the interpreter stores them all in the database, and then it strips the collectable objects out before sending the object list to the client
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@1189 980ebf18-57e1-0310-9a29-db15c13687c0
|
now very easy to add new functions. There is a pretty crappy, hardwired distinction between functions that return values and those that do not, but I do not see a good way around it right now. Functions are also currently responsible for handling their own arity, although I have plans for fixing that.
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@1134 980ebf18-57e1-0310-9a29-db15c13687c0
|
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@1106 980ebf18-57e1-0310-9a29-db15c13687c0
|
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@1103 980ebf18-57e1-0310-9a29-db15c13687c0
|
by a NAME or by single quoted text, i.e. fully qualified names for nodes must be enclosed in single quotes
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@1064 980ebf18-57e1-0310-9a29-db15c13687c0
|
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@1047 980ebf18-57e1-0310-9a29-db15c13687c0
|
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@923 980ebf18-57e1-0310-9a29-db15c13687c0
|
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@760 980ebf18-57e1-0310-9a29-db15c13687c0
|
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@755 980ebf18-57e1-0310-9a29-db15c13687c0
|
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@700 980ebf18-57e1-0310-9a29-db15c13687c0
|
all fixes for bugs I found as a result. I have not tried to execute the configuration yet.
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@687 980ebf18-57e1-0310-9a29-db15c13687c0
|
strings; all fixed now, and all tests pass again, including the new tests that cover the bugs I found
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@652 980ebf18-57e1-0310-9a29-db15c13687c0
|
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@618 980ebf18-57e1-0310-9a29-db15c13687c0
|
selectors and case statements
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@587 980ebf18-57e1-0310-9a29-db15c13687c0
|
known-failing certificate test, but there appear to be some errors that are incorrectly not resulting in failures. I will track those down ASAP.
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@576 980ebf18-57e1-0310-9a29-db15c13687c0
|