...
package providers.

The various REST SSL termini were never set up to use the
ca_server/ca_port when one is configured.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
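
A minimal sketch of the intended selection logic, assuming Puppet-style
setting names (ca_server, ca_port, server, masterport); this is an
illustration, not the actual terminus code:

    # Illustration only: fall back to the regular server/port when no
    # dedicated CA destination is configured.
    def ca_destination(settings)
      server = settings[:ca_server] || settings[:server]
      port   = settings[:ca_port]   || settings[:masterport]
      [server, port]
    end

    ca_destination(:server => "puppet", :masterport => 8140)
    # => ["puppet", 8140]
    ca_destination(:server => "puppet", :masterport => 8140,
                   :ca_server => "ca.example.com", :ca_port => 8141)
    # => ["ca.example.com", 8141]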

If there aren't any default mounts for plugins/modules, puppet
auto-creates them. The issue is that they don't have any
authorization attached, so they default to deny-all.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>

Actually, the issue is:
* when the web server gets the request, it creates an indirection
request, filling attributes like ip or node from the HTTP request.
To do this, all the interesting attributes are given in a hash
(called options; see P::I::Request#new).
Once the request is properly initialized, the options hash no longer
contains the ip or node information (see set_attributes).
* the request is then transmitted to the file_serving layer,
which happily wants to use the node attribute to find environments or
perform authorization.
Unfortunately it fetches the node value from the request options hash,
not from the request itself.
Since this node information is empty, puppet fails to find the
proper mount point, and the download fails.
This change makes sure we pass the node all the way down, fixing
the authorization check.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
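
A minimal sketch of the bug's shape, with illustrative names rather
than Puppet's actual classes: initialization consumes :node from the
options hash, so later lookups must use the request attribute:

    class Request
      attr_reader :node, :options
      def initialize(options)
        @node    = options.delete(:node)   # attribute moved out of the hash here
        @options = options
      end
    end

    req = Request.new(:node => "agent01.example.com", :key => "modules/foo")
    req.options[:node]   # => nil -- the buggy lookup comes up empty
    req.node             # => "agent01.example.com" -- what file serving should use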

Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>

Thin storeconfigs is a limited version of storeconfigs that is
more performant and still allows the exported/collected resources
system, which is the primary use of storeconfigs.
It works by storing only the exported resources, tags
and host facts to the database.
Since those exported resources are usually fewer than the total
number of resources for a node, it is expected to be faster than
regular storeconfigs (especially for the first run).
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>

The issue is that when we convert a Puppet::Parser::Resource catalog
to a Puppet::Resource catalog before storing it to the database,
we don't allow virtual resources to be converted.
Unfortunately exported resources are virtual by design, and as
such aren't converted; we lose them, so it isn't possible
to store them in the database.
Unfortunately, the client will get the exported resources too.
The fix is twofold:
* we make sure exported resources are skipped when the transaction is
applied, as a last safeguard
* we filter the catalog through the catalog compiler terminus before
the catalog is returned to the client
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
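
An illustrative sketch of the two safeguards, relying on the fact
stated above that exported resources are virtual by design; the class
and method names here are made up for the example:

    class Res
      attr_reader :title
      def initialize(title, opts = {})
        @title    = title
        @exported = opts[:exported] || false
        @virtual  = opts[:virtual]  || @exported   # exported implies virtual
      end
      def exported?; @exported; end
      def virtual?;  @virtual;  end
    end

    catalog = [Res.new("File[/tmp/a]"),
               Res.new("Nagios_service[web]", :exported => true)]

    # safeguard 1: never apply exported resources in the transaction
    to_apply  = catalog.reject { |r| r.exported? }
    # safeguard 2: strip virtual (hence exported) resources before the
    # catalog is handed back to the client
    to_client = catalog.reject { |r| r.virtual? }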

The problem is that URI.escape by default doesn't escape '+' (and
some other characters), but some web frameworks (at least webrick)
unescape the query string behind Puppet's back, changing all '+'
to spaces and corrupting facts containing '+' characters (like
base64-encoded values).
The current fix makes sure we use CGI.escape for all query string
parameters. Indirection keys/paths still use URI escaping, because
this part of the URI shouldn't be handled like query string
parameters (otherwise '/' URL separators would be encoded, which
changes the URI path).
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
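
The mismatch is easy to demonstrate with the Ruby 1.8-era APIs in
question (URI.escape has since been deprecated and removed in newer
Rubies):

    require 'uri'
    require 'cgi'

    fact = "c2VjcmV0+dmFsdWU="   # a base64-style value containing '+'

    URI.escape(fact)   # => "c2VjcmV0+dmFsdWU="     -- '+' passes through untouched
    CGI.escape(fact)   # => "c2VjcmV0%2BdmFsdWU%3D" -- '+' becomes %2B

    # a framework that CGI-unescapes the query string then eats the raw '+':
    CGI.unescape(URI.escape(fact))   # => "c2VjcmV0 dmFsdWU=" (corrupted)
    CGI.unescape(CGI.escape(fact))   # => "c2VjcmV0+dmFsdWU=" (round-trips)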

This works around a Linux kernel bug that causes a select() on
/proc/mounts to hang.

We've moved the @providers class instance variable from
the individual Puppet::Type subclasses into a single
class instance variable in the Puppet::Type base class,
and are using an accessor to retrieve the per-class
providers hash.
Signed-off-by: Luke Kanies <luke@madstop.com>
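
A minimal sketch of the shape of that refactor (illustrative names,
not the actual Puppet::Type code): one hash on the base class, keyed
per subclass, reached through an accessor:

    class BaseType
      # single class instance variable on the base class
      def self.provider_hashes
        @provider_hashes ||= {}
      end

      # accessor each subclass uses to reach its own providers hash
      def self.provider_hash
        BaseType.provider_hashes[self] ||= {}
      end
    end

    class FileType < BaseType; end
    FileType.provider_hash[:posix] = "..."
    FileType.provider_hash   # => {:posix=>"..."}
    BaseType.provider_hash   # => {} (each class gets its own slot)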

We basically just make sure that we tell Ruby
about files we've loaded, so you can 'require' these
files and doing so will essentially no-op, rather
than clobbering the already-loaded code.
Signed-off-by: Luke Kanies <luke@madstop.com>
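
The mechanism being relied on looks roughly like this under Ruby 1.8
semantics, where require consults $" by feature name; the feature name
below is hypothetical:

    # After loading code ourselves, register the feature name so a later
    # require of it becomes a no-op (Ruby 1.8 matches entries in $" by name).
    feature = "puppet/type/mytype.rb"        # hypothetical feature name
    $" << feature unless $".include?(feature)

    require "puppet/type/mytype"   # => false: Ruby considers it loaded already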

tests.

Signed-off-by: Sam Livingston-Gray <geeksam@gmail.com>

Signed-off-by: Sam Livingston-Gray <geeksam@gmail.com>

correct values, and fix rule array handling

Requires the pandoc binary to function (http://johnmacfarlane.net/pandoc/).

This is to fix puppetdoc boolean parameters.
Puppetdoc defers sending parameters to Puppet::Util::Setting, and
in this case, boolean parameters are stored as a boolean value.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>

Due to the problem that we associate documentation in the lexer and
not in the parser (which would be too complex and unmaintainable to
do), and since the parser reads new tokens before reducing
the current statement (thus creating the AST node), we could
sometimes associate with a statement comments that were seen
after it.
Ex:
1. $foo = 1
2. # doc of next class
3. class test {
When we parse the first line, the parser can reduce this to the
correct VarDef only after it has lexed the CLASS token.
But lexing this token means we have already pushed the "doc of next
class" comment on the comment stack.
That means at the time we create the AST VarDef node, the parser thinks
it should associate this documentation with it, which is incorrect.
Now that the parser uses token line numbers, we can enhance the lexer
to allow comments to be associated with the current AST node only if
the statement's line number is greater than or equal to the last
comment's line number.
This way it is impossible to associate a comment appearing later in
the source with an earlier statement.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
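
A sketch of that guard, with illustrative names; the comment stack
entries are assumed to carry the line they appeared on:

    # Attach the pending comment only if the statement being reduced
    # starts at or after the comment's own line.
    def doc_for(statement_line, comment_stack)
      comment = comment_stack.last
      return "" unless comment
      statement_line >= comment[:line] ? comment[:text] : ""
    end

    stack = [{ :text => "doc of next class", :line => 2 }]
    doc_for(1, stack)   # => "" -- the VarDef on line 1 gets no comment
    doc_for(3, stack)   # => "doc of next class" -- the class on line 3 does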

Careful inspection of the parser code shows that when we
associate a source line number with an AST node, we use the
line number of the currently lexed token.
In many cases this is correct, but there are some cases where
it is not.
Unfortunately, due to how LALR parsers work, the AST node creation
for a statement can happen _after_ we have lexed another token
beyond the current statement:
1. $foo = 1
2.
3. class test
When the parser asks for the class token, it can reduce the
assignment statement into the AST VarDef node, because no other
grammar rule matches. Unfortunately we have already lexed the class
token, so we assign the VarDef node line number 3 instead of 1.
This is not a real issue for error reporting, but becomes a real
concern when we associate documentation comments with AST nodes for
puppetdoc.
The solution is to enhance the tokens lexed and returned to the parser
to carry their declaration line number.
Thus a token value becomes a hash: { :value => tokenvalue, :line => line }
Next, each time we create an AST node, we use the line number of
the correct token (i.e. the $foo token's line number in the previous
example).
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
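
A minimal sketch of what the lexer hands back under this scheme (the
names are illustrative):

    # Each token now carries the line it was declared on, so the parser
    # can stamp the AST node with the right token's line, not the
    # lexer's current line.
    def emit(name, value, line)
      [name, { :value => value, :line => line }]
    end

    name_token = emit(:VARIABLE, "foo", 1)
    # ... the lexer may already be on line 3 when "$foo = 1" is reduced ...
    ast_line = name_token[1][:line]   # => 1, the line the VarDef should get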

It could happen that we were generating doc for subclasses
before classes, in which case we were forgetting some
parent class instances and recreating them.
We ended up generating doc for some classes multiple times,
some of which were missing documentation.
The fix is to sort the parsed classes alphabetically, which
automatically puts enclosing classes before enclosed classes.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
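
This works because a class name is always a strict prefix of its
subclasses' names, and a prefix sorts first:

    classes = ["foo::bar", "foo", "baz::qux", "baz"]
    classes.sort   # => ["baz", "baz::qux", "foo", "foo::bar"]
    # every enclosing class now precedes the classes it encloses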

When the PUT body is large enough, Mongrel::HttpRequest#body returns
a StringIO object instead of a String. StringIO#to_s then returns
something like "#<StringIO:0x...>" instead of the string contents.
When that string is passed to YAML it returns false, which is then
passed to save_object without any real type checking.
This is a combination of patches from Jordan Curzon and Ricky Zhou.
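
A hedged sketch of the usual shape of this fix (not necessarily the
exact patch): read IO-like bodies instead of stringifying them:

    require 'stringio'

    # Mongrel hands back a String for small bodies and an IO-like
    # object (StringIO, or a Tempfile for very large bodies) otherwise.
    def read_body(body)
      body.respond_to?(:read) ? body.read : body.to_s
    end

    read_body("small body")                 # => "small body"
    read_body(StringIO.new("large body"))   # => "large body", not "#<StringIO:...>"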

This patch does two things:
* it enhances puppetca to list revoked certificates (prefixed by '-')
* it fixes the CA CRL verification, which was broken
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>

opaque strings
This patch removes the limitation that allow/deny would only
match IP addresses or hostnames (or patterns thereof).
It makes sure any kind of string can be matched (by strict
equality) while still keeping the old behaviour.
Opaque strings can only contain alphanumeric characters, '-',
'_' and '@'.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
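
A sketch of the character rule as described (the exact pattern in the
patch may differ):

    # An "opaque string" ACL entry: alphanumerics plus '-', '_' and '@',
    # matched by strict equality rather than as an IP/hostname pattern.
    OPAQUE = /\A[a-zA-Z0-9\-_@]+\z/

    def opaque?(value)
      !!(value =~ OPAQUE)
    end

    opaque?("worker@host-01")   # => true  -- matched by equality
    opaque?("192.168.0.0/24")   # => false -- handled by the IP rules instead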

There were two problems:
* server->client communication was using a Content-Type containing
the format name directly instead of the format's mime-type.
* client->server communication was not using Content-Type to
send the format of the serialized object. Instead it was using the
first member of the Accept header. The Accept header is normally
reserved for the other direction, i.e. what the client will accept
when the server responds.
This patch makes sure server->client communication contains correct
Content-Type headers.
This patch also adds a Content-Type header containing the mime-type of
the object sent by the client when saving.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
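
The header discipline, sketched with plain Net::HTTP; the path and
mime-types are illustrative values, and this is not Puppet's actual
client code:

    require 'net/http'

    def save_request(path, body, mime_type, accepted)
      req = Net::HTTP::Put.new(path)
      req['Content-Type'] = mime_type           # the format we are sending
      req['Accept']       = accepted.join(', ') # the formats we can read back
      req.body = body
      req
    end

    save_request("/production/facts/node1", "--- {}\n",
                 "text/yaml", ["text/yaml", "text/pson"])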

I also took the opportunity to clean up and simplify
the interface to the parts of the parser that interact
with this. Mostly it was method renames.
Signed-off-by: Luke Kanies <luke@madstop.com>

This class is extracted from the Parser class,
and the main driver for it is to enable us to put mutexes
around some of the hashes to see if they're the source
of a race condition.
Signed-off-by: Luke Kanies <luke@madstop.com>
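
A minimal sketch of that experiment (the class name below is a
hypothetical stand-in for the extracted class): wrap each hash access
in a mutex so unsynchronized access can be ruled in or out:

    require 'thread'   # Mutex, on the Rubies of this era

    class LoadedCode   # hypothetical name
      def initialize
        @hash = {}
        @lock = Mutex.new
      end

      def add(name, value)
        @lock.synchronize { @hash[name] = value }
      end

      def [](name)
        @lock.synchronize { @hash[name] }
      end
    end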

Earlier Ruby 1.8 versions do not have String#start_with?.
Found by John Barbuto.
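
For reference, a portable equivalent (a sketch; the actual change may
simply avoid the method instead):

    # String#start_with? appeared in Ruby 1.8.7; this emulates the
    # single-prefix case where it is missing.
    class String
      unless method_defined?(:start_with?)
        def start_with?(prefix)
          self[0, prefix.length] == prefix
        end
      end
    end

    "puppet/lib/foo".start_with?("puppet/lib")   # => true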

Signed-off-by: Jordan Curzon <curzonj@gmail.com>

Mongrel::HttpRequest.query_parse outputs a params hash with nil
keys given certain query strings. Network::HTTP::Handler.decode_params
needs to check the incoming values.
Signed-off-by: Jordan Curzon <curzonj@gmail.com>
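
A sketch of such a defensive check (illustrative, not the exact
patch):

    require 'cgi'

    # Skip entries whose key is nil before unescaping; Mongrel's
    # query_parse can produce them for malformed query strings.
    def decode_params(params)
      params.inject({}) do |result, (key, value)|
        next result if key.nil?
        result[CGI.unescape(key).to_sym] = CGI.unescape(value.to_s)
        result
      end
    end

    decode_params({ "environment" => "production", nil => "x" })
    # => {:environment=>"production"}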

Signed-off-by: Nigel Kersten <nigelk@google.com>

This actually involved a bit of rewriting
of the code, but the code's simpler now, too.
Signed-off-by: Luke Kanies <luke@madstop.com>

This just makes it easier to add context to warnings
and other logs from the module.
Signed-off-by: Luke Kanies <luke@madstop.com>

You should now use 'lib' instead of 'plugins'.
The old directory still works, but you get a warning
for every module that uses it.
Signed-off-by: Luke Kanies <luke@madstop.com>

Signed-off-by: Luke Kanies <luke@madstop.com>

We just add a bit of information to the exception.
Signed-off-by: Luke Kanies <luke@madstop.com>

The goal of this commit is to fix ordering issues
that could result when the filebuckets are added
to the catalog after the resources that use them.
This condition showed up somewhat arbitrarily.
Signed-off-by: Luke Kanies <luke@madstop.com>

Signed-off-by: James Turnbull <james@lovedthanlost.net>

Previously, modules were not using their environments
when looking up their paths, which meant that they
often found files in the wrong environment.
Signed-off-by: Luke Kanies <luke@madstop.com>

Comments and multi-line comments produce no token per se during
lexing, so the lexer loops to find another token.
The issue was that we were not skipping whitespace after finding
such a non-token.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
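
A minimal sketch of that loop with StringScanner; the token rules
here are placeholders, not the real lexer's:

    require 'strscan'

    # After a comment produces no token, skip whitespace before
    # scanning again -- the skip at the top of the loop is the
    # missing piece.
    def next_token(scanner)
      loop do
        scanner.skip(/\s+/)
        return nil if scanner.eos?
        next if scanner.skip(/#[^\n]*/)   # comment: no token produced
        return scanner.scan(/\S+/)        # stand-in for the real token rules
      end
    end

    s = StringScanner.new("# a comment\n   foo")
    next_token(s)   # => "foo"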

Bellman)

Signed-off-by: James Turnbull <james@lovedthanlost.net>