| Commit message | Author | Age | Files | Lines |
|
This patch adds HTTP response decompression (both gzip and deflate streams).
The feature is disabled by default and enabled with --http_compression.
It can be activated only if the local Ruby version supports the zlib extension.
HTTP response decompression is active for all REST communications and file
sourcing.
To enable HTTP compression on the server side, you need a reverse proxy such
as Apache or Nginx with an ad hoc configuration:
Nginx:
gzip on;
gzip_types text/pson text/json text/marshall text/yaml application/x-raw text/plain;
Apache:
LoadModule deflate_module /usr/lib/apache2/modules/mod_deflate.so
AddOutputFilterByType DEFLATE text/plain text/pson text/json text/marshall text/yaml application/x-raw
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
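A minimal sketch of the client-side decompression this enables, keyed on the
Content-Encoding response header (the helper name is hypothetical, not
Puppet's actual implementation):

  require 'zlib'
  require 'stringio'

  # Hypothetical helper: uncompress a Net::HTTP response body according
  # to its Content-Encoding, falling back to the raw body.
  def uncompress_body(response)
    case response['content-encoding']
    when 'gzip'
      Zlib::GzipReader.new(StringIO.new(response.body)).read
    when 'deflate'
      Zlib::Inflate.inflate(response.body)
    else
      response.body
    end
  end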
|
Also adding JSON support.
This is so that we can remotely retrieve information
about resource types and classes, such as what arguments
are required.
Signed-off-by: Luke Kanies <luke@puppetlabs.com>
|
Puppet::Resource::Catalog's spec
This issue causes other specs to fail, because they depend on the
default terminus being unchanged.
|
This patch reverts the semantically significant parts of #2890 due to the
issues discussed on #3360 (security concerns when used with autosign,
inconsistency between REST & XMLRPC semantics) but leaves the semantically
neutral changes (code cleanup, added tests) in place.
This patch is intended for 0.25.x, but may also be applied as a step in the
resolution of #3450 (a refactoring of #2890 adding a "remove_certs" flag) in Rowlf.
|
bucket with a non-default path.
Signed-off-by: Jesse Wolfe <jes5199@gmail.com>
|
Signed-off-by: Luke Kanies <luke@puppetlabs.com>
|
Ruby was spewing warnings about missing parentheses; I fixed this by adding them.
|
This change to the REST branch restores some sanity by explicitly
allowing a destination URL for indirector save() calls,
removing a hack that I was using to accomplish this.
|
puppetrun uses REST to trigger puppet runs.
|
Rename Puppet::Agent::Runner to Puppet::Run, for consistency
|
ralsh --host works now, and is using REST.
A node running puppetd --listen will allow ralsh to find, search, and
modify live resources, via REST.
Signed-off-by: Jesse Wolfe <jes5199@gmail.com>
|
FileBucket files have been reimplemented as an indirector terminus so that
they can be transmitted over REST.
The old Network::Client.dipper has been replaced with a compatibility layer
in FileBucket::Dipper that uses the indirector to access filebucket termini.
Slightly revised patch:
* No longer allows nil contents in FileBucket outside of initialization
* Uses File.exist? instead of the deprecated File.exists?
* Tweaks JSON serialization and deserialization to include "path"
Deferred issues:
* Feature #3371 "FileBucket should not keep files in memory"
* Feature #3372 "Replace FileBucket Dipper with more idiomatic calls"
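A rough sketch of how the compatibility layer might be exercised (the path
and option names are illustrative, not confirmed against this revision):

  require 'puppet/file_bucket/dipper'

  # Back up a file through the Dipper compatibility layer; the indirector
  # routes the save to a local (or, with :Server/:Port, a REST) terminus.
  dipper = Puppet::FileBucket::Dipper.new(:Path => '/var/lib/puppet/bucket')
  checksum = dipper.backup('/etc/hosts')  # stores the contents, returns a digest
  puts checksum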
|
This patch re-implements the status() remote procedure as a REST interface.
A running server returns key-value pairs; currently the only implemented
key is "is_alive", which is always set to true.
Some future tool will consume this via:
Puppet::Status.indirection.terminus_class = :rest
Puppet::Status.find('https://puppet:8140/production/status/default')
Now with unit tests, a typo fix, an integration test and default security
setting, and tests suggested by Brice.
Signed-off-by: Jesse Wolfe <jes5199@gmail.com>
|
Signed-off-by: Luke Kanies <luke@madstop.com>
|
This fixes most of #1943, except that the checksum indirection still uses it.
This basically always chooses the most recent file when finding files,
and saves the file with the default format.
Signed-off-by: Luke Kanies <luke@madstop.com>
|
It's no longer necessary, given the new ResourceTypeCollection
class.
Signed-off-by: Luke Kanies <luke@reductivelabs.com>
|
It was previously handled by the Interpreter,
but we're planning on getting rid of that.
Signed-off-by: Luke Kanies <luke@reductivelabs.com>
|\
Conflicts:
lib/puppet/agent.rb
lib/puppet/application/puppet.rb
lib/puppet/configurer.rb
man/man5/puppet.conf.5
spec/integration/defaults.rb
spec/unit/configurer.rb
|
In my patch for #3088 I made an erroneous assumption about the Ruby exception
hierarchy and thus missed the fact that Timeout::Error descends from both
SignalException and Interrupt. This is a partial reversion of the patch for #3088
to let these through so that more useful error messages can be produced.
|
Changing rescues from the default to Exception (to catch errors that don't
descend from StandardError) had the unintended consequence of catching (and
suppressing) SystemExit.
This patch restores the previous behaviour by re-raising the exception.
Of the other exceptions that fall through the same crack (NoMemoryError,
SignalException, LoadError, Interrupt, NotImplementedError, and ScriptError),
this patch also re-raises NoMemoryError, SignalException, and Interrupt in the
same way and leaves the rest captured.
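A minimal sketch of the re-raise pattern described here (perform_task stands
in for the rescued body):

  def run_guarded
    perform_task
  rescue SystemExit, NoMemoryError, SignalException, Interrupt
    # These must propagate: swallowing them breaks exit codes and signals.
    raise
  rescue Exception => detail
    # Everything else, including non-StandardError failures, is logged.
    warn "Task failed: #{detail}"
  end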
|\|
Conflicts:
lib/puppet/ssl/host.rb
spec/spec_helper.rb
|
This is basically the fix suggested on the ticket, cleaned up and
ruby-ized, with tests. The only functional modification is leaving
the default on entry2hash as --no-fqdn to preserve the 0.25.1 behaviour.
Signed-off-by: Markus Roberts <Markus@reality.com>
|
This patch implements the two-part suggestion from the ticket:
1) a client that receives a certificate that doesn't match its current
private key does not accept, store, or use the certificate; instead it
removes any locally cached copies and acts as if the certificate had
never been found.
2) a puppetmaster that receives a CSR from a client for which it already
has a signed certificate now honors the request and considers it to
supersede any previously signed certificates.
In order to make the cache expiration work as expected, I changed a few
assumptions in the caching system:
* The expiration of a cached certificate is the earlier of the envelope
expiration and the certificate's expiration, as opposed to just overriding
the cache value.
* Telling the cache to expire an item now removes it from the cache if
possible, rather than just setting an expiration date in the past and
hoping that somebody notices.
Signed-off-by: Markus Roberts <Markus@reality.com>
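The "earlier of the two expirations" rule amounts to a min over two times; a
hedged sketch, assuming the wrapped OpenSSL certificate exposes not_after:

  # Illustrative only: the effective expiration of a cached certificate
  # is the earlier of the cache envelope's expiration and the
  # certificate's own not_after.
  def effective_expiration(envelope_expiration, certificate)
    [envelope_expiration, certificate.not_after].min
  end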
|\|
Conflicts:
lib/puppet/agent.rb
lib/puppet/application/puppetd.rb
lib/puppet/parser/ast/leaf.rb
lib/puppet/util/rdoc/parser.rb
|
This is a moderately ugly workaround for the MRI garbage collection
bug (see the ticket for details).
I explored several other potential solutions (notably, monkey
patching the routines that trigger the bug) but none of them were
satisfactory. Monkey patching sub, gsub, sub!, gsub!, etc., for
example, either changes the scoping of $~, $1, etc. in a way that
could potentially subtly change the meaning of programs or (if you
are clever) faithfully reproduces the behaviour of MRI--including
the memory leak.
I decided to go with the standardized and somewhat obnoxious never-
used optional argument as it was easy to automatically insert and
should be even easier to automatically find and remove if a better
fix is developed. It also should be obtrusive enough to escape
accidental removal in refactoring.
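The shape of the workaround, as a hypothetical illustration (the argument
name is invented; the real patch tags many methods this way so they are easy
to grep for and revert):

  # A never-passed optional argument added purely to defeat the MRI GC
  # bug; callers never supply it, and its distinctive name makes every
  # patched method findable later.
  def strip_trailing(line, dummy_argument = :work_around_for_ruby_gc_bug)
    line.sub(/\s+$/, '')
  end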
|
If setup code for a process depends on network connectivity,
it needs to be protected with a rescue clause as much as the
main body of the process.
Further, Timeout exceptions aren't under StandardError and thus
aren't caught by an un-typed rescue clause. This doesn't matter
if we've morphed the exception, but will cause the program to
fail if we haven't.
There are many places where these concerns _might_ cause a problem,
but in most cases they never will in practice; this patch addresses
the five cases where I have been able to confirm that it actually
can cause the client daemon to exit and two more where I suspect
(but cannot prove) that it could.
This is an extension of the prior patch to cover additional cases
found by automated testing (repeated catalog runs with a 1% chance
of timeout forced on all timeout-bound operations, ~5000 runs).
The new cases recurred multiple times (>100 each) and in a final pass
with these corrected (~2500 runs) no additional cases were found.
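A minimal sketch of the failure mode, assuming a Ruby of this era where
Timeout::Error is not a StandardError (open_connection is hypothetical):

  require 'timeout'

  begin
    connection = Timeout.timeout(30) { open_connection }
  rescue Timeout::Error
    # Not a StandardError, so a bare `rescue` would let it escape and
    # kill the daemon; it has to be named (or morphed) explicitly.
    warn 'connection attempt timed out'
  rescue StandardError => detail
    warn "connection failed: #{detail}"
  end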
|
Since the storeconfig refactoring (i.e. moving the catalog storeconfig
system under the indirector) in 0.25, we lost the ability to
store the node IP and node environment name.
This patch restores that feature.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
|
If setup code for a process depends on network connectivity,
it needs to be protected with a rescue clause as much as the
main body of the process.
Further, Timeout exceptions aren't under StandardError and thus
aren't caught by an un-typed rescue clause. This doesn't matter
if we've morphed the exception, but will cause the program to
fail if we haven't.
There are many places where these concerns _might_ cause a problem,
but in most cases they never will in practice; this patch addresses
the two cases where I have been able to confirm that it actually
can cause the client daemon to exit and two more where I suspect
(but cannot prove) that it could.
I'd be willing to push this patch as it stands, as it at least
fixes demonstrable problems. A more general solution would be nice.
|/
This allows a separation between the wrapper class
and its internals, which is (at least) necessary for
the CA cert, which might not be found using the
internal name.
Signed-off-by: Luke Kanies <luke@madstop.com>
|
Bundling and renaming the pure-Ruby json library addresses a
number of cross-version serialization bugs (#2615, et al).
This patch adds a subset of the files from the json_pure gem to
lib/puppet/external/pson (renamed to avoid conflicts with Rails) so
that we will always have a known-good serialization format available.
The pure-Ruby json gem as distributed defers to the compiled version
if it is installed. This is problematic in some circumstances, so the
files that have been brought over have been modified to always and
only use the bundled version.
It's a large patch, so here's a breakdown of the change categories:
The majority of the lines are only marginally interesting:
* The json lib itself (in lib/puppet/external/pson) makes up the bulk
of the lines.
* Renaming of json to pson makes up the second largest group.
Somewhat more interesting are the following, which can be located by
searching the diffs for the indicated strings:
* Adjusting tests to reflect the changes
* Changing the encoding/decoding behavior so that nested structures
(e.g. resources) don't serialize as escaped strings. This should
make it much easier to process the results with external tools, if
needed. Search for "to_pson" and "to_pson_data_hash".
* Cleaning up the envelope/metadata:
* Now provides a document_type (as opposed to a Ruby class name) by
using a simple registration scheme instead of constant lookup
(search for "document_type")
* Added an api_version (search for "api_version")
* Added a hash for document metadata (search for "metadata")
* Removing the yaml monkeypatch and instead disabling yaml serialization
on Ruby 1.8.1 in favor of pson (search for "yaml")
* Cleaning up the json/rails feature interaction (they're now totally
independent) (search for "feature")
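A hedged sketch of what the bundled serializer enables once Puppet is on the
load path (the envelope details above are described, not exercised here):

  require 'puppet'

  # Round-trip a plain structure through the bundled pure-Ruby PSON
  # library; no compiled json gem is involved, so every node serializes
  # identically regardless of which json versions are installed.
  data = { 'is_alive' => true, 'version' => '0.25.x' }
  copy = PSON.parse(data.to_pson)
  raise 'round-trip mismatch' unless copy == data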
|
This allows us to search for a cert, and we use the searched-for
term as the cert name (for the wrapper, not the actual cert object),
rather than the real cert name.
This allows us to use symbolic names like 'ca', as we're currently doing.
Signed-off-by: Luke Kanies <luke@madstop.com>
|
This was an API compatibility problem with Mongrel's HTTPResponse.start()
method between Mongrel 1.0.x and 1.1.x (the number of parameters changed).
The older version does not provide the option to set the response header
message, which was used (redundantly with the response body) to return the
error message when the HTTP response was signaling an error.
In order to support the older version, the call was wrapped with a fallback,
and the corresponding code in the other REST implementations was adjusted
to always send the error message in the response body. The REST terminus
was then adjusted to pull the message from the response body (if it is
present) rather than from the header (which is only used as a fallback
for dealing with older puppetmasters), and the tests were augmented to
verify this behaviour.
Signed-off-by: Markus Roberts <Markus@reality.com>
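One plausible shape for the fallback (sketched from the description;
response, status, reason, and message are assumed to be in scope):

  # Try the Mongrel 1.1.x signature, which accepts a reason phrase;
  # fall back to the 1.0.x arity if the extra arguments are rejected.
  begin
    response.start(status, false, reason) { |headers, body| body.write(message) }
  rescue ArgumentError
    response.start(status) { |headers, body| body.write(message) }
  end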
|
Puppet::Node::Facts::Rest
|
I'd made changes to the internals of the fileserving
system to fix #2544 (mostly switched from passing
the node around and then calculating the environment to just
passing the environment around), but those changes weren't consistent
throughout the fileserving code.
In the process of making them consistent, I realized that the
plain file server actually needs the node name rather than
the environment, so I switched to passing the request around,
because it has both pieces of information.
Also added further integration tests which will hopefully keep
this from cropping up again.
Signed-off-by: Luke Kanies <luke@madstop.com>
|
We had to fix the fileserving plumbing to use the request
environment instead of trying to use the node environment.
This was apparently never fixed after we added the environment
to the URI in REST calls.
There's still a bit of refactoring left to clean up the APIs used
in some of this code.
Signed-off-by: Luke Kanies <luke@madstop.com>
|
Signed-off-by: Luke Kanies <luke@madstop.com>
|
Even though Puppet never transmits charset information in its
response/request Content-Type, some proxies (especially Apache with the
infamous AddDefaultCharset configuration) may add this "incorrect"
information.
This patch makes sure that only the mime-type is used when looking
for the format associated with a response or a request.
The patch also provides a better error message when the client or server
code is fed a request whose mime-type cannot be mapped to a known format.
It also fixes a typo noticed by the original reporter.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
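The mime-type extraction described here amounts to something like this
(the header value is an example):

  # Strip proxy-added parameters such as charset, keeping only the
  # mime-type for format lookup.
  content_type = 'text/pson; charset=UTF-8'
  mime_type = content_type.split(';').first.strip.downcase
  # => "text/pson"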
|
#2507 contains two issues:
* a crash when we filter out an unwanted resource which had edges
pointing to it.
* resources lose their virtuality when they are transformed from
Puppet::Parser::Resource to Puppet::Resource. This means we could no
longer distinguish between an exported resource collected on the same
node that exported it and an exported resource collected on another node.
The net result is that we can't apply exported resources that are
collected on the same node, because they are filtered out by the catalog
filter (see the commits for #2391 for more information).
The fix is to keep the virtuality of the resources so that we can
differentiate those two types of exported resources. We keep this until
the catalog is ready to be sent, at which point we filter out only the
virtual resources; the still-exported ones need to be sent to the client.
To be extra safe, the transaction also skips virtual resources.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
|
All of the tests were failing because we had a call outside
of any of the tests, just to autoload the constant. Removed
that call and stubbed things so the tests don't run without
json.
Signed-off-by: Luke Kanies <luke@madstop.com>
|
report_port setting. Add tests.
|
Signed-off-by: Luke Kanies <luke@madstop.com>
|
The various REST SSL termini were never set up to use the
ca_server/ca_port when one is configured.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
|
Actually, the issue is:
* when the web server gets the request, it creates an indirection
request, filling in attributes like ip or node from the HTTP request.
To do this, all the interesting attributes are given in a hash
(called options; see P::I::Request#new).
Once the request is properly initialized, the options hash no longer
contains the ip or node information (see set_attributes).
* the request is then transmitted to the file_serving layer,
which happily wants to use the node attribute to find environments or
perform authorization.
Unfortunately it fetches the node value from the request options hash,
not the request itself.
Since this node information is empty, puppet fails to find the
proper mount point, and the download fails.
This change makes sure we pass the node all the way down and fixes
the authorization check.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
|
The issue is that when we convert a Puppet::Parser::Resource catalog
to a Puppet::Resource catalog before storing it to the database,
we don't allow virtual resources to be converted.
Unfortunately, exported resources are virtual by design, and as
such aren't converted; we lose them, so it isn't possible
to store them in the database.
Worse, the client will also receive the exported resources.
The fix is twofold:
* we make sure exported resources are skipped when the transaction is
applied, as a last safeguard
* we filter the catalog through the catalog compiler terminus before
the catalog is returned to the client
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
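A hedged sketch of the filtering step (treating Catalog#filter, which drops
resources for which the block is true, as an assumption about the API):

  # Before returning the compiled catalog to the agent, drop everything
  # that is still virtual; exported-but-uncollected resources stay
  # virtual, so this keeps them out of the client's catalog.
  filtered = catalog.filter { |resource| resource.virtual? }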
|
The problem is that URI.escape by default doesn't escape '+' (and
some other characters), but some web frameworks (at least WEBrick)
unescape the query string behind Puppet's back, changing all '+'
to spaces and corrupting facts that contain '+' characters (like
base64-encoded values).
The current fix makes sure we use CGI.escape for all query string
parameters. Indirection keys/paths still use URI escaping, because
this part of the URI shouldn't be handled like query string
parameters (otherwise '/' path separators are encoded, which changes
the URI path).
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
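The difference, in a hedged snippet (the value is illustrative; URI.escape
existed on the rubies of this era):

  require 'uri'
  require 'cgi'

  value = 'dGVzdA+/=='   # a base64-like fact value
  URI.escape(value)       # => "dGVzdA+/==" ('+' passes through, and
                          #    WEBrick later decodes it as a space)
  CGI.escape(value)       # => "dGVzdA%2B%2F%3D%3D"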
|
There were two problems:
* server->client communication was using a Content-Type with the
bare format name instead of the format's mime-type.
* client->server communication was not using Content-Type to
send the format of the serialized object. Instead it was using the
first member of the Accept header. The Accept header is normally
reserved for the other direction, i.e. what the client will accept
when the server responds.
This patch makes sure s->c communication contains correct Content-Type
headers.
This patch also adds a Content-Type header containing the mime-type of
the object sent by the client when saving.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
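A hedged illustration of the corrected header usage on a save (the path and
formats are examples):

  require 'net/http'

  report_pson = '{}'  # stand-in for a serialized report

  # The body's format travels in Content-Type; Accept lists the formats
  # the sender is willing to receive back.
  request = Net::HTTP::Put.new('/production/report/agent1.example.com')
  request['Content-Type'] = 'text/pson'   # a mime-type, not the bare "pson"
  request['Accept']       = 'pson, yaml'  # acceptable reply formats
  request.body = report_pson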
|
This provides about a 75x speedup, so it's totally
worth it. The downside is that queueing requires json,
but only on the server side.
|