| Commit message (Collapse) | Author | Age | Files | Lines |
|
|
|
|
| |
Prevents unmet dependency problems when running tests without the
couchrest gem
|
|
|
|
|
| |
* Cleaner implementation of abstract Couch terminus
* More thoroughly tested facts Couch terminus
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Implements an abstract CouchDB terminus and a concrete CouchDB terminus
used to store node facts. Node facts are stored in a "node" document as
the "facts" attribute. This node document may also be used by other
couchdb termini that store node-related information. It is recommended
to use a separate document (or documents) to store large data structures
like catalogs, linking them to their related node document using
embedded ids.
This implementation depends on the "couchrest" gem.
* Add Puppet.features.couchdb?
* Add Puppet[:couchdb_url] setting
* Add Puppet::Node::Facts#== for testing
* Add PuppetSpec::FIXTURE_DIR for easy access to fixture files
* Add CouchDB Terminus
* Add Facts::CouchDB terminus
* Stores facts inside a "node" document
* Use key (hostname) as _id for node document
* #find returns nil if document cannot be found
* #save finds and updates existing document OR creates new doc [1]
* Store facts in "facts" attribute of node document
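For illustration, the flow described above reduces to roughly the following
couchrest-based sketch; the database URL, hostname, and document layout here
are assumptions for the example, not the terminus code itself:

  require 'rubygems'
  require 'couchrest'

  # Hypothetical URL; Puppet[:couchdb_url] would supply the real one.
  db = CouchRest.database!("http://127.0.0.1:5984/puppet")
  hostname = "agent.example.com"

  # save: find and update the existing node document, or create a new one,
  # storing the facts under the "facts" attribute (hostname used as _id).
  doc = (db.get(hostname) rescue nil) || { "_id" => hostname }
  doc["facts"] = { "kernel" => "Linux", "fqdn" => hostname }
  db.save_doc(doc)

  # find: return nil if the document cannot be found.
  node_doc = (db.get(hostname) rescue nil)
  facts = node_doc && node_doc["facts"]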
|
| |
|
|
|
|
| |
Jesse fixed all these but David and others moved them and introduced some more so...
|
|
|
|
|
|
|
|
|
|
|
|
| |
deprecation warnings from Rails ActiveSupport
The metaid.rb file came straight from why the lucky stiff's "seeing
metaclasses clearly" article. Rails used this too, but they recently
deprecated the name metaclass in favor of singleton_class to match what
ruby-core decided to do. Meta, eigen, and singleton class were all
suggested, and in the end singleton was agreed upon.
http://redmine.ruby-lang.org/issues/show/1082
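For context, a minimal sketch of a metaid-style helper under the new name;
this is illustrative only, not Puppet's exact metaid.rb:

  class Object
    # Ruby 1.9.2+ defines Object#singleton_class natively; this is the
    # classic one-liner fallback for older rubies.
    def singleton_class
      class << self; self; end
    end unless method_defined?(:singleton_class)

    # Define a method on the singleton class (what metaid called meta_def).
    def meta_def(name, &blk)
      singleton_class.send(:define_method, name, &blk)
    end
  end

  obj = Object.new
  obj.meta_def(:greeting) { "hello" }
  puts obj.greeting   # => "hello"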
|
|
|
|
|
|
|
|
|
|
|
| |
The FileBucket code had a bunch of checksum code
that was already available in a library, and it used a
checksum format (type + data) that was incompatible with
what we were using everywhere else.
This just fixes that code duplication.
Signed-off-by: Luke Kanies <luke@puppetlabs.com>
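For reference, the checksum format used elsewhere in Puppet prefixes the
digest type to the hex digest; a plain-Ruby sketch of producing that string
(not the shared library code itself):

  require 'digest/md5'

  data = File.read("/etc/hosts")
  checksum = "{md5}" + Digest::MD5.hexdigest(data)
  puts checksum   # => a "{md5}d41d8cd9..."-style string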
|
|
|
|
|
|
|
| |
Use a predicate function on the Mode object instead of comparing with
the executable name everywhere
Signed-off-by: Jesse Wolfe <jes5199@gmail.com>
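A toy sketch of the predicate style this refers to; the class and method
names are illustrative, not Puppet's exact Mode API:

  class Mode
    def initialize(name)
      @name = name.to_sym
    end

    def master?
      @name == :master
    end

    def agent?
      @name == :agent
    end
  end

  # Callers ask the mode object a question instead of comparing $0 to an
  # executable name like "puppetmasterd" all over the codebase.
  mode = Mode.new(:master)
  puts "acting as a master" if mode.master?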
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch adds HTTP response decompression (both gzip and deflate streams).
This feature is disabled by default, and enabled with --http_compression.
This feature can be activated only if the local ruby version supports the
zlib ruby extension.
HTTP response decompression is active for all REST communications and file
sourcing.
To enable HTTP compression on the server side, you need to use a
reverse proxy like Apache or Nginx with an ad hoc configuration:
Nginx:
gzip on;
gzip_types text/pson text/json text/marshall text/yaml application/x-raw text/plain;
Apache:
LoadModule deflate_module /usr/lib/apache2/modules/mod_deflate.so
AddOutputFilterByType DEFLATE text/plain text/pson text/json text/marshall text/yaml application/x-raw
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
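On the client side, the decompression step boils down to something like the
following zlib-based sketch; the method name and surrounding plumbing are
assumptions, only the Content-Encoding handling is the point:

  require 'zlib'
  require 'stringio'

  def uncompress_body(response)
    case response['content-encoding']
    when 'gzip'
      Zlib::GzipReader.new(StringIO.new(response.body)).read
    when 'deflate'
      Zlib::Inflate.new.inflate(response.body)
    else
      response.body
    end
  end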
|
|
|
|
|
|
|
|
|
|
| |
Also adding JSON support.
This is so that we can remotely retrieve information
about resource types and classes, such as what arguments
are required.
Signed-off-by: Luke Kanies <luke@puppetlabs.com>
|
|
|
|
|
|
|
|
|
|
| |
This patch reverts the semantically significant parts of #2890 due to the
issues discussed on #3360 (security concerns when used with autosign,
inconsistency between REST & XMLRPC semantics) but leaves the semantically
neutral changes (code cleanup, added tests) in place.
This patch is intended for 0.25.x, but may also be applied as a step in the
resolution of #3450 (refactored #2890, add "remove_certs" flag) in Rowlf.
|
|
|
|
| |
signature of.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This commit is hopefully less messy than it
first appears, but it's certainly cross-cutting.
The reason for all of this is that we previously only
looked up builtin resource types from outside the parser,
but now that the defined resource types are available globally
via environments, we can push that lookup code to Resource.
Once we do that, however, we have to have environment and
namespace information in every resource.
Here I remove the Resource::Reference classes (except
the AST class), and use Resource instances instead. I
did this because the shared code between the two classes
got incredibly complicated, such that they should have had
a hierarchical relationship disallowed by their constants.
This complexity convinced me just to get rid of References
entirely.
I also make Puppet::Parser::Resource a subclass
of Puppet::Resource.
There are still broken tests in test/, but this was a big
enough commit I wanted to get it in.
Signed-off-by: Luke Kanies <luke@reductivelabs.com>
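In practice this means a reference is now just a lightweight Puppet::Resource
built from a type and a title; a hedged sketch of the usage:

  require 'puppet'
  require 'puppet/resource'

  # Where code used to build a Resource::Reference, it can now build a
  # Resource carrying only a type and a title and use that as the reference.
  ref = Puppet::Resource.new("File", "/etc/passwd")
  puts ref.ref   # => "File[/etc/passwd]"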
|
|
|
|
|
|
| |
This change to the REST branch restores some sanity by explicitly
allowing a destination URL for indirector save() calls,
removing a hack that I was using to accomplish this.
|
|
|
|
| |
puppetrun uses REST to trigger puppet runs.
|
|
|
|
| |
Rename Puppet::Agent::Runner to Puppet::Run, for consistency
|
|
|
|
|
|
|
|
| |
ralsh --host works now, and is using REST.
A node running puppetd --listen will allow ralsh to find, search, and
modify live resources, via REST.
Signed-off-by: Jesse Wolfe <jes5199@gmail.com>
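Roughly, ralsh --host resolves to a REST find against the listening agent,
along the lines of this hedged sketch; the URL layout, port, and host are
assumptions for illustration:

  require 'puppet'
  require 'puppet/resource'

  Puppet::Resource.indirection.terminus_class = :rest
  res = Puppet::Resource.indirection.find(
    "https://agent.example.com:8139/production/resource/user/root"
  )
  puts res.to_s if res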
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
FileBucket Files have been reimplemented as an indirector terminus so that
they can be transmitted over REST.
The old Network::Client.dipper has been replaced with a compatibility layer
in FileBucket::Dipper that uses the indirector to access filebucket termini.
Slightly revised patch:
* No longer allows nil contents in FileBucket outside of initialization
* Uses File.exist? instead of the deprecated File.exists?
* Tweaks JSON serialization and de-serialization to include "path"
Deferred issues:
* Feature #3371 "FileBucket should not keep files in memory".
* Feature #3372 "Replace FileBucket Dipper with more idiomatic calls"
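Usage through the compatibility layer stays roughly the same; a hedged
sketch of local use (the option and method names follow the old Dipper
interface and may differ in detail):

  require 'puppet'
  require 'puppet/file_bucket/dipper'

  # A local, path-backed bucket; a REST-backed one would take :Server/:Port.
  dipper = Puppet::FileBucket::Dipper.new(:Path => "/tmp/bucket")
  sum = dipper.backup("/etc/hosts")   # store the file, get its checksum back
  puts sum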
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch re-implements the status() remote procedure as a REST interface.
A running server returns key-value pairs, currently the only implemented
key is "is_alive" which will always be set to true.
Some future tool will consume this by:
Puppet::Status.indirection.terminus_class = :rest
Puppet::Status.find('https://puppet:8140/production/status/default')
Now with unit tests, plus a typo fix, an integration test, the default
security setting, and tests suggested by Brice.
Signed-off-by: Jesse Wolfe <jes5199@gmail.com>
|
|
|
|
| |
Signed-off-by: Luke Kanies <luke@madstop.com>
|
|
|
|
| |
Signed-off-by: Luke Kanies <luke@madstop.com>
|
|
|
|
|
|
|
|
|
|
| |
This fixes most of #1943, except the checksum indirection
still uses this.
This basically always chooses the most recent file when
finding files, and saves the file with the default format.
Signed-off-by: Luke Kanies <luke@madstop.com>
|
|
|
|
|
|
|
| |
It's no longer necessary, given the new ResourceTypeCollection
class.
Signed-off-by: Luke Kanies <luke@reductivelabs.com>
|
|
|
|
|
|
|
| |
It was previously handled by the Interpreter,
but we're planning on getting rid of that.
Signed-off-by: Luke Kanies <luke@reductivelabs.com>
|
|
|
|
|
| |
A stub was causing a test failure by returning a string for a parameter
that requires a boolean.
|
|\
| |
| |
| |
| |
| | |
Conflicts:
lib/puppet/ssl/host.rb
spec/spec_helper.rb
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This is basically the fix suggested on the ticket, cleaned up and
ruby-ized, with tests. The only functional modification is leaving
entry2hash's default as --no-fqdn to preserve the 0.25.1 behaviour.
Signed-off-by: Markus Roberts <Markus@reality.com>
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This patch implements the two-part suggestion from the ticket;
1) a client that receives a certificate that doesn't match its current
private key does not accept, store or use the certificate--instead it
removes any locally cached copies and acts as if the certificate had
never been found.
2) a puppetmaster that receives a CSR from a client for whom it already
has a signed certificate now honors the request and considers it to
supersede any previously signed certificates.
In order to make the cache expiration work as expected, I changed a few
assumptions in the caching system:
* The expiration of a cached certificate is the earlier of the envelope
expiration and the certificate's expiration, as opposed to just overriding
the cache value
* Telling the cache to expire an item now removes it from the cache if
possible, rather than just setting an expiration date in the past and
hoping that somebody notices.
Signed-off-by: Markus Roberts <Markus@reality.com>
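The first caching change amounts to taking the earlier of the two dates; a
trivial illustration, not the actual cache code:

  require 'time'

  # The cached copy expires at whichever comes first: the cache envelope's
  # expiration or the certificate's own not-after date.
  def cache_expiration(envelope_expires, cert_not_after)
    [envelope_expires, cert_not_after].compact.min
  end

  puts cache_expiration(Time.parse("2010-06-01"), Time.parse("2010-03-15"))
  # => the 2010-03-15 date, since it comes first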
|
| |
| |
| |
| |
| |
| |
| | |
1) Improve test so it doesn't fail if an autoload happens.
2) Improve test so it doesn't show a warning.
Signed-off-by: Jesse Wolfe <jes5199@gmail.com>
|
|\|
| |
| |
| |
| |
| |
| |
| | |
Conflicts:
lib/puppet/agent.rb
lib/puppet/application/puppetd.rb
lib/puppet/parser/ast/leaf.rb
lib/puppet/util/rdoc/parser.rb
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Since the storeconfig refactoring in 0.25 (i.e. moving the catalog
storeconfig system under the indirector), we lost the ability to
store the node IP and the node environment name.
This patch restores this feature.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
|
|/
|
|
|
|
|
|
|
| |
This allows a separation between the wrapper class
and its internals, which is (at least) necessary for
the CA cert, which might not be found using the
internal name.
Signed-off-by: Luke Kanies <luke@madstop.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Bundling and renaming the pure-Ruby JSON library addresses a
number of cross-version serialization bugs (#2615, et al.).
This patch adds a subset of the files from the json_pure gem to
lib/puppet/external/pson (renamed to avoid conflicts with Rails) so
that we will always have a known-good serialization format available.
The pure-Ruby json gem as distributed defers to the compiled version
if it is installed. This is problematic in some circumstances, so the
files that have been brought over have been modified to always and
only use the bundled version.
It's a large patch, so here's a breakdown of the change categories:
The majority of the lines are only marginally interesting:
* The json lib itself (in lib/puppet/external/pson) makes up the bulk
of the lines.
* Renaming of json to pson makes up the second largest group.
Somewhat more interesting are the following, which can be located by
searching the diffs for the indicated strings:
* Adjusting tests to reflect the changes
* Changing the encoding/decoding behavior so that nested structures
(e.g. resources) don't serialize as escaped strings. This should
make it much easier to process the results with external tools, if
needed. Search for "to_pson" and "to_pson_data_hash"
* Cleaning up the envelope/metadata
* Now provides a document_type (as opposed to a ruby class name) by
using a simple registration scheme instead of constant lookup
(search for "document_type")
* Added an api_version (search for "api_version")
* Added a hash for document metadata (search for "metadata")
* Removing the yaml monkeypatch and instead disabling yaml serialization
on ruby 1.8.1 in favor of pson (search for "yaml")
* Cleaning up the json/rails feature interaction (they're now totally
independent) (search for "feature")
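A minimal round-trip with the renamed serializer looks like the sketch
below; it assumes puppet itself is on the load path, since it bundles the
library:

  require 'puppet'

  data = { "document_type" => "Facts", "data" => { "kernel" => "Linux" } }
  text = data.to_pson       # serialize with the bundled pure-Ruby PSON
  copy = PSON.parse(text)   # parse back into plain Ruby structures
  raise "round-trip failed" unless copy == data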
|
|
|
|
|
|
|
|
|
|
| |
This allows us to search for a cert, and we use the searched-for
term as the cert name (for the wrapper, not the actual cert object),
rather than the real cert name.
This allows us to use symbolic names like 'ca', as we're currently doing.
Signed-off-by: Luke Kanies <luke@madstop.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This was an API compatibility problem with mongrel's HTTPResponse.start()
method between Mongrel 1.0.x and 1.1.x (the number of parameters changed).
The older version does not provide the option to set the response header
message which was used (redundantly with the response body) to return the
error message when the HTTP response was signaling an error.
In order to support the older version, the call was wrapped with a fallback,
and the corresponding code in the other REST implementations was adjusted
to always send the error message in the response body. Then the REST
terminus was adjusted to pull the message from the response body (if it
is present) rather than from the header (which is only used as a fallback
for dealing with older puppetmasters), and the tests were augmented to
verify this behaviour.
Signed-off-by: Markus Roberts <Markus@reality.com>
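The fallback is essentially "try the newer arity, retry with the older one",
in the spirit of this hedged sketch; the exact Mongrel signatures and block
arguments are assumptions:

  # Try the 1.1.x-style call that can set the header message; if this
  # Mongrel is too old to accept it, fall back to the 1.0.x-style call and
  # rely on the body alone to carry the error message.
  def respond_with_error(response, status, message)
    begin
      response.start(status, false, message) { |head, out| out.write(message) }
    rescue ArgumentError
      response.start(status) { |head, out| out.write(message) }
    end
  end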
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
I'd made changes to the internals of the fileserving
system to fix #2544 (mostly switched from passing
the node around and then calculating the environment to just
passing the environment around), but those changes weren't consistent
throughout the fileserving code.
In the process of making them consistent, I realized that the
plain file server actually needs the node name rather than
the environment, so I switched to passing the request around,
because it has both pieces of information.
Also added further integration tests which will hopefully keep
this from cropping up again.
Signed-off-by: Luke Kanies <luke@madstop.com>
|
|
|
|
|
|
|
|
|
|
|
|
| |
We had to fix the fileserving plumbing to use the request
environment instead of trying to use the node environment.
This was apparently never fixed after we added the environment
to the URI in REST calls.
There's still a bit of refactoring left to clean up the APIs used
in some of this code.
Signed-off-by: Luke Kanies <luke@madstop.com>
|
|
|
|
| |
Signed-off-by: Luke Kanies <luke@madstop.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Even though Puppet never transmits charset information in its
response/request Content-Type, some proxies (especially Apache with the
infamous AddDefaultCharset configuration) may add this "incorrect"
information.
This patch makes sure that only the mime-type is used when looking
for the format associated with a response or a request.
The patch also provides a better error message when the client or server
code is fed a request whose mime-type cannot be mapped to a known
format.
It also fixes a typo noticed by the original reporter.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
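The core of the fix is just ignoring any parameters after the mime-type,
e.g.:

  # Keep only the mime-type when mapping a Content-Type header to a
  # serialization format, dropping any "; charset=..." a proxy may have
  # added behind our back.
  def mime_type_of(content_type)
    content_type.split(';').first.strip.downcase
  end

  puts mime_type_of("text/pson; charset=UTF-8")   # => "text/pson"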
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
#2507 contains two issues:
* a crash when we filter out an unwanted resource that has edges
pointing to it.
* resources losing their virtuality when they are transformed from
Puppet::Parser::Resource to Puppet::Resource. This means we could no
longer distinguish between an exported resource collected on the same
node that exported it and an exported resource collected on another node.
The net result is that we can't apply exported resources that are
collected on the same node, because they are filtered out by the catalog
filter (see the commits for #2391 for more information).
The fix is to keep the virtuality of the resources so that we can
differentiate those two types of exported resources. We keep it until
the catalog is ready to be sent, at which point we filter out only the
virtual resources; the still-exported ones need to be sent to the client.
To be really sure, the transaction also skips virtual resources.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
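The filtering rule reduces to "drop what is virtual but not exported"; a
self-contained illustration with stand-in objects (not Puppet's catalog
code):

  Resource = Struct.new(:title, :virtual, :exported) do
    def virtual?;  virtual;  end
    def exported?; exported; end
  end

  resources = [
    Resource.new("File[a]", false, false),  # regular resource: keep
    Resource.new("File[b]", true,  false),  # purely virtual: filter out
    Resource.new("File[c]", true,  true),   # exported (virtual by design): keep
  ]

  kept = resources.reject { |r| r.virtual? && !r.exported? }
  kept.each { |r| puts r.title }   # => File[a], File[c]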
|
|
|
|
| |
Signed-off-by: Luke Kanies <luke@madstop.com>
|
|
|
|
| |
Signed-off-by: Luke Kanies <luke@madstop.com>
|
|
|
|
| |
report_port setting. Add tests.
|
|
|
|
|
|
|
| |
The various REST SSL termini were never set up to use the
ca_server/ca_port when one is configured.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Actually, the issue is:
* when the web server gets the request, it creates an indirection
request, filling attributes like ip or node from the HTTP request.
To do this, all the interesting attributes are given in a hash
(called options, see P::I::Request#new).
Once the request is properly initialized the options hash doesn't
contain the ip or node information (see set_attributes)
* the request is then transmitted to the file_serving layer,
which happily wants to use the node attribute to find environments or
perform authorization.
Unfortunately it fetches the node value from the request options hash,
not the request itself.
Since this node information is empty, puppet fails to find the
proper mount point, and fails the download.
This change makes sure we pass the node all the way down and fixes
the authorization check.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The issue is that when we convert a Puppet::Parser::Resource catalog
to a Puppet::Resource catalog before storing it in the database,
we don't allow virtual resources to be converted.
Unfortunately, exported resources are virtual by design; as such they
aren't converted and we lose them, so it isn't possible
to store them in the database.
Unfortunately, the client will get the exported resources too.
The fix is twofold:
* we make sure exported resources are skipped when the transaction is
applied, as a last safeguard
* we filter the catalog through the catalog compiler terminus before
it is returned to the client
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The problem is that URI.escape by default doesn't escape '+' (and
some other characters), but some web frameworks (at least WEBrick)
unescape the query string behind Puppet's back, changing all '+'
to spaces and corrupting facts that contain '+' characters (like
base64-encoded values).
The current fix makes sure we use CGI.escape for all query string
parameters. Indirection keys/path are still using URI escaping because
this part of the URI format shouldn't be handled like query string
parameters (otherwise '/' url separators are encoded which changes
the uri path).
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
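The difference is easy to see with a value containing '+': CGI.escape
encodes it, so nothing downstream can mistake it for a space:

  require 'cgi'

  value   = "c2VjcmV0+dmFsdWU="         # a base64-style fact value with '+'
  escaped = CGI.escape(value)           # => "c2VjcmV0%2BdmFsdWU%3D"
  puts escaped
  puts CGI.unescape(escaped) == value   # => true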
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
There were two problems:
* server->client communication uses Content-Type with the
format's internal name instead of the format's mime-type.
* client->server communication does not use Content-Type to
send the format of the serialized object. Instead it uses the
first member of the Accept header. The Accept header is usually
reserved for the other side, i.e. what the client will accept
when the server responds.
This patch makes sure s->c communication contains correct Content-Type
headers.
This patch also adds a Content-Type header containing the mime-type of
the object sent by the client when saving.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
|
|
|
|
|
|
| |
This provides about a 75x speedup, so it's totally
worth it. The downside is that queueing requires json,
but only on the server side.
|