Previously, for example, the configuration terminus that was a
subclass of 'code' would have been stored at
lib/puppet/indirector/code/configuration and would have had
to be named 'configuration'. Now, the subclass can be named
however the author prefers, but it must be stored at
lib/puppet/indirector/configuration/<name>.rb, where <name>
is the name you've chosen for the terminus type. The name only
matters insofar as it is used to load the file from disk and
find the appropriate class when asked.
The additional restriction is that the class constant for the terminus
type must have its name as the last word, and the indirection must
be the second to last word. Thus, in our example, we can choose
any class constant that ends with Configuration::Code; given that
there's only one Configuration class at this point, it makes the
most sense to define the class as Puppet::Node::Configuration::Code.
This is somewhat awkward, because of the class's location on disk,
but the only other real option is to autogenerate a
Puppet::Indirector::Configuration class constant, which is, I think,
uglier.
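For illustration, a minimal sketch of the layout described above; the
Puppet::Indirector::Code base class and the find method body are assumptions,
not code from this commit:

    # lib/puppet/indirector/configuration/code.rb
    # The terminus type is 'code' and the indirection is 'configuration',
    # so the class constant ends in Configuration::Code.
    class Puppet::Node::Configuration::Code < Puppet::Indirector::Code
      def find(name)
        # compile and return the configuration for the node named 'name'
      end
    end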
I've provided backward compatibility with the old
handler.
The only terminus type that currently exists for reports
is the 'code' terminus, which is used to process reports
in the style of the old handler. At some point, we should
likely switch at least some of these report types (e.g., 'store')
to terminus types.
This counts as the first commit where configuration compiling
actually uses the caching correctly according to the application
model.
The problem was in how TransObjects were converted to
RAL resources. (Committed while flying over Arkansas.)
This is the first real pass towards using caching. The `puppet`
executable actually uses the indirection work, instead of
handlers and such (and man! is it cleaner).
Most of this work was a result of trying to get the client-side
story working, with correct yaml caching of configurations, which
means this commit also covers converting configurations to yaml,
which was a much bigger PITA than it needed to be.
I still need to write integration tests, and I also need to cover
the server-side story of a normal configuration retrieval.
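A rough sketch of the yaml round-trip described here, using the standard Ruby
YAML interface rather than any Puppet-specific serialization code:

    require 'yaml'

    config = Puppet::Node::Configuration.new   # the configuration model class
    yaml_text = config.to_yaml                 # serialize for the client-side cache
    restored  = YAML.load(yaml_text)           # read it back out of the cache file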
to the indirection layers. This should hopefully
enable the different application models we need in
our different executables.
instead of a manifest, and removing all of the ambiguity
around whether an interpreter gets its own file specified
or uses the central setting.
Most of the changes are around fixing existing tests to use this new system.
to work. As a result, it involves a lot of integration-level
testing, and a lot of small design changes to make the code
actually work.
In particular, indirections can now have default termini,
so that configurations and facts default to their code terminus.
Also, I've removed the ability to manually control whether
ast nodes are used. I might need to add it back in later,
but if so it will be in the form of a global setting,
rather than the previous system of passing it through 10 different
classes. Instead, the parser detects whether there are AST nodes
defined and requires them if so or ignores them if not.
About 75 tests are still failing in the main set of tests,
but it's going to be a long slog to get them working --
there are significant design issues around them, as most of
the failures are a result of tests trying to emulate both the
client and server sides of a connection, which normally would
have different fact termini but in this case must have the same
terminus just because they're in the same process and are global.
The next step, then, is to figure that process out, thus finding a way
to make this all work.
fixing the integration tests, and extending the Classmethods
for the indirector so that indirected classes can set the
terminus class and cache class.
high-cohesion "server" model that will handle REST and/or XMLRPC on webrick and/or mongrel.
Server. Using Server as the master class for client connections. Server (formerly RESTServer) will instantiate the appropriate subclass based upon the Puppet configuration setting. There are now broken tests in the network section that I can't seem to figure out yet. Not a happy place to be.
sure we throw an appropriate exception if a parent is specified
but we cannot find it.
to requiring explicit configuration. This means that if
you, as an application developer, want to use a different indirection
terminus, then you have to specify it; something like:

    Puppet::Node.terminus_class = :ldap

Caches use the same kind of configuration:

    Puppet::Node.cache_class = :memory

Accordingly, I've removed the existing setting definitions
from defaults.rb.
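For illustration, what that explicit configuration might look like in an
application; the two setter calls come from this message, while the find call
and the node name are assumptions:

    Puppet::Node.terminus_class = :ldap     # look nodes up in LDAP
    Puppet::Node.cache_class    = :memory   # keep results in an in-process cache

    node = Puppet::Node.find("some.node.name")   # hypothetical indirected lookup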
spec; fleshing out more behavior, implementing.
method so that any hooks there will be run. Probably a violation of YAGNI, but I'm willing to suffer it :-)
getting the added examples to pass.
added the any_failed? test to Transactions.
not directly use the patch because I have refactored too
much.
now be more reasonable.
and fixing some bugs in the process.
Specifically, modules were no longer correctly handling
fully qualified files, and they do so once again.
name, functionality, and/or location in the tree is subject to change, but it's down now somewhere so we can move forward on it.
client-side REST terminus behavior.
information in yaml.
a '<indirection>_cache' setting, then the indirection
will use the value there as the name of the cache.
to underscore-separated words, e.g., FactStore becomes
fact_store.
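A small stand-in sketch of the class-name-to-file-name conversion being
described, not the actual helper used in the code:

    def underscore(class_name)
      class_name.gsub(/([a-z0-9])([A-Z])/, '\1_\2').downcase
    end

    underscore("FactStore")   # => "fact_store"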
into the indirection system. There are still quite
a few unanswered questions, the two most notable
being embodied in unimplemented tests in the Configuration
Code terminus.
This also requires changing the behaviour in a few places.
In particular, 'puppet' and the 'module_puppet' cfengine
module need to store a Node object in memory with the appropriate
classes, since that's now the only way to communicate with
the compiler. That integration work has not yet been done,
partially because the old configuration handler (which the
soon-to-be-deprecated master handler now uses) still exists.
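A rough sketch of what storing a Node object in memory with the appropriate
classes could look like; the class list, the :memory terminus assignment, and
the save and find calls are all assumptions about the eventual integration,
not code from this commit:

    node = Puppet::Node.new("localhost")
    node.classes = ["base", "webserver"]      # hypothetical class list
    Puppet::Node.terminus_class = :memory     # keep the node findable in-process
    node.save                                 # hypothetical save through the indirection

    configuration = Puppet::Node::Configuration.find(node.name)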
be used by 'puppet' and the Cfengine 'module_puppet',
since they need to set up the node specially with
classes and other weird things.
the changed design in the previous commit.
checksum interaction behaves as I expect when
interacting with the file terminus.
I've also changed how files and checksums behave a bit.
Files now create model instances with the content as
the only argument during initialization, and checksums
now calculate their checksums rather than having them passed
in.
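A stand-in sketch of the constructor change described here, using a made-up
class rather than the actual Puppet checksum model:

    require 'digest/md5'

    class ContentChecksum
      attr_reader :content, :checksum

      # The content is the only argument; the checksum is calculated here
      # rather than being passed in by the caller.
      def initialize(content)
        @content  = content
        @checksum = Digest::MD5.hexdigest(content)
      end
    end

    ContentChecksum.new("some file content").checksum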
acquire the behaviour of FileBuckets.
'Puppet::Util::Settings'. This is to clear up
confusion caused by the fact that we now have a
'Configuration' class to model host configurations, or
more generally any set of resources treated as a "configuration".
be used as the back end for filebuckets and the
certificate authority.
Conflicts:
    lib/puppet/defaults.rb
    lib/puppet/indirector/facts/yaml.rb
    spec/unit/indirector/indirection.rb
    spec/unit/indirector/indirector.rb
it's time to merge it back into the indirection branch.
Considering that this work was what drove me to create the
indirection branch in the first place, I should now be able to
merge both back into the master branch.
branch. The file recursion code actually works for the first
time in a painful while, but there are still some quirks and design
issues to resolve, particularly around creating implicit resources
that then fail (i.e., the behaviour of the create_implicit_resource
method in Configuration).
I've gone too far down the rabbit hole to turn back now, but the
code is clearly getting more centralized around the Configuration
class, which is the goal.
Things are currently a bit muddy between recursion, dynamic resource
generation, transactions, and the configuration, and I don't expect
to be able to clear it up much until we rewrite all of the tests
for the Transaction class, since that is when we'll actually be
setting its behaviour.
At this point, Files (which are currently the only resources that
generate other resources) are responsible for adding their edges
to the relationship graph. This means they know more about how the
relationship graph works than I would like, but it'll have to do for now.
There are still failing tests, but files seem to work again. Now to
go through the rest of the tests and make them work.
are inside a configuration, so the resources can interact with the configuration to get things like relationships.
ever converting the Transportable objects into a tree of components
and then converting that into a graph. This is a significant
step, and drastically simplifies the model of how to use a configuration.
The old code might have looked something like this:

    file = Puppet::Type.create :path => "/whatever", ...
    comp = Puppet::Type.create :name => :whatever
    comp.push file
    transaction = comp.evaluate
    transaction.evaluate

The new code looks like this:

    file = Puppet::Type.create :path => "/whatever", ...
    config = Puppet::Node::Configuration.new
    config.add_resource file
    config.apply
I did not really intend to do this much refactoring, but I
found I could not use a Configuration object to do work
without refactoring a lot of the system. The primary problem
was that the Client::Master and the Config classes determined
how the transactions behaved; when I moved to using a Configuration,
this distinction was lost, which meant that configurations often
needed to create other configurations, which resulted in
a whole lot of infinite recursion (e.g., Config objects that create
directories for Puppet use Configuration objects -- yes, I'm
s/Config/Settings/g soon -- and these Configuration objects would
need to create directories).
Not everything is fixed, but it's very close. I am clearly over
the hump, though, so I wanted to get a commit in.
to forget the tests around the main find() method.