Commit message log
It was previously using the GRATR::Edge class, which
had wonky overrides that dramatically slowed down
sorting (its hash mechanism hashed the source and
target, so edges with the same source/target got
the same hash, which we actually don't want any more).
This shouldn't change any functionality, just performance.
I didn't retain all of the Edge class's functionality, but
a lot of it was horrible anyway, such as Edge[]
being equivalent to Edge.new.

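For illustration, a lightweight edge along these lines might look like
the sketch below; the class and attribute names are hypothetical, not
the actual Puppet code. The point is simply that with no hash/eql?
overrides, Ruby's default object-identity hashing applies, so two edges
with the same endpoints stay distinct and sorting stays cheap.

    # Hypothetical minimal edge: plain attributes and no hash/eql? overrides,
    # so edges with the same source and target remain distinct objects.
    class LightweightEdge
      attr_accessor :source, :target, :label

      def initialize(source, target, label = nil)
        @source, @target, @label = source, target, label
      end

      def to_s
        "#{@source} => #{@target}"
      end
    end

    a = LightweightEdge.new("File[/tmp/a]", "File[/tmp/b]")
    b = LightweightEdge.new("File[/tmp/a]", "File[/tmp/b]")
    a.hash == b.hash   # => false (same endpoints, but different objects)
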
to Puppet::SimpleGraph, which should dramatically enhance
performance. It should be largely functionally equivalent,
with the only difference being that edges are no longer deduplicated.
is just too slow. This class has almost no iteration,
and vertex deletion is dramatically (as in, 1000x) faster.
Here are the results of some very simplistic graph operations:

Vertex tests (add and remove 1000 vertices):
  gratr:  Add: 0.01; Remove: 9.70
  simple: Add: 0.02; Remove: 0.01

Edge tests (add and remove 1000 edges):
  gratr:  Add: 0.02; Remove: 0.03
  simple: Add: 0.07; Remove: 0.02

I expect I can get the cost of edge addition down some, but even
as it is, the new class is a couple of orders of magnitude faster overall.
This doesn't even count things like searching, where other tests I ran
came out consistently faster (somewhere between 1.5x and 1500x,
depending on how the test was set up and how big the graph was).

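As a rough illustration of why vertex removal can be so much cheaper
(this is a hypothetical sketch, not the actual Puppet::SimpleGraph
code): if adjacency is kept in hashes keyed by vertex, deleting a
vertex only touches its own neighbours instead of scanning every edge.

    # Hypothetical adjacency-hash graph: edges are stored per source/target
    # pair (and never deduplicated), and removing a vertex costs time
    # proportional to its degree, not to the size of the whole graph.
    class TinyGraph
      def initialize
        @out = {}   # source => { target => [edges] }
        @in  = {}   # target => { source => [edges] }
      end

      def add_edge(source, target, edge = [source, target])
        ((@out[source] ||= {})[target] ||= []) << edge
        ((@in[target]  ||= {})[source] ||= []) << edge
      end

      def remove_vertex(vertex)
        (@out.delete(vertex) || {}).each_key { |t| @in[t].delete(vertex) }
        (@in.delete(vertex)  || {}).each_key { |s| @out[s].delete(vertex) }
      end
    end
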
a drastic performance increase.
However, profiling has shown that GRATR just isn't going
to cut it. I think I'm going to have to replace it as my
graphing base class to avoid the horrible, horrible performance
I keep encountering.
file recursion was previously not working, because
the relationship graph was setting itself as a resource's
primary configuration, which caused it to try creating its
own relationship graph.
I've since found that the current code is about 5x slower than
the released code, which is what I hope to resolve next.

single files, across modules, local file system,
and the traditional file server.
This work revolves around making sure that the termini
produce functional file instances, meaning that they
know how to find their content or metadata, which largely
comes down to setting their paths correctly.
I also created a new terminus base class for the local
filesystem, since there was so much common code between
content and metadata.
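To make the idea concrete, a shared base class for the local-filesystem
termini might look roughly like this; the class and method names are
invented for the example, not taken from the Puppet source. The shared
part is turning a key into a path and handing it to whichever model the
subclass serves.

    # Hypothetical base class shared by local-filesystem content and metadata
    # termini: the path handling is identical, only the model differs.
    class LocalFileTerminusBase
      def find(key)
        path = key.sub(%r{\Afile://}, "")   # accept plain paths or file:// keys
        return nil unless File.exist?(path)
        model.new(path)
      end

      # Subclasses say which model (content vs. metadata) they produce.
      def model
        raise NotImplementedError, "subclasses must define their model"
      end
    end
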
because I'm going to add some hooks for transforming returned objects.
to :file.
This was done by putting all of the functionality in the
Content and Metadata classes (actually, in a new base class
for them).
There are still some issues, and there need to be integration
tests between the :local (soon to be renamed :file) termini for
these classes.

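For example, the shared base class for the Content and Metadata models
could carry the path and checksum handling they have in common,
something like the hypothetical sketch below (names are illustrative):

    require 'digest/md5'

    # Hypothetical common parent for the file-serving models.
    class FileModelBase
      attr_reader :path

      def initialize(path)
        @path = path
      end

      def md5
        Digest::MD5.hexdigest(File.read(@path))
      end
    end

    class Content < FileModelBase
      def content
        File.read(path)
      end
    end

    class Metadata < FileModelBase
      def stat
        File.lstat(path)
      end
    end
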
new Fileset class.
The tests aren't the cleanest, in that there is still
a good bit of duplication in them, but it's what we got.
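A fileset abstraction of this sort might, roughly, take a base path and
a recursion flag and answer which files it covers; the sketch below is
hypothetical, not the real Fileset class.

    require 'find'

    # Hypothetical fileset: encapsulates "which files fall under this path"
    # so directory-walking logic isn't repeated across the file-serving code.
    class SimpleFileset
      def initialize(path, recurse = false)
        @path    = path
        @recurse = recurse
      end

      def files
        return [@path] unless @recurse && File.directory?(@path)
        results = []
        Find.find(@path) { |file| results << file }
        results
      end
    end

    SimpleFileset.new("/etc/puppet", true).files
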
is the new server-side for file recursion, and I'll next be
hooking it to the fileserving 'search' methods. This is
basically a mechanism for abstracting that search functionality
into a single class.
module_files indirection terminus types. Both hooks
use the fileserver configuration, but the module_files
hook only uses the 'modules' mount.
Also moved all responsibility for knowing whether to
use the 'modules' terminus type to the terminus selector;
it was previously spread between that and the file_server
terminus, which made some things annoyingly complicated.
This normalizes the deprecation notices and the logic about
how we make these decisions.
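As a sketch of what putting the 'modules' decision in one place could
look like (the module, method, and configuration API below are
hypothetical, not the actual implementation):

    # Hypothetical terminus selector: the only place that knows when the
    # 'modules' mount (and therefore the module_files terminus) applies.
    module FileServingTerminusSelector
      def select_terminus(key, configuration)
        mount = key.split("/").reject { |part| part.empty? }.first
        if mount == "modules"
          :module_files
        elsif configuration.mount?(mount)
          :file_server
        else
          raise ArgumentError, "No mount named #{mount}"
        end
      end
    end

    # Tiny stand-in for a parsed fileserver configuration.
    FakeConfig = Struct.new(:mounts) do
      def mount?(name)
        mounts.include?(name)
      end
    end

    include FileServingTerminusSelector
    select_terminus("/modules/apache/httpd.conf", FakeConfig.new(%w[files plugins]))
      # => :module_files
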
tests accordingly.
use it, and added integration tests at the most important
hook points.
This provides the final class structure for all of these classes,
but a lot of the class names are pretty bad, so I'm planning on
going through all of them (especially the file_server stuff) and
renaming them.
The functionality for finding files is all here, though (finally).
Once the classes are renamed, I'll add search ability
(which will enable recursive file copies) and then add
link management and support for ignoring files.

addition to Rest): A local terminus that just uses direct file paths,
and a mounts terminus that uses the file server to figure out what
the path should be.
It looks like it also makes sense to split the 'mounts' terminus further,
so there is a 'modules' terminus used to look files up in modules.
I've added some integration tests to verify that everything is
hooked together correctly.
Lastly, I added a directory for shared behaviours. There's a ton of
duplication in this setup, because the Content and Metadata classes
behave almost but not quite identically across the board.

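The shared behaviours directory suggests RSpec-style shared example
groups; a small, hypothetical illustration of the pattern (with a
stand-in terminus so the snippet is self-contained):

    require 'rspec'

    # Stand-in for a real content terminus, just to make the example runnable.
    class FakeContentTerminus
      def model; String; end

      def find(path)
        File.exist?(path) ? File.read(path) : nil
      end
    end

    # Shared behaviour: expectations the Content and Metadata termini share.
    RSpec.shared_examples "a file-serving terminus" do
      it "returns nil when the file does not exist" do
        expect(terminus.find("/no/such/file")).to be_nil
      end

      it "returns an instance of its model when the file exists" do
        expect(terminus.find(__FILE__)).to be_a(terminus.model)
      end
    end

    RSpec.describe FakeContentTerminus do
      let(:terminus) { described_class.new }
      it_behaves_like "a file-serving terminus"
    end
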
the TerminusSelector module to the File Metadata indirection.
the standards I set in the TerminusSelector.
added two abilities to the indirections: Models can specify a module to
extend the indirection instance with, and indirections will use a
:select_terminus method, if it's available, to select the terminus to
use for finding. (It's currently only used for finding, not destroying
or saving.)
The upshot is that a model can have a module that handles terminus
selection for it, and then extend its indirection with that module.
This will allow me to use the local terminus when the protocol is 'file'
and the REST terminus when the protocol is 'puppet'. It should also
open the door for other protocols if they become available.
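A minimal sketch of that mechanism, under the assumption that the
indirection simply checks for a select_terminus method before a find
(the class and module below are hypothetical, not the real
Puppet::Indirector code):

    # Hypothetical indirection core: pick the terminus that will handle find().
    class Indirection
      attr_reader :default_terminus

      def initialize(default_terminus, extension_module = nil)
        @default_terminus = default_terminus
        extend(extension_module) if extension_module
      end

      # Only find() consults select_terminus; save and destroy use the default.
      def terminus_for_find(key)
        respond_to?(:select_terminus) ? select_terminus(key) : default_terminus
      end
    end

    # Module a model could extend its indirection with: file:// keys use the
    # local terminus, everything else goes to REST.
    module ProtocolSelector
      def select_terminus(key)
        key =~ %r{\Afile://} ? :local : :rest
      end
    end

    plain    = Indirection.new(:rest)
    selected = Indirection.new(:rest, ProtocolSelector)
    plain.terminus_for_find("file:///etc/motd")      # => :rest
    selected.terminus_for_find("file:///etc/motd")   # => :local
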
so that they make more sense in the REST API, and creating
stub tests for the indirection termini. Now it's on to
create the rest of the tests for them.
new file serving structure. The next step is to
hook it up to the indirection so we can start writing
integration tests to see if we can actually serve up files.
I moved the checksum code to a separate module.
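A checksum module factored out this way might look roughly like the
following hypothetical sketch (names and methods are illustrative):

    require 'digest/md5'

    # Hypothetical checksum mix-in shared by the file-serving classes.
    module Checksums
      def md5_file(path)
        Digest::MD5.hexdigest(File.read(path))
      end

      def mtime_file(path)
        File.stat(path).mtime
      end
    end

    class FileDetails
      include Checksums
    end

    FileDetails.new.md5_file(__FILE__)
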
to work with indirection. I've split the
fileserver handler into four pieces: Mount (which so
far I've just copied wholesale), Configuration (responsible
for reading the configuration file and determining what's allowed),
Metadata (retrieves information about the files), and Content
(retrieves the actual file content).
I haven't added the indirection tests yet, and the configuration
tests are still all stubs.
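A skeletal sketch of that four-way split (hypothetical class bodies,
just to show the responsibilities):

    module FileServer
      # Copied from the old handler: one exported directory and its name.
      class Mount
        attr_reader :name, :path

        def initialize(name, path)
          @name, @path = name, path
        end
      end

      # Reads the fileserver configuration and decides what is allowed.
      class Configuration
        def initialize(mounts = {})
          @mounts = mounts   # name => Mount
        end

        def allowed?(mount_name)
          @mounts.key?(mount_name)   # real allow/deny rules omitted
        end
      end

      # Retrieves information about files.
      class Metadata
        def find(path)
          File.stat(path)
        end
      end

      # Retrieves the actual file content.
      class Content
        def find(path)
          File.read(path)
        end
      end
    end
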
Previously, for example, the configuration terminus that was a
subclass of 'code' would have been stored at
lib/puppet/indirector/code/configuration and would have had
to be named 'configuration'. Now, the subclass
can be named however the author prefers, but it must be stored
at lib/puppet/indirector/configuration/<name>.rb, where <name>
is the name you've chosen for the terminus type. The name only
matters insofar as it is used to load the file from disk and
find the appropriate class when asked.
The additional restriction is that the class constant for the terminus
type must have its name as the last word, and the indirection must
be the second-to-last word. Thus, in our example, we can choose
any class constant that ends with Configuration::Code; given that
there's only one Configuration class at this point, it makes the
most sense to define the class as Puppet::Node::Configuration::Code.
This is somewhat awkward, because of the class's location on disk,
but the only other real option is to autogenerate a
Puppet::Indirector::Configuration class constant, which is, I think,
uglier.

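Put concretely, the naming rule above can be expressed as a tiny helper
(hypothetical code; the paths and constant come straight from the
example in the message):

    # The rule described above: terminus files live under a directory named
    # for the indirection, and the class constant ends in
    # <Indirection>::<TerminusName>.
    def terminus_file(indirection, name)
      "lib/puppet/indirector/#{indirection}/#{name}.rb"
    end

    def terminus_constant_suffix(indirection, name)
      "#{indirection.capitalize}::#{name.capitalize}"
    end

    terminus_file("configuration", "code")
      # => "lib/puppet/indirector/configuration/code.rb"
    terminus_constant_suffix("configuration", "code")
      # => "Configuration::Code"  (e.g. Puppet::Node::Configuration::Code)
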
provider test work on non-Debian platforms.
I've provided backward compatibility with the old
handler.
The only terminus type that currently exists for reports
is the 'code' terminus, which is used to process reports
in the style of the old handler. At some point, we should
likely switch at least some of these report types (e.g., 'store')
to terminus types.
This counts as the first commit where configuration compiling
actually uses the caching correctly according to the application
model.
porridge
This is the first real pass towards using caching. The `puppet`
executable now actually uses the indirection work, instead of
handlers and such (and man, is it cleaner).
Most of this work came out of trying to get the client-side
story working, with correct YAML caching of configurations, so
this commit also covers converting configurations to YAML,
which was a much bigger PITA than it needed to be.
I still need to write integration tests, and I also need to cover
the server-side story of a normal configuration retrieval.

to the indirection layers. This should hopefully
enable the different application models we need in
our different executables.
to the configuration system. 'puppet' itself still
works, even with -e, but I expect that puppetd and
puppetmasterd are broken. There are also still quite a few
broken tests, because the default fact store can't write,
even though that's the default behaviour for a networked
configuration master.

instead of a manifest, and removing all of the ambiguity
around whether an interpreter gets its own file specified
or uses the central setting.
Most of the changes are around fixing existing tests to use this new system.
to work. As a result, it involves a lot of integration-level
testing, and a lot of small design changes to make the code
actually work.
In particular, indirections can now have default termini,
so that configurations and facts default to their code terminus.
Also, I've removed the ability to manually control whether
AST nodes are used. I might need to add it back in later,
but if so it will be in the form of a global setting,
rather than the previous system of passing it through 10 different
classes. Instead, the parser detects whether any AST nodes
are defined and requires them if so, or ignores them if not.
About 75 tests are still failing in the main set of tests,
but it's going to be a long slog to get them working --
there are significant design issues around them, as most of
the failures result from tests trying to emulate both the
client and server sides of a connection, which would normally
have different fact termini but in this case must share the same
terminus, just because they're in the same process and the termini
are global.
The next step, then, is to figure that process out and thus find
a way to make this all work.

fixing the integration tests, and extending the Classmethods
for the indirector so that indirected classes can set the
terminus class and cache class.
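A minimal sketch of class methods like those (hypothetical module and
names, not the actual Indirector code), letting an indirected class
declare its terminus class and cache class:

    # Hypothetical class-methods module mixed into indirected classes, so a
    # model can declare which terminus does the real work and which caches.
    module IndirectorClassMethods
      attr_accessor :terminus_class, :cache_class
    end

    class Configuration
      extend IndirectorClassMethods
      self.terminus_class = :code
      self.cache_class    = :yaml
    end

    Configuration.terminus_class   # => :code
    Configuration.cache_class      # => :yaml
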
Server. Using Server as the master class for client connections.
Server (formerly RESTServer) will instantiate the appropriate
subclass based upon the Puppet configuration setting. There are
now broken tests in the network section that I can't seem to
figure out yet. Not a happy place to be.

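A rough sketch of a server class that instantiates a concrete subclass
from a configuration value (the setting name and subclasses here are
assumptions for the example, not the actual Puppet network code):

    # Hypothetical: Server picks its concrete implementation from a setting,
    # e.g. "webrick" vs. "mongrel", instead of callers naming a subclass.
    class Server
      SERVER_TYPES = {}

      def self.register(name, klass)
        SERVER_TYPES[name] = klass
      end

      def self.create(settings)
        klass = SERVER_TYPES[settings[:servertype]] or
          raise ArgumentError, "unknown server type #{settings[:servertype].inspect}"
        klass.new
      end
    end

    class WEBrickServer < Server; end
    Server.register("webrick", WEBrickServer)

    Server.create(:servertype => "webrick")   # => #<WEBrickServer ...>
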