This fixes an issue with Debian Squeeze and later, which ship a newer
coreutils package that puts `cut` in `/usr/bin`.
This isn't necessary, but is more correct. I realized how I could do
this, and have now implemented it.
This patch includes program paths.
This automatically generates UUIDs for each physical filesystem;
alternatively, you can specify one manually with the $fsuuid argument.
This will make a _big_ difference when using gluster::simple to
automatically deploy a large cluster of physical machines, since you
no longer have to generate one UUID per device by hand (which is time
consuming and a lot to maintain).
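The per-device generation described above amounts to something like this
minimal Python sketch; the device names and function are made up for
illustration, not part of the module:

```python
# Sketch: generate one filesystem UUID per brick device, as the module
# now does automatically. Device names here are hypothetical examples.
import uuid

def generate_fsuuids(devices):
    """Return a mapping of device -> freshly generated UUID string."""
    return {dev: str(uuid.uuid4()) for dev in devices}

fsuuids = generate_fsuuids(["/dev/sdb", "/dev/sdc", "/dev/sdd"])
for dev, fsuuid in sorted(fsuuids.items()):
    print(dev, fsuuid)
```

Doing this by hand for dozens of devices is exactly the busywork the
$fsuuid default now avoids.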
Since different brick layouts are now implemented, it makes sense to
remove any remaining traces of the algorithmic work from the fact...
* Don't use this feature unless you _really_ know what you're doing.
* Managing chained volumes is much harder than managing normal ones.
* If some of the volumes in the cluster use this, and others don't, then
you'll probably have an even crazier time with management.
* Please verify my algorithm and feel free to suggest changes.
* Some edge cases haven't been tested.
* This patch breaks out brick layout ordering into individual functions.
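In the spirit of the per-layout functions this patch breaks out, here is
one simple ordering strategy as a Python sketch. This is an illustration
only, not the module's actual chained algorithm; hostnames and paths are
invented:

```python
# Sketch of one brick-ordering function: interleave bricks host-by-host
# so consecutive bricks (and therefore replica sets) land on different
# hosts rather than piling up on one machine.
def order_bricks_round_robin(hosts, bricks_per_host):
    """Return bricks ordered round-robin across the given hosts."""
    ordered = []
    for i in range(bricks_per_host):
        for host in hosts:
            ordered.append("%s:/bricks/brick%d" % (host, i + 1))
    return ordered

print(order_bricks_round_robin(["h1", "h2"], 2))
# -> ['h1:/bricks/brick1', 'h2:/bricks/brick1', 'h1:/bricks/brick2', 'h2:/bricks/brick2']
```

Each layout variant can then be its own function with the same signature,
which is what makes swapping layouts manageable.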
This adds custom set group support for users that might not have the
feature (I think it might only exist in RHS) and also to users who want
to add their own custom groups! Please ping me if the stock groups gain
or lose parameters, or if their set values change!
This adds support for setting volume set groups which are groups of
properties that are set all at once on a volume. This is managed in a
clever way, so that if the definition of what a certain group contains
gets updated by the package manager, your volumes will get updated too,
on the next puppet run.
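Conceptually, a set group is just a named bundle of volume options that
gets merged into a volume's settings, so a changed group definition
converges the volume on the next run. The group name and option values
below are invented for illustration, not the packaged definitions:

```python
# Sketch: a "set group" is a named bundle of volume options applied
# together. Re-running apply_group after the definition changes updates
# the volume to the new values. Contents here are hypothetical.
GROUPS = {
    "virt": {
        "performance.quick-read": "off",   # placeholder option/value
        "performance.read-ahead": "off",   # placeholder option/value
    },
}

def apply_group(volume_options, group):
    """Merge a group's options into a volume's current option dict."""
    desired = dict(volume_options)
    desired.update(GROUPS[group])
    return desired

vol = apply_group({}, "virt")
```

Because the merge is recomputed from the group definition every run, a
package update to the definition is all it takes to roll out new values.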
If you choose your IP addresses manually, this won't affect you. If
you're automatically deploying Puppet-Gluster with Vagrant, this will
probably be the missing piece that makes your build more automatic.
This adds VRRP integration to puppet-gluster. All you need to do is
set vrrp => true, and set a vip, and the rest should happen
automatically. The shared keepalived password is built by a distributed
password selection algorithm that I made up. Feel free to review this if
you'd like. It's probably as secure as your puppet server and clients
are. If you'd prefer to specify each token manually, you can do so in
the gluster::host password argument, or you can set one global vrrp
password in the gluster::server or gluster::simple classes. There's a
chance that you'll see a bit of VRRP flip-flop when you add/remove hosts
because the distributed password should change. The benefit is that by
default you don't need to set or manage any of those passwords!
This doesn't add firewalling so that the VIP can be used by clients.
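A scheme in the spirit of the one described above can be sketched as:
each host exports a random token, and every host derives the same shared
password by hashing the sorted token set. This is my own illustration,
not the module's exact algorithm:

```python
# Sketch: derive one shared secret from per-host tokens. Every host that
# sees the same token set computes the same password; adding or removing
# a host changes it, hence the brief VRRP flip-flop mentioned above.
import hashlib

def shared_password(tokens):
    """Combine per-host tokens into one deterministic shared secret."""
    digest = hashlib.sha256("".join(sorted(tokens)).encode()).hexdigest()
    return digest[:8]  # keepalived auth_pass only uses 8 characters

a = shared_password(["tok-h1", "tok-h2", "tok-h3"])
b = shared_password(["tok-h1", "tok-h2"])  # one host removed
assert a != b
```

Sorting the tokens makes the result order-independent, so hosts agree no
matter what order they collected the exported tokens in.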
This patch adds preliminary FSM support. This will be used and abused
more extensively in later patches. Automatic brick ordering is an
advanced feature and is meant for experienced puppet users. Changing the
available bricks before the cluster is built is not currently supported.
For that type of magic, please wait for gluster::elastic.
This feature expects that you name your bricks and hosts intelligently.
Future patches will recommend a specific nomenclature, but for now as
long as the brick paths and hostnames follow a padded, incrementing
integer pattern, with a common prefix, you shouldn't see any problems.
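The naming expectation above (common prefix plus a padded, incrementing
integer) makes ordering trivial to recover. The paths below are
hypothetical examples, and this is only a sketch of the idea:

```python
# Sketch: recover brick order from names that follow a common prefix
# plus an incrementing integer suffix, as the nomenclature expects.
import re

def natural_order(names):
    """Sort names by the integer suffix after their common prefix."""
    def key(name):
        m = re.search(r"(\d+)$", name)
        return int(m.group(1)) if m else -1
    return sorted(names, key=key)

print(natural_order(["/data/brick10", "/data/brick2", "/data/brick1"]))
# -> ['/data/brick1', '/data/brick2', '/data/brick10']
```

Zero-padding (brick01, brick02, ...) makes plain lexicographic sorting
work too, which is why padded names are the safer convention.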
While the module can still be used in a simple way:
* It is pretty complicated at this point. It does some advanced stuff.
* I wanted to avoid confusion with gluster::simple which is coming soon.
Puppet-gluster now correctly picks the operating-version value from a
table of known version -> value correspondences. Future value additions
should be added to this table.
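The lookup amounts to a small table like the following sketch; the
values shown are placeholders for illustration, not the module's real
table:

```python
# Sketch: a version -> operating-version lookup table. Values below are
# placeholders; consult the module's actual table for real ones.
OP_VERSIONS = {
    "3.3": 1,  # placeholder value
    "3.4": 2,  # placeholder value
}

def operating_version(glusterfs_version):
    """Return the known op-version, or None for unknown versions."""
    return OP_VERSIONS.get(glusterfs_version)
```

Returning None for unknown versions keeps the failure explicit, so a new
GlusterFS release prompts a table addition rather than a silent guess.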
This adds experimental support for automatic firewalling. Initially, we
don't know the other hosts (until they are exported and collected), so
we start with a blank firewall that is open to all hosts. After hosts
start checking in, we only allow specific host IPs. For the volume
building, we can't predict (AFAICT) which ports will be used until after
the volume is started, so we initially allow all ports inbound, until
the fact gets the data from the started volume and narrows the rules to
those specific ports. This naturally takes multiple puppet runs to
complete.
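The staged tightening described above can be sketched as a pure function
of what each run knows so far; the rule shapes and names are invented
for illustration:

```python
# Sketch: firewall rules tighten over successive puppet runs as more
# facts arrive. Data shapes here are hypothetical.
def firewall_rules(known_ips, known_ports):
    """Return rules for the current run, given what's been collected."""
    if not known_ips:
        # First run: nothing exported/collected yet, blank (open) firewall.
        return [{"source": "0.0.0.0/0", "ports": "all"}]
    if not known_ports:
        # Peers known, brick ports not yet: restrict by source IP only.
        return [{"source": ip, "ports": "all"} for ip in known_ips]
    # Volume started and ports reported by the fact: fully restricted.
    return [{"source": ip, "ports": known_ports} for ip in known_ips]

print(firewall_rules([], []))                       # run 1
print(firewall_rules(["10.0.0.1"], []))             # run 2
print(firewall_rules(["10.0.0.1"], [24009, 24010])) # run 3
```

Each run recomputes rules from the facts it has, which is why convergence
is automatic but takes more than one run.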
If you would like to be ultra lazy and not specify any UUIDs manually,
the puppet module can now generate them on your behalf. This takes at
least two puppet runs because of the distributed nature of gluster, and
because the UUID facts must be exported to all the nodes for peering.
Please note that if you rebuild a node from scratch, you probably won't
get the same UUID. You can either set it manually, or paste one into the
/var/lib/puppet/tmp/gluster/uuid/uuid file. Watch the formatting!
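The read-or-generate behaviour described above looks roughly like this
sketch; the helper name is mine, and a temporary path stands in for the
real one so the example is self-contained:

```python
# Sketch: return the persisted host UUID, generating and saving one on
# the first run so later runs (and the exported fact) stay stable.
import os
import tempfile
import uuid

def host_uuid(path):
    """Read the UUID stored at path, creating it if absent."""
    if os.path.exists(path):
        with open(path) as f:
            return f.read().strip()
    value = str(uuid.uuid4())
    with open(path, "w") as f:
        f.write(value + "\n")  # trailing newline: watch the formatting!
    return value

# Demonstrate persistence across "runs" with a temporary path.
path = os.path.join(tempfile.mkdtemp(), "uuid")
first = host_uuid(path)
second = host_uuid(path)
assert first == second
```

Pasting your own value into the file before the first run has the same
effect as setting the UUID manually.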