| Commit message | Author | Age | Files | Lines |
|
If you choose your IP addresses manually, this won't affect you. If
you're automatically deploying Puppet-Gluster with Vagrant, this will
probably be the missing piece that makes your build more automatic.
|
This adds VRRP integration to puppet-gluster. All you need to do is
set vrrp => true and a vip, and the rest should happen automatically.
The shared keepalived password is built by a distributed password
selection algorithm that I made up. Feel free to review it if you'd
like. It's probably as secure as your puppet server and clients are. If
you'd prefer to specify each token manually, you can do so with the
gluster::host password argument, or you can set one global vrrp
password in the gluster::server or gluster::simple classes. There's a
chance that you'll see a bit of VRRP flip-flop when you add or remove
hosts, because the distributed password will change. The benefit is that
by default you don't need to set or manage any of those passwords!
This doesn't yet add the firewall rules needed so that the VIP can be
used by clients.
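For example, a minimal sketch of the usage described above, with
gluster::simple; the VIP address is a hypothetical placeholder, not a
default:

```puppet
# Minimal VRRP sketch (illustrative values only):
class { '::gluster::simple':
    vrrp => true,
    vip  => '192.168.1.250',    # hypothetical virtual IP for the cluster
    # password => 'sekrit',     # optional: one global vrrp password instead
}
```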
|
This patch adds preliminary FSM (finite state machine) support. This
will be used and abused more extensively in later patches. Automatic
brick ordering is an advanced feature and is meant for experienced
puppet users. Changing the available bricks before the cluster is built
is not currently supported. For that type of magic, please wait for
gluster::elastic.
This feature expects that you name your bricks and hosts intelligently.
Future patches will recommend a specific nomenclature, but for now, as
long as the brick paths and hostnames follow a padded, incrementing
integer pattern with a common prefix, you shouldn't see any problems.
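For illustration, a brick layout following such a pattern might look
like this (the hostnames and paths are hypothetical examples, not a
recommended nomenclature):

```puppet
# Hypothetical hostnames and brick paths: a common prefix plus a
# padded, incrementing integer, on both the host and the path:
$bricks = [
    'annex01.example.com:/data/gluster/brick01',
    'annex01.example.com:/data/gluster/brick02',
    'annex02.example.com:/data/gluster/brick01',
    'annex02.example.com:/data/gluster/brick02',
]
```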
|
While the module can still be used in a simple way:
* It is pretty complicated at this point. It does some advanced stuff.
* I wanted to avoid confusion with gluster::simple which is coming soon.
|
Puppet-gluster now correctly picks the operating-version value from a
table of known version -> value correspondences. Future values should
be added to this table as new GlusterFS releases appear.
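Such a table would have roughly this shape; the mapping values below
are illustrative only, so consult the module source for the real
correspondences:

```puppet
# Illustrative sketch of a version -> operating-version lookup table
# (values are examples, not authoritative):
$operating_versions = {
    '3.4' => '2',
    '3.5' => '3',
}
$operating_version = $operating_versions[$gluster_version]
```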
|
This adds experimental support for automatic firewalling. Initially, we
don't know the other hosts (until they are exported and collected), so
we start with a blank firewall that is open to all hosts. Once hosts
start checking in, we only allow the specific host IPs. For the volume
building, we can't predict (AFAICT) which ports will be used until
after the volume is started, so we initially allow all ports inbound,
until the fact gets the data from the started volume and we narrow the
rules down to those specific ports. This naturally takes multiple
puppet runs to complete.
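A rough sketch of that staged logic, using the puppetlabs-firewall
resource type; the $gluster_ports fact name and $peer_ip variable are
hypothetical stand-ins for what the module actually uses:

```puppet
# Until the volume has started, the port list fact is empty, so we
# leave all inbound tcp ports open to the known peer; afterwards we
# narrow down to the specific brick ports.
if $gluster_ports != '' {
    firewall { "100 glusterfs ports from ${peer_ip}":
        proto  => 'tcp',
        dport  => split($gluster_ports, ','),
        source => $peer_ip,
        action => 'accept',
    }
} else {
    firewall { "100 all ports from ${peer_ip}":
        proto  => 'tcp',
        source => $peer_ip,
        action => 'accept',
    }
}
```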
|
If you would like to be ultra lazy and not specify any UUIDs manually,
the puppet module can now generate them on your behalf. This will take
at least two puppet runs, because of the distributed nature of gluster,
and because the uuid facts must be exported to all the nodes for
peering.
Please note that if you rebuild a node from scratch, you probably won't
get the same UUID. You can either set it manually, or paste one into
the /var/lib/puppet/tmp/gluster/uuid/uuid file. Watch the formatting!
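If you do want to pin a UUID manually, a sketch of the gluster::host
argument approach looks like this; the hostname and UUID are made-up
example values:

```puppet
# Hypothetical host entry with a manually pinned UUID:
gluster::host { 'annex1.example.com':
    uuid => 'd1e6b6ac-5cb9-4d21-b2c9-5a6d1b2f3c4d',    # example value
}
```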