Commit message log

This should really be switched based on the running gluster version. A
table of values and gluster versions is needed. The gluster version
could be provided as a fact.
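A minimal sketch of what such a version-based lookup could look like. Everything here is illustrative: the version numbers and option values are placeholders, since the real table of values per gluster version still needs to be researched.

```python
# Hypothetical version table: pick a default for an option based on the
# running gluster version. All versions and values below are made up.

def version_key(v):
    """Turn a version string like '3.4.1' into a comparable tuple (3, 4, 1)."""
    return tuple(int(x) for x in v.split("."))

def pick_default(gluster_version, table):
    """Return the value of the highest table entry <= gluster_version."""
    best = None
    for ver, val in sorted(table.items(), key=lambda kv: version_key(kv[0])):
        if version_key(ver) <= version_key(gluster_version):
            best = val
    return best

# Placeholder mapping; a real table would come from the gluster release notes.
TABLE = {"3.2": "legacy-default", "3.4": "modern-default"}
```

With this shape, the fact only has to report the version string, and the module picks the matching value.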
|
This adds experimental support for automatic firewalling. Initially, we
don't know the other hosts (until they are exported and collected), so
we start with a blank (open) firewall towards all hosts. After hosts
start checking in, we start only allowing specific host IPs. For the
volume building, we can't predict (AFAICT) which ports will be used
until after the volume is started, so we initially allow all ports
inbound, until the fact gets the data from the started volume and uses
those specific ports. This naturally takes multiple puppet runs to
complete.
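The port-discovery step described above can be sketched roughly as follows. The XML layout is an assumption modelled on `gluster volume status <vol> --xml` output (not a guaranteed schema), and the hostnames are invented:

```python
import xml.etree.ElementTree as ET

# Invented sample of volume status XML; element names are assumptions.
SAMPLE = """<cliOutput>
  <volStatus><volumes><volume>
    <node><hostname>annex1.example.com</hostname><port>49152</port></node>
    <node><hostname>annex2.example.com</hostname><port>49153</port></node>
  </volume></volumes></volStatus>
</cliOutput>"""

def brick_ports(xml_text):
    """Collect the brick ports that a firewall fact would then open."""
    root = ET.fromstring(xml_text)
    return sorted({int(n.findtext("port")) for n in root.iter("node")})
```

Until this data exists (i.e. before the volume is started), the fact has nothing to report, which is why the firewall stays wide open for the first runs.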
|
This patch adds the beginning of better property management. Many
properties still need their types and testing filled in to work
properly. This is preliminary support to make it easier for others to
test, and to offer patches for the options they use.
|
Appropriate firewalling support is a hard thing in gluster if you take
into account all the bootstrapping problems of what needs to be open
before subsequent things can work. Hopefully this patch is a good first
step towards finally doing the right things.
|
Gluster creates this folder on volume creation. Whether this is new in
version 3.4 or not is unknown. If any of the hosts have this folder
prior to volume creation, then volume creation will fail.
It is unclear whether this is needed when normal bricks on separate file
systems are being used. If someone wants to donate some hardware, I'm
happy to test this.
|
At the moment, this is redundant and not needed.
|
If you would like to be ultra lazy and not specify any UUIDs manually,
the puppet module can now generate them on your behalf. This will take
at least two puppet runs, because of the distributed nature of gluster,
and because the uuid facts must be exported to all the nodes for
peering.
Please note that if you rebuild a node from scratch, you probably won't
get the same UUID. You can either set it manually, or paste one into the
/var/lib/puppet/tmp/gluster/uuid/uuid file. Watch the formatting!
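The generate-or-reuse logic can be sketched like this. The real implementation is a facter fact; this standalone Python version only illustrates the idea, with the file path taken from the message above:

```python
import os
import uuid

def get_or_create_uuid(path):
    """Reuse the stored uuid if the file exists; otherwise generate one.

    An existing file always wins, so pasting a known uuid into place
    (e.g. after rebuilding a node) lets it keep its old identity.
    """
    if os.path.exists(path):
        with open(path) as f:
            return f.read().strip()  # the file must hold one bare uuid
    os.makedirs(os.path.dirname(path), exist_ok=True)
    new = str(uuid.uuid4())
    with open(path, "w") as f:
        f.write(new + "\n")  # watch the formatting: uuid + newline, nothing else
    return new
```

Run once to generate, run again to get the same value back; on subsequent puppet runs the exported fact is therefore stable.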
|
This avoids the constant flip-flops you'll see during puppet runs.
I think glusterd might be setting them correctly, but puppet kept
changing them back to the default. All fixed now :)
|
The added example should make this obvious. Heed the warning when using
this feature. I find it most useful for rapid prototyping with VMs.
|
This adds proper (optional) ping checks with fping, and gluster peer
status checks, to ensure the peers are available before a volume create
command. This required rewriting the xml.py hack which helps puppet
interface with the XML-formatted gluster CLI output. In addition,
downstream commands such as volume::property gained checks to ensure the
volume is present beforehand. While it is not obvious, it should be
noted that because of the distributed nature of glusterfs, more than one
puppet run will be required for complete deployment. With these patches,
individual runs shouldn't ever end in a temporary error as they used to.
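The peer status check can be sketched as below. The element names are assumptions modelled on `gluster peer status --xml` output (not a guaranteed schema), and the hostnames are invented:

```python
import xml.etree.ElementTree as ET

# Invented sample of peer status XML; element names are assumptions.
SAMPLE = """<cliOutput>
  <peerStatus>
    <peer><hostname>annex2.example.com</hostname><connected>1</connected></peer>
    <peer><hostname>annex3.example.com</hostname><connected>0</connected></peer>
  </peerStatus>
</cliOutput>"""

def peers_connected(xml_text, expected):
    """True only if `expected` peers exist and every one reports connected."""
    root = ET.fromstring(xml_text)
    peers = list(root.iter("peer"))
    return (len(peers) == expected and
            all(p.findtext("connected") == "1" for p in peers))
```

Gating the volume create command on a check like this is what lets a single puppet run bail out cleanly instead of failing part-way through.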
|
This patch adds experimental gluster::wrapper support. This should
eventually be the best way to use puppet-gluster. Unfortunately, this
has been largely untested because it requires newer ruby and puppet
features that are not yet available in CentOS 6.x. Please test and
enjoy.
|
bodepd!