Commit message log
|
Not sure how this bug ever happened. Did the format of the xml ever
change? In any case, now it's squashed.
|
Avoids any chance of a race due to modifying the file in place.
|
* This will make magic things happen faster.
* This doesn't give you an option to disable it.
* This doesn't let you set the timeout.
* This isn't necessarily complete. There might be more notifies needed.
|
Introduced a split between repo management and version choosing.
You can now:
* Choose a package version or leave it at the default (latest).
  If you choose a package version, it must include the release string,
  e.g. in foobar-3.2.1-42.el6 the release is 42.el6.
  This doesn't check whether your version is valid!
* Choose whether you want a gluster repo added automatically.
  If you did specify a version, it will pick the correct repo.
  This doesn't check that the repo for your OS/version exists!
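As an aside, splitting such a version-plus-release string works like an RPM name-version-release (NVR). A minimal Python sketch (hypothetical helper, not part of the module):

```python
def split_nvr(nvr):
    """Split an RPM-style name-version-release string.

    Assumes the last two dash-separated fields are the version and the
    release, e.g. 'foobar-3.2.1-42.el6' -> ('foobar', '3.2.1', '42.el6').
    No validity checking is done, matching the behaviour described above.
    """
    name, version, release = nvr.rsplit("-", 2)
    return name, version, release
```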
|
This prevents an unnecessary template change on puppet runs.
|
This should help users figure out they have a DNS problem sooner.
|
Support for other operating systems will have to come later, even if
that requires refactoring this code. For now, CentOS/RHEL are automatic.
|
GitHub's markdown parser apparently can't figure out comments correctly.
|
Run 'make docs' to generate an up-to-date .pdf of the documentation.
Ironically, one reason I first started writing Puppet code was so that
I wouldn't have to write as much documentation anymore.
|
If a user decides to manually set a host UUID, then save that UUID so
that subsequent removal of manual UUID settings won't cause the UUID to
change. This is useful if you want to start using automatic UUIDs when
you weren't previously.
|
This patch adds preliminary FSM support. This will be used and abused
more extensively in later patches. Automatic brick ordering is an
advanced feature and is meant for experienced puppet users. Changing the
available bricks before the cluster is built is not currently supported.
For that type of magic, please wait for gluster::elastic.
This feature expects that you name your bricks and hosts intelligently.
Future patches will recommend a specific nomenclature, but for now as
long as the brick paths and hostnames follow a padded, incrementing
integer pattern, with a common prefix, you shouldn't see any problems.
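The naming expectation can be made concrete with a small sketch: sort names by their trailing integer so that, say, annex10 orders after annex2 rather than after annex1 (illustrative code only, not the module's actual FSM logic):

```python
import re

def numeric_suffix_key(name):
    """Sort key for names such as 'annex2.example.com' or '/data/brick07':
    split out the last run of digits and compare it numerically."""
    m = re.search(r"^(.*?)(\d+)(\D*)$", name)
    if not m:
        return (name, -1)  # no digits: sort as plain text
    prefix, num, rest = m.groups()
    return (prefix + rest, int(num))

hosts = ["annex10.example.com", "annex2.example.com", "annex1.example.com"]
ordered = sorted(hosts, key=numeric_suffix_key)
```

With zero-padded names (brick01 ... brick10), plain string sorting already agrees with this; the numeric key just makes the unpadded case safe too.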
|
While the module can still be used in a simple way:
* It is pretty complicated at this point. It does some advanced stuff.
* I wanted to avoid confusion with gluster::simple which is coming soon.
|
This lets us specify the VIP in ::server, and inherit it in all volumes.
|
Puppet-gluster now correctly picks the operating-version value from a
table of known version -> value correspondences. Future value additions
should be added to this table.
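Conceptually the table is just a mapping. A sketch in Python with placeholder values (consult the gluster release notes for the real operating-version numbers, and the module's own table for what it actually ships):

```python
# Placeholder version -> operating-version table; values are illustrative
# only, not authoritative.
OP_VERSION = {
    "3.4": 2,
    "3.5": 3,
}

def op_version_for(gluster_version):
    """Look up by the major.minor prefix; unknown versions return None so
    the caller can decide how to handle them."""
    major_minor = ".".join(gluster_version.split(".")[:2])
    return OP_VERSION.get(major_minor)
```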
|
This is usually only relevant right after initial peering if we're doing
a hot (clean) puppet build. This probably happens because puppet is
eager to run things as soon as possible, and after glusterd is
restarted, it takes a short moment for glusterd to see the other nodes
as fully peered. Ideally puppet should really only check the $onlyif
conditions *right* before it runs the command. I think that it might be
cheating here and running it in parallel slightly before... Who knows.
|
This moves the command into a separate file. This also adds temporary
saving of stdout and stderr to /tmp for easy debugging of command
output.
|
It seems the upward propagation isn't 100% reliable.
|
This should really be switched based on the operating gluster version. A
table of values and gluster versions is needed. The gluster version
could be created as a fact.
|
This adds experimental support for automatic firewalling. Initially, we
don't know the other hosts (until they are exported and collected), so we
start with a blank firewall open to all hosts. After hosts start checking
in, we start allowing only specific host IPs. For the volume building, we
can't predict (AFAICT) which ports will be used until after the volume
is started, so we initially allow all ports inbound, until the fact gets
the data from the started volume and uses those specific ports. This
naturally takes multiple puppet runs to complete.
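The port half of that convergence can be pictured as a pure function from the fact's collected data to firewall input (hypothetical shape; the module's real fact and firewall glue differ):

```python
def allowed_ports(brick_ports):
    """Given the brick -> port mapping a fact collects from a started
    volume, return the sorted TCP ports to allow. An empty mapping means
    the volume isn't started yet, so return None to signal the caller to
    keep all ports open for now."""
    if not brick_ports:
        return None  # no data yet: stay wide open until the fact reports
    return sorted(set(brick_ports.values()))
```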
|
This patch adds the beginning of better property management. Many
properties need types and testing filled in to work properly. This is
preliminary support to make it easier for others to test and offer
patches for options they use.
|
Appropriate firewalling support is a hard thing in gluster if you take
into account all the bootstrapping problems of what needs to be open
before subsequent things can work. Hopefully this patch is a good first
step in finally doing the right things.
|
Gluster creates this folder on volume creation. Whether this is new in
version 3.4 or not is unknown. If any of the hosts have this folder
prior to volume creation, then volume creation will fail.
It is not clear whether this is needed when normal bricks on separate file
systems are being used. If someone wants to donate some hardware, I'm
happy to test this.
|
At the moment, this is redundant, and not needed.
|
If you would like to be ultra lazy and not specify any UUIDs manually,
the puppet module can now generate them on your behalf. This will take
at least two puppet runs because of the distributed nature of gluster
and because the uuid facts must be exported to all the nodes for
peering.
Please note that if you rebuild a node from scratch, you probably won't
get the same UUID. You can either set it manually, or paste one in the
/var/lib/puppet/tmp/gluster/uuid/uuid file. Watch the formatting!
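The generate-once, reuse-forever behaviour can be sketched like this (hypothetical helper; the module keeps its copy at /var/lib/puppet/tmp/gluster/uuid/uuid):

```python
import os
import uuid

def get_or_create_uuid(path):
    """Return the UUID stored at path; if the file doesn't exist yet,
    generate one, save it, and return it, so later runs stay stable."""
    if os.path.exists(path):
        with open(path) as f:
            return f.read().strip()
    value = str(uuid.uuid4())
    with open(path, "w") as f:
        f.write(value + "\n")  # one line plus newline: watch the formatting!
    return value
```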
|
This avoids the constant flip flops you'll see during puppet runs.
I think glusterd might be setting them correctly, but puppet kept
changing them back to the default. All fixed now :)
|
The added example should make this obvious. Heed the warning in using
this feature. I find it most useful for rapid prototyping using VMs.
|
This adds proper (optional) ping checks with fping and gluster peer
status checks to ensure the peers are available before a volume create
command. This required rewriting the xml.py hack which helps puppet
interface with the xml formatted gluster cli output. In addition,
downstream commands such as volume::property gained checks to ensure the
volume was present beforehand. While it is not obvious, it should be
noted that because of the distributed nature of glusterfs, more than one
puppet run will be required for complete deployment. With these patches,
individual runs shouldn't ever end in temporary error as they used to.
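The peer check works against gluster's --xml output. A minimal sketch of the kind of parsing xml.py does (the sample document below is illustrative, not verbatim gluster output):

```python
import xml.etree.ElementTree as ET

def connected_peers(xml_text):
    """Return hostnames of peers whose <connected> flag is '1'."""
    root = ET.fromstring(xml_text)
    return [
        peer.findtext("hostname")
        for peer in root.iter("peer")
        if peer.findtext("connected") == "1"
    ]

# Hypothetical sample shaped like 'gluster peer status --xml' output.
SAMPLE = """<cliOutput><peerStatus>
  <peer><hostname>annex2</hostname><connected>1</connected></peer>
  <peer><hostname>annex3</hostname><connected>0</connected></peer>
</peerStatus></cliOutput>"""
```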
|
At the moment, the NFS and IPA portions are not HA.