
Doesn't exist on Ubuntu systems; see: http://askubuntu.com/questions/138972/what-is-the-equivalent-user-for-nobodynobody-from-centos

This is needed in order to run the "onlyif" option properly.

Workaround for: "Stopping volume make its data inaccessible. Do you want to continue? (y/n)"
See https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Installation_Guide/ch08.html
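The usual workaround is to pipe the answer into the CLI (e.g. `echo y | gluster volume stop <name>`; gluster also has a `--mode=script` flag for non-interactive use). A minimal Python sketch of the idea, using a tiny shell read/echo as a stand-in for the gluster CLI so it runs anywhere:

```python
# Sketch: feed "y" to a command that asks a y/n question on stdin.
# The shell snippet below is a stand-in for the real gluster CLI, which
# would be something like:  echo y | gluster volume stop <name>
import subprocess

stand_in = 'read -r ans; [ "$ans" = "y" ] && echo stopped || echo aborted'
result = subprocess.run(["sh", "-c", stand_in],
                        input="y\n", capture_output=True, text=True)
print(result.stdout.strip())  # -> stopped
```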

This patch also generalizes the service name, so that other operating
systems can also benefit from this patch by dropping in a YAML file.

This patch includes program paths.

This patch includes package names.

Although if you remove all the features, it's not as awesome anymore :)

Small bug due to a lesser-used code path, now squashed!

* Don't use this feature unless you _really_ know what you're doing.
* Managing chained volumes is much harder than managing normal ones.
* If some of the volumes in the cluster use this, and others don't, then
you'll probably have an even crazier time with management.
* Please verify my algorithm and feel free to suggest changes.
* Some edge cases haven't been tested.
* This patch breaks out brick layout ordering into individual functions.

This adds support for setting volume set groups, which are groups of
properties that are set all at once on a volume. This is managed in a
clever way, so that if the definition of what a certain group contains
gets updated by the package manager, your volumes will get updated too
on the next puppet run.
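The idea boils down to a diff between a group's desired properties and the volume's current options: whatever differs is what the next puppet run still has to set. A sketch, with illustrative group names and values (not the module's actual data files):

```python
def pending_settings(group_props, current_opts):
    """Return the group properties that are missing or different on the
    volume -- i.e. what still needs a 'volume set' on the next run."""
    return {k: v for k, v in group_props.items() if current_opts.get(k) != v}

# Hypothetical 'virt' group definition and current volume state:
virt = {"quick-read": "off", "stat-prefetch": "off", "eager-lock": "enable"}
current = {"quick-read": "off", "stat-prefetch": "on"}
print(pending_settings(virt, current))
# -> {'stat-prefetch': 'off', 'eager-lock': 'enable'}
```

If the package manager updates the group definition, the diff changes, and the next run converges the volume automatically.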

* Rename gluster::client to gluster::mount
* Add support to gluster::mount
* Add client machines and mounts to the vagrant setup
* Fix the version interface for gluster::mount and gluster::server
* Improve firewall support for gluster::mount
* Update examples to use gluster::mount instead of gluster::client
* Update documentation
* Other small fixes

Puppet-Gluster, now with Vagrant! - Initial release. Happy hacking!

* If the VIP comes and goes, the create script gets added/deleted too.
* If the VIP isn't present on the first run (it usually isn't), then the
Exec['again'] won't run, because it's inside the VIP check.
* This also consolidates the two identical conditional blocks.

I've updated wrapper.pp too, but I haven't tested it recently.

Avoids any chance of a race due to modifying the file in place.
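The standard way to avoid that race is write-then-rename: write the new content to a temporary file in the same directory, then rename it over the target. A minimal sketch of the pattern (a generic helper, not the module's own code):

```python
import os
import tempfile

def atomic_write(path, data):
    """Write to a temp file in the target directory, then rename it over
    the destination. rename/replace is atomic on POSIX, so readers see
    either the complete old file or the complete new one -- never a
    half-written mix."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        os.replace(tmp, path)
    except Exception:
        os.unlink(tmp)
        raise
```

The temp file must live on the same filesystem as the target, which is why it is created in the destination directory rather than in /tmp.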

* This will make magic things happen faster.
* This doesn't give you an option to disable it.
* This doesn't let you set the timeout.
* This isn't necessarily complete. There might be more notifies needed.

This patch adds preliminary FSM support. This will be used and abused
more extensively in later patches. Automatic brick ordering is an
advanced feature and is meant for experienced puppet users. Changing the
available bricks before the cluster is built is not currently supported.
For that type of magic, please wait for gluster::elastic.
This feature expects that you name your bricks and hosts intelligently.
Future patches will recommend a specific nomenclature, but for now, as
long as the brick paths and hostnames follow a padded, incrementing
integer pattern with a common prefix, you shouldn't see any problems.
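As a sketch of what "padded, incrementing integer pattern with a common prefix" means, here is a hypothetical validity check (not the module's actual parser):

```python
import re

def follows_pattern(names):
    """Check that names share a common prefix and end in a zero-padded,
    consecutively incrementing integer -- the naming this feature expects,
    e.g. /data/brick01, /data/brick02, /data/brick03."""
    matches = [re.fullmatch(r"(.*?)(\d+)", n) for n in names]
    if not all(matches):
        return False  # some name has no trailing integer
    prefixes = {m.group(1) for m in matches}
    digits = [m.group(2) for m in matches]
    widths = {len(d) for d in digits}
    nums = [int(d) for d in digits]
    return (len(prefixes) == 1 and len(widths) == 1
            and nums == list(range(nums[0], nums[0] + len(nums))))

print(follows_pattern(["/data/brick01", "/data/brick02", "/data/brick03"]))
# -> True
print(follows_pattern(["/data/brick1", "/data/brickA"]))
# -> False
```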

While the module can still be used in a simple way:
* It is pretty complicated at this point. It does some advanced stuff.
* I wanted to avoid confusion with gluster::simple, which is coming soon.

This lets us specify the VIP in ::server, and inherit it in all volumes.

This is usually only relevant right after initial peering, if we're doing
a hot (clean) puppet build. This probably happens because puppet is
eager to run things as soon as possible, and after glusterd is
restarted, it takes a short moment for glusterd to see the other nodes
as fully peered. Ideally, puppet should only check the $onlyif
conditions *right* before it runs the command. I think it might be
cheating here and running them in parallel slightly beforehand... Who knows.
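In effect, what's needed is to re-evaluate the readiness condition just before acting, and retry briefly while glusterd catches up. A hypothetical sketch of that guard (the stand-in check simulates "are all peers fully connected?"):

```python
import time

def wait_until(check, timeout=30.0, interval=2.0):
    """Re-evaluate `check` right before acting, retrying until it passes
    or the timeout expires. Returns True as soon as check() does."""
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# Stand-in for the peering check: succeeds on the third poll.
polls = iter([False, False, True])
print(wait_until(lambda: next(polls), timeout=1.0, interval=0.0))  # -> True
```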

This moves the command into a separate file. It also temporarily saves
stdout and stderr to /tmp, for easy debugging of the command output.

This adds experimental support for automatic firewalling. Initially, we
don't know the other hosts (until they are exported and collected), so we
start with a blank firewall that is open to all hosts. After hosts start
checking in, we start allowing only the specific host IPs. For the volume
building, we can't predict (AFAICT) which ports will be used until after
the volume is started, so we initially allow all ports inbound, until the
fact gets the data from the started volume and uses those specific ports.
This naturally takes multiple puppet runs to complete.
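The progressive tightening across runs can be sketched as a pure function of what is known so far. Rule shapes and addresses below are illustrative, not the module's actual firewall resources:

```python
def firewall_rules(known_hosts, volume_ports):
    """Run 1: nothing collected yet -> stay open to everyone.
    Later runs: restrict sources to the collected hosts, and once the
    started volume's ports are known (via the fact), restrict ports too."""
    if not known_hosts:
        return [("allow", "0.0.0.0/0", "any")]
    ports = volume_ports or ["any"]
    return [("allow", host, port) for host in known_hosts for port in ports]

print(firewall_rules([], []))                 # first run: open to all
print(firewall_rules(["10.0.0.2"], []))       # host known, ports not yet
print(firewall_rules(["10.0.0.2"], [49152]))  # host and volume port known
```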

Gluster creates this folder on volume creation. Whether this is new in
version 3.4 or not is unknown. If any of the hosts have this folder
prior to volume creation, then volume creation will fail.
It is unclear whether this is needed when normal bricks on separate file
systems are being used. If someone wants to donate some hardware, I'm
happy to test this.

This adds proper (optional) ping checks with fping, and gluster peer
status checks, to ensure the peers are available before a volume create
command. This required rewriting the xml.py hack which helps puppet
interface with the XML-formatted gluster cli output. In addition,
downstream commands such as volume::property gained checks to ensure the
volume was present beforehand. While it is not obvious, it should be
noted that because of the distributed nature of glusterfs, more than one
puppet run will be required for complete deployment. With these patches,
individual runs shouldn't ever end in temporary error as they used to.
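At its core, the xml.py hack parses the CLI's --xml output. A trimmed sketch against sample data shaped like `gluster peer status --xml` (the sample is illustrative, not captured from a real cluster):

```python
import xml.etree.ElementTree as ET

# Sample data shaped like `gluster peer status --xml`, trimmed to the
# fields we care about (hostnames are made up):
SAMPLE = """<cliOutput><peerStatus>
  <peer><hostname>annex2.example.com</hostname><connected>1</connected></peer>
  <peer><hostname>annex3.example.com</hostname><connected>0</connected></peer>
</peerStatus></cliOutput>"""

def connected_peers(xml_text):
    """Return hostnames of peers the CLI reports as connected."""
    root = ET.fromstring(xml_text)
    return [peer.findtext("hostname")
            for peer in root.iter("peer")
            if peer.findtext("connected") == "1"]

print(connected_peers(SAMPLE))  # -> ['annex2.example.com']
```

A guard like this lets puppet skip the volume create command until every expected peer shows up as connected, rather than failing mid-run.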

bodepd!