The start command does not restart the tier daemon if the daemon
is already running on any one node. Hence, to bring up tierd on the nodes
where the daemon is down, the force option is implemented:
it skips the check for whether tierd is already running.
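The invocation is presumably of the form below (syntax assumed from the tier CLI introduced in this series, not quoted from this patch):
gluster volume tier <VOLNAME> start force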
Change-Id: I0037d3e5ecfe56637d0da201a97903c435d26436
BUG: 1292112
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12983
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
For detach tier, the validation was done using the string "detach-tier",
but the new commands use the string "tier". Comparing against "tier"
alone creates a problem, since both tier status and tier detach contain
the keyword "tier". So tier detach and tier status were separated,
and strtok was used to prevent the check from passing when the
volume name merely contains "tier" as a substring (only the second word of
the command string is extracted and checked to see whether the feature is tier).
Problem: the new detach tier command does not print warnings like
"not a tier volume" or "detach tier not started" respectively;
instead it prints empty output.
Fix: during validation the volume is checked to see whether it is a tiered
volume; if so, it is checked whether detach tier has been started, and
otherwise the appropriate warning is printed.
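An illustrative, self-contained sketch of the second-word check described above (standalone C, not the glusterd code; the command strings and the helper name are made up for the example):

#include <stdio.h>
#include <string.h>

/* Return 1 only when the second word of the command string is exactly
 * "tier", so a volume name that merely contains "tier" does not match. */
static int
is_tier_command (char *cmdstr)
{
        char *saveptr = NULL;
        char *word = NULL;

        word = strtok_r (cmdstr, " ", &saveptr);  /* first word, e.g. "volume" */
        word = strtok_r (NULL, " ", &saveptr);    /* second word */

        return (word != NULL && strcmp (word, "tier") == 0);
}

int
main (void)
{
        char cmd1[] = "volume tier mytiervol status";
        char cmd2[] = "volume status my-tier-data";

        printf ("%d %d\n", is_tier_command (cmd1), is_tier_command (cmd2));
        /* prints "1 0": only the real tier command matches */
        return 0;
}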
Change-Id: I94246d53b18ab0e9406beaf459eaddb7c5b766c2
BUG: 1288517
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12883
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
When the user executes the bitrot scrub status command, gluster
does not give correct values for Number of Scrubbed files,
Number of Unsigned files, Last completed scrub time, and
Duration of last scrub.
With this patch scrub status will give correct values for
all of the above fields.
Change-Id: Ic966f76d22db5b0c889e6386a1c2219afbda1f49
BUG: 1285989
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: http://review.gluster.org/12776
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Problem:
As of now quota 'list/list-objects' will list the usage only if a limit is
set on a directory; otherwise it fails with ENOATTR (if inode/inode-quota
has already been configured for the first time).
Feature:
With this patch the command is enhanced to list the usage even if a quota
limit is not set, though the user still has to configure inode/inode-quota
for the first time.
Example:
Consider we have /client/dir and /client1 (absolute paths from the mount point):
a quota limit is set only on /client. When we try listing /client/dir or /client1,
it shows "Limit not set".
Fix:
The patch fixes this by showing "used space" in the case of the list command and
"file_count" & "dir_count" in the case of the list-objects command. This works
fine with XML output as well.
Change-Id: I68b08ec77a583b3c7f39fe4d6b15d3d77adb095a
BUG: 1284752
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/12741
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Change-Id: I8a8e27b4d6c35ea5e57bd0b556fd2c6ab7b496ab
BUG: 1285968
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/12771
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Saravanakumar Arumugam <sarumuga@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
If the volume was not a tiered volume, an empty status was printed
instead of an error message.
Change-Id: I13ccb16e1562966976a48d9365ced4c8a124de59
BUG: 1284357
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12713
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Enhances the cli output for arbiter volumes as requested in the BZ.
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Change-Id: I28cc34d7d19def043d54291cede25a58dbcc5051
BUG: 1285288
Reviewed-on: http://review.gluster.org/12747
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Currently the scrub status command does not display the list of all the bad
files. All the bad files are available in the bitd daemon.
With this patch the scrub status command will display the list of all the
bad files.
Change-Id: If09babafaf5d7cf158fa79119abbf5b986027748
BUG: 1207627
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/12720
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
The CLI command for bitrot scrub status will be:
gluster volume bitrot <volname> scrub status
The above command shows the statistics of the bitrot scrubber.
Upon execution it shows some common scrubber tunable values of
volume <VOLNAME>, followed by the scrubber statistics of the
individual nodes.
Sample output for a single node:
Volume name : <VOLNAME>
State of scrub: Active
Scrub frequency: biweekly
Bitrot error log location: /var/log/glusterfs/bitd.log
Scrubber error log location: /var/log/glusterfs/scrub.log
=========================================================
Node name:
Number of Scrubbed files:
Number of Unsigned files:
Last completed scrub time:
Duration of last scrub:
Error count:
=========================================================
This is just the infrastructure. The list of bad files, last scrub
time, and error count values will be taken care of by the
http://review.gluster.org/#/c/12503/ and
http://review.gluster.org/#/c/12654/ patches.
Change-Id: I3ed3c7057c9d0c894233f4079a7f185d90c202d1
BUG: 1207627
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10231
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Change-Id: I15a1a637090f1cc2f200d5c3582317e4aa3cf334
BUG: 1278927
Signed-off-by: Mohamed Ashiq <mliyazud@redhat.com>
Reviewed-on: http://review.gluster.org/12532
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
Problem:
1) Glusterd doesn't persist the arbiter information of a replica volume in
its store. When glusterd goes down and comes back up, arbiter volumes
become plain replica volumes.
2) Glusterd doesn't import/export arbiter information to/from the other peers.
3) Volume info doesn't show any arbiter count in the output.
Fix:
1) Persist arbiter information in glusterd-store
2) Import/Export arbiter information of the volume
3) Change volume info output to show arbiter count.
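For fix (3) the practical effect is that volume info can distinguish an arbiter configuration from a plain replica; in current gluster releases this shows up in the brick-count line as, for example, "Number of Bricks: 1 x (2 + 1) = 3" (treat the exact wording added by this particular patch as unverified here).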
Change-Id: I2db81e73d2694b01f7d07b08a17b41ad5a55c361
BUG: 1276675
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: http://review.gluster.org/12475
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
The message shown after attach tier refers to rebalance.
It is changed to refer to tiering instead.
Change-Id: I1834511f86483fa60f404d7defe5be59c025e9d6
BUG: 1277081
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Reviewed-on: http://review.gluster.org/12488
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Dan Lambright <dlambrig@redhat.com>
When quota is disabled, the clean-up process may terminate without
completely cleaning up the quota xattrs. When quota is enabled again,
this can mess up the accounting.
A version number is now suffixed to all quota xattrs, and this version
number is specific to the marker xlator, i.e. when quota xattrs are
requested by quotad/client, marker removes the version suffix from the
key before sending the response.
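A minimal sketch of the versioned-key idea (the base key matches the usual quota size xattr name, but the ".<version>" suffix format shown here is an assumption for illustration, not the marker implementation):

#include <stdio.h>

int
main (void)
{
        char key[256];
        int  version = 2;   /* bumped when quota is re-enabled */

        /* marker-internal key carries the suffix; the suffix is stripped
         * before replying to quotad/clients, as described above */
        snprintf (key, sizeof (key), "trusted.glusterfs.quota.size.%d", version);
        printf ("%s\n", key);
        return 0;
}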
Change-Id: I1ca2c11460645edba0f6b68db70d476d8d26e1eb
BUG: 1272411
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/12386
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Various xlators and other components are invoking system calls
directly instead of using the libglusterfs/syscall.[ch] wrappers.
If the system call wrappers are not used, there should be a comment
in the source explaining why the wrapper isn't used.
Change-Id: I1f47820534c890a00b452fa61f7438eb2b3f667c
BUG: 1267967
Signed-off-by: Kaleb S. KEITHLEY <kkeithle@redhat.com>
Reviewed-on: http://review.gluster.org/12276
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
Currently, when the 'gluster v quota <VOLNAME> list' command is issued
after an rm -rf on /run/gluster/vol/<directory>, the quota output header is
not shown. This is because list_count was properly calculated for
'gluster v quota <VOLNAME> remove /path' but not for an rm -rf. The patch
fixes this issue.
Change-Id: I5266a8b0b9322b7db1b9e1d6b0327065931f4bcb
BUG: 1269375
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/12345
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Change-Id: Ibcbad94c091a9c24fe5aff2d7e8bcd9ac88da7bf
BUG: 1248521
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/12337
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Currently, the 'gluster v quota <VOLNAME> list' command rounds off the
available space and shows it to the user. Now, the 'gluster v quota
<VOLNAME> list --xml' command is modified to show the exact available
space in bytes.
Change-Id: I3772e036a2537c1df12f22cf32dfe4ac7940988f
BUG: 1261404
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/12137
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Currently the tier feature piggybacks on the rebalance command
syntax to obtain status, and this is clumsy. Introduce a new
tier command that can do tier-specific operations, starting
with volume status to display counters.
Old commands:
gluster volume attach-tier <vol> [replica count] {bricklist..}
gluster volume detach-tier <vol> {start|stop|commit}
New commands:
gluster volume tier <vol> attach [replica count] {bricklist} |
detach {start|stop|commit} |
status
Change-Id: Ic07b3c6260588162de7d34380f8cbd3d8a7f35d3
BUG: 1255693
Signed-off-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-on: http://review.gluster.org/11984
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Problem: snapshot delete all command fails with --xml option
Fix: Provided xml support for delete all command
Change-Id: I77cad131473a9160e188c783f442b6a38a37f758
BUG: 1257533
Signed-off-by: Rajesh Joseph <rjoseph@redhat.com>
Reviewed-on: http://review.gluster.org/12027
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
There is a problem in the current CLI framework:
the CLI holds the lock while processing a command.
When processing the quota list command, the below sequence of steps is
executed in the same thread, causing a deadlock:
1) CLI holds the lock
2) An rpc_clnt_submit request is sent to quotad for quota usage
3) If quotad is down, rpc_clnt_submit invokes the cbk function with an error
4) The cbk function cli_quotad_getlimit_cbk tries to take the lock to
broadcast the results and hangs, because the same thread is already
holding the lock
This patch fixes the problem by creating a separate thread for
broadcasting the result.
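A generic, self-contained illustration of why handing the broadcast to a separate thread avoids the self-deadlock (plain pthreads, compile with -lpthread; this shows the pattern, not the actual cli code):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int             done = 0;

static void *
broadcast_result (void *arg)
{
        (void) arg;
        pthread_mutex_lock (&lock);      /* safe: different thread */
        done = 1;
        pthread_cond_broadcast (&cond);
        pthread_mutex_unlock (&lock);
        return NULL;
}

int
main (void)
{
        pthread_t tid;

        pthread_mutex_lock (&lock);      /* the CLI already holds the lock */

        /* the callback path would deadlock if it re-locked on this thread;
         * instead the broadcast is done from a helper thread */
        pthread_create (&tid, NULL, broadcast_result, NULL);

        while (!done)
                pthread_cond_wait (&cond, &lock);  /* releases lock while waiting */

        pthread_mutex_unlock (&lock);
        pthread_join (tid, NULL);
        printf ("result broadcast without deadlock\n");
        return 0;
}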
Change-Id: I53be006eadf6aaf348083d9168535530d70a8ab3
BUG: 1242819
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/11990
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Display the size equivalent to the soft limit percentage
in gluster v quota <volname> list <path> and
gluster v quota <volname> list-objects <path> commands
Change-Id: I31ee82e9e836068348cf9458dcaf13f043d9fd87
BUG: 1248521
Signed-off-by: Manikandan Selvaganesh <mselvaga@redhat.com>
Reviewed-on: http://review.gluster.org/11808
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Vijaikumar Mallikarjuna <vmallika@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Change-Id: I74417471d7d2a86f198037d88dbf7d072c4349c3
BUG: 1218960
Signed-off-by: Sakshi <sabansal@redhat.com>
Reviewed-on: http://review.gluster.org/10475
Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: N Balachandran <nbalacha@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
During the quota-update process, if inode info is present in the size xattr
but missing in the contri xattrs, then in the function '_mq_get_metadata' we
set contri-size to zero (on error -2, which means usage info is present but
inode info is missing).
With this we calculate a wrong delta and update with it.
With this patch we ignore errors if the inode info in the xattrs is missing.
Change-Id: I7940a0e299b8bb425b5b43746b1f13f775c7fb92
BUG: 1241153
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/11583
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
Resource creation for the added node referenced a variable,
new_node, that was never passed. This led to a wrong schema
type in the cib file, and hence the added node always ended
up in a failed state. Also, resources were wrongly created
twice, which led to more errors. I have fixed the variable
name and deleted the repeated invocation of the recreate-resource
function.
The new node has to be added to the existing ganesha-ha config
file for correct behaviour during subsequent add-node operations.
This edited file has to be copied to all the other cluster nodes.
I have added a fix for this as well.
Change-Id: Ie55138e2657d22298d89db1c08f2e17930686bd6
BUG: 1233246
Signed-off-by: Meghana M <mmadhusu@redhat.com>
Reviewed-on: http://review.gluster.org/11316
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: soumya k <skoduri@redhat.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
glusterd crashed when doing "detach-tier commit force" on a
non-tiered volume.
Change-Id: I884771893bb80bec46ae8642c2cfd7e54ab116a6
BUG: 1228112
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/11081
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Reviewed-by: Joseph Fernandes
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Instead of including config.h in each file, have config.h included
from the compiler command line (the -include option).
When a .c file tests for a certain #define and config.h was not
included, incorrect assumptions were made. With this change, that can not
happen again.
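For reference, GCC and Clang treat -include as if the named file were #include'd as the first line of every compiled source file, so a compile command along the lines of "gcc -include config.h -c foo.c" makes the config.h defines visible before any other header; the exact automake wiring added by this change is not reproduced here.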
BUG: 1222319
Change-Id: I4f9097b8740b81ecfe8b218d52ca50361f74cb64
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: http://review.gluster.org/10808
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Problem 1:
volume info shows Cold Bricks instead of Tier type
eg:
Volume Name: patchy2
Type: Tier
Volume ID: 28c25b8d-b8a1-45dc-b4b7-cbd0b344f58f
Status: Started
Number of Bricks: 3
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 1
Brick1: 10.70.1.35:/home/brick43
Cold Bricks:
Cold Tier Type : Distribute
Number of Bricks: 2
Brick2: 10.70.1.35:/home/brick19
Brick3: 10.70.1.35:/home/brick16
Options Reconfigured:
Problem 2: detach-tier sends the enums of rebalance.
detach-tier has its own enum to send with the detach-tier command;
using those enums is more appropriate.
Problem 3:
The hot_brick count is wrongly set during the dictionary copy for the response.
Change-Id: Icc054a999a679456881bc70511470d32ff8a86e4
BUG: 1211264
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/10768
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
>> gluster volume info patchy
Volume Name: patchy
Type: Tier
Volume ID: 8bf1a1ca-6417-484f-821f-18973a7502a8
Status: Created
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: hostname:/home/brick30
Brick2: hostname:/home/brick31
Cold Bricks:
Cold Tier Type : Disperse
Number of Bricks: 1 x (4 + 2) = 6
Brick3: hostname:/home/brick20
Brick4: hostname:/home/brick21
Brick5: hostname:/home/brick23
Brick6: hostname:/home/brick24
Brick7: hostname:/home/brick25
Brick8: hostname:/home/brick26
Change-Id: I7b9025af81263ebecd641b4b6897b20db8b67195
BUG: 1212400
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/10339
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
CLI commands display the brick information without a way
to distinguish the hot tier from the cold tier.
This patch changes all the CLI-related output, without
changing the corresponding XML output.
This patch changes the following things:
>> gluster volume info
Volume Name: patchy
Type: Tier
Volume ID: 7745d367-811a-4fe9-a500-d04e7afa94bf
Status: Created
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Hot Bricks:
Brick1: hostname:/home/brick21
Brick2: hostname:/home/brick20
Cold Bricks:
Brick3: hostname:/home/brick19
Brick4: hostname:/home/brick16
Brick5: hostname:/home/brick17
Brick6: hostname:/home/brick18
>>gluster volume status
Status of volume: patchy
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick hostname:/home/brick21                49152     0          Y       4690
Brick hostname:/home/brick20                49153     0          Y       4707
Cold Bricks:
Brick hostname:/home/brick19                49154     0          Y       4724
Brick hostname:/home/brick16                49155     0          Y       4741
Brick hostname:/home/brick17                49156     0          Y       4758
Brick hostname:/home/brick18                49157     0          Y       4775
NFS Server on localhost                     2049      0          Y       4793

Task Status of Volume patchy
------------------------------------------------------------------------------
There are no active volume tasks
>>gluster volume status pathy detail
Status of volume: patchy
Hot Bricks:
------------------------------------------------------------------------------
Brick : Brick hostname:/home/brick21
TCP Port : 49162
RDMA Port : 0
Online : Y
Pid : 22677
File System : ext4
Device : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
Mount Options : rw,seclabel,relatime,data=ordered
Inode Size : 256
Disk Space Free : 127.3GB
Total Disk Space : 165.4GB
Inode Count : 11026432
Free Inodes : 10998043
------------------------------------------------------------------------------
Brick : Brick hostname:/home/brick20
TCP Port : 49161
RDMA Port : 0
Online : Y
Pid : 22660
File System : ext4
Device : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
Mount Options : rw,seclabel,relatime,data=ordered
Inode Size : 256
Disk Space Free : 127.3GB
Total Disk Space : 165.4GB
Inode Count : 11026432
Free Inodes : 10998043
Cold Bricks:
------------------------------------------------------------------------------
Brick : Brick hostname:/home/brick19
TCP Port : 49157
RDMA Port : 0
Online : Y
Pid : 22501
File System : ext4
Device : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
Mount Options : rw,seclabel,relatime,data=ordered
Inode Size : 256
Disk Space Free : 127.3GB
Total Disk Space : 165.4GB
Inode Count : 11026432
Free Inodes : 10998043
------------------------------------------------------------------------------
Brick : Brick hostname:/home/brick16
TCP Port : 49158
RDMA Port : 0
Online : Y
Pid : 22518
File System : ext4
Device : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
Mount Options : rw,seclabel,relatime,data=ordered
Inode Size : 256
Disk Space Free : 127.3GB
Total Disk Space : 165.4GB
Inode Count : 11026432
Free Inodes : 10998043
------------------------------------------------------------------------------
Brick : Brick hostname:/home/brick17
TCP Port : 49159
RDMA Port : 0
Online : Y
Pid : 22535
File System : ext4
Device : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
Mount Options : rw,seclabel,relatime,data=ordered
Inode Size : 256
Disk Space Free : 127.3GB
Total Disk Space : 165.4GB
Inode Count : 11026432
Free Inodes : 10998043
------------------------------------------------------------------------------
Brick : Brick hostname:/home/brick18
TCP Port : 49160
RDMA Port : 0
Online : Y
Pid : 22552
File System : ext4
Device : /dev/mapper/luks-cd077c56-42ba-44b1-8195-f214b9bc990c
Mount Options : rw,seclabel,relatime,data=ordered
Inode Size : 256
Disk Space Free : 127.3GB
Total Disk Space : 165.4GB
Inode Count : 11026432
Free Inodes : 10998043
Change-Id: I7d584eb8782129c12876cce2ba8ffba6c0a620bd
BUG: 1206546
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/10328
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Fix for handling cli output for attach-tier and
detach-tier
Change-Id: I4d17f4b09612754fe1b8cec6c2e14927029b9678
BUG: 1211562
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/10284
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
This fix adds support to view the number of promoted or demoted
files from the cli. The mechanism is analogous to checking
the status of volumes being rebalanced.
gluster volume rebalance <vol> tier status
Change-Id: I1b11ca27355ceec36c488967c23531202030e205
BUG: 1213063
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Signed-off-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-on: http://review.gluster.org/10292
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
The replace-brick operation with data migration support has been
deprecated from gluster.
With this fix the replace-brick command supports only one form:
gluster volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}
Change-Id: Ib81d49e5d8e7eaa4ccb5830cfec2bc081191b43b
BUG: 1094119
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10101
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
The global option "gluster features.ganesha enable"
writes into the global 'option' file. The snapshot
feature also writes into the same file.
To handle multiple concurrent transactions correctly,
a new lock has to be introduced for this file.
Every operation using this file needs
to contend for the new lock type.
Change-Id: Ia8a324d2a466717b39f2700599edd9f345b939a9
BUG: 1200254
Signed-off-by: Meghana Madhusudhan <mmadhusu@redhat.com>
Reviewed-on: http://review.gluster.org/10130
Reviewed-by: Avra Sengupta <asengupt@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: soumya k <skoduri@redhat.com>
Tested-by: NetBSD Build System
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
After attaching a tier, we have to start the tier rebalance process.
This patch triggers tier start along with attach-tier.
Change-Id: I39380f95123f0087a82213ef263f9f33adcc5adc
BUG: 1214222
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: http://review.gluster.org/10363
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Discussion in gluster-devel
http://www.gluster.org/pipermail/gluster-devel/2015-April/044301.html
MASTER NODE - Master Volume Node
MASTER VOL - Master Volume name
MASTER BRICK - Master Volume Brick
SLAVE USER - Slave User to which Geo-rep session is established
SLAVE - <SLAVE_NODE>::<SLAVE_VOL> used in Geo-rep Create command
SLAVE NODE - Slave Node to which Master worker is connected
STATUS - Worker Status(Created, Initializing, Active, Passive, Faulty,
Paused, Stopped)
CRAWL STATUS - Crawl type(Hybrid Crawl, History Crawl, Changelog Crawl)
LAST_SYNCED - Last Synced Time(Local Time in CLI output and UTC in XML output)
ENTRY - Number of entry Operations pending.(Resets on worker restart)
DATA - Number of Data operations pending(Resets on worker restart)
META - Number of Meta operations pending(Resets on worker restart)
FAILURES - Number of Failures
CHECKPOINT TIME - Checkpoint set Time(Local Time in CLI output and UTC
in XML output)
CHECKPOINT COMPLETED - Yes/No or N/A
CHECKPOINT COMPLETION TIME - Checkpoint Completed Time(Local Time in CLI
output and UTC in XML output)
XML output:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<geoRep>
<volume>
<name>
<sessions>
<session>
<session_slave>
<pair>
<master_node>
<master_brick>
<slave_user>
<slave/>
<slave_node>
<status>
<crawl_status>
<entry>
<data>
<meta>
<failures>
<checkpoint_completed>
<master_node_uuid>
<last_synced>
<checkpoint_time>
<checkpoint_completion_time>
BUG: 1212410
Change-Id: I944a6c3c67f1e6d6baf9670b474233bec8f61ea3
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: http://review.gluster.org/10121
Tested-by: NetBSD Build System
Reviewed-by: Kotresh HR <khiremat@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Currently, when a quota limit is set, the corresponding gfid
is recorded in quota.conf. This patch supports storing
inode-quota limits in quota.conf and also stores an
additional byte for each gfid to differentiate
between a usage quota limit and an inode quota limit.
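A compact sketch of the per-gfid record described above (the struct name and type values are illustrative assumptions; the authoritative layout lives in glusterd's quota.conf handling):

#include <stdint.h>
#include <stdio.h>

/* 16-byte gfid followed by one type byte */
struct quota_conf_entry_sketch {
        uint8_t gfid[16];
        uint8_t type;   /* e.g. 1 = usage limit, 2 = object (inode) limit */
};

int
main (void)
{
        printf ("entry size: %zu bytes\n",
                sizeof (struct quota_conf_entry_sketch));
        return 0;
}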
Change-Id: I444d7399407594edd280e640681679a784d4c46a
BUG: 1202244
Signed-off-by: vmallika <vmallika@redhat.com>
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Reviewed-on: http://review.gluster.org/10069
Tested-by: NetBSD Build System
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
The command gluster volume status <VOLNAME> should show the status of the
bitrot and scrubber daemons and their pid information.
Along with displaying bitrot and scrubber daemon information in the gluster
volume status command, there should be commands to show their individual
status separately.
The commands to show the individual status of the bitrot and scrubber
daemons are the following.
Command to show only bitd daemon information:
gluster volume status <VOLNAME> bitd
Command to show only scrubber daemon information:
gluster volume status <VOLNAME> scrub
Change-Id: Id86aae1156c8c599347c98e2a538f294d37376e4
BUG: 1209752
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/10175
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Kaushal M <kaushal@redhat.com>
Change-Id: Iabe99c06166578fc90121e7cfdca4a6a3f5328ae
BUG: 1211132
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/10229
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
XML parsing of voltype should be in line with the CLI.
Change-Id: I41ddddac00d07f03b56a041e1c3f5a132fbd7393
BUG: 1212398
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: http://review.gluster.org/10271
Tested-by: NetBSD Build System
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Change-Id: Ia8d061de5be1343cc10a945f6cf011686a770d33
BUG: 1211594
Signed-off-by: Humble Devassy Chirammal <hchiramm@redhat.com>
Reviewed-on: http://review.gluster.org/10144
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Tested-by: NetBSD Build System
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
These commands work in a manner analogous to rebalancing when removing a
brick. The existing migration daemon detects "detach start" and switches
to moving data off the hot tier. While in this state all lookups are
directed to the cold tier.
gluster v detach-tier <vol> start
gluster v detach-tier <vol> commit
The status and stop cli commands shall be submitted separately.
Change-Id: I24fda5cc3ba74f5fb8aa9a3234ad51f18b80a8a0
BUG: 1205540
Signed-off-by: Dan Lambright <dlambrig@redhat.com>
Signed-off-by: root <root@localhost.localdomain>
Signed-off-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-on: http://review.gluster.org/10108
Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Tested-by: NetBSD Build System
With the inode quota feature, the quota size is now
increased from 64 bits to 192 bits, which contains
the values of 'file size', 'file count' and 'dir count'.
This change in the quota size xattr needs to be handled
in the disperse xattr aggregation.
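A small sketch of the widened value (the three counters are exactly those named above; the struct name is an assumption, and byte-order handling is omitted):

#include <stdint.h>
#include <stdio.h>

/* 192-bit quota size: three 64-bit counters instead of one */
struct quota_size_sketch {
        int64_t size;        /* bytes used under the directory */
        int64_t file_count;  /* number of files                */
        int64_t dir_count;   /* number of directories          */
};

int
main (void)
{
        printf ("%zu bits\n", 8 * sizeof (struct quota_size_sketch));
        return 0;
}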
Signed-off-by: vmallika <vmallika@redhat.com>
Change-Id: I5fd28aa9f5b8b6cba83a98360236417a97ac16ee
BUG: 1207967
Reviewed-on: http://review.gluster.org/10112
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Sachin Pandit <spandit@redhat.com>
Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
Tested-by: Raghavendra G <rgowdapp@redhat.com>
Coverity CID: 1124451
Change-Id: I7b2901fdd2ace4666f9e2c6deaf3838322a1c6ff
BUG: 789278
Signed-off-by: Nandaja Varma <nvarma@redhat.com>
Reviewed-on: http://review.gluster.org/9579
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Previously, when a user started a remove-brick operation on a volume and
then gave a non-existent brick to the remove-brick status/stop command, it
would still show the remove-brick status/stop the remove-brick operation on
the volume.
With this fix the bricks that the user gives to the remove-brick status/stop
command are validated; if the bricks are part of the volume it shows the
statistics of the remove-brick operation, otherwise it shows the
error "Incorrect brick <brick_name> for <volume_name>".
Change-Id: I151284ef78c25f52d1b39cdbd71ebfb9eb4b8471
BUG: 1121584
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/9681
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Tested-by: Krishnan Parthasarathi <kparthas@redhat.com>
and dead code removal
CID: 1124609
CID: 1124596
CID: 1124471
CID: 1124475
CID: 1124476
The pointer variables are checked before
dereferencing and the dead code is removed
BUG: 789278
Change-Id: Ia532733a64401d71ccf1f2b6e434d7bc910e0ed1
Signed-off-by: arao <arao@redhat.com>
Reviewed-on: http://review.gluster.org/10083
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
| |
Here fd can never hold a negative value if it is to be
closed, so the check is changed from if (fd) to if (fd >= 0).
Coverity CIDs:
1124652
1124653
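An illustrative example (not taken from the patch) of why the widened check matters: descriptor 0 is a valid file descriptor, while a failed open() returns -1.

#include <fcntl.h>
#include <unistd.h>

int
main (void)
{
        int fd = open ("/etc/hosts", O_RDONLY);

        /* wrong: skips the close() if fd happens to be 0,
         * and would "close" fd == -1 on failure:
         *     if (fd)
         *             close (fd);
         */

        /* right: any non-negative value is a real descriptor */
        if (fd >= 0)
                close (fd);

        return 0;
}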
Change-Id: I8491afa93bab10acd2c2e01993a7f7468ca9ff87
BUG: 789278
Signed-off-by: Nandaja Varma <nvarma@redhat.com>
Reviewed-on: http://review.gluster.org/9577
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Change-Id: I5976777adf770d42aa33ebbe3833fb14c1ff658e
BUG: 1206535
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/10026
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Sachin Pandit <spandit@redhat.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
A tiered volume is a normal volume with some number of new bricks
representing "hot" storage. The "hot" bricks can be attached or
detached dynamically to a normal volume. When this happens, a new graph
is constructed. The root of the new graph is an instance of the tier
translator. One subvolume of the tier translator leads to the old volume,
and another leads to the new hot bricks.
attach-tier <VOLNAME> [<replica> <COUNT>] <NEW-BRICK> ... [force]
volume detach-tier <VOLNAME> [replica <COUNT>] <BRICK>
... <start|stop|status|commit|force>
gluster volume rebalance <volume> tier start
gluster volume rebalance <volume> tier stop
gluster volume rebalance <volume> tier status
The "tier start" CLI command starts a server side daemon. The daemon
initiates file level migration based on caching policies. The daemon's
status can be monitored and stopped.
Note development on the "tier status" command is incomplete. It will be
added in a subsequent patch.
When the "hot" storage is detached, the tier translator is removed
from the graph and the tiered volume reverts to its original state as
described in the volume's info file.
For more background and design see the feature page [1].
[1]
http://www.gluster.org/community/documentation/index.php/Features/data-classification
Change-Id: Ic8042ce37327b850b9e199236e5be3dae95d2472
BUG: 1194753
Signed-off-by: Dan Lambright <dlambrig@redhat.com>
Reviewed-on: http://review.gluster.org/9753
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Tested-by: Vijay Bellur <vbellur@redhat.com>
CLI command for bitrot features:
volume bitrot <volname> enable|disable
The above command will enable/disable the bitrot feature for a particular volume.
BUG: 1170075
Change-Id: Ie84002ef7f479a285688fdae99c7afa3e91b8b99
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Signed-off-by: Anand nekkunti <anekkunt@redhat.com>
Signed-off-by: Dominic P Geevarghese <dgeevarg@redhat.com>
Reviewed-on: http://review.gluster.org/9866
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>
==========================================================================
Inode quota
==========================================================================
= Currently, the only way to retrieve the number of files/objects in a =
= directory or volume is to do a crawl of the entire directory/volume. =
= This is expensive and is not scalable. =
= =
= The proposed mechanism will provide an easier alternative to determine =
= the count of files/objects in a directory or volume. =
= =
= The new mechanism proposes to store count of objects/files as part of =
= an extended attribute of a directory. Each directory's extended =
= attribute value will indicate the number of files/objects present =
= in a tree with the directory being considered as the root of the tree. =
= =
= The count value can be accessed by performing a getxattr(). =
= Cluster translators like afr, dht and stripe will perform aggregation =
= of count values from various bricks when getxattr() happens on the key =
= associated with file/object count. =
A new interface is introduced:
------------------------------
limit-objects : limit the number of inodes at directory level
list-objects : list the directories where the limit is set
remove-objects : remove the limit from the directory
==========================================================================
CLI COMMAND:
gluster volume quota <volname> limit-objects <path> <number> [<percent>]
* <number> is the hard-limit on the number of objects for path "<path>".
If the hard-limit is exceeded, creation of files/directories is no longer
permitted.
* <percent> is the soft-limit on the number of objects for path "<path>".
If the soft-limit is exceeded, a warning is issued for each creation.
CLI COMMAND:
gluster volume quota <volname> remove-objects [path]
==========================================================================
CLI COMMAND:
gluster volume quota <volname> list-objects [path] ...
Sample output:
------------------
Path   Hard-limit  Soft-limit  Used  Available  Soft-limit exceeded?  Hard-limit exceeded?
-------------------------------------------------------------------------------------------
/dir   10          80%         10    0          Yes                   Yes
==========================================================================
[root@snapshot-28 dir]# ls
a b file11 file12 file13 file14 file15 file16 file17
[root@snapshot-28 dir]# touch a1
touch: cannot touch `a1': Disk quota exceeded
* Nine files are created in directory "dir", and the directory is included
in the count too. Hence the limit "10" is reached and further file creation
fails.
==========================================================================
Note: We have also done some refactoring in the cli for volume name
validation. A new function, cli_validate_volname, is created.
==========================================================================
Change-Id: I1823497de4f790a2a20ebb1770293472ea33ee2b
BUG: 1190108
Signed-off-by: Sachin Pandit <spandit@redhat.com>
Signed-off-by: vmallika <vmallika@redhat.com>
Reviewed-on: http://review.gluster.org/9769
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Vijay Bellur <vbellur@redhat.com>