Diffstat (limited to '__root__/doc/rgmanager-pacemaker')
-rw-r--r--  __root__/doc/rgmanager-pacemaker/00.intro.txt      80
-rw-r--r--  __root__/doc/rgmanager-pacemaker/01.cluster.txt    77
-rw-r--r--  __root__/doc/rgmanager-pacemaker/02.resources.txt  530
-rw-r--r--  __root__/doc/rgmanager-pacemaker/03.groups.txt     272
4 files changed, 959 insertions, 0 deletions
diff --git a/__root__/doc/rgmanager-pacemaker/00.intro.txt b/__root__/doc/rgmanager-pacemaker/00.intro.txt
new file mode 100644
index 0000000..f42f19e
--- /dev/null
+++ b/__root__/doc/rgmanager-pacemaker/00.intro.txt
@@ -0,0 +1,80 @@
+IN THE LIGHT OF RGMANAGER-PACEMAKER CONVERSION: 00/INTRO
+
+Copyright 2016 Red Hat, Inc., Jan Pokorný <jpokorny @at@ Red Hat .dot. com>
+Permission is granted to copy, distribute and/or modify this document
+under the terms of the GNU Free Documentation License, Version 1.3
+or any later version published by the Free Software Foundation;
+with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
+A copy of the license is included in the section entitled "GNU
+Free Documentation License".
+
+
+Prerequisites and conventions
+==============================
+
+Optionally, basic knowledge of LTL logic [1] helps.
+The meaning of the symbols used (note the ASCII-only repertoire) is,
+ordered by descending precedence, as follows:
+
+. a-z ... booleans representing satisfaction of the connected claim
+. () ... parentheses (changing evaluation order of enclosed expression)
+. union ... set union, written as a function for 3+ sets
+. intersection ... set intersection, written as a function for 3+ sets
+. \ ... set difference
+. in ... set membership (element of)
+. ~ ... negation
+. X,G,F,U,R ... temporal operators (LTL)
+. AND ... conjunction
+. OR ... disjunction
+. exists ... existential quantifier (predicate logic)
+. for all ... universal quantifier (predicate logic)
+. -> ... implication (--> as "maps to" in function signature context)
+
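For intuition, the temporal operators above can be given a toy semantics over
finite traces (a simplification, sketched in Python below; proper LTL is
defined over infinite traces, and the tuple encoding of formulas is purely
illustrative):

```python
# Minimal finite-trace evaluator for the LTL operators used in this document.
# A trace is a list of sets of atoms true at each step; a formula is a nested
# tuple such as ("U", ("atom", "p"), ("atom", "q")).

def holds(formula, trace, i=0):
    """Evaluate formula at position i of a finite trace."""
    op = formula[0]
    if op == "atom":
        return formula[1] in trace[i]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "X":   # next: holds at the following step (false past the end)
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == "G":   # globally: holds at every remaining step
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "F":   # finally: holds at some remaining step
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "U":   # until: second operand eventually holds, first until then
        return any(holds(formula[2], trace, j)
                   and all(holds(formula[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError("unknown operator: %s" % op)

# e.g. the weak-ordering start phase "X a AND X X b" over a trace where
# a starts at step 1 and b at step 2:
trace = [set(), {"a"}, {"a", "b"}]
assert holds(("X", ("atom", "a")), trace)
assert holds(("X", ("X", ("atom", "b"))), trace)
```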
+The following sets are also assumed:
+
+{} ... empty set
+2^Z ... power set of a set denoted with Z
+NODES ... set of all nodes
+RESOURCES ... set of all resources/services
+
+and these functions:
+
+RUNNABLE: NODES --> 2^RESOURCES
+... all resources that can run on a given node
+SCORE: RESOURCES x NODES --> {0, 1, ...}
+... order of preference for a given resource to run on a given node (without
+    the contribution of preference implied by the examined property)
+ALTER(ARGS)
+... alteration of the cluster behavior wrt. arguments
+intersection, union
+... see above
+max
+... given set of values, return maximum
+
+and these predicates:
+
+ACTIVE(A) ... node A is an active cluster member
+RUNNING(A, B) ... node A runs resource B (assumes B in RUNNABLE(A))
+
+and this contradiction:
+
+exists A1, A2 in NODES: A1 != A2, B in RESOURCES:
+ RUNNING(A1, B) AND RUNNING(A2, B)
+[given a unique resource is expected to run on at most a single node,
+ we don't consider Pacemaker's clones here at all]
+
+
+Notes
+-----
+
+- discreteness of the events in the LTL models is chosen quite deliberately,
+ per common sense and "best fit", for the sake of simplicity
+  (the author is by no means an expert in this field)
+
+
+
+References
+==========
+
+[1] http://en.wikipedia.org/wiki/Linear_temporal_logic
+: vim: set ft=rst: <-- not exactly, but better than nothing
diff --git a/__root__/doc/rgmanager-pacemaker/01.cluster.txt b/__root__/doc/rgmanager-pacemaker/01.cluster.txt
new file mode 100644
index 0000000..b4cbceb
--- /dev/null
+++ b/__root__/doc/rgmanager-pacemaker/01.cluster.txt
@@ -0,0 +1,77 @@
+IN THE LIGHT OF RGMANAGER-PACEMAKER CONVERSION: 01/CLUSTER PROPERTIES
+
+Copyright 2014 Red Hat, Inc., Jan Pokorný <jpokorny @at@ Red Hat .dot. com>
+Permission is granted to copy, distribute and/or modify this document
+under the terms of the GNU Free Documentation License, Version 1.3
+or any later version published by the Free Software Foundation;
+with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
+A copy of the license is included in the section entitled "GNU
+Free Documentation License".
+
+
+Preface
+=======
+
+This document elaborates on how selected cluster properties, formalized
+by means of LTL logic, map to particular RGManager (R) and
+Pacemaker (P) configuration arrangements. Due to the purpose of this
+document, "selected" here means the set of properties one commonly
+uses with the former cluster resource manager (R).
+
+Properties are categorised, each is further dissected based on
+the property variants (basically holds or doesn't, but can be more
+convoluted), and for each variant, the LTL model and R+P specifics
+are provided.
+
+
+Outline
+-------
+
+Other cluster properties, PROPERTY(CLUSTER)
+. FUNCTION
+
+
+
+Other cluster properties
+========================
+
+Is-functioning cluster property
+-------------------------------
+
+FUNCTION(CLUSTER) ::= FUNCTION(CLUSTER, TRUE)
+ | FUNCTION(CLUSTER, FALSE)
+. FUNCTION(CLUSTER, TRUE) ... is functioning
+. FUNCTION(CLUSTER, FALSE) ... is not
+
+notes
+. it is assumed the cluster stack keeps running in both cases(!)
+. see also 02/resource: MANAGED
+
+R: driven by RGManager allowance/disallowance in cluster.conf
+ - `/cluster/rm/@disabled`
+
+P: driven by `stop-all-resources` (?)
+
+FUNCTION(CLUSTER, TRUE) [1. is functioning]
+~~~~~~~~~~~~~~~~~~~~~~~
+
+R: `@disabled` either not specified or `0`
+
+P: default, no need for that, otherwise specifying `stop-all-resources`
+ as `false`
+ # pcs property set stop-all-resources=
+ # pcs property set stop-all-resources=false
+
+FUNCTION(CLUSTER, FALSE) [2. is not functioning]
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+R: `@disabled` specified as `1` (nonzero?)
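
For illustration, a minimal cluster.conf fragment (the `name` and
`config_version` values are placeholders):

```xml
<cluster name="example" config_version="1">
  <!-- the cluster stack keeps running, but RGManager's resource
       management is disabled -->
  <rm disabled="1"/>
</cluster>
```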
+
+P: driven by specifying `stop-all-resources` as `true`
+ # pcs property set stop-all-resources=true
+
+
+References
+==========
+
+: vim: set ft=rst: <-- not exactly, but better than nothing
diff --git a/__root__/doc/rgmanager-pacemaker/02.resources.txt b/__root__/doc/rgmanager-pacemaker/02.resources.txt
new file mode 100644
index 0000000..1a224fe
--- /dev/null
+++ b/__root__/doc/rgmanager-pacemaker/02.resources.txt
@@ -0,0 +1,530 @@
+IN THE LIGHT OF RGMANAGER-PACEMAKER CONVERSION: 02/CLUSTERED RESOURCE PROPERTIES
+
+Copyright 2016 Red Hat, Inc., Jan Pokorný <jpokorny @at@ Red Hat .dot. com>
+Permission is granted to copy, distribute and/or modify this document
+under the terms of the GNU Free Documentation License, Version 1.3
+or any later version published by the Free Software Foundation;
+with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
+A copy of the license is included in the section entitled "GNU
+Free Documentation License".
+
+
+Preface
+=======
+
+This document elaborates on how selected resource relationship properties
+(denoting the run-time behavior), formalized by means of LTL logic, map
+to particular RGManager (R) and Pacemaker (P) configuration arrangements.
+Due to the purpose of this document, "selected" here means the set of
+properties one commonly uses with the former cluster resource
+manager (R).
+
+Properties are categorised, each is further dissected based on
+the property variants (basically holds or doesn't, but can be more
+convoluted), and for each variant, the LTL model and R+P specifics
+are provided.
+
+
+Outline
+-------
+
+Resource-resource interaction properties, PROPERTY(RESOURCE1, RESOURCE2)
+. ORDERING
+. COOCCURRENCE
+Relative resource-node assignment properties, PROPERTY(RESOURCE)
+. STICKY
+. EXCLUSIVE
+Explicit resource-node assignment properties, PROPERTY(RESOURCE, NODE)
+. AFFINITY
+Other resource properties, PROPERTY(RESOURCE)
+. RECOVERY --> see RECOVERY(GROUP) and FAILURE-ISOLATION(GROUP, RESOURCE)
+. ENABLED --> Pacemaker only, same as ENABLED(GROUP)
+
+# XXX: service ref=... + single node failover domains vs. clone
+
+
+
+Resource-resource interaction properties
+========================================
+
+Generally a relation expressed by a predicate PROPERTY(RESOURCE1, RESOURCE2),
+implying modification of the behavior of the cluster wrt. the
+resource-resource pair:
+
+PROPERTY(RESOURCE1, RESOURCE2) -> ALTER((RESOURCE1, RESOURCE2))
+
+
+Ordering (time-based interaction dependence of the resources/their states)
+--------------------------------------------------------------------------
+
+ORDERING ::= ORDERING(RESOURCE1, RESOURCE2, NONE)
+ | ORDERING(RESOURCE1, RESOURCE2, WEAK)
+ | ORDERING(RESOURCE1, RESOURCE2, STRONG)
+ | ORDERING(RESOURCE1, RESOURCE2, ASYMMETRIC)
+. ORDERING(RESOURCE1, RESOURCE2, NONE) ... no time-based dependency
+ of the states
+. ORDERING(RESOURCE1, RESOURCE2, WEAK) ... start of 2nd preceded by start
+                                           of 1st (stop vice versa/LIFO),
+ but only when both events happen
+ at the same time
+. ORDERING(RESOURCE1, RESOURCE2, STRONG) ... runtime of 2nd won't exceed
+ runtime of 1st
+. ORDERING(RESOURCE1, RESOURCE2, ASYMMETRIC)
+ ... runtime of 2nd won't exceed
+ runtime of 1st (as STRONG) only
+ at the start phase (XXX then
+ the lifetimes are not
+ correlated???)
+
+assumed variables (+ constraints) to be combined with proper preconditions:
+. A1, A2 in NODES
+. B in RUNNABLE(A1), C in RUNNABLE(A2)
+
+propositional variables:
+. a ... RUNNING(A1, B)
+. b ... RUNNING(A2, C)
+. c ... intention RUNNING(A1, B)
+. d ... intention RUNNING(A2, C)
+
+R: driven by
+ XXX implicit and explicit ordering
+
+P: driven by `order` constraint (or `group` arrangement)
+
+ORDERING(B, C, WEAK) [2. weak ordering]
+~~~~~~~~~~~~~~~~~~~~
+~a AND ~b AND c AND d -> (X a OR TRUE) AND X X b [start: B, then C]
+a AND b AND ~c AND ~d -> X ~b AND X X ~a [stop: C, then B]
+
+R: TBD
+
+P: driven by specifying `kind` as `Optional` (or `score` as `0`)
+ # pcs constraint order B then C kind=Optional
+ or
+ # pcs constraint order B then C score=0
+
+ORDERING(B, C, STRONG) [3. strong ordering]
+~~~~~~~~~~~~~~~~~~~~~~
+~a AND ~b AND c AND d -> X a AND X X b [see weak ordering]
+~a AND ~b AND d -> ~a AND ~b AND c AND d [stronger, follows as per previous???]
+a AND b AND ~c AND ~d -> X ~b AND X X ~a [see weak ordering]
+a AND b AND ~c -> a AND b AND ~c AND ~d [stronger, follows as per previous]
+
+R: TBD
+
+P: driven by omitting both `kind` and `score` (or specifying `kind` as
+ `Mandatory` or `score` as non-zero)
+ or (b) using `group` (which implies also `colocation` constraint)
+ - (a)
+ # pcs constraint order B then C
+ # pcs constraint order B then C kind=Mandatory
+ or
+ # pcs constraint order B then C score=1
+ # XXX pcs constraint order set B C
+ - (b)
+ # XXX pcs resource group add SOMENAME C/RESOURCE2 B/RESOURCE1
+
+ORDERING(B, C, ASYMMETRIC) [4. asymmetric ordering]
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+~a AND ~b AND c AND d -> X a AND X X b [see weak ordering]
+~a AND ~b AND d -> ~a AND ~b AND c AND d [stronger, follows as per previous???]
+
+R: TBD
+
+P: driven by specifying `symmetrical` as `false` (default is `true`)
+ # pcs constraint order B then C symmetrical=false
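
For reference, the `order` constraints produced by the pcs commands above
land in the CIB roughly as follows (the ids are illustrative):

```xml
<constraints>
  <!-- strong ordering: B before C on start, reversed on stop -->
  <rsc_order id="order-B-C" first="B" then="C" kind="Mandatory"/>
  <!-- weak ordering: only applied when both actions coincide -->
  <rsc_order id="order-B-C-opt" first="B" then="C" kind="Optional"/>
  <!-- asymmetric ordering: no reverse constraint on stop -->
  <rsc_order id="order-B-C-asym" first="B" then="C" symmetrical="false"/>
</constraints>
```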
+
+
+Cooccurrence (location-based interaction dependence of the resources)
+---------------------------------------------------------------------
+
+COOCCURRENCE ::= COOCCURRENCE(RESOURCE1, RESOURCE2, NONE)
+               | COOCCURRENCE(RESOURCE1, RESOURCE2, POSITIVE)
+               | COOCCURRENCE(RESOURCE1, RESOURCE2, NEGATIVE)
+               | COOCCURRENCE(RESOURCE1, RESOURCE2, SCORE),
+ SCORE in {..., -1, 0, 1, ...}, 0~NONE, -INF~NEGATIVE,
+ +INF~POSITIVE
+. COOCCURRENCE(RESOURCE1, RESOURCE2, NONE) ... not any occurrence
+ relationship (default)
+. COOCCURRENCE(RESOURCE1, RESOURCE2, POSITIVE) ... positive occurrence
+ relationship (flat model)
+. COOCCURRENCE(RESOURCE1, RESOURCE2, NEGATIVE) ... negative occurrence
+ relationship (flat model)
+. COOCCURRENCE(RESOURCE1, RESOURCE2, SCORE) ... score-based/advisory
+ occurrence relationship
+
+note: COOCCURRENCE relation between RESOURCE1 and RESOURCE2 is not symmetric,
+ RESOURCE1 is "dependent", RESOURCE2 is "leader"
+
+assumed variables (+ constraints) to be combined with proper preconditions:
+. A1, A2 in NODES
+. B, C in RUNNABLE(A1) intersection RUNNABLE(A2)
+
+propositional variables:
+. a ... RUNNING(A1, B)
+. b ... RUNNING(A2, C)
+. c ... A1 == A2
+. d ... RUNNING(A2, B)
+
+R: driven by various arrangements
+
+P: driven by `colocation` constraint (or `group` arrangement)
+
+COOCCURRENCE(B/RESOURCE1, C/RESOURCE2, NONE) [1. no cooccurrence]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+R: default
+
+P: default, no need for that, otherwise specifying `score` as `0`
+ # pcs constraint colocation add B/RESOURCE1 with C/RESOURCE2 0
+
+COOCCURRENCE(B/RESOURCE1, C/RESOURCE2, POSITIVE) [2. positive cooccurrence]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+a AND b -> c [basic positive cooccurrence condition]
+X b -> X d ["dependent" follows its "leader"]
+
+R: driven by grouping a set of sequentially dependent resources
+   hierarchically (in general; using just the latter provision below,
+   it can be plain flat) into a service/vm stanza, either
+   using subsequent nesting so as to preserve the predestined
+   order unconditionally (referred to as **explicit ordering**),
+   or just enumerating them on the same level within the hierarchy
+   to allow for (reasonably preselected) reordering as designated
+   by the rules in the service.sh metadata (referred to as
+   **implicit ordering**)
+   - `__independent_subtree` must not be used
+
+P: driven by (a) `colocation` constraint, specifying `INFINITY`,
+ or (b) using `group` (which implies also `order` constraint)
+ - (a)
+ # pcs constraint colocation add B/RESOURCE1 with C/RESOURCE2
+ # pcs constraint colocation add B/RESOURCE1 with C/RESOURCE2 INFINITY
+ # pcs constraint colocation set C/RESOURCE2 B/RESOURCE1
+ - (b)
+ # pcs resource group add SOMENAME C/RESOURCE2 B/RESOURCE1
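
In the CIB, variant (a) is expressed as a `rsc_colocation` constraint
roughly like this (the id is illustrative):

```xml
<constraints>
  <!-- "dependent" B is placed on the node where "leader" C runs -->
  <rsc_colocation id="colocation-B-C" rsc="B" with-rsc="C" score="INFINITY"/>
</constraints>
```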
+
+COOCCURRENCE(B/RESOURCE1, C/RESOURCE2, NEGATIVE) [3. negative cooccurrence]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+a AND b -> ~c [basic negative cooccurrence condition]
+X b -> X ~d ["dependent" escapes its "leader"]
+
+R: driven/emulated solely by disjoint failover domains
+ - XXX and possibly with `follow_service.sl` in `central_processing` mode
+
+P: driven by `colocation` constraint, specifying `-INFINITY`
+ - using -INFINITY as value
+ # pcs constraint colocation add B/RESOURCE1 with C/RESOURCE2 -INFINITY
+ # pcs constraint colocation set C/RESOURCE2 B/RESOURCE1 setoptions score=-INFINITY
+
+COOCCURRENCE(B/RESOURCE1, C/RESOURCE2, SCORE) [4. score-based/advisory cooccurrence]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+TBD
+
+R: XXX not supported unless a way to emulate this using `central_processing`
+ (?) or sharing the same `ordered` failover domain (?)
+
+P: driven by `colocation` constraint, specifying `SCORE` (value)
+   # pcs constraint colocation add B/RESOURCE1 with C/RESOURCE2 SCORE
+ # pcs constraint colocation set C/RESOURCE2 B/RESOURCE1 setoptions score=SCORE
+
+
+
+Relative resource-node assignment properties
+============================================
+
+Generally a relation expressed by a predicate PROPERTY(RESOURCE),
+implying modification of the behavior of the cluster wrt. the node
+running RESOURCE:
+
+PROPERTY(RESOURCE) AND RUNNING(NODE, RESOURCE) -> ALTER(NODE)
+
+
+Resource stickiness property (not moving back to preferred location)
+--------------------------------------------------------------------
+
+STICKY ::= STICKY(RESOURCE, FALSE)
+ | STICKY(RESOURCE, TRUE)
+ | STICKY(RESOURCE, STICKINESS), STICKINESS in {0, 1, 2, ...},
+ 0~FALSE
+. STICKY(RESOURCE, FALSE) ... unsticky/cruising resource (default)
+. STICKY(RESOURCE, TRUE) ... sticky resource (flat model)
+. STICKY(RESOURCE, STICKINESS) ... sticky resource (prioritized model)
+
+assumed variables (+ constraints) to be combined with proper preconditions:
+. A1, A2 in NODES
+. B in RUNNABLE(A1) AND B in RUNNABLE(A2)
+. SCORE(B, A1) < SCORE(B, A2)
+
+propositional variables:
+. a ... RUNNING(A1, B)
+. b ... ACTIVE(A2)
+. c ... RUNNING(A2, B) resource B running (relocation if A1 != A2)
+
+R: driven by `/cluster/rm/failoverdomains/failoverdomain/@nofailback`
+ - note: only applies to service/vm (not primitive resources)
+P: driven by `stickiness` parameter
+   - a group's stickiness is the sum of the stickiness values of its
+     underlying resources
+
+
+STICKY(B/RESOURCE, FALSE) [1. model of unsticky/cruising resource]
+~~~~~~~~~~~~~~~~~~~~~~~~~
+a AND b -> X c
+
+R: default, no need for that, otherwise specifying `@nofailback` as `0`
+
+P: default, no need for that, otherwise specifying `stickiness` as `0`
+ # pcs resource meta B/RESOURCE stickiness=
+ # pcs resource meta B/RESOURCE stickiness=0
+
+STICKY(B/RESOURCE, TRUE) [2. model of sticky resource]
+~~~~~~~~~~~~~~~~~~~~~~~~
+a AND b -> X a
+
+R: driven by specifying `@nofailback` as a positive number
+
+P: driven by specifying `stickiness` as `INFINITY`
+ # pcs resource meta B/RESOURCE stickiness=INFINITY
+
+STICKY(B/RESOURCE, STICKINESS) [3. model of sticky resource with priorities]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+TBD
+
+R: XXX not supported unless a way to emulate this using ordered failover
+ domains(?)
+
+P: driven by specifying `stickiness` as `STICKINESS` (value)
+ # pcs resource meta B/RESOURCE stickiness=STICKINESS
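
In the CIB, the stickiness ends up as the `resource-stickiness` meta
attribute of the resource, roughly as below (the ids, the value, and the
Dummy agent are illustrative):

```xml
<primitive id="B" class="ocf" provider="heartbeat" type="Dummy">
  <meta_attributes id="B-meta_attributes">
    <nvpair id="B-meta_attributes-resource-stickiness"
            name="resource-stickiness" value="100"/>
  </meta_attributes>
</primitive>
```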
+
+
+Node-exclusiveness resource property (optionally with priority-based preemption)
+--------------------------------------------------------------------------------
+
+EXCLUSIVE ::= EXCLUSIVE(RESOURCE, FALSE)
+ | EXCLUSIVE(RESOURCE, TRUE)
+ | EXCLUSIVE(RESOURCE, PRIORITY), PRIORITY in {0, 1, ...}, 0~FALSE
+. EXCLUSIVE(RESOURCE, FALSE) ... non-exclusive resource (default)
+. EXCLUSIVE(RESOURCE, TRUE) ... exclusive (flat model)
+. EXCLUSIVE(RESOURCE, PRIORITY) ... exclusive (prioritized pre-emptive model)
+
+assumed variables (+ constraints) to be combined with proper preconditions:
+. A in NODES
+. B, C in RUNNABLE(A)
+. I, J in {0, 1, ...}: I > J
+
+R: driven by `/cluster/rm/<service>/@exclusive`
+ - note: only applies to service/vm (not primitive resources)
+P: driven by `utilization` constraint
+ (or possibly by a set of `colocation` constraints:
+ for all r in RESOURCES\RESOURCE: COOCCURRENCE(RESOURCE, r, -INF) [1]
+ (see [3. negative cooccurrence])
+
+[1. model of non-node-exclusive/co-occurrence-positive resource, based on 2.]
+for all B' in RUNNABLE(A): EXCLUSIVE(B', FALSE) [implies EXCLUSIVE(B, FALSE)]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+~(a OR b) OR ~(~a -> ~b R ~a AND ~b -> ~a R ~b)
+= ~(a OR b) OR ~(~a -> ~b R ~a) OR ~(~b -> ~a R ~b)
+= ~(a OR b) OR ~(a OR ~b R ~a) OR ~(b OR ~a R ~b)
+= ~(a OR b) OR ~a AND ~(~b R ~a) OR ~b AND ~(~a R ~b)
+= ~(a OR b) OR ~a AND ~(~(a U b)) OR ~b AND ~(~(b U a))
+= ~(a OR b) OR ~a AND a U b OR ~b AND b U a
+= ~(a OR b) OR b OR a
+= ~a AND ~b OR b OR a [= true, i.e. no restriction wrt. modelled property, QED]
+. a ... RUNNING(A, B)
+. b ... exists B' in 2^RUNNABLE(A)\{}: RUNNING(A, B')
+
+R: default (`@exclusive` = 0)
+
+P: default (no such purposefully full utilization of node resources
+ specified)
+ - pcs: support is arriving
+ (https://bugzilla.redhat.com/show_bug.cgi?id=1158500)
+
+[2. flat model of node-exclusive/co-occurrence-less resource, no preemption]
+EXCLUSIVE(B, TRUE)
+~~~~~~~~~~~~~~~~~~
+~(a OR b) OR (~a -> ~b R ~a) AND (~b -> ~a R ~b) [mutual exclusion]
+. a ... RUNNING(A, B)
+. b ... exists YS in 2^(RUNNABLE(A)\{B})\{}: for all Y in YS: RUNNING(A, Y)
+ (assuming valid state, i.e., exclusiveness property
+ would be recursively satisfied also within YS)
+
+R: driven by the arrangement:
+   - not `central_processing` mode and `@exclusive` != 0
+
+P: driven by the arrangement:
+   - resources (presumably uniformly, but broken down to utilization
+     of the comprising primitives) require the full utilization of what
+     (presumably uniformly) each node provides
+
+[3. model of prioritized pre-emptive node-exclusive resource]
+EXCLUSIVE(B, I)
+EXCLUSIVE(C, J)
+~~~~~~~~~~~~~~~
+~(a OR c) OR (~a -> ~c R ~a) AND (~c -> ~a R ~c), [mutual exclusion]
+(b AND X a) -> (X ~b) [exclusiveness priority wins]
+. a ... RUNNING(A, B)
+. b ... RUNNING(A, C)
+. c ... exists YS in 2^({Y | Y in RUNNABLE(A)
+ AND (EXCLUSIVE(Y, FALSE)
+ OR exists K > I: EXCLUSIVE(Y, K))}\{B}
+ )\{}: for all Y' in YS: RUNNING(A, Y')
+ (assuming valid state, i.e., exclusiveness property
+ would be recursively satisfied also within YS)
+
+R: driven by the arrangement:
+   - only in `central_processing` mode
+ - https://access.redhat.com/site/node/47037
+ - also see https://bugzilla.redhat.com/show_bug.cgi?id=744052#c4
+ (also relevant to Pacemaker)
+ - exclusive resource with value of the respective parameter specifying
+ priority in the inverse sense (1 is highest, XXX or 0?)
+
+P: driven by the arrangement:
+ - see 1. + prioritization defined by the means of priority per primitive,
+ - https://access.redhat.com/site/solutions/65542
+ - for 2., 3.:
+ - however modelling "exclusive resource cannot be started on node,
+ with non-exclusive resources already running" seems to be close
+     to impossible (XXX or in an intrusive way, like setting default
+ utilization + priority for those resources not overriding
+ these defaults)?
+
+
+
+Explicit resource-node(s) assignment property
+=============================================
+
+Generally a relation expressed by a predicate PROPERTY(RESOURCE, NODE),
+implying modification of the behavior of cluster wrt. resource-node
+pair:
+
+PROPERTY(RESOURCE, NODE) -> ALTER((RESOURCE, NODE))
+
+
+Resource-node affinity
+----------------------
+
+AFFINITY ::= AFFINITY(RESOURCE, NODE, NONE)
+ | AFFINITY(RESOURCE, NODE, FALSE)
+ | AFFINITY(RESOURCE, NODE, TRUE)
+ | AFFINITY(RESOURCE, NODE, WEIGHT), WEIGHT in {..., -1, 0, 1, ...},
+ 0~NONE, -INF~FALSE, +INF~TRUE
+. AFFINITY(RESOURCE, NODE, NONE) ... no special affinity (default)
+. AFFINITY(RESOURCE, NODE, FALSE) ... anti-affinity (the node cannot run)
+. AFFINITY(RESOURCE, NODE, TRUE) ... node forms a set of executive nodes
+. AFFINITY(RESOURCE, NODE, WEIGHT) ... prioritized model
+
+assumed variables (+ constraints) to be combined with proper preconditions:
+. AN in 2^NODES\{}
+. B in intersection( { RUNNABLE(A) | A in AN } )
+
+propositional variables:
+. a ... RUNNING(A', B)
+. b ... A' in AN
+. c ... STICKY(B, FALSE) <-- see above
+. d ... A' in {A'' | A'' in AN: WEIGHT(B, A'') == max({WEIGHT(B, A''')
+ | A''' in AN})}
+
+R: driven by the failover domains arrangement for a given service/vm
+ - XXX check that RGManager indeed behaves like Pacemaker's
+ `symmetric-cluster`
+P: driven by `location` constraint
+ - only default `symmetric-cluster` is considered
+
+exists A in AN: AFFINITY(B, A, NONE) [1. no resource-node affinity]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+R: driven by general arrangement patterns (possibilities with "NONE"):
+ - if for all A in AN: AFFINITY(B, A, NONE):
+ - nodes AN form attached failover domain, and this is either not `ordered`
+ or with the same `priority` for ordering
+ - alternatively, service/vm B has no failoverdomain explicitly attached
+ - else if for all A in AN: AFFINITY(B, A, NONE) OR AFFINITY(B, A, TRUE):
+ - nodes AN for which A in AN: AFFINITY(B, A, TRUE) form attached failover
+ domain, and this is not `restricted` (to allow those "NONE" nodes to
+ be ever used)
+ - or nodes AN form attached failover domain, and it is `ordered` with
+ two levels of priorities (for TRUE higher, for NONE lower)
+ - else if for all A in AN: AFFINITY(B, A, NONE) OR AFFINITY(B, A, FALSE):
+ - nodes AN for which A in AN: AFFINITY(B, A, NONE) form attached failover
+ domain, and this is either not `ordered` or with the same priority for
+ ordering, and has to be `restricted` to avoid slipping nodes
+ A in AN: AFFINITY(B, A, FALSE)
+ - else if for all A in AN: AFFINITY(B, A, NONE) OR AFFINITY(B, A, FALSE)
+ OR AFFINITY(B, A, TRUE):
+ - intersection of the previous two cases
+ - else if for all A in AN: AFFINITY(B, A, NONE) OR AFFINITY(B, A, WEIGHT),
+ -INFINITY < WEIGHT < INFINITY:
+   - either any WEIGHT < 0 becomes truncated to 0 (hence dropping the losing
+     part of the configuration), or, if such a weight is present, the whole range
+ is rescaled to 0 < WEIGHT < INFINITY/numeric equivalent
+ and those member nodes having no affinity in this case have
+ lowest priority (hence highest `priority` value)
+ - either way, leads to AN forming attached failover domain, and it has
+ to be `ordered` with priorities as per previous point
+ - else?
+
+P: default, no need for that, otherwise specifying `score` as `0`
+ # pcs constraint location B prefers A= (??? to remove it as such)
+ # pcs constraint location B prefers A=0
+   # pcs constraint location add SOMEID B A 0
+
+exists A in AN: AFFINITY(B, A, FALSE) [2. antagonist resource-node affinity]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+X a -> X ~b
+
+R: driven by this arrangement:
+ - if for all A in AN: AFFINITY(B, A, FALSE):
+ - nodes NODES\AN form attached failover domain, which has to
+ be `restricted` to prevent slipping onto any node in AN
+ - else:
+ - combine using patterns as in 1.
+
+P: by specifying (directly or indirectly) `score` as `-INFINITY`
+ # pcs constraint location B avoids A
+ # pcs constraint location B avoids A=INFINITY
+ # pcs constraint location B prefers A=-INFINITY (???)
+   # pcs constraint location add SOMEID B A -INFINITY
+ # crm_resource --ban --resource B --host A ...
+
+exists A in AN: AFFINITY(B, A, TRUE) [3. (positive) resource-node affinity]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+X a -> X b
+
+R: driven by this arrangement:
+ - if for all A in AN: AFFINITY(B, A, TRUE):
+ - nodes AN form selected failover domain
+ - else:
+ - combine using patterns as in 1.
+
+P: by specifying (directly or indirectly) `score` as `INFINITY`
+ # pcs constraint location B prefers A
+ # pcs constraint location B prefers A=INFINITY
+ # pcs constraint location B avoids A=-INFINITY (???)
+   # pcs constraint location add SOMEID B A INFINITY
+
+[4. model of resource-node affinity with priorities]
+for all A in AN: AFFINITY(B, A, -INFINITY < WEIGHT(B,A) < INFINITY)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+X a AND c -> X d [when not sticky, resource will run on the node
+ this resource has greatest affinity to]
+
+R: driven by this arrangement:
+ - see the respective pattern at 1.
+
+P: by specifying `score` as WEIGHT
+ # pcs constraint location B {prefers|avoids} A=WEIGHT(B,A)
+ # pcs constraint location add SOMEID B A WEIGHT(B,A)
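
In the CIB, such weighted affinities are plain `rsc_location` constraints,
roughly as below (the id and score are illustrative):

```xml
<constraints>
  <!-- score-based affinity of resource B towards node A -->
  <rsc_location id="location-B-A" rsc="B" node="A" score="100"/>
</constraints>
```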
+
+
+Initial exclusive resource-node affinity
+----------------------------------------
+
+TBD:
+https://bugzilla.redhat.com/994215#RFE--Control-which-node-a-service-autostarts-on
+
+
+
+References
+==========
+
+[1] http://oss.clusterlabs.org/pipermail/users/2016-January/002197.html
+
+: vim: set ft=rst: <-- not exactly, but better than nothing
diff --git a/__root__/doc/rgmanager-pacemaker/03.groups.txt b/__root__/doc/rgmanager-pacemaker/03.groups.txt
new file mode 100644
index 0000000..d6e95be
--- /dev/null
+++ b/__root__/doc/rgmanager-pacemaker/03.groups.txt
@@ -0,0 +1,272 @@
+IN THE LIGHT OF RGMANAGER-PACEMAKER CONVERSION: 03/RESOURCE GROUP PROPERTIES
+
+Copyright 2016 Red Hat, Inc., Jan Pokorný <jpokorny @at@ Red Hat .dot. com>
+Permission is granted to copy, distribute and/or modify this document
+under the terms of the GNU Free Documentation License, Version 1.3
+or any later version published by the Free Software Foundation;
+with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
+A copy of the license is included in the section entitled "GNU
+Free Documentation License".
+
+
+Preface
+=======
+
+This document elaborates on how selected resource group internal
+relationship properties (denoting the run-time behavior), formalized
+by means of LTL logic, map to particular RGManager (R) and
+Pacemaker (P) configuration arrangements.
+Due to the purpose of this document, "selected" here means the set of
+properties one commonly uses with the former cluster resource
+manager (R).
+
+Properties are categorised, each is further dissected based on
+the property variants (basically holds or doesn't, but can be more
+convoluted), and for each variant, the LTL model and R+P specifics
+are provided (when possible or practical).
+
+
+Outline
+-------
+
+Group properties derived from resource properties
+Group member vs. rest of group properties, PROPERTY(GROUP, RESOURCE)
+. FAILURE-ISOLATION
+Other group properties, PROPERTY(GROUP)
+
+
+
+Group properties derived from resource properties
+=================================================
+
+A resource group (group) is an ordered set of resources:
+
+GROUP ::= { RESOURCE1, ..., RESOURCEn },
+            RESOURCE1 < RESOURCE2
+            ...
+            RESOURCEn-1 < RESOURCEn
+
+and is a product of two resource properties applied for each
+subsequent pair of resources in linear fashion:
+
+. ORDERING
+ ORDERING(RESOURCE1, RESOURCE2, STRONG)
+ ...
+ ORDERING(RESOURCEn-1, RESOURCEn, STRONG)
+
+. COOCCURRENCE
+ COOCCURRENCE(RESOURCE1, RESOURCE2, POSITIVE)
+ ...
+ COOCCURRENCE(RESOURCEn-1, RESOURCEn, POSITIVE)
+
+As the set is ordered, let's introduce two shortcut functions:
+
+. BEFORE(GROUP, RESOURCE) -> { R | R in GROUP, R < RESOURCE }
+. AFTER(GROUP, RESOURCE) -> { R | R in GROUP, R > RESOURCE }
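
As a sketch, with the group modelled as a plain ordered list (Python,
illustrative only):

```python
# BEFORE/AFTER shortcut functions over an ordered group, with the group
# modelled as a Python list whose order is the group's resource order.

def before(group, resource):
    """Members of GROUP ordered strictly before RESOURCE."""
    return group[:group.index(resource)]

def after(group, resource):
    """Members of GROUP ordered strictly after RESOURCE."""
    return group[group.index(resource) + 1:]

group = ["fs", "ip", "db"]
assert before(group, "ip") == ["fs"]
assert after(group, "ip") == ["db"]
```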
+
+
+
+Group member vs. rest of group properties
+=========================================
+
+Generally a relation expressed by a predicate PROPERTY(GROUP, RESOURCE),
+assuming RESOURCE in GROUP, implying modification of the behavior of
+the cluster wrt. the group-resource pair:
+
+PROPERTY(GROUP, RESOURCE) -> ALTER(BEFORE(GROUP, RESOURCE))
+
+
+Independence between failing resource and its group predecessors
+----------------------------------------------------------------
+
+FAILURE-ISOLATION ::= FAILURE-ISOLATION(GROUP, RESOURCE, NONE)
+ | FAILURE-ISOLATION(GROUP, RESOURCE, TRY-RESTART)
+ | FAILURE-ISOLATION(GROUP, RESOURCE, STOP)
+. FAILURE-ISOLATION(GROUP, RESOURCE, NONE) ... RESOURCE failure leads to
+ recovery of the whole group
+. FAILURE-ISOLATION(GROUP, RESOURCE, TRY-RESTART)
+ ... RESOURCE failure leads to
+ (bounded) local restarts
+ of RESOURCE and its successor
+ (AFTER(GROUP, RESOURCE)) first
+. FAILURE-ISOLATION(GROUP, RESOURCE, STOP) ... RESOURCE failure leads to
+ stopping and disabling
+ of RESOURCE and its successor
+ (AFTER(GROUP, RESOURCE))
+
+R: driven by `__independent_subtree` property of RESOURCE within GROUP
+
+P: in part, driven by `on-fail` property of `monitor` and `stop` operations
+ for RESOURCE
+
+FAILURE-ISOLATION(GROUP, RESOURCE, NONE) [1. recovery of the whole group]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+R: default, no need for that, otherwise specifying `@__independent_subtree`
+ as `0` for RESOURCE within GROUP
+
+P: specifying `migration-threshold` as `1` (+ default `on-fail` values)
+   for RESOURCE, but only if the original recovery policy was `relocate`,
+   so better not to do anything otherwise???
+
+
+FAILURE-ISOLATION(GROUP, RESOURCE, TRY-RESTART) [2. begin with local restarts]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+R: specifying `@__independent_subtree` as `1` or `yes`
+ + `@__max_restarts` and `__restart_expire_time`
+
+P: specifying `migration-threshold` as a value between 2 and INFINITY
+ (inclusive) (+default `on-fail` values) for RESOURCE, but only if
+ original recovery policy was `relocate`, so better not to do anything
+ otherwise???
+
+FAILURE-ISOLATION(GROUP, RESOURCE, STOP) [3. disable unconditionally]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+R: specifying `@__independent_subtree` as `2` or `non-critical`
+
+P: default `on-fail` values (modulo `ignore` for the `monitor` (or `status`)
+   operation and `stop` for the `stop` operation) for RESOURCE ???
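
One possible arrangement of the corresponding operation definitions in the
CIB is sketched below (an assumption, not necessarily the exact intent above;
the ids, interval, and the Dummy agent are illustrative, and `on-fail="stop"`
means "stop the resource and leave it stopped"):

```xml
<primitive id="RESOURCE" class="ocf" provider="heartbeat" type="Dummy">
  <operations>
    <!-- a failed monitor stops the resource without recovery elsewhere -->
    <op id="RESOURCE-monitor" name="monitor" interval="30s" on-fail="stop"/>
  </operations>
</primitive>
```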
+
+
+
+Other group properties
+======================
+
+Recovery policy group property
+------------------------------
+
+RECOVERY ::= RECOVERY(GROUP, RESTART-ONLY)
+ | RECOVERY(GROUP, RESTART-UNTIL1, MAX-RESTARTS)
+ | RECOVERY(GROUP, RESTART-UNTIL2, MAX-RESTARTS, EXPIRE-TIME)
+ | RECOVERY(GROUP, RELOCATE)
+ | RECOVERY(GROUP, DISABLE)
+. RECOVERY(GROUP, RESTART-ONLY) ... "attempt to restart in place", unlimited
+. RECOVERY(GROUP, RESTART-UNTIL1, MAX-RESTARTS)
+ ... ditto, but after MAX-RESTARTS attempts
+ (for the whole period of group-node
+ assignment) attempt to relocate
+. RECOVERY(GROUP, RESTART-UNTIL2, MAX-RESTARTS, EXPIRE-TIME)
+ ... ditto, but after MAX-RESTARTS attempts
+ accumulated within EXPIRE-TIME windows,
+ attempt to relocate
+. RECOVERY(GROUP, RELOCATE) ... move to another node
+. RECOVERY(GROUP, DISABLE) ... do not attempt anything, stop
+
+R: driven by `/cluster/rm/(service|vm)/@recovery`
+
+P: driven by OCF RA return code and/or `migration-threshold`
+
+RECOVERY(GROUP, RESTART-ONLY) [1. restart in place, unlimited]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+R: default, no need for that; otherwise, specifying `@recovery` as `restart`
+   (and specifying neither `@max_restarts` nor `@restart_expire_time`,
+   or keeping `@max_restarts` at zero!)
+
+P: default, no need for that; otherwise, specifying `migration-threshold`
+   as `INFINITY` (or zero?; can be overridden by OCF RA return code, anyway?)
+
+RECOVERY(GROUP, RESTART-UNTIL1, MAX-RESTARTS) [2. restart + absolute limit]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+R: driven by specifying `@max_restarts` as `MAX-RESTARTS` (a non-positive
+   value boils down to case 1.)
+   - and, optionally, specifying `@recovery` as `restart` (or not at all!)
+
+P: driven by specifying `migration-threshold` as `MAX-RESTARTS` (value,
+   presumably non-negative; `INFINITY` or zero? boils down to case 1.)
+   (but can be overridden by OCF RA return code, anyway?)
+
+[3. restart + relative limit for number of restarts/period]
+RECOVERY(GROUP, RESTART-UNTIL2, MAX-RESTARTS, EXPIRE-TIME)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+R: driven by specifying `@max_restarts` as `MAX-RESTARTS` (a non-positive
+   value boils down to case 1.) and `@restart_expire_time`
+   as `EXPIRE-TIME` (a value negative after expansion boils down to
+   case 1., zero to case 2.)
+   - and, optionally, specifying `@recovery` as `restart` (or not at all!)
+
+P: driven by specifying `migration-threshold` as `MAX-RESTARTS` (value,
+   presumably non-negative; `INFINITY` or zero? boils down to case 1.) and
+   `failure-timeout` as `EXPIRE-TIME` (value, presumably positive, zero
+   boils down to case 2.)
+   (but can be overridden by OCF RA return code, anyway?)
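+
+Putting case 3. together (illustrative names and values only):
+
+```xml
+<!-- rgmanager: up to 5 restart attempts accumulated within
+     a 600 s window, then attempt to relocate -->
+<service name="example-svc" recovery="restart"
+         max_restarts="5" restart_expire_time="600"/>
+```
+
+with the tentative Pacemaker counterpart (subject to the open questions
+above) being
+   # pcs resource meta example-svc migration-threshold=5 failure-timeout=600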
+
+RECOVERY(GROUP, RELOCATE) [4. move to another node]
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+R: driven by specifying `@recovery` as `relocate`
+
+P: driven by specifying `migration-threshold` as `1`
+   (or possibly a negative number?; regardless of `failure-timeout`)
+   (but can be overridden by OCF RA return code, anyway?)
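+
+A minimal sketch of case 4. (hypothetical service name):
+
+```xml
+<!-- rgmanager: first failure already moves the service elsewhere -->
+<service name="example-svc" recovery="relocate"/>
+```
+
+which would tentatively translate to
+   # pcs resource meta example-svc migration-threshold=1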
+
+RECOVERY(GROUP, DISABLE) [5. no further attempts]
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+R: driven by specifying `@recovery` as `disable`
+
+P: can only be achieved in case of AFFINITY(GROUP, NODE, FALSE)
+   for all nodes except one and specifying `migration-threshold`
+   as `1`, because upon a single failure, a remaining
+   AFFINITY(RESOURCE, NODE, FALSE) rule for the yet-enabled NODE will
+   be added, effectively preventing RESOURCE from running anywhere
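+
+The described emulation might be sketched in CIB terms like this
+(group and node names are hypothetical; a 3-node cluster with the
+group pinned away from node2 and node3 is assumed):
+
+```xml
+<!-- Pacemaker: ban the group from all nodes but one, and make
+     a single failure push it away from the remaining node, too -->
+<rsc_location id="ban-node2" rsc="example-grp" node="node2"
+              score="-INFINITY"/>
+<rsc_location id="ban-node3" rsc="example-grp" node="node3"
+              score="-INFINITY"/>
+```
+
+combined with
+   # pcs resource meta example-grp migration-threshold=1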
+
+
+Is-enabled group property
+-------------------------
+
+ENABLED ::= ENABLED(GROUP, TRUE)
+ | ENABLED(GROUP, FALSE)
+. ENABLED(GROUP, TRUE) ... group is enabled (default assumption)
+. ENABLED(GROUP, FALSE) ... group is disabled
+
+notes
+. see also 01/cluster: FUNCTION
+
+R: except for static disabling of everything (RGManager avoidance),
+   can be partially driven by `/cluster/rm/(service|vm)/@autostart`
+   and/or run-time modification using `clusvcadm`
+   (or at least it comes close???)
+
+P: via `target-role` (or possibly `is-managed`) meta-attribute [1]
+
+ENABLED(GROUP, TRUE) [1. group is enabled]
+~~~~~~~~~~~~~~~~~~~~
+
+R: (partially) driven by specifying `@autostart` as non-zero
+   (it has to be a sequence of digits, though!)
+ - default, no need for that
+ # clusvcadm -U GROUP <-- whole service/vm only
+
+P: default, no need for that, otherwise specifying `target-role` as `Started`
+ (or possibly `is-managed` as `true`)
+ # pcs resource enable GROUP
+ # pcs resource meta GROUP target-role=
+ # pcs resource meta GROUP target-role=Started
+ or
+ # pcs resource manage GROUP
+ # pcs resource meta GROUP is-managed=
+ # pcs resource meta GROUP is-managed=true
+
+ENABLED(GROUP, FALSE) [2. group is disabled]
+~~~~~~~~~~~~~~~~~~~~~
+
+R: (partially?) driven by specifying `@autostart` as `0` (or `no`)
+ # clusvcadm -Z GROUP <-- whole service/vm only
+
+P: # pcs resource disable GROUP
+ # pcs resource meta GROUP target-role=Stopped
+ or
+ # pcs resource unmanage GROUP
+ # pcs resource meta GROUP is-managed=false
+
+
+
+References
+==========
+
+: vim: set ft=rst: <-- not exactly, but better than nothing