Report generated on 31-Oct-2020 at 01:14:09 by pytest-html v2.1.1
389-ds-base | 2.0.0.0-20201031gitcdaa81c.fc32
Packages | {"pluggy": "0.13.1", "py": "1.9.0", "pytest": "5.4.3"}
Platform | Linux-5.7.7-200.fc32.x86_64-x86_64-with-glibc2.2.5
Plugins | {"html": "2.1.1", "libfaketime": "0.1.2", "metadata": "1.10.0"}
Python | 3.8.6
cyrus-sasl | 2.1.27-4.fc32
nspr | 4.29.0-1.fc32
nss | 3.57.0-1.fc32
openldap | 2.4.47-5.fc32
2058 tests ran in 17860.54 seconds.
1968 passed, 21 skipped, 62 failed, 11 errors, 20 expected failures, 8 unexpected passes

Result | Test | Duration | Links |
---|---|---|---|
Error | suites/replication/cleanallruv_test.py::test_clean_restart::teardown | 129.89 | |
    def fin():
        try:
            # Restart the masters and rerun cleanallruv
            for inst in topology_m4.ms.values():
                inst.restart()
            cruv_task = CleanAllRUVTask(topology_m4.ms["master1"])
            cruv_task.create(properties={
                'replica-id': m4rid,
                'replica-base-dn': DEFAULT_SUFFIX,
                'replica-force-cleaning': 'no',
                })
            cruv_task.wait()
        except ldap.UNWILLING_TO_PERFORM:
            # In some cases we already cleaned rid4, so if we fail, it's okay
            pass
        restore_master4(topology_m4)
        # Make sure everything works.
>       repl.test_replication_topology(topology_m4.ms.values())

suites/replication/cleanallruv_test.py:179:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.8/site-packages/lib389/replica.py:2531: in test_replication_topology
    self.test_replication(a, b, timeout)
/usr/local/lib/python3.8/site-packages/lib389/replica.py:2517: in test_replication
    self.wait_for_replication(from_instance, to_instance, timeout)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.replica.ReplicationManager object at 0x7f61c33baa60>
from_instance = <lib389.DirSrv object at 0x7f61c33ec940>
to_instance = <lib389.DirSrv object at 0x7f61c3395640>, timeout = 20

    def wait_for_replication(self, from_instance, to_instance, timeout=20):
        """Wait for a replication event to occur from instance to instance. This
        shows some point of synchronisation has occurred.

        :param from_instance: The instance whose state we want to check from
        :type from_instance: lib389.DirSrv
        :param to_instance: The instance whose state we want to check matches from.
        :type to_instance: lib389.DirSrv
        :param timeout: Fail after timeout seconds.
        :type timeout: int
        """
        # Touch something then wait_for_replication.
        from_groups = Groups(from_instance, basedn=self._suffix, rdn=None)
        to_groups = Groups(to_instance, basedn=self._suffix, rdn=None)
        from_group = from_groups.get('replication_managers')
        to_group = to_groups.get('replication_managers')
        change = str(uuid.uuid4())
        from_group.replace('description', change)
        for i in range(0, timeout):
            desc = to_group.get_attr_val_utf8('description')
            if change == desc:
                self._log.info("SUCCESS: Replication from %s to %s is working" % (from_instance.ldapuri, to_instance.ldapuri))
                return True
            self._log.info("Retry: Replication from %s to %s is NOT working (expect %s / got description=%s)" % (from_instance.ldapuri, to_instance.ldapuri, change, desc))
            time.sleep(1)
        self._log.info("FAIL: Replication from %s to %s is NOT working (expect %s / got description=%s)" % (from_instance.ldapuri, to_instance.ldapuri, change, desc))
>       raise Exception("Replication did not sync in time!")
E       Exception: Replication did not sync in time!
/usr/local/lib/python3.8/site-packages/lib389/replica.py:2501: Exception -------------------------------Captured log setup------------------------------- [35mDEBUG [0m tests.suites.replication.cleanallruv_test:cleanallruv_test.py:153 Wait a bit before the reset - it is required for the slow machines [35mDEBUG [0m tests.suites.replication.cleanallruv_test:cleanallruv_test.py:155 -------------- BEGIN RESET of m4 ----------------- [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 91517cc5-1389-40b8-88af-eac4acee701e / got description=6d98fd23-b029-4cf7-8af3-09045d403f8e) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is NOT working (expect d7dfc440-5cba-400a-baa6-d62d20360992 / got description=91517cc5-1389-40b8-88af-eac4acee701e) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect fc7f5abf-263d-477b-a236-df919dbbbd59 / got description=d7dfc440-5cba-400a-baa6-d62d20360992) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 372422fe-c134-4c3c-8705-175a01634b94 / got description=fc7f5abf-263d-477b-a236-df919dbbbd59) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is NOT working (expect cb394ea4-8f02-4f24-ae77-f1b194014754 / got description=372422fe-c134-4c3c-8705-175a01634b94) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect 02865025-4885-4426-b358-1e7e0e27a0da / got description=cb394ea4-8f02-4f24-ae77-f1b194014754) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to 
ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect bcf89d0c-87de-43fa-aa3c-c0ce8cbbcaf6 / got description=02865025-4885-4426-b358-1e7e0e27a0da) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect 88c8ceb7-8d27-4975-9e03-d162125bb04a / got description=7122f6e1-0858-4ef6-8914-3a5ec860abf9) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 6855ea6f-d6f4-44af-a217-b472a9d43cff / got description=88c8ceb7-8d27-4975-9e03-d162125bb04a) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect c76fa1b1-ad36-4f92-af89-07f2e5fcc370 / got description=6855ea6f-d6f4-44af-a217-b472a9d43cff) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is NOT working (expect 8590b280-2a37-4696-9200-2904fe082e4d / got description=c76fa1b1-ad36-4f92-af89-07f2e5fcc370) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working [35mDEBUG [0m tests.suites.replication.cleanallruv_test:cleanallruv_test.py:181 -------------- FINISH RESET of m4 ----------------- -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.suites.replication.cleanallruv_test:cleanallruv_test.py:256 Running test_clean_restart... [32mINFO [0m tests.suites.replication.cleanallruv_test:cleanallruv_test.py:259 test_clean: disable master 4... [32mINFO [0m tests.suites.replication.cleanallruv_test:cleanallruv_test.py:71 test_clean: remove all the agreements to master 4... 
[32mINFO [0m tests.suites.replication.cleanallruv_test:cleanallruv_test.py:267 test_clean: run the cleanAllRUV task... [32mINFO [0m tests.suites.replication.cleanallruv_test:cleanallruv_test.py:292 test_clean_restart: check all the masters have been cleaned... [32mINFO [0m tests.suites.replication.cleanallruv_test:cleanallruv_test.py:85 check_ruvs for replica dc=example,dc=com:1 (suffix:rid) [32mINFO [0m tests.suites.replication.cleanallruv_test:cleanallruv_test.py:85 check_ruvs for replica dc=example,dc=com:2 (suffix:rid) [32mINFO [0m tests.suites.replication.cleanallruv_test:cleanallruv_test.py:85 check_ruvs for replica dc=example,dc=com:3 (suffix:rid) [32mINFO [0m tests.suites.replication.cleanallruv_test:cleanallruv_test.py:296 test_clean_restart PASSED, restoring master 4... -----------------------------Captured log teardown------------------------------ [32mINFO [0m lib389.replica:replica.py:2084 SUCCESS: bootstrap to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 completed [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is was created [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is was created [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect f81d3861-9742-49d5-bcad-ebbf818829b8 / got description=8590b280-2a37-4696-9200-2904fe082e4d) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 07b8080a-d4cf-4886-a8e5-187a5f947882 / got description=f81d3861-9742-49d5-bcad-ebbf818829b8) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 07b8080a-d4cf-4886-a8e5-187a5f947882 / got description=f81d3861-9742-49d5-bcad-ebbf818829b8) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 07b8080a-d4cf-4886-a8e5-187a5f947882 / got description=f81d3861-9742-49d5-bcad-ebbf818829b8) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2153 SUCCESS: joined master from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is was created [32mINFO [0m 
lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is was created [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is was created [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is was created [32mINFO [0m tests.suites.replication.cleanallruv_test:cleanallruv_test.py:148 Master 4 has been successfully restored. [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect a6a03463-12e2-4255-976f-82e8fd123a21 / got description=f81d3861-9742-49d5-bcad-ebbf818829b8) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is NOT working (expect ba956b87-611e-41a9-8cb1-15b3dfab2e06 / got description=a6a03463-12e2-4255-976f-82e8fd123a21) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect a57e26e0-909b-4c87-bb18-3713e10b26e9 / got description=ba956b87-611e-41a9-8cb1-15b3dfab2e06) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 37dddf1e-abce-4436-8c75-11fb1fd51402 / got description=a57e26e0-909b-4c87-bb18-3713e10b26e9) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is NOT working (expect c1e28090-b576-431f-bb07-4a479870ce82 / got description=37dddf1e-abce-4436-8c75-11fb1fd51402) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 
is NOT working (expect 2a250435-96d1-4335-86e4-a5271980d1c7 / got description=c1e28090-b576-431f-bb07-4a479870ce82) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 29f6e719-c58d-4c70-9f52-7931a52c473f / got description=2a250435-96d1-4335-86e4-a5271980d1c7) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 0321a86f-6e48-4231-a690-05d5b741bc2b / got description=29f6e719-c58d-4c70-9f52-7931a52c473f) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect d2d998b3-180e-43c4-b92d-cb19ea408323 / got description=0321a86f-6e48-4231-a690-05d5b741bc2b) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 18845081-24ff-421f-9611-8627e5119a80 / got description=d2d998b3-180e-43c4-b92d-cb19ea408323) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 18845081-24ff-421f-9611-8627e5119a80 / got description=d2d998b3-180e-43c4-b92d-cb19ea408323) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 18845081-24ff-421f-9611-8627e5119a80 / got description=d2d998b3-180e-43c4-b92d-cb19ea408323) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 18845081-24ff-421f-9611-8627e5119a80 / got description=d2d998b3-180e-43c4-b92d-cb19ea408323) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 18845081-24ff-421f-9611-8627e5119a80 / got description=d2d998b3-180e-43c4-b92d-cb19ea408323) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from 
ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 18845081-24ff-421f-9611-8627e5119a80 / got description=d2d998b3-180e-43c4-b92d-cb19ea408323) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 18845081-24ff-421f-9611-8627e5119a80 / got description=d2d998b3-180e-43c4-b92d-cb19ea408323) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 18845081-24ff-421f-9611-8627e5119a80 / got description=d2d998b3-180e-43c4-b92d-cb19ea408323) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 18845081-24ff-421f-9611-8627e5119a80 / got description=d2d998b3-180e-43c4-b92d-cb19ea408323) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 18845081-24ff-421f-9611-8627e5119a80 / got description=d2d998b3-180e-43c4-b92d-cb19ea408323) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 18845081-24ff-421f-9611-8627e5119a80 / got description=d2d998b3-180e-43c4-b92d-cb19ea408323) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 18845081-24ff-421f-9611-8627e5119a80 / got description=d2d998b3-180e-43c4-b92d-cb19ea408323) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 18845081-24ff-421f-9611-8627e5119a80 / got description=d2d998b3-180e-43c4-b92d-cb19ea408323) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 18845081-24ff-421f-9611-8627e5119a80 / got description=d2d998b3-180e-43c4-b92d-cb19ea408323) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 18845081-24ff-421f-9611-8627e5119a80 / got description=d2d998b3-180e-43c4-b92d-cb19ea408323) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 18845081-24ff-421f-9611-8627e5119a80 / got description=d2d998b3-180e-43c4-b92d-cb19ea408323) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to 
ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 18845081-24ff-421f-9611-8627e5119a80 / got description=d2d998b3-180e-43c4-b92d-cb19ea408323) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 18845081-24ff-421f-9611-8627e5119a80 / got description=d2d998b3-180e-43c4-b92d-cb19ea408323) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 18845081-24ff-421f-9611-8627e5119a80 / got description=d2d998b3-180e-43c4-b92d-cb19ea408323) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 18845081-24ff-421f-9611-8627e5119a80 / got description=d2d998b3-180e-43c4-b92d-cb19ea408323) [32mINFO [0m lib389.replica:replica.py:2500 FAIL: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 18845081-24ff-421f-9611-8627e5119a80 / got description=d2d998b3-180e-43c4-b92d-cb19ea408323) | |||
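Note on the failure above: wait_for_replication polls the consumer once per second and gives up after its default timeout of 20 seconds, which is what raises "Replication did not sync in time!" when the master4-to-master1 direction does not converge. The block below is a minimal, self-contained sketch of that same probe, not the project's own helper: it writes a UUID into the replication_managers group's description on the supplier and polls the consumer for it. Only the calls visible in the traceback are used; the import path for Groups and the name probe_replication are assumptions for illustration, and a larger timeout can be passed on slow machines.

    import time
    import uuid

    from lib389.idm.group import Groups  # assumed import path for the Groups class used above


    def probe_replication(from_instance, to_instance, suffix, timeout=20):
        """Return True once a change made on from_instance becomes visible on to_instance."""
        from_group = Groups(from_instance, basedn=suffix, rdn=None).get('replication_managers')
        to_group = Groups(to_instance, basedn=suffix, rdn=None).get('replication_managers')

        change = str(uuid.uuid4())
        from_group.replace('description', change)    # touch something that is replicated

        for _ in range(timeout):
            if to_group.get_attr_val_utf8('description') == change:
                return True                          # the change arrived on the consumer
            time.sleep(1)                            # retry once per second, as replica.py does
        return False                                 # did not sync within `timeout` seconds

    # Example with hypothetical instances: probe_replication(master4, master1, DEFAULT_SUFFIX, timeout=60)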
Error | suites/replication/cleanallruv_test.py::test_clean_force::setup | 34.45 | |
request = <SubRequest 'm4rid' for <Function test_clean_force>>
topology_m4 = <lib389.topologies.TopologyMain object at 0x7f61c3381160>

    @pytest.fixture()
    def m4rid(request, topology_m4):
        log.debug("Wait a bit before the reset - it is required for the slow machines")
        time.sleep(5)
        log.debug("-------------- BEGIN RESET of m4 -----------------")
        repl = ReplicationManager(DEFAULT_SUFFIX)
>       repl.test_replication_topology(topology_m4.ms.values())

suites/replication/cleanallruv_test.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.8/site-packages/lib389/replica.py:2531: in test_replication_topology
    self.test_replication(a, b, timeout)
/usr/local/lib/python3.8/site-packages/lib389/replica.py:2517: in test_replication
    self.wait_for_replication(from_instance, to_instance, timeout)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.replica.ReplicationManager object at 0x7f61c3443d00>
from_instance = <lib389.DirSrv object at 0x7f61c33ec940>
to_instance = <lib389.DirSrv object at 0x7f61c3395640>, timeout = 20

    def wait_for_replication(self, from_instance, to_instance, timeout=20):
        """Wait for a replication event to occur from instance to instance. This
        shows some point of synchronisation has occurred.

        :param from_instance: The instance whose state we want to check from
        :type from_instance: lib389.DirSrv
        :param to_instance: The instance whose state we want to check matches from.
        :type to_instance: lib389.DirSrv
        :param timeout: Fail after timeout seconds.
        :type timeout: int
        """
        # Touch something then wait_for_replication.
        from_groups = Groups(from_instance, basedn=self._suffix, rdn=None)
        to_groups = Groups(to_instance, basedn=self._suffix, rdn=None)
        from_group = from_groups.get('replication_managers')
        to_group = to_groups.get('replication_managers')
        change = str(uuid.uuid4())
        from_group.replace('description', change)
        for i in range(0, timeout):
            desc = to_group.get_attr_val_utf8('description')
            if change == desc:
                self._log.info("SUCCESS: Replication from %s to %s is working" % (from_instance.ldapuri, to_instance.ldapuri))
                return True
            self._log.info("Retry: Replication from %s to %s is NOT working (expect %s / got description=%s)" % (from_instance.ldapuri, to_instance.ldapuri, change, desc))
            time.sleep(1)
        self._log.info("FAIL: Replication from %s to %s is NOT working (expect %s / got description=%s)" % (from_instance.ldapuri, to_instance.ldapuri, change, desc))
>       raise Exception("Replication did not sync in time!")
E       Exception: Replication did not sync in time!
/usr/local/lib/python3.8/site-packages/lib389/replica.py:2501: Exception -------------------------------Captured log setup------------------------------- [35mDEBUG [0m tests.suites.replication.cleanallruv_test:cleanallruv_test.py:153 Wait a bit before the reset - it is required for the slow machines [35mDEBUG [0m tests.suites.replication.cleanallruv_test:cleanallruv_test.py:155 -------------- BEGIN RESET of m4 ----------------- [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 5c1c52de-2f74-4f07-90c6-68cb76cc0e18 / got description=d2d998b3-180e-43c4-b92d-cb19ea408323) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is NOT working (expect e589923e-57e2-4660-ada5-c07e12e003bb / got description=5c1c52de-2f74-4f07-90c6-68cb76cc0e18) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect 3c4bc354-579d-437e-8918-376b61ae43a2 / got description=e589923e-57e2-4660-ada5-c07e12e003bb) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect ef6df2a4-25ea-4cc0-b4c3-e27e5e11db5a / got description=3c4bc354-579d-437e-8918-376b61ae43a2) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is NOT working (expect c99400dc-be92-4ece-96dc-56c743c0856a / got description=ef6df2a4-25ea-4cc0-b4c3-e27e5e11db5a) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect 3cbc0740-440e-4303-9c43-cf8646911ae7 / got description=c99400dc-be92-4ece-96dc-56c743c0856a) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to 
ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 0da93b2b-2c2d-49d2-b1fb-cb4705aea145 / got description=3cbc0740-440e-4303-9c43-cf8646911ae7) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 3ee23333-aca0-4810-ab72-db2dea40d941 / got description=0da93b2b-2c2d-49d2-b1fb-cb4705aea145) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect 3b015067-066f-426a-88cb-96ec70ff30bd / got description=3ee23333-aca0-4810-ab72-db2dea40d941) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect d883a309-407a-4218-978e-3218d2c67987 / got description=3b015067-066f-426a-88cb-96ec70ff30bd) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect d883a309-407a-4218-978e-3218d2c67987 / got description=3b015067-066f-426a-88cb-96ec70ff30bd) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect d883a309-407a-4218-978e-3218d2c67987 / got description=3b015067-066f-426a-88cb-96ec70ff30bd) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect d883a309-407a-4218-978e-3218d2c67987 / got description=3b015067-066f-426a-88cb-96ec70ff30bd) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect d883a309-407a-4218-978e-3218d2c67987 / got description=3b015067-066f-426a-88cb-96ec70ff30bd) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect d883a309-407a-4218-978e-3218d2c67987 / got description=3b015067-066f-426a-88cb-96ec70ff30bd) [32mINFO [0m 
lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect d883a309-407a-4218-978e-3218d2c67987 / got description=3b015067-066f-426a-88cb-96ec70ff30bd) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect d883a309-407a-4218-978e-3218d2c67987 / got description=3b015067-066f-426a-88cb-96ec70ff30bd) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect d883a309-407a-4218-978e-3218d2c67987 / got description=3b015067-066f-426a-88cb-96ec70ff30bd) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect d883a309-407a-4218-978e-3218d2c67987 / got description=3b015067-066f-426a-88cb-96ec70ff30bd) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect d883a309-407a-4218-978e-3218d2c67987 / got description=3b015067-066f-426a-88cb-96ec70ff30bd) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect d883a309-407a-4218-978e-3218d2c67987 / got description=3b015067-066f-426a-88cb-96ec70ff30bd) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect d883a309-407a-4218-978e-3218d2c67987 / got description=3b015067-066f-426a-88cb-96ec70ff30bd) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect d883a309-407a-4218-978e-3218d2c67987 / got description=3b015067-066f-426a-88cb-96ec70ff30bd) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect d883a309-407a-4218-978e-3218d2c67987 / got description=3b015067-066f-426a-88cb-96ec70ff30bd) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect d883a309-407a-4218-978e-3218d2c67987 / got description=3b015067-066f-426a-88cb-96ec70ff30bd) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect d883a309-407a-4218-978e-3218d2c67987 / got description=3b015067-066f-426a-88cb-96ec70ff30bd) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from 
ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect d883a309-407a-4218-978e-3218d2c67987 / got description=3b015067-066f-426a-88cb-96ec70ff30bd) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect d883a309-407a-4218-978e-3218d2c67987 / got description=3b015067-066f-426a-88cb-96ec70ff30bd) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect d883a309-407a-4218-978e-3218d2c67987 / got description=3b015067-066f-426a-88cb-96ec70ff30bd) [32mINFO [0m lib389.replica:replica.py:2500 FAIL: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect d883a309-407a-4218-978e-3218d2c67987 / got description=3b015067-066f-426a-88cb-96ec70ff30bd) | |||
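This setup failure comes from the m4rid fixture's topology check: as the captured logs show, test_replication_topology exercises every ordered supplier-to-consumer pair (39001 to 39002, 39001 to 39003, ..., 39004 to 39003), and the master4-to-master1 direction never converged within the 20-second window. A minimal sketch of that pairwise pattern follows, built only on the ReplicationManager constructor and wait_for_replication signature shown in the traceback; the import paths and the helper name are assumptions, and lib389's own test_replication_topology remains the real implementation.

    from itertools import permutations

    from lib389._constants import DEFAULT_SUFFIX    # assumed import path
    from lib389.replica import ReplicationManager   # assumed import path


    def probe_topology(masters, timeout=20):
        """Check every ordered supplier->consumer pair, as the captured logs above show."""
        repl = ReplicationManager(DEFAULT_SUFFIX)
        for supplier, consumer in permutations(masters, 2):
            # wait_for_replication raises Exception("Replication did not sync in time!")
            # after `timeout` one-second retries, exactly as in the traceback above.
            repl.wait_for_replication(supplier, consumer, timeout=timeout)

    # Example with a hypothetical four-master topology: probe_topology(list(topology_m4.ms.values()), timeout=60)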
Error | suites/replication/cleanallruv_test.py::test_abort::setup | 34.37 | |
request = <SubRequest 'm4rid' for <Function test_abort>>
topology_m4 = <lib389.topologies.TopologyMain object at 0x7f61c3381160>

    @pytest.fixture()
    def m4rid(request, topology_m4):
        log.debug("Wait a bit before the reset - it is required for the slow machines")
        time.sleep(5)
        log.debug("-------------- BEGIN RESET of m4 -----------------")
        repl = ReplicationManager(DEFAULT_SUFFIX)
>       repl.test_replication_topology(topology_m4.ms.values())

suites/replication/cleanallruv_test.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.8/site-packages/lib389/replica.py:2531: in test_replication_topology
    self.test_replication(a, b, timeout)
/usr/local/lib/python3.8/site-packages/lib389/replica.py:2517: in test_replication
    self.wait_for_replication(from_instance, to_instance, timeout)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.replica.ReplicationManager object at 0x7f61c34d82e0>
from_instance = <lib389.DirSrv object at 0x7f61c33ec940>
to_instance = <lib389.DirSrv object at 0x7f61c3395640>, timeout = 20

    def wait_for_replication(self, from_instance, to_instance, timeout=20):
        """Wait for a replication event to occur from instance to instance. This
        shows some point of synchronisation has occurred.

        :param from_instance: The instance whose state we want to check from
        :type from_instance: lib389.DirSrv
        :param to_instance: The instance whose state we want to check matches from.
        :type to_instance: lib389.DirSrv
        :param timeout: Fail after timeout seconds.
        :type timeout: int
        """
        # Touch something then wait_for_replication.
        from_groups = Groups(from_instance, basedn=self._suffix, rdn=None)
        to_groups = Groups(to_instance, basedn=self._suffix, rdn=None)
        from_group = from_groups.get('replication_managers')
        to_group = to_groups.get('replication_managers')
        change = str(uuid.uuid4())
        from_group.replace('description', change)
        for i in range(0, timeout):
            desc = to_group.get_attr_val_utf8('description')
            if change == desc:
                self._log.info("SUCCESS: Replication from %s to %s is working" % (from_instance.ldapuri, to_instance.ldapuri))
                return True
            self._log.info("Retry: Replication from %s to %s is NOT working (expect %s / got description=%s)" % (from_instance.ldapuri, to_instance.ldapuri, change, desc))
            time.sleep(1)
        self._log.info("FAIL: Replication from %s to %s is NOT working (expect %s / got description=%s)" % (from_instance.ldapuri, to_instance.ldapuri, change, desc))
>       raise Exception("Replication did not sync in time!")
E       Exception: Replication did not sync in time!
/usr/local/lib/python3.8/site-packages/lib389/replica.py:2501: Exception -------------------------------Captured log setup------------------------------- [35mDEBUG [0m tests.suites.replication.cleanallruv_test:cleanallruv_test.py:153 Wait a bit before the reset - it is required for the slow machines [35mDEBUG [0m tests.suites.replication.cleanallruv_test:cleanallruv_test.py:155 -------------- BEGIN RESET of m4 ----------------- [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 5f80fa0e-9299-4e6a-9d85-98335ac1311f / got description=3b015067-066f-426a-88cb-96ec70ff30bd) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is NOT working (expect f4c73853-72ee-48be-aa01-bc5fa53aaee5 / got description=5f80fa0e-9299-4e6a-9d85-98335ac1311f) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect 39885e74-87f3-49d0-ad16-ba5fb974334d / got description=f4c73853-72ee-48be-aa01-bc5fa53aaee5) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect dff44408-79db-4580-bd67-4590b651e596 / got description=39885e74-87f3-49d0-ad16-ba5fb974334d) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is NOT working (expect de80f882-c2b4-45b8-b246-77057d2b6cb8 / got description=dff44408-79db-4580-bd67-4590b651e596) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect e1524d5e-e071-452f-a1df-3195a8f3f390 / got description=de80f882-c2b4-45b8-b246-77057d2b6cb8) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to 
ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect fcdc9d0a-7120-4dea-9137-efae043f555e / got description=e1524d5e-e071-452f-a1df-3195a8f3f390) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect cb945856-6417-465a-b8be-6da91ec33b00 / got description=fcdc9d0a-7120-4dea-9137-efae043f555e) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect 550c3245-9522-49cb-a236-1da13581c0e7 / got description=cb945856-6417-465a-b8be-6da91ec33b00) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 89951600-d951-45f1-a737-b5c90d22b2ab / got description=550c3245-9522-49cb-a236-1da13581c0e7) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 89951600-d951-45f1-a737-b5c90d22b2ab / got description=550c3245-9522-49cb-a236-1da13581c0e7) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 89951600-d951-45f1-a737-b5c90d22b2ab / got description=550c3245-9522-49cb-a236-1da13581c0e7) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 89951600-d951-45f1-a737-b5c90d22b2ab / got description=550c3245-9522-49cb-a236-1da13581c0e7) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 89951600-d951-45f1-a737-b5c90d22b2ab / got description=550c3245-9522-49cb-a236-1da13581c0e7) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 89951600-d951-45f1-a737-b5c90d22b2ab / got description=550c3245-9522-49cb-a236-1da13581c0e7) [32mINFO [0m 
lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 89951600-d951-45f1-a737-b5c90d22b2ab / got description=550c3245-9522-49cb-a236-1da13581c0e7) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 89951600-d951-45f1-a737-b5c90d22b2ab / got description=550c3245-9522-49cb-a236-1da13581c0e7) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 89951600-d951-45f1-a737-b5c90d22b2ab / got description=550c3245-9522-49cb-a236-1da13581c0e7) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 89951600-d951-45f1-a737-b5c90d22b2ab / got description=550c3245-9522-49cb-a236-1da13581c0e7) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 89951600-d951-45f1-a737-b5c90d22b2ab / got description=550c3245-9522-49cb-a236-1da13581c0e7) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 89951600-d951-45f1-a737-b5c90d22b2ab / got description=550c3245-9522-49cb-a236-1da13581c0e7) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 89951600-d951-45f1-a737-b5c90d22b2ab / got description=550c3245-9522-49cb-a236-1da13581c0e7) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 89951600-d951-45f1-a737-b5c90d22b2ab / got description=550c3245-9522-49cb-a236-1da13581c0e7) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 89951600-d951-45f1-a737-b5c90d22b2ab / got description=550c3245-9522-49cb-a236-1da13581c0e7) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 89951600-d951-45f1-a737-b5c90d22b2ab / got description=550c3245-9522-49cb-a236-1da13581c0e7) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 89951600-d951-45f1-a737-b5c90d22b2ab / got description=550c3245-9522-49cb-a236-1da13581c0e7) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from 
ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 89951600-d951-45f1-a737-b5c90d22b2ab / got description=550c3245-9522-49cb-a236-1da13581c0e7) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 89951600-d951-45f1-a737-b5c90d22b2ab / got description=550c3245-9522-49cb-a236-1da13581c0e7) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 89951600-d951-45f1-a737-b5c90d22b2ab / got description=550c3245-9522-49cb-a236-1da13581c0e7) [32mINFO [0m lib389.replica:replica.py:2500 FAIL: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 89951600-d951-45f1-a737-b5c90d22b2ab / got description=550c3245-9522-49cb-a236-1da13581c0e7) | |||
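For reference, the teardown shown in the first error resubmits a cleanAllRUV task for the retired replica after restarting all masters, and tolerates UNWILLING_TO_PERFORM when the RID was already cleaned by the test itself. A condensed sketch of that pattern follows; CleanAllRUVTask, the property names, and the exception handling are taken from the teardown code above, while the import paths and the helper name are assumptions for illustration.

    import ldap

    from lib389._constants import DEFAULT_SUFFIX     # assumed import path
    from lib389.tasks import CleanAllRUVTask         # assumed import path


    def clean_rid(master, rid):
        """Submit a cleanAllRUV task for `rid` on `master` and wait for it to finish."""
        try:
            task = CleanAllRUVTask(master)
            task.create(properties={
                'replica-id': rid,                   # the retired replica id (m4rid in the test)
                'replica-base-dn': DEFAULT_SUFFIX,
                'replica-force-cleaning': 'no',
            })
            task.wait()
        except ldap.UNWILLING_TO_PERFORM:
            # The rid may already have been cleaned; the teardown above treats this as acceptable.
            pass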
Error | suites/replication/cleanallruv_test.py::test_abort_restart::setup | 34.39 | |
request = <SubRequest 'm4rid' for <Function test_abort_restart>>
topology_m4 = <lib389.topologies.TopologyMain object at 0x7f61c3381160>

    @pytest.fixture()
    def m4rid(request, topology_m4):
        log.debug("Wait a bit before the reset - it is required for the slow machines")
        time.sleep(5)
        log.debug("-------------- BEGIN RESET of m4 -----------------")
        repl = ReplicationManager(DEFAULT_SUFFIX)
>       repl.test_replication_topology(topology_m4.ms.values())

suites/replication/cleanallruv_test.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.8/site-packages/lib389/replica.py:2531: in test_replication_topology
    self.test_replication(a, b, timeout)
/usr/local/lib/python3.8/site-packages/lib389/replica.py:2517: in test_replication
    self.wait_for_replication(from_instance, to_instance, timeout)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.replica.ReplicationManager object at 0x7f61c33bc730>
from_instance = <lib389.DirSrv object at 0x7f61c33ec940>
to_instance = <lib389.DirSrv object at 0x7f61c3395640>, timeout = 20

    def wait_for_replication(self, from_instance, to_instance, timeout=20):
        """Wait for a replication event to occur from instance to instance. This
        shows some point of synchronisation has occurred.

        :param from_instance: The instance whose state we want to check from
        :type from_instance: lib389.DirSrv
        :param to_instance: The instance whose state we want to check matches from.
        :type to_instance: lib389.DirSrv
        :param timeout: Fail after timeout seconds.
        :type timeout: int
        """
        # Touch something then wait_for_replication.
        from_groups = Groups(from_instance, basedn=self._suffix, rdn=None)
        to_groups = Groups(to_instance, basedn=self._suffix, rdn=None)
        from_group = from_groups.get('replication_managers')
        to_group = to_groups.get('replication_managers')
        change = str(uuid.uuid4())
        from_group.replace('description', change)
        for i in range(0, timeout):
            desc = to_group.get_attr_val_utf8('description')
            if change == desc:
                self._log.info("SUCCESS: Replication from %s to %s is working" % (from_instance.ldapuri, to_instance.ldapuri))
                return True
            self._log.info("Retry: Replication from %s to %s is NOT working (expect %s / got description=%s)" % (from_instance.ldapuri, to_instance.ldapuri, change, desc))
            time.sleep(1)
        self._log.info("FAIL: Replication from %s to %s is NOT working (expect %s / got description=%s)" % (from_instance.ldapuri, to_instance.ldapuri, change, desc))
>       raise Exception("Replication did not sync in time!")
E       Exception: Replication did not sync in time!
/usr/local/lib/python3.8/site-packages/lib389/replica.py:2501: Exception
-------------------------------Captured log setup-------------------------------
DEBUG    tests.suites.replication.cleanallruv_test:cleanallruv_test.py:153 Wait a bit before the reset - it is required for the slow machines
DEBUG    tests.suites.replication.cleanallruv_test:cleanallruv_test.py:155 -------------- BEGIN RESET of m4 -----------------
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 4bb16770-28d0-423f-9b05-c7a41913757c / got description=550c3245-9522-49cb-a236-1da13581c0e7)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is NOT working (expect 60558c5b-58e4-48bc-8c50-9c86b2721e65 / got description=4bb16770-28d0-423f-9b05-c7a41913757c)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect cedbcb00-5d65-47fc-9c1c-e0630815c6d6 / got description=60558c5b-58e4-48bc-8c50-9c86b2721e65)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect c982518d-6e28-4d16-9030-ea059a9c907b / got description=cedbcb00-5d65-47fc-9c1c-e0630815c6d6)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is NOT working (expect f3af31db-b1ad-4329-958f-5d6811bd98a7 / got description=c982518d-6e28-4d16-9030-ea059a9c907b)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect e9eb867c-e8bf-4e5c-bc37-b08726b70396 / got description=f3af31db-b1ad-4329-958f-5d6811bd98a7)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 91445b86-c443-48cd-a52b-a553c40a2f20 / got description=e9eb867c-e8bf-4e5c-bc37-b08726b70396)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 7b093249-dcb2-4832-b27e-fcdde08eae69 / got description=91445b86-c443-48cd-a52b-a553c40a2f20)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect 4047eea7-9963-4679-b8b1-5047b9560a75 / got description=7b093249-dcb2-4832-b27e-fcdde08eae69)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 3deea14a-fda7-4ca1-8a96-72bd1e8c8f99 / got description=4047eea7-9963-4679-b8b1-5047b9560a75)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 3deea14a-fda7-4ca1-8a96-72bd1e8c8f99 / got description=4047eea7-9963-4679-b8b1-5047b9560a75)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 3deea14a-fda7-4ca1-8a96-72bd1e8c8f99 / got description=4047eea7-9963-4679-b8b1-5047b9560a75)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 3deea14a-fda7-4ca1-8a96-72bd1e8c8f99 / got description=4047eea7-9963-4679-b8b1-5047b9560a75)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 3deea14a-fda7-4ca1-8a96-72bd1e8c8f99 / got description=4047eea7-9963-4679-b8b1-5047b9560a75)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 3deea14a-fda7-4ca1-8a96-72bd1e8c8f99 / got description=4047eea7-9963-4679-b8b1-5047b9560a75)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 3deea14a-fda7-4ca1-8a96-72bd1e8c8f99 / got description=4047eea7-9963-4679-b8b1-5047b9560a75)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 3deea14a-fda7-4ca1-8a96-72bd1e8c8f99 / got description=4047eea7-9963-4679-b8b1-5047b9560a75)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 3deea14a-fda7-4ca1-8a96-72bd1e8c8f99 / got description=4047eea7-9963-4679-b8b1-5047b9560a75)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 3deea14a-fda7-4ca1-8a96-72bd1e8c8f99 / got description=4047eea7-9963-4679-b8b1-5047b9560a75)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 3deea14a-fda7-4ca1-8a96-72bd1e8c8f99 / got description=4047eea7-9963-4679-b8b1-5047b9560a75)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 3deea14a-fda7-4ca1-8a96-72bd1e8c8f99 / got description=4047eea7-9963-4679-b8b1-5047b9560a75)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 3deea14a-fda7-4ca1-8a96-72bd1e8c8f99 / got description=4047eea7-9963-4679-b8b1-5047b9560a75)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 3deea14a-fda7-4ca1-8a96-72bd1e8c8f99 / got description=4047eea7-9963-4679-b8b1-5047b9560a75)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 3deea14a-fda7-4ca1-8a96-72bd1e8c8f99 / got description=4047eea7-9963-4679-b8b1-5047b9560a75)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 3deea14a-fda7-4ca1-8a96-72bd1e8c8f99 / got description=4047eea7-9963-4679-b8b1-5047b9560a75)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 3deea14a-fda7-4ca1-8a96-72bd1e8c8f99 / got description=4047eea7-9963-4679-b8b1-5047b9560a75)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 3deea14a-fda7-4ca1-8a96-72bd1e8c8f99 / got description=4047eea7-9963-4679-b8b1-5047b9560a75)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 3deea14a-fda7-4ca1-8a96-72bd1e8c8f99 / got description=4047eea7-9963-4679-b8b1-5047b9560a75)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 3deea14a-fda7-4ca1-8a96-72bd1e8c8f99 / got description=4047eea7-9963-4679-b8b1-5047b9560a75)
INFO     lib389.replica:replica.py:2500 FAIL: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 3deea14a-fda7-4ca1-8a96-72bd1e8c8f99 / got description=4047eea7-9963-4679-b8b1-5047b9560a75)
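Note: every cleanallruv setup/teardown error in this run fails at the same point. The m4rid fixture calls ReplicationManager.test_replication_topology(), which writes a fresh uuid4 value to the description of cn=replication_managers on each supplier and polls the peer once per second; the master4 -> master1 link never converges within the default 20 s timeout. The following sketch is not part of the test suite; it only restates that check with the lib389 calls quoted in the traceback above, and the helper name, supplier keys, and larger timeout value are illustrative assumptions.

    # Minimal sketch, assuming a four-supplier topology object shaped like the
    # topology_m4 fixture and the lib389 API shown in the traceback above.
    from lib389.replica import ReplicationManager
    from lib389._constants import DEFAULT_SUFFIX

    def check_m4_topology(topology_m4, timeout=60):
        repl = ReplicationManager(DEFAULT_SUFFIX)
        # Same full-mesh check the fixture performs; raises
        # Exception("Replication did not sync in time!") on a stuck link.
        repl.test_replication_topology(topology_m4.ms.values())
        # Probe only the link that times out in this report, with a longer
        # per-link timeout than the default of 20 seconds.
        repl.wait_for_replication(topology_m4.ms["master4"],
                                  topology_m4.ms["master1"],
                                  timeout=timeout)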
Error | suites/replication/cleanallruv_test.py::test_abort_certify::setup | 34.42 | |
request = <SubRequest 'm4rid' for <Function test_abort_certify>>
topology_m4 = <lib389.topologies.TopologyMain object at 0x7f61c3381160>

    @pytest.fixture()
    def m4rid(request, topology_m4):
        log.debug("Wait a bit before the reset - it is required for the slow machines")
        time.sleep(5)
        log.debug("-------------- BEGIN RESET of m4 -----------------")
        repl = ReplicationManager(DEFAULT_SUFFIX)
>       repl.test_replication_topology(topology_m4.ms.values())

suites/replication/cleanallruv_test.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.8/site-packages/lib389/replica.py:2531: in test_replication_topology
    self.test_replication(a, b, timeout)
/usr/local/lib/python3.8/site-packages/lib389/replica.py:2517: in test_replication
    self.wait_for_replication(from_instance, to_instance, timeout)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.replica.ReplicationManager object at 0x7f61c3319340>
from_instance = <lib389.DirSrv object at 0x7f61c33ec940>
to_instance = <lib389.DirSrv object at 0x7f61c3395640>, timeout = 20

    def wait_for_replication(self, from_instance, to_instance, timeout=20):
        """Wait for a replication event to occur from instance to instance. This
        shows some point of synchronisation has occured.

        :param from_instance: The instance whos state we we want to check from
        :type from_instance: lib389.DirSrv
        :param to_instance: The instance whos state we want to check matches from.
        :type to_instance: lib389.DirSrv
        :param timeout: Fail after timeout seconds.
        :type timeout: int
        """
        # Touch something then wait_for_replication.
        from_groups = Groups(from_instance, basedn=self._suffix, rdn=None)
        to_groups = Groups(to_instance, basedn=self._suffix, rdn=None)
        from_group = from_groups.get('replication_managers')
        to_group = to_groups.get('replication_managers')
        change = str(uuid.uuid4())
        from_group.replace('description', change)
        for i in range(0, timeout):
            desc = to_group.get_attr_val_utf8('description')
            if change == desc:
                self._log.info("SUCCESS: Replication from %s to %s is working" % (from_instance.ldapuri, to_instance.ldapuri))
                return True
            self._log.info("Retry: Replication from %s to %s is NOT working (expect %s / got description=%s)" % (from_instance.ldapuri, to_instance.ldapuri, change, desc))
            time.sleep(1)
        self._log.info("FAIL: Replication from %s to %s is NOT working (expect %s / got description=%s)" % (from_instance.ldapuri, to_instance.ldapuri, change, desc))
>       raise Exception("Replication did not sync in time!")
E       Exception: Replication did not sync in time!
/usr/local/lib/python3.8/site-packages/lib389/replica.py:2501: Exception
-------------------------------Captured log setup-------------------------------
DEBUG    tests.suites.replication.cleanallruv_test:cleanallruv_test.py:153 Wait a bit before the reset - it is required for the slow machines
DEBUG    tests.suites.replication.cleanallruv_test:cleanallruv_test.py:155 -------------- BEGIN RESET of m4 -----------------
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect ce796e76-38a4-47be-a943-b9d7ca717e5d / got description=4047eea7-9963-4679-b8b1-5047b9560a75)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is NOT working (expect 690eafd8-e83e-43aa-af6c-4369efe32c5d / got description=ce796e76-38a4-47be-a943-b9d7ca717e5d)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect edae28a8-5561-4725-91a7-2b7b6f54e645 / got description=690eafd8-e83e-43aa-af6c-4369efe32c5d)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 3d1d02a1-759e-4265-b526-7dbefcec2ebe / got description=edae28a8-5561-4725-91a7-2b7b6f54e645)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is NOT working (expect f64c9a84-9a92-4976-9ff2-9dbef6d9267c / got description=3d1d02a1-759e-4265-b526-7dbefcec2ebe)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect 86223196-8c74-4a47-af5d-8818b539addf / got description=f64c9a84-9a92-4976-9ff2-9dbef6d9267c)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 2df1be45-b319-48bf-bf6a-f5ad17d8de94 / got description=86223196-8c74-4a47-af5d-8818b539addf)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 9df1534a-7309-4dd7-a60e-5dcb88b09ba9 / got description=2df1be45-b319-48bf-bf6a-f5ad17d8de94)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect 4c162ff2-1aab-4ab8-bb75-34b7b1ab0003 / got description=9df1534a-7309-4dd7-a60e-5dcb88b09ba9)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 8296b0bf-cace-4a37-b9b9-d83432a17e45 / got description=4c162ff2-1aab-4ab8-bb75-34b7b1ab0003)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 8296b0bf-cace-4a37-b9b9-d83432a17e45 / got description=4c162ff2-1aab-4ab8-bb75-34b7b1ab0003)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 8296b0bf-cace-4a37-b9b9-d83432a17e45 / got description=4c162ff2-1aab-4ab8-bb75-34b7b1ab0003)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 8296b0bf-cace-4a37-b9b9-d83432a17e45 / got description=4c162ff2-1aab-4ab8-bb75-34b7b1ab0003)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 8296b0bf-cace-4a37-b9b9-d83432a17e45 / got description=4c162ff2-1aab-4ab8-bb75-34b7b1ab0003)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 8296b0bf-cace-4a37-b9b9-d83432a17e45 / got description=4c162ff2-1aab-4ab8-bb75-34b7b1ab0003)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 8296b0bf-cace-4a37-b9b9-d83432a17e45 / got description=4c162ff2-1aab-4ab8-bb75-34b7b1ab0003)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 8296b0bf-cace-4a37-b9b9-d83432a17e45 / got description=4c162ff2-1aab-4ab8-bb75-34b7b1ab0003)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 8296b0bf-cace-4a37-b9b9-d83432a17e45 / got description=4c162ff2-1aab-4ab8-bb75-34b7b1ab0003)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 8296b0bf-cace-4a37-b9b9-d83432a17e45 / got description=4c162ff2-1aab-4ab8-bb75-34b7b1ab0003)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 8296b0bf-cace-4a37-b9b9-d83432a17e45 / got description=4c162ff2-1aab-4ab8-bb75-34b7b1ab0003)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 8296b0bf-cace-4a37-b9b9-d83432a17e45 / got description=4c162ff2-1aab-4ab8-bb75-34b7b1ab0003)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 8296b0bf-cace-4a37-b9b9-d83432a17e45 / got description=4c162ff2-1aab-4ab8-bb75-34b7b1ab0003)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 8296b0bf-cace-4a37-b9b9-d83432a17e45 / got description=4c162ff2-1aab-4ab8-bb75-34b7b1ab0003)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 8296b0bf-cace-4a37-b9b9-d83432a17e45 / got description=4c162ff2-1aab-4ab8-bb75-34b7b1ab0003)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 8296b0bf-cace-4a37-b9b9-d83432a17e45 / got description=4c162ff2-1aab-4ab8-bb75-34b7b1ab0003)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 8296b0bf-cace-4a37-b9b9-d83432a17e45 / got description=4c162ff2-1aab-4ab8-bb75-34b7b1ab0003)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 8296b0bf-cace-4a37-b9b9-d83432a17e45 / got description=4c162ff2-1aab-4ab8-bb75-34b7b1ab0003)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 8296b0bf-cace-4a37-b9b9-d83432a17e45 / got description=4c162ff2-1aab-4ab8-bb75-34b7b1ab0003)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 8296b0bf-cace-4a37-b9b9-d83432a17e45 / got description=4c162ff2-1aab-4ab8-bb75-34b7b1ab0003)
INFO     lib389.replica:replica.py:2500 FAIL: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 8296b0bf-cace-4a37-b9b9-d83432a17e45 / got description=4c162ff2-1aab-4ab8-bb75-34b7b1ab0003)
Error | suites/replication/cleanallruv_test.py::test_stress_clean::setup | 34.46 | |
request = <SubRequest 'm4rid' for <Function test_stress_clean>>
topology_m4 = <lib389.topologies.TopologyMain object at 0x7f61c3381160>

    @pytest.fixture()
    def m4rid(request, topology_m4):
        log.debug("Wait a bit before the reset - it is required for the slow machines")
        time.sleep(5)
        log.debug("-------------- BEGIN RESET of m4 -----------------")
        repl = ReplicationManager(DEFAULT_SUFFIX)
>       repl.test_replication_topology(topology_m4.ms.values())

suites/replication/cleanallruv_test.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.8/site-packages/lib389/replica.py:2531: in test_replication_topology
    self.test_replication(a, b, timeout)
/usr/local/lib/python3.8/site-packages/lib389/replica.py:2517: in test_replication
    self.wait_for_replication(from_instance, to_instance, timeout)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.replica.ReplicationManager object at 0x7f61c35eb580>
from_instance = <lib389.DirSrv object at 0x7f61c33ec940>
to_instance = <lib389.DirSrv object at 0x7f61c3395640>, timeout = 20

    def wait_for_replication(self, from_instance, to_instance, timeout=20):
        """Wait for a replication event to occur from instance to instance. This
        shows some point of synchronisation has occured.

        :param from_instance: The instance whos state we we want to check from
        :type from_instance: lib389.DirSrv
        :param to_instance: The instance whos state we want to check matches from.
        :type to_instance: lib389.DirSrv
        :param timeout: Fail after timeout seconds.
        :type timeout: int
        """
        # Touch something then wait_for_replication.
        from_groups = Groups(from_instance, basedn=self._suffix, rdn=None)
        to_groups = Groups(to_instance, basedn=self._suffix, rdn=None)
        from_group = from_groups.get('replication_managers')
        to_group = to_groups.get('replication_managers')
        change = str(uuid.uuid4())
        from_group.replace('description', change)
        for i in range(0, timeout):
            desc = to_group.get_attr_val_utf8('description')
            if change == desc:
                self._log.info("SUCCESS: Replication from %s to %s is working" % (from_instance.ldapuri, to_instance.ldapuri))
                return True
            self._log.info("Retry: Replication from %s to %s is NOT working (expect %s / got description=%s)" % (from_instance.ldapuri, to_instance.ldapuri, change, desc))
            time.sleep(1)
        self._log.info("FAIL: Replication from %s to %s is NOT working (expect %s / got description=%s)" % (from_instance.ldapuri, to_instance.ldapuri, change, desc))
>       raise Exception("Replication did not sync in time!")
E       Exception: Replication did not sync in time!
/usr/local/lib/python3.8/site-packages/lib389/replica.py:2501: Exception
-------------------------------Captured log setup-------------------------------
DEBUG    tests.suites.replication.cleanallruv_test:cleanallruv_test.py:153 Wait a bit before the reset - it is required for the slow machines
DEBUG    tests.suites.replication.cleanallruv_test:cleanallruv_test.py:155 -------------- BEGIN RESET of m4 -----------------
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect d94acbab-535a-48df-93a9-f31a01d94965 / got description=4c162ff2-1aab-4ab8-bb75-34b7b1ab0003)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is NOT working (expect b8e8e19c-8c69-44fd-a3a4-d662de6dd618 / got description=d94acbab-535a-48df-93a9-f31a01d94965)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect 2f75686c-d982-473b-ab7a-dab89d8ebfa6 / got description=b8e8e19c-8c69-44fd-a3a4-d662de6dd618)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect ac6912c8-d1d4-4304-9f1e-c6ec3badd164 / got description=2f75686c-d982-473b-ab7a-dab89d8ebfa6)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is NOT working (expect 91514488-ca17-4126-b21a-801e136eb886 / got description=ac6912c8-d1d4-4304-9f1e-c6ec3badd164)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect 8bd8a5f9-7596-46f1-8b6d-ecf7a31f936f / got description=91514488-ca17-4126-b21a-801e136eb886)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect b8bde283-b8e7-41c5-acd3-af546e2ded9b / got description=8bd8a5f9-7596-46f1-8b6d-ecf7a31f936f)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect da499a95-2e30-4ba3-94e4-876027ebc477 / got description=b8bde283-b8e7-41c5-acd3-af546e2ded9b)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect b4d9435d-f820-4d8a-b6aa-53061a36a15e / got description=da499a95-2e30-4ba3-94e4-876027ebc477)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 714d6400-cec3-40df-b93c-c426f376fc37 / got description=b4d9435d-f820-4d8a-b6aa-53061a36a15e)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 714d6400-cec3-40df-b93c-c426f376fc37 / got description=b4d9435d-f820-4d8a-b6aa-53061a36a15e)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 714d6400-cec3-40df-b93c-c426f376fc37 / got description=b4d9435d-f820-4d8a-b6aa-53061a36a15e)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 714d6400-cec3-40df-b93c-c426f376fc37 / got description=b4d9435d-f820-4d8a-b6aa-53061a36a15e)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 714d6400-cec3-40df-b93c-c426f376fc37 / got description=b4d9435d-f820-4d8a-b6aa-53061a36a15e)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 714d6400-cec3-40df-b93c-c426f376fc37 / got description=b4d9435d-f820-4d8a-b6aa-53061a36a15e)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 714d6400-cec3-40df-b93c-c426f376fc37 / got description=b4d9435d-f820-4d8a-b6aa-53061a36a15e)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 714d6400-cec3-40df-b93c-c426f376fc37 / got description=b4d9435d-f820-4d8a-b6aa-53061a36a15e)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 714d6400-cec3-40df-b93c-c426f376fc37 / got description=b4d9435d-f820-4d8a-b6aa-53061a36a15e)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 714d6400-cec3-40df-b93c-c426f376fc37 / got description=b4d9435d-f820-4d8a-b6aa-53061a36a15e)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 714d6400-cec3-40df-b93c-c426f376fc37 / got description=b4d9435d-f820-4d8a-b6aa-53061a36a15e)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 714d6400-cec3-40df-b93c-c426f376fc37 / got description=b4d9435d-f820-4d8a-b6aa-53061a36a15e)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 714d6400-cec3-40df-b93c-c426f376fc37 / got description=b4d9435d-f820-4d8a-b6aa-53061a36a15e)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 714d6400-cec3-40df-b93c-c426f376fc37 / got description=b4d9435d-f820-4d8a-b6aa-53061a36a15e)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 714d6400-cec3-40df-b93c-c426f376fc37 / got description=b4d9435d-f820-4d8a-b6aa-53061a36a15e)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 714d6400-cec3-40df-b93c-c426f376fc37 / got description=b4d9435d-f820-4d8a-b6aa-53061a36a15e)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 714d6400-cec3-40df-b93c-c426f376fc37 / got description=b4d9435d-f820-4d8a-b6aa-53061a36a15e)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 714d6400-cec3-40df-b93c-c426f376fc37 / got description=b4d9435d-f820-4d8a-b6aa-53061a36a15e)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 714d6400-cec3-40df-b93c-c426f376fc37 / got description=b4d9435d-f820-4d8a-b6aa-53061a36a15e)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 714d6400-cec3-40df-b93c-c426f376fc37 / got description=b4d9435d-f820-4d8a-b6aa-53061a36a15e)
INFO     lib389.replica:replica.py:2500 FAIL: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 714d6400-cec3-40df-b93c-c426f376fc37 / got description=b4d9435d-f820-4d8a-b6aa-53061a36a15e)
Error | suites/replication/cleanallruv_test.py::test_multiple_tasks_with_force::setup | 34.37 | |
request = <SubRequest 'm4rid' for <Function test_multiple_tasks_with_force>>
topology_m4 = <lib389.topologies.TopologyMain object at 0x7f61c3381160>

    @pytest.fixture()
    def m4rid(request, topology_m4):
        log.debug("Wait a bit before the reset - it is required for the slow machines")
        time.sleep(5)
        log.debug("-------------- BEGIN RESET of m4 -----------------")
        repl = ReplicationManager(DEFAULT_SUFFIX)
>       repl.test_replication_topology(topology_m4.ms.values())

suites/replication/cleanallruv_test.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.8/site-packages/lib389/replica.py:2531: in test_replication_topology
    self.test_replication(a, b, timeout)
/usr/local/lib/python3.8/site-packages/lib389/replica.py:2517: in test_replication
    self.wait_for_replication(from_instance, to_instance, timeout)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.replica.ReplicationManager object at 0x7f61c3627400>
from_instance = <lib389.DirSrv object at 0x7f61c33ec940>
to_instance = <lib389.DirSrv object at 0x7f61c3395640>, timeout = 20

    def wait_for_replication(self, from_instance, to_instance, timeout=20):
        """Wait for a replication event to occur from instance to instance. This
        shows some point of synchronisation has occured.

        :param from_instance: The instance whos state we we want to check from
        :type from_instance: lib389.DirSrv
        :param to_instance: The instance whos state we want to check matches from.
        :type to_instance: lib389.DirSrv
        :param timeout: Fail after timeout seconds.
        :type timeout: int
        """
        # Touch something then wait_for_replication.
        from_groups = Groups(from_instance, basedn=self._suffix, rdn=None)
        to_groups = Groups(to_instance, basedn=self._suffix, rdn=None)
        from_group = from_groups.get('replication_managers')
        to_group = to_groups.get('replication_managers')
        change = str(uuid.uuid4())
        from_group.replace('description', change)
        for i in range(0, timeout):
            desc = to_group.get_attr_val_utf8('description')
            if change == desc:
                self._log.info("SUCCESS: Replication from %s to %s is working" % (from_instance.ldapuri, to_instance.ldapuri))
                return True
            self._log.info("Retry: Replication from %s to %s is NOT working (expect %s / got description=%s)" % (from_instance.ldapuri, to_instance.ldapuri, change, desc))
            time.sleep(1)
        self._log.info("FAIL: Replication from %s to %s is NOT working (expect %s / got description=%s)" % (from_instance.ldapuri, to_instance.ldapuri, change, desc))
>       raise Exception("Replication did not sync in time!")
E       Exception: Replication did not sync in time!
/usr/local/lib/python3.8/site-packages/lib389/replica.py:2501: Exception
-------------------------------Captured log setup-------------------------------
DEBUG    tests.suites.replication.cleanallruv_test:cleanallruv_test.py:153 Wait a bit before the reset - it is required for the slow machines
DEBUG    tests.suites.replication.cleanallruv_test:cleanallruv_test.py:155 -------------- BEGIN RESET of m4 -----------------
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect faa45c72-ab46-469f-80e2-fb2b6187ab83 / got description=b4d9435d-f820-4d8a-b6aa-53061a36a15e)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is NOT working (expect 9b5f4d8b-6723-4c22-9811-093dbce1f205 / got description=faa45c72-ab46-469f-80e2-fb2b6187ab83)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect 018fdced-bb9b-4200-834a-1c76b9842b5f / got description=9b5f4d8b-6723-4c22-9811-093dbce1f205)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 649ade7a-8a10-47c0-b47b-6e9dee3b4efd / got description=018fdced-bb9b-4200-834a-1c76b9842b5f)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is NOT working (expect ea4327c6-fba5-4c60-b906-fef593764077 / got description=649ade7a-8a10-47c0-b47b-6e9dee3b4efd)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect f2dcc85f-4c78-445a-abc3-4e0687f79202 / got description=ea4327c6-fba5-4c60-b906-fef593764077)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect d94df254-0546-4b5c-a11d-2d1a9eb9bb89 / got description=f2dcc85f-4c78-445a-abc3-4e0687f79202)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 44933e07-5fe4-4b1a-9db2-ac1a1d5db039 / got description=d94df254-0546-4b5c-a11d-2d1a9eb9bb89)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect 1e079897-22f0-4c7f-b98a-6f4531ebdbcd / got description=44933e07-5fe4-4b1a-9db2-ac1a1d5db039)
INFO     lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 723be627-c308-4510-b12b-1ecc56d26d60 / got description=1e079897-22f0-4c7f-b98a-6f4531ebdbcd)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 723be627-c308-4510-b12b-1ecc56d26d60 / got description=1e079897-22f0-4c7f-b98a-6f4531ebdbcd)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 723be627-c308-4510-b12b-1ecc56d26d60 / got description=1e079897-22f0-4c7f-b98a-6f4531ebdbcd)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 723be627-c308-4510-b12b-1ecc56d26d60 / got description=1e079897-22f0-4c7f-b98a-6f4531ebdbcd)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 723be627-c308-4510-b12b-1ecc56d26d60 / got description=1e079897-22f0-4c7f-b98a-6f4531ebdbcd)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 723be627-c308-4510-b12b-1ecc56d26d60 / got description=1e079897-22f0-4c7f-b98a-6f4531ebdbcd)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 723be627-c308-4510-b12b-1ecc56d26d60 / got description=1e079897-22f0-4c7f-b98a-6f4531ebdbcd)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 723be627-c308-4510-b12b-1ecc56d26d60 / got description=1e079897-22f0-4c7f-b98a-6f4531ebdbcd)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 723be627-c308-4510-b12b-1ecc56d26d60 / got description=1e079897-22f0-4c7f-b98a-6f4531ebdbcd)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 723be627-c308-4510-b12b-1ecc56d26d60 / got description=1e079897-22f0-4c7f-b98a-6f4531ebdbcd)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 723be627-c308-4510-b12b-1ecc56d26d60 / got description=1e079897-22f0-4c7f-b98a-6f4531ebdbcd)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 723be627-c308-4510-b12b-1ecc56d26d60 / got description=1e079897-22f0-4c7f-b98a-6f4531ebdbcd)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 723be627-c308-4510-b12b-1ecc56d26d60 / got description=1e079897-22f0-4c7f-b98a-6f4531ebdbcd)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 723be627-c308-4510-b12b-1ecc56d26d60 / got description=1e079897-22f0-4c7f-b98a-6f4531ebdbcd)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 723be627-c308-4510-b12b-1ecc56d26d60 / got description=1e079897-22f0-4c7f-b98a-6f4531ebdbcd)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 723be627-c308-4510-b12b-1ecc56d26d60 / got description=1e079897-22f0-4c7f-b98a-6f4531ebdbcd)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 723be627-c308-4510-b12b-1ecc56d26d60 / got description=1e079897-22f0-4c7f-b98a-6f4531ebdbcd)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 723be627-c308-4510-b12b-1ecc56d26d60 / got description=1e079897-22f0-4c7f-b98a-6f4531ebdbcd)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 723be627-c308-4510-b12b-1ecc56d26d60 / got description=1e079897-22f0-4c7f-b98a-6f4531ebdbcd)
INFO     lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 723be627-c308-4510-b12b-1ecc56d26d60 / got description=1e079897-22f0-4c7f-b98a-6f4531ebdbcd)
INFO     lib389.replica:replica.py:2500 FAIL: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 723be627-c308-4510-b12b-1ecc56d26d60 / got description=1e079897-22f0-4c7f-b98a-6f4531ebdbcd)
Error | tickets/ticket48973_test.py::test_ticket48973_init::setup | 1.39 | |
request = <SubRequest 'topology' for <Function test_ticket48973_init>>

    @pytest.fixture(scope="module")
    def topology(request):
        # Creating standalone instance ...
        standalone = DirSrv(verbose=False)
        args_instance[SER_HOST] = HOST_STANDALONE
        args_instance[SER_PORT] = PORT_STANDALONE
        args_instance[SER_SERVERID_PROP] = SERVERID_STANDALONE
        args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
        args_standalone = args_instance.copy()
        standalone.allocate(args_standalone)
        instance_standalone = standalone.exists()
        if instance_standalone:
            standalone.delete()
>       standalone.create()

/export/tests/tickets/ticket48973_test.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.8/site-packages/lib389/__init__.py:838: in create
    self._createDirsrv(version)
/usr/local/lib/python3.8/site-packages/lib389/__init__.py:808: in _createDirsrv
    sds.create_from_args(general, slapd, backends, None)
/usr/local/lib/python3.8/site-packages/lib389/instance/setup.py:663: in create_from_args
    self._prepare_ds(general, slapd, backends)
/usr/local/lib/python3.8/site-packages/lib389/instance/setup.py:594: in _prepare_ds
    assert_c(slapd['root_dn'] is not None, "Configuration root_dn in section [slapd] not found")
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

condition = False, msg = 'Configuration root_dn in section [slapd] not found'

    def assert_c(condition, msg="Assertion Failed"):
        """This is the same as assert, but assert is compiled out when
        optimisation is enabled. This prevents compiling out.
        """
        if not condition:
>           raise AssertionError(msg)
E           AssertionError: Configuration root_dn in section [slapd] not found

/usr/local/lib/python3.8/site-packages/lib389/utils.py:1243: AssertionError
-------------------------------Captured log setup-------------------------------
INFO     lib389.SetupDs:setup.py:658 Starting installation...
Error | tickets/ticket48973_test.py::test_ticket48973_ces_not_indexed::setup | 0.00 | |
request = <SubRequest 'topology' for <Function test_ticket48973_init>> @pytest.fixture(scope="module") def topology(request): # Creating standalone instance ... standalone = DirSrv(verbose=False) args_instance[SER_HOST] = HOST_STANDALONE args_instance[SER_PORT] = PORT_STANDALONE args_instance[SER_SERVERID_PROP] = SERVERID_STANDALONE args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX args_standalone = args_instance.copy() standalone.allocate(args_standalone) instance_standalone = standalone.exists() if instance_standalone: standalone.delete() > standalone.create() /export/tests/tickets/ticket48973_test.py:52: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/lib389/__init__.py:838: in create self._createDirsrv(version) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:808: in _createDirsrv sds.create_from_args(general, slapd, backends, None) /usr/local/lib/python3.8/site-packages/lib389/instance/setup.py:663: in create_from_args self._prepare_ds(general, slapd, backends) /usr/local/lib/python3.8/site-packages/lib389/instance/setup.py:594: in _prepare_ds assert_c(slapd['root_dn'] is not None, "Configuration root_dn in section [slapd] not found") _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ condition = False, msg = 'Configuration root_dn in section [slapd] not found' def assert_c(condition, msg="Assertion Failed"): """This is the same as assert, but assert is compiled out when optimisation is enabled. This prevents compiling out. """ if not condition: > raise AssertionError(msg) E AssertionError: Configuration root_dn in section [slapd] not found /usr/local/lib/python3.8/site-packages/lib389/utils.py:1243: AssertionError | |||
Error | tickets/ticket48973_test.py::test_ticket48973_homeDirectory_indexing::setup | 0.00 | |
request = <SubRequest 'topology' for <Function test_ticket48973_init>> @pytest.fixture(scope="module") def topology(request): # Creating standalone instance ... standalone = DirSrv(verbose=False) args_instance[SER_HOST] = HOST_STANDALONE args_instance[SER_PORT] = PORT_STANDALONE args_instance[SER_SERVERID_PROP] = SERVERID_STANDALONE args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX args_standalone = args_instance.copy() standalone.allocate(args_standalone) instance_standalone = standalone.exists() if instance_standalone: standalone.delete() > standalone.create() /export/tests/tickets/ticket48973_test.py:52: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/lib389/__init__.py:838: in create self._createDirsrv(version) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:808: in _createDirsrv sds.create_from_args(general, slapd, backends, None) /usr/local/lib/python3.8/site-packages/lib389/instance/setup.py:663: in create_from_args self._prepare_ds(general, slapd, backends) /usr/local/lib/python3.8/site-packages/lib389/instance/setup.py:594: in _prepare_ds assert_c(slapd['root_dn'] is not None, "Configuration root_dn in section [slapd] not found") _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ condition = False, msg = 'Configuration root_dn in section [slapd] not found' def assert_c(condition, msg="Assertion Failed"): """This is the same as assert, but assert is compiled out when optimisation is enabled. This prevents compiling out. """ if not condition: > raise AssertionError(msg) E AssertionError: Configuration root_dn in section [slapd] not found /usr/local/lib/python3.8/site-packages/lib389/utils.py:1243: AssertionError | |||
Error | tickets/ticket48973_test.py::test_ticket48973_homeDirectory_caseExactIA5Match_caseIgnoreIA5Match_indexing::setup | 0.00 | |
request = <SubRequest 'topology' for <Function test_ticket48973_init>> @pytest.fixture(scope="module") def topology(request): # Creating standalone instance ... standalone = DirSrv(verbose=False) args_instance[SER_HOST] = HOST_STANDALONE args_instance[SER_PORT] = PORT_STANDALONE args_instance[SER_SERVERID_PROP] = SERVERID_STANDALONE args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX args_standalone = args_instance.copy() standalone.allocate(args_standalone) instance_standalone = standalone.exists() if instance_standalone: standalone.delete() > standalone.create() /export/tests/tickets/ticket48973_test.py:52: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/lib389/__init__.py:838: in create self._createDirsrv(version) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:808: in _createDirsrv sds.create_from_args(general, slapd, backends, None) /usr/local/lib/python3.8/site-packages/lib389/instance/setup.py:663: in create_from_args self._prepare_ds(general, slapd, backends) /usr/local/lib/python3.8/site-packages/lib389/instance/setup.py:594: in _prepare_ds assert_c(slapd['root_dn'] is not None, "Configuration root_dn in section [slapd] not found") _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ condition = False, msg = 'Configuration root_dn in section [slapd] not found' def assert_c(condition, msg="Assertion Failed"): """This is the same as assert, but assert is compiled out when optimisation is enabled. This prevents compiling out. """ if not condition: > raise AssertionError(msg) E AssertionError: Configuration root_dn in section [slapd] not found /usr/local/lib/python3.8/site-packages/lib389/utils.py:1243: AssertionError | |||
Failed | suites/acl/keywords_part2_test.py::test_access_from_certain_network_only_ip | 3.80 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61d5300b50> add_user = None, aci_of_user = None def test_access_from_certain_network_only_ip(topo, add_user, aci_of_user): """ User can access the data when connecting from certain network only as per the ACI. :id: 4ec38296-7ac5-11e8-9816-8c16451d917b :setup: Standalone Server :steps: 1. Add test entry 2. Add ACI 3. User should follow ACI role :expectedresults: 1. Entry should be added 2. Operation should succeed 3. Operation should succeed """ # Turn access log buffering off to make less time consuming topo.standalone.config.set('nsslapd-accesslog-logbuffering', 'off') # Find the ip from ds logs , as we need to know the exact ip used by ds to run the instances. # Wait till Access Log is generated topo.standalone.restart() # Add ACI domain = Domain(topo.standalone, DEFAULT_SUFFIX) domain.add("aci", f'(target = "ldap:///{IP_OU_KEY}")(targetattr=\"*\")(version 3.0; aci "IP aci"; ' f'allow(all)userdn = "ldap:///{NETSCAPEIP_KEY}" and ip = "::1" ;)') # create a new connection for the test conn = UserAccount(topo.standalone, NETSCAPEIP_KEY).bind(PW_DM) # Perform Operation org = OrganizationalUnit(conn, IP_OU_KEY) > org.replace("seeAlso", "cn=1") suites/acl/keywords_part2_test.py:76: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/lib389/_mapped_object.py:280: in replace self.set(key, value, action=ldap.MOD_REPLACE) /usr/local/lib/python3.8/site-packages/lib389/_mapped_object.py:446: in set return self._instance.modify_ext_s(self._dn, [(action, key, value)], /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:613: in modify_ext_s resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:764: in result3 resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4( /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:774: in result4 ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: in _ldap_call reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: in reraise raise exc_value _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61d6cb30a0> func = <built-in method result4 of LDAP object at 0x7f61d537ec00> args = (3, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = 
func(*args,**kwargs) E ldap.INSUFFICIENT_ACCESS: {'msgtype': 103, 'msgid': 3, 'result': 50, 'desc': 'Insufficient access', 'ctrls': [], 'info': "Insufficient 'write' privilege to the 'seeAlso' attribute of entry 'ou=ip,ou=keywords,dc=example,dc=com'.\n"} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: INSUFFICIENT_ACCESS -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. | |||
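The ACI in this test pins the client to ip = "::1", while the in-test comment notes that the address the server actually records has to be read back from its access log. A hedged sketch of doing that lookup before building the bind rule; the ds_paths.access_log attribute and the "connection from ... to ..." access-log wording are assumptions about the deployed lib389/389-ds defaults, and IP_OU_KEY / NETSCAPEIP_KEY are the constants the test already uses:

    import re

    # Sketch only: recover the client address 389-ds sees for new connections
    # from the instance access log, instead of hard-coding "::1".
    def client_addr_from_access_log(inst):
        access_log = inst.ds_paths.access_log      # assumed lib389 Paths attribute
        with open(access_log) as logf:
            for line in logf:
                match = re.search(r'connection from (\S+) to', line)
                if match:
                    return match.group(1)
        return None

    addr = client_addr_from_access_log(topo.standalone) or '::1'
    aci = (f'(target = "ldap:///{IP_OU_KEY}")(targetattr="*")'
           f'(version 3.0; aci "IP aci"; allow(all) '
           f'userdn = "ldap:///{NETSCAPEIP_KEY}" and ip = "{addr}";)')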
Failed | suites/acl/keywords_part2_test.py::test_connectin_from_an_unauthorized_network | 0.09 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61d5300b50> add_user = None, aci_of_user = None def test_connectin_from_an_unauthorized_network(topo, add_user, aci_of_user): """ User cannot access the data when connectin from an unauthorized network as per the ACI. :id: 52d1ecce-7ac5-11e8-9ad9-8c16451d917b :setup: Standalone Server :steps: 1. Add test entry 2. Add ACI 3. User should follow ACI role :expectedresults: 1. Entry should be added 2. Operation should succeed 3. Operation should succeed """ # Add ACI domain = Domain(topo.standalone, DEFAULT_SUFFIX) domain.add("aci", f'(target = "ldap:///{IP_OU_KEY}")' f'(targetattr="*")(version 3.0; aci "IP aci"; ' f'allow(all) userdn = "ldap:///{NETSCAPEIP_KEY}" ' f'and ip != "::1" ;)') # create a new connection for the test conn = UserAccount(topo.standalone, NETSCAPEIP_KEY).bind(PW_DM) # Perform Operation org = OrganizationalUnit(conn, IP_OU_KEY) with pytest.raises(ldap.INSUFFICIENT_ACCESS): > org.replace("seeAlso", "cn=1") E Failed: DID NOT RAISE <class 'ldap.INSUFFICIENT_ACCESS'> suites/acl/keywords_part2_test.py:119: Failed | |||
Failed | suites/clu/repl_monitor_test.py::test_dsconf_replication_monitor | 0.30 | |
topology_m2 = <lib389.topologies.TopologyMain object at 0x7f61d3cf95b0> set_log_file = None @pytest.mark.ds50545 @pytest.mark.bz1739718 @pytest.mark.skipif(ds_is_older("1.4.0"), reason="Not implemented") def test_dsconf_replication_monitor(topology_m2, set_log_file): """Test replication monitor that was ported from legacy tools :id: ce48020d-7c30-41b7-8f68-144c9cd757f6 :setup: 2 MM topology :steps: 1. Create DS instance 2. Run replication monitor with connections option 3. Run replication monitor with aliases option 4. Run replication monitor with --json option 5. Run replication monitor with .dsrc file created :expectedresults: 1. Success 2. Success 3. Success 4. Success 5. Success """ m1 = topology_m2.ms["master1"] m2 = topology_m2.ms["master2"] alias_content = ['Supplier: M1 (' + m1.host + ':' + str(m1.port) + ')', 'Supplier: M2 (' + m2.host + ':' + str(m2.port) + ')'] connection_content = 'Supplier: '+ m1.host + ':' + str(m1.port) content_list = ['Replica Root: dc=example,dc=com', 'Replica ID: 1', 'Replica Status: Available', 'Max CSN', 'Status For Agreement: "002" ('+ m2.host + ':' + str(m2.port) + ')', 'Replica Enabled: on', 'Update In Progress: FALSE', 'Last Update Start:', 'Last Update End:', 'Number Of Changes Sent:', 'Number Of Changes Skipped: None', 'Last Update Status: Error (0) Replica acquired successfully: Incremental update succeeded', 'Last Init Start:', 'Last Init End:', 'Last Init Status:', 'Reap Active: 0', 'Replication Status: In Synchronization', 'Replication Lag Time:', 'Supplier: ', m2.host + ':' + str(m2.port), 'Replica Root: dc=example,dc=com', 'Replica ID: 2', 'Status For Agreement: "001" (' + m1.host + ':' + str(m1.port)+')'] json_list = ['type', 'list', 'items', 'name', m1.host + ':' + str(m1.port), 'data', '"replica_id": "1"', '"replica_root": "dc=example,dc=com"', '"replica_status": "Available"', 'maxcsn', 'agmts_status', 'agmt-name', '002', 'replica', m2.host + ':' + str(m2.port), 'replica-enabled', 'update-in-progress', 'last-update-start', 'last-update-end', 'number-changes-sent', 'number-changes-skipped', 'last-update-status', 'Error (0) Replica acquired successfully: Incremental update succeeded', 'last-init-start', 'last-init-end', 'last-init-status', 'reap-active', 'replication-status', 'In Synchronization', 'replication-lag-time', '"replica_id": "2"', '001', m1.host + ':' + str(m1.port)] dsrc_content = '[repl-monitor-connections]\n' \ 'connection1 = ' + m1.host + ':' + str(m1.port) + ':' + DN_DM + ':' + PW_DM + '\n' \ 'connection2 = ' + m2.host + ':' + str(m2.port) + ':' + DN_DM + ':' + PW_DM + '\n' \ '\n' \ '[repl-monitor-aliases]\n' \ 'M1 = ' + m1.host + ':' + str(m1.port) + '\n' \ 'M2 = ' + m2.host + ':' + str(m2.port) connections = [m1.host + ':' + str(m1.port) + ':' + DN_DM + ':' + PW_DM, m2.host + ':' + str(m2.port) + ':' + DN_DM + ':' + PW_DM] aliases = ['M1=' + m1.host + ':' + str(m1.port), 'M2=' + m2.host + ':' + str(m2.port)] args = FakeArgs() args.connections = connections args.aliases = None args.json = False log.info('Run replication monitor with connections option') get_repl_monitor_info(m1, DEFAULT_SUFFIX, log, args) check_value_in_log_and_reset(content_list, connection_content) log.info('Run replication monitor with aliases option') args.aliases = aliases get_repl_monitor_info(m1, DEFAULT_SUFFIX, log, args) > check_value_in_log_and_reset(content_list, alias_content) suites/clu/repl_monitor_test.py:177: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ content_list = ['Replica Root: dc=example,dc=com', 
'Replica ID: 1', 'Replica Status: Available', 'Max CSN', 'Status For Agreement: "002" (ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002)', 'Replica Enabled: on', ...] second_list = ['Supplier: M1 (ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001)', 'Supplier: M2 (ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002)'] single_value = None def check_value_in_log_and_reset(content_list, second_list=None, single_value=None): with open(LOG_FILE, 'r+') as f: file_content = f.read() for item in content_list: log.info('Check that "{}" is present'.format(item)) assert item in file_content if second_list is not None: log.info('Check for "{}"'.format(second_list)) for item in second_list: > assert item in file_content E AssertionError: assert 'Supplier: M1 (ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001)' in 'Run replication monitor with aliases option\ndsrc path: /root/.dsrc\ndsrc container path: /data/config/container.inf\...t Init Status: unavailable\nReap Active: 0\nReplication Status: In Synchronization\nReplication Lag Time: 00:00:00\n\n' suites/clu/repl_monitor_test.py:54: AssertionError -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39001, 'ldap-secureport': 63701, 'server-id': 'master1', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master2 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39002, 'ldap-secureport': 63702, 'server-id': 'master2', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.topologies:topologies.py:142 Creating replication topology. [32mINFO [0m lib389.topologies:topologies.py:156 Joining master master2 to master1 ... 
[32mINFO [0m lib389.replica:replica.py:2084 SUCCESS: bootstrap to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 completed [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is was created [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is was created [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect dc405130-f6e2-4b75-8f11-97072db44a96 / got description=None) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 4e92e0ea-263e-4cbf-b04d-8b6f87bb8f04 / got description=dc405130-f6e2-4b75-8f11-97072db44a96) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2153 SUCCESS: joined master from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master1 to master2 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 already exists [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master2 to master1 ... 
[32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 already exists -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:170 Run replication monitor with connections option [35mDEBUG [0m tests.suites.clu.repl_monitor_test:dsrc.py:76 dsrc path: /root/.dsrc [35mDEBUG [0m tests.suites.clu.repl_monitor_test:dsrc.py:77 dsrc container path: /data/config/container.inf [35mDEBUG [0m tests.suites.clu.repl_monitor_test:dsrc.py:85 dsrc instances: [] [35mDEBUG [0m tests.suites.clu.repl_monitor_test:dsrc.py:210 dsrc completed with {'connections': None, 'aliases': None} [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:438 Supplier: localhost.localdomain:39001 [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:443 ------------------------------------- [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:455 Replica Root: dc=example,dc=com [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:456 Replica ID: 1 [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:457 Replica Status: Available [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:458 Max CSN: 5f9cb294000000010000 [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:461 Status For Agreement: "002" (ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002) Replica Enabled: on Update In Progress: FALSE Last Update Start: 20201031004053Z Last Update End: 20201031004053Z Number Of Changes Sent: 1:2/0 Number Of Changes Skipped: None Last Update Status: Error (0) Replica acquired successfully: Incremental update succeeded Last Init Start: 19700101000000Z Last Init End: 19700101000000Z Last Init Status: unavailable Reap Active: 0 Replication Status: In Synchronization Replication Lag Time: 00:00:00 [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:438 Supplier: ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:443 ----------------------------------------------------------------- [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:455 Replica Root: dc=example,dc=com [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:456 Replica ID: 2 [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:457 Replica Status: Available [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:458 Max CSN: 5f9cb295000000020000 [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:461 Status For Agreement: "001" (ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001) Replica Enabled: on Update In Progress: FALSE Last Update Start: 20201031004053Z Last Update End: 20201031004053Z Number Of Changes Sent: 2:1/0 Number Of Changes Skipped: None Last Update Status: Error (0) Replica acquired successfully: Incremental update succeeded Last Init Start: 19700101000000Z Last Init End: 19700101000000Z Last Init Status: unavailable Reap Active: 0 Replication Status: In Synchronization Replication Lag Time: 00:00:00 [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Replica Root: dc=example,dc=com" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Replica ID: 1" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Replica Status: 
Available" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Max CSN" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Status For Agreement: "002" (ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002)" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Replica Enabled: on" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Update In Progress: FALSE" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Last Update Start:" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Last Update End:" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Number Of Changes Sent:" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Number Of Changes Skipped: None" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Last Update Status: Error (0) Replica acquired successfully: Incremental update succeeded" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Last Init Start:" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Last Init End:" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Last Init Status:" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Reap Active: 0" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Replication Status: In Synchronization" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Replication Lag Time:" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Supplier: " is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Replica Root: dc=example,dc=com" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Replica ID: 2" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Status For Agreement: "001" (ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001)" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:52 Check for "Supplier: ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001" [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:60 Reset log file [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:174 Run replication monitor with aliases option [35mDEBUG [0m tests.suites.clu.repl_monitor_test:dsrc.py:76 dsrc path: /root/.dsrc [35mDEBUG [0m tests.suites.clu.repl_monitor_test:dsrc.py:77 dsrc container path: /data/config/container.inf [35mDEBUG [0m tests.suites.clu.repl_monitor_test:dsrc.py:85 dsrc instances: [] [35mDEBUG [0m tests.suites.clu.repl_monitor_test:dsrc.py:210 dsrc completed with {'connections': None, 'aliases': None} [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:438 Supplier: localhost.localdomain:39001 [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:443 ------------------------------------- [32mINFO 
[0m tests.suites.clu.repl_monitor_test:replication.py:455 Replica Root: dc=example,dc=com [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:456 Replica ID: 1 [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:457 Replica Status: Available [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:458 Max CSN: 5f9cb294000000010000 [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:461 Status For Agreement: "002" (ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002) Replica Enabled: on Update In Progress: FALSE Last Update Start: 20201031004053Z Last Update End: 20201031004053Z Number Of Changes Sent: 1:2/0 Number Of Changes Skipped: None Last Update Status: Error (0) Replica acquired successfully: Incremental update succeeded Last Init Start: 19700101000000Z Last Init End: 19700101000000Z Last Init Status: unavailable Reap Active: 0 Replication Status: In Synchronization Replication Lag Time: 00:00:00 [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:438 Supplier: M2 (ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002) [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:443 ---------------------------------------------------------------------- [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:455 Replica Root: dc=example,dc=com [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:456 Replica ID: 2 [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:457 Replica Status: Available [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:458 Max CSN: 5f9cb295000000020000 [32mINFO [0m tests.suites.clu.repl_monitor_test:replication.py:461 Status For Agreement: "001" (ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001) Replica Enabled: on Update In Progress: FALSE Last Update Start: 20201031004053Z Last Update End: 20201031004053Z Number Of Changes Sent: 2:1/0 Number Of Changes Skipped: None Last Update Status: Error (0) Replica acquired successfully: Incremental update succeeded Last Init Start: 19700101000000Z Last Init End: 19700101000000Z Last Init Status: unavailable Reap Active: 0 Replication Status: In Synchronization Replication Lag Time: 00:00:00 [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Replica Root: dc=example,dc=com" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Replica ID: 1" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Replica Status: Available" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Max CSN" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Status For Agreement: "002" (ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002)" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Replica Enabled: on" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Update In Progress: FALSE" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Last Update Start:" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Last Update End:" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Number Of Changes Sent:" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Number Of Changes Skipped: None" is 
present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Last Update Status: Error (0) Replica acquired successfully: Incremental update succeeded" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Last Init Start:" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Last Init End:" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Last Init Status:" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Reap Active: 0" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Replication Status: In Synchronization" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Replication Lag Time:" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Supplier: " is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Replica Root: dc=example,dc=com" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Replica ID: 2" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:48 Check that "Status For Agreement: "001" (ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001)" is present [32mINFO [0m tests.suites.clu.repl_monitor_test:repl_monitor_test.py:52 Check for "['Supplier: M1 (ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001)', 'Supplier: M2 (ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002)']" | |||
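The dsrc_content string assembled by this test mirrors the layout the monitor later reads from /root/.dsrc (the path shown in the dsrc debug lines above): a [repl-monitor-connections] section with host:port:binddn:password entries and a [repl-monitor-aliases] section mapping a label to host:port. A small sketch of producing that file with configparser; the Directory Manager DN and password literals stand in for the DN_DM / PW_DM constants the test uses:

    import configparser

    # Sketch only: write the .dsrc layout consumed by the replication monitor.
    dsrc = configparser.ConfigParser()
    dsrc.optionxform = str    # keep the M1/M2 alias names in their original case
    dsrc['repl-monitor-connections'] = {
        'connection1': 'ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001:cn=Directory Manager:password',
        'connection2': 'ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002:cn=Directory Manager:password',
    }
    dsrc['repl-monitor-aliases'] = {
        'M1': 'ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001',
        'M2': 'ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002',
    }
    with open('/root/.dsrc', 'w') as dsrc_file:
        dsrc.write(dsrc_file)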
Failed | suites/gssapi/simple_gssapi_test.py::test_gssapi_bind | 0.28 | |
topology_st_gssapi = <lib389.topologies.TopologyMain object at 0x7f61d07a45e0> testuser = <lib389.idm.user.UserAccount object at 0x7f61d07a4340> @gssapi_ack def test_gssapi_bind(topology_st_gssapi, testuser): """Test that we can bind with GSSAPI :id: 894a4c27-3d4c-4ba3-aa33-2910032e3783 :setup: standalone gssapi instance :steps: 1. Bind with sasl/gssapi :expectedresults: 1. Bind succeeds """ > conn = testuser.bind_gssapi() suites/gssapi/simple_gssapi_test.py:53: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/lib389/idm/account.py:258: in bind_gssapi inst_clone.open(saslmethod='gssapi') /usr/local/lib/python3.8/site-packages/lib389/__init__.py:995: in open self.sasl_interactive_bind_s("", sasl_auth, escapehatch='i am sure') /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:476: in sasl_interactive_bind_s return self._ldap_call(self._l.sasl_interactive_bind_s,who,auth,RequestControlTuples(serverctrls),RequestControlTuples(clientctrls),sasl_flags) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: in _ldap_call reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: in reraise raise exc_value _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61d0a76c10> func = <built-in method sasl_interactive_bind_s of LDAP object at 0x7f61c3f4bdb0> args = ('', <ldap.sasl.gssapi object at 0x7f61c3f4baf0>, None, None, 2) kwargs = {}, diagnostic_message_success = None, exc_type = None exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.INVALID_CREDENTIALS: {'result': 49, 'desc': 'Invalid credentials', 'ctrls': [], 'info': 'SASL(-1): generic failure: GSSAPI Error: An invalid name was supplied (Included profile file could not be read)'} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: INVALID_CREDENTIALS -----------------------------Captured stdout setup------------------------------ Kerberos master password: sYtlIl3tRayMqDm2HhLDKI4IfxCJbf.e38Y9eiuV0gmnJ2tmMdeqPxaKNEBtAxhdE Loading random data Initializing database '/var/kerberos/krb5kdc/principal' for realm 'HOSTED.UPSHIFT.RDU2.REDHAT.COM', master key name 'K/M@HOSTED.UPSHIFT.RDU2.REDHAT.COM' Authenticating as principal root/admin@HOSTED.UPSHIFT.RDU2.REDHAT.COM with password. Principal "ldap/ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com@HOSTED.UPSHIFT.RDU2.REDHAT.COM" created. Authenticating as principal root/admin@HOSTED.UPSHIFT.RDU2.REDHAT.COM with password. 
K/M@HOSTED.UPSHIFT.RDU2.REDHAT.COM kadmin/admin@HOSTED.UPSHIFT.RDU2.REDHAT.COM kadmin/changepw@HOSTED.UPSHIFT.RDU2.REDHAT.COM kadmin/ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com@HOSTED.UPSHIFT.RDU2.REDHAT.COM kiprop/ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com@HOSTED.UPSHIFT.RDU2.REDHAT.COM krbtgt/HOSTED.UPSHIFT.RDU2.REDHAT.COM@HOSTED.UPSHIFT.RDU2.REDHAT.COM ldap/ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com@HOSTED.UPSHIFT.RDU2.REDHAT.COM Authenticating as principal root/admin@HOSTED.UPSHIFT.RDU2.REDHAT.COM with password. Entry for principal ldap/ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com@HOSTED.UPSHIFT.RDU2.REDHAT.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal ldap/ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com@HOSTED.UPSHIFT.RDU2.REDHAT.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/etc/krb5.keytab. Authenticating as principal root/admin@HOSTED.UPSHIFT.RDU2.REDHAT.COM with password. Principal "testuser@HOSTED.UPSHIFT.RDU2.REDHAT.COM" created. Authenticating as principal root/admin@HOSTED.UPSHIFT.RDU2.REDHAT.COM with password. K/M@HOSTED.UPSHIFT.RDU2.REDHAT.COM kadmin/admin@HOSTED.UPSHIFT.RDU2.REDHAT.COM kadmin/changepw@HOSTED.UPSHIFT.RDU2.REDHAT.COM kadmin/ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com@HOSTED.UPSHIFT.RDU2.REDHAT.COM kiprop/ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com@HOSTED.UPSHIFT.RDU2.REDHAT.COM krbtgt/HOSTED.UPSHIFT.RDU2.REDHAT.COM@HOSTED.UPSHIFT.RDU2.REDHAT.COM ldap/ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com@HOSTED.UPSHIFT.RDU2.REDHAT.COM testuser@HOSTED.UPSHIFT.RDU2.REDHAT.COM Authenticating as principal root/admin@HOSTED.UPSHIFT.RDU2.REDHAT.COM with password. Entry for principal testuser@HOSTED.UPSHIFT.RDU2.REDHAT.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/tmp/testuser.keytab. Entry for principal testuser@HOSTED.UPSHIFT.RDU2.REDHAT.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/tmp/testuser.keytab. -----------------------------Captured stderr setup------------------------------ No policy specified for ldap/ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com@HOSTED.UPSHIFT.RDU2.REDHAT.COM; defaulting to no policy No policy specified for testuser@HOSTED.UPSHIFT.RDU2.REDHAT.COM; defaulting to no policy -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. | |||
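This GSSAPI failure (and the identical one in test_support_mech below) is raised inside the SASL layer ("Included profile file could not be read"), which points at libkrb5 failing to load its profile rather than the server rejecting the bind. A hedged sketch of pinning the client to a known-readable profile and credential cache before retrying; KRB5_CONFIG and KRB5CCNAME are standard MIT Kerberos environment variables, and the paths are illustrative, not values from this run:

    import os

    # Sketch only: make sure libkrb5 can read a profile before the SASL/GSSAPI bind.
    os.environ['KRB5_CONFIG'] = '/etc/krb5.conf'                # hypothetical path
    os.environ['KRB5CCNAME'] = 'FILE:/tmp/krb5cc_testuser'      # hypothetical cache
    assert os.access(os.environ['KRB5_CONFIG'], os.R_OK), 'krb5 profile must be readable'

    conn = testuser.bind_gssapi()    # same call the test makes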
Failed | suites/gssapi/simple_gssapi_test.py::test_support_mech | 0.28 | |
topology_st_gssapi = <lib389.topologies.TopologyMain object at 0x7f61d07a45e0> testuser = <lib389.idm.user.UserAccount object at 0x7f61d07a4340> @gssapi_ack def test_support_mech(topology_st_gssapi, testuser): """Test allowed sasl mechs works when GSSAPI is allowed :id: 6ec80aca-00c4-4141-b96b-3ae8837fc751 :setup: standalone gssapi instance :steps: 1. Add GSSAPI to allowed sasl mechanisms. 2. Attempt to bind :expectedresults: 1. The allowed mechs are changed. 2. The bind succeeds. """ topology_st_gssapi.standalone.config.set('nsslapd-allowed-sasl-mechanisms', 'GSSAPI EXTERNAL ANONYMOUS') > conn = testuser.bind_gssapi() suites/gssapi/simple_gssapi_test.py:125: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/lib389/idm/account.py:258: in bind_gssapi inst_clone.open(saslmethod='gssapi') /usr/local/lib/python3.8/site-packages/lib389/__init__.py:995: in open self.sasl_interactive_bind_s("", sasl_auth, escapehatch='i am sure') /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:476: in sasl_interactive_bind_s return self._ldap_call(self._l.sasl_interactive_bind_s,who,auth,RequestControlTuples(serverctrls),RequestControlTuples(clientctrls),sasl_flags) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: in _ldap_call reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: in reraise raise exc_value _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61d08fa2e0> func = <built-in method sasl_interactive_bind_s of LDAP object at 0x7f61d17580c0> args = ('', <ldap.sasl.gssapi object at 0x7f61d174ed00>, None, None, 2) kwargs = {}, diagnostic_message_success = None, exc_type = None exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.INVALID_CREDENTIALS: {'result': 49, 'desc': 'Invalid credentials', 'ctrls': [], 'info': 'SASL(-1): generic failure: GSSAPI Error: An invalid name was supplied (Included profile file could not be read)'} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: INVALID_CREDENTIALS | |||
Failed | suites/healthcheck/health_security_test.py::test_healthcheck_certif_expiring_within_30d | 11.09 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61d06d7640> @pytest.mark.ds50873 @pytest.mark.bz1685160 @pytest.mark.xfail(ds_is_older("1.4.1"), reason="Not implemented") def test_healthcheck_certif_expiring_within_30d(topology_st): """Check if HealthCheck returns DSCERTLE0001 code :id: c2165032-88ba-4978-a4ca-2fecfd8c35d8 :setup: Standalone instance :steps: 1. Create DS instance 2. Use libfaketime to tell the process the date is within 30 days before certificate expiration 3. Use HealthCheck without --json option 4. Use HealthCheck with --json option :expectedresults: 1. Success 2. Success 3. Healthcheck reports DSCERTLE0001 code and related details 4. Healthcheck reports DSCERTLE0001 code and related details """ RET_CODE = 'DSCERTLE0001' standalone = topology_st.standalone standalone.enable_tls() # Cert is valid two years from today, so we count the date that is within 30 days before certificate expiration date_future = datetime.now() + timedelta(days=701) with libfaketime.fake_time(date_future): time.sleep(1) > run_healthcheck_and_flush_log(topology_st, standalone, RET_CODE, json=False) suites/healthcheck/health_security_test.py:304: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ topology = <lib389.topologies.TopologyMain object at 0x7f61d06d7640> instance = <lib389.DirSrv object at 0x7f61d07cd700> searched_code = 'DSCERTLE0001', json = False, searched_code2 = None def run_healthcheck_and_flush_log(topology, instance, searched_code, json, searched_code2=None): args = FakeArgs() args.instance = instance.serverid args.verbose = instance.verbose args.list_errors = False args.list_checks = False args.check = ['config', 'encryption', 'tls', 'fschecks'] args.dry_run = False if json: log.info('Use healthcheck with --json option') args.json = json health_check_run(instance, topology.logcap.log, args) assert topology.logcap.contains(searched_code) log.info('Healthcheck returned searched code: %s' % searched_code) if searched_code2 is not None: assert topology.logcap.contains(searched_code2) log.info('Healthcheck returned searched code: %s' % searched_code2) else: log.info('Use healthcheck without --json option') args.json = json health_check_run(instance, topology.logcap.log, args) > assert topology.logcap.contains(searched_code) E AssertionError: assert False E + where False = <bound method LogCapture.contains of <LogCapture (NOTSET)>>('DSCERTLE0001') E + where <bound method LogCapture.contains of <LogCapture (NOTSET)>> = <LogCapture (NOTSET)>.contains E + where <LogCapture (NOTSET)> = <lib389.topologies.TopologyMain object at 0x7f61d06d7640>.logcap suites/healthcheck/health_security_test.py:67: AssertionError -------------------------------Captured log call-------------------------------- [32mINFO [0m LogCapture:health.py:94 Beginning lint report, this could take a while ... [32mINFO [0m LogCapture:health.py:99 Checking config:hr_timestamp ... [32mINFO [0m LogCapture:health.py:99 Checking config:passwordscheme ... [32mINFO [0m LogCapture:health.py:99 Checking encryption:check_tls_version ... [32mINFO [0m LogCapture:health.py:99 Checking tls:certificate_expiration ... [32mINFO [0m LogCapture:health.py:99 Checking fschecks:file_perms ... [32mINFO [0m LogCapture:health.py:106 Healthcheck complete. [32mINFO [0m LogCapture:health.py:111 No issues found. | |||
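The certificate checks in this test and in test_healthcheck_certif_expired below rest on simple date arithmetic: the in-test comment says the self-signed certificate is valid for two years, so jumping 701 days ahead should land inside the 30-day warning window (DSCERTLE0001) and 731 days should land past expiry (DSCERTLE0002). A small sketch of that arithmetic, assuming a 730-day lifetime:

    from datetime import timedelta

    # Sketch only: offsets used by the two certificate healthcheck tests,
    # assuming the self-signed cert is valid for ~2 years (730 days).
    cert_lifetime = timedelta(days=730)
    warn_window = timedelta(days=30)

    within_30d = timedelta(days=701)   # this test: 29 days left -> DSCERTLE0001
    expired = timedelta(days=731)      # next test: 1 day past   -> DSCERTLE0002

    assert cert_lifetime - within_30d <= warn_window
    assert expired > cert_lifetime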
Failed | suites/healthcheck/health_security_test.py::test_healthcheck_certif_expired | 10.70 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61d06d7640> @pytest.mark.ds50873 @pytest.mark.bz1685160 @pytest.mark.xfail(ds_is_older("1.4.1"), reason="Not implemented") def test_healthcheck_certif_expired(topology_st): """Check if HealthCheck returns DSCERTLE0002 code :id: ceff2c22-62c0-4fd9-b737-930a88458d68 :setup: Standalone instance :steps: 1. Create DS instance 2. Use libfaketime to tell the process the date is after certificate expiration 3. Use HealthCheck without --json option 4. Use HealthCheck with --json option :expectedresults: 1. Success 2. Success 3. Healthcheck reports DSCERTLE0002 code and related details 4. Healthcheck reports DSCERTLE0002 code and related details """ RET_CODE = 'DSCERTLE0002' standalone = topology_st.standalone standalone.enable_tls() # Cert is valid two years from today, so we count the date that is after expiration date_future = datetime.now() + timedelta(days=731) with libfaketime.fake_time(date_future): time.sleep(1) > run_healthcheck_and_flush_log(topology_st, standalone, RET_CODE, json=False) suites/healthcheck/health_security_test.py:343: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ topology = <lib389.topologies.TopologyMain object at 0x7f61d06d7640> instance = <lib389.DirSrv object at 0x7f61d07cd700> searched_code = 'DSCERTLE0002', json = False, searched_code2 = None def run_healthcheck_and_flush_log(topology, instance, searched_code, json, searched_code2=None): args = FakeArgs() args.instance = instance.serverid args.verbose = instance.verbose args.list_errors = False args.list_checks = False args.check = ['config', 'encryption', 'tls', 'fschecks'] args.dry_run = False if json: log.info('Use healthcheck with --json option') args.json = json health_check_run(instance, topology.logcap.log, args) assert topology.logcap.contains(searched_code) log.info('Healthcheck returned searched code: %s' % searched_code) if searched_code2 is not None: assert topology.logcap.contains(searched_code2) log.info('Healthcheck returned searched code: %s' % searched_code2) else: log.info('Use healthcheck without --json option') args.json = json health_check_run(instance, topology.logcap.log, args) > assert topology.logcap.contains(searched_code) E AssertionError: assert False E + where False = <bound method LogCapture.contains of <LogCapture (NOTSET)>>('DSCERTLE0002') E + where <bound method LogCapture.contains of <LogCapture (NOTSET)>> = <LogCapture (NOTSET)>.contains E + where <LogCapture (NOTSET)> = <lib389.topologies.TopologyMain object at 0x7f61d06d7640>.logcap suites/healthcheck/health_security_test.py:67: AssertionError -------------------------------Captured log call-------------------------------- [32mINFO [0m LogCapture:health.py:94 Beginning lint report, this could take a while ... [32mINFO [0m LogCapture:health.py:99 Checking config:hr_timestamp ... [32mINFO [0m LogCapture:health.py:99 Checking config:passwordscheme ... [32mINFO [0m LogCapture:health.py:99 Checking encryption:check_tls_version ... [32mINFO [0m LogCapture:health.py:99 Checking tls:certificate_expiration ... [32mINFO [0m LogCapture:health.py:99 Checking fschecks:file_perms ... [32mINFO [0m LogCapture:health.py:106 Healthcheck complete. [32mINFO [0m LogCapture:health.py:119 2 Issues found! Generating report ... 
[32mINFO [0m LogCapture:health.py:45 [1] DS Lint Error: DSCERTLE0001 [32mINFO [0m LogCapture:health.py:46 -------------------------------------------------------------------------------- [32mINFO [0m LogCapture:health.py:47 Severity: MEDIUM [32mINFO [0m LogCapture:health.py:49 Check: tls:certificate_expiration [32mINFO [0m LogCapture:health.py:50 Affects: [32mINFO [0m LogCapture:health.py:52 -- Expiring Certificate [32mINFO [0m LogCapture:health.py:53 Details: [32mINFO [0m LogCapture:health.py:54 ----------- [32mINFO [0m LogCapture:health.py:55 The certificate (Self-Signed-CA) will expire in less than 30 days [32mINFO [0m LogCapture:health.py:56 Resolution: [32mINFO [0m LogCapture:health.py:57 ----------- [32mINFO [0m LogCapture:health.py:58 Renew the certificate before it expires to prevent disruptions with TLS connections. [32mINFO [0m LogCapture:health.py:45 [2] DS Lint Error: DSCERTLE0001 [32mINFO [0m LogCapture:health.py:46 -------------------------------------------------------------------------------- [32mINFO [0m LogCapture:health.py:47 Severity: MEDIUM [32mINFO [0m LogCapture:health.py:49 Check: tls:certificate_expiration [32mINFO [0m LogCapture:health.py:50 Affects: [32mINFO [0m LogCapture:health.py:52 -- Expiring Certificate [32mINFO [0m LogCapture:health.py:53 Details: [32mINFO [0m LogCapture:health.py:54 ----------- [32mINFO [0m LogCapture:health.py:55 The certificate (Server-Cert) will expire in less than 30 days [32mINFO [0m LogCapture:health.py:56 Resolution: [32mINFO [0m LogCapture:health.py:57 ----------- [32mINFO [0m LogCapture:health.py:58 Renew the certificate before it expires to prevent disruptions with TLS connections. [32mINFO [0m LogCapture:health.py:124 ===== End Of Report (2 Issues found) ===== | |||
Failed | suites/import/import_test.py::test_fast_slow_import | 10.47 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61c3b82e20> _toggle_private_import_mem = None, _import_clean = None def test_fast_slow_import(topo, _toggle_private_import_mem, _import_clean): """With nsslapd-db-private-import-mem: on is faster import. :id: 3044331c-9c0e-11ea-ac9f-8c16451d917b :setup: Standalone Instance :steps: 1. Let's set nsslapd-db-private-import-mem:on, nsslapd-import-cache-autosize: 0 2. Measure offline import time duration total_time1 3. Now nsslapd-db-private-import-mem:off 4. Measure offline import time duration total_time2 5. total_time1 < total_time2 6. Set nsslapd-db-private-import-mem:on, nsslapd-import-cache-autosize: -1 7. Measure offline import time duration total_time1 8. Now nsslapd-db-private-import-mem:off 9. Measure offline import time duration total_time2 10. total_time1 < total_time2 :expected results: 1. Operation successful 2. Operation successful 3. Operation successful 4. Operation successful 5. Operation successful 6. Operation successful 7. Operation successful 8. Operation successful 9. Operation successful 10. Operation successful """ # Let's set nsslapd-db-private-import-mem:on, nsslapd-import-cache-autosize: 0 config = LDBMConfig(topo.standalone) # Measure offline import time duration total_time1 total_time1 = _import_offline(topo, 20) # Now nsslapd-db-private-import-mem:off config.replace('nsslapd-db-private-import-mem', 'off') accounts = Accounts(topo.standalone, DEFAULT_SUFFIX) for i in accounts.filter('(uid=*)'): UserAccount(topo.standalone, i.dn).delete() # Measure offline import time duration total_time2 total_time2 = _import_offline(topo, 20) # total_time1 < total_time2 > assert total_time1 < total_time2 E assert 2.094937801361084 < 2.0365939140319824 suites/import/import_test.py:307: AssertionError | |||
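The assertion above compares two single wall-clock measurements that ended up about 0.06 s apart, well inside run-to-run noise for a 2-second import. A hedged sketch of a steadier comparison that averages a few runs per setting; _import_offline and config are the test's own helper and LDBMConfig object, the run count is illustrative, and the per-run entry cleanup the test performs is omitted for brevity:

    import statistics

    # Sketch only: average several offline-import timings per setting instead of
    # comparing two single runs (entry cleanup between runs omitted).
    def timed_runs(topo, entries, runs=3):
        return statistics.mean(_import_offline(topo, entries) for _ in range(runs))

    config.replace('nsslapd-db-private-import-mem', 'on')
    private_on = timed_runs(topo, 20)

    config.replace('nsslapd-db-private-import-mem', 'off')
    private_off = timed_runs(topo, 20)

    assert private_on < private_off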
Failed | suites/paged_results/paged_results_test.py::test_search_paged_limits[conf_attr_values1-PASS] | 5.57 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61d1361310> create_user = <lib389.idm.user.UserAccount object at 0x7f61d0ef5970> conf_attr_values = ('5000', '120', '122'), expected_rs = 'PASS' @pytest.mark.parametrize('conf_attr_values,expected_rs', ((('5000', '100', '100'), ldap.ADMINLIMIT_EXCEEDED), (('5000', '120', '122'), 'PASS'))) def test_search_paged_limits(topology_st, create_user, conf_attr_values, expected_rs): """Verify that nsslapd-idlistscanlimit and nsslapd-lookthroughlimit can limit the administrator search abilities. :id: e0f8b916-7276-4bd3-9e73-8696a4468811 :parametrized: yes :setup: Standalone instance, test user for binding, 10 users for the search base :steps: 1. Set nsslapd-sizelimit and nsslapd-pagedsizelimit to 5000 2. Set nsslapd-idlistscanlimit: 120 3. Set nsslapd-lookthroughlimit: 122 4. Bind as test user 5. Search through added users with a simple paged control using page_size = 10 6. Bind as Directory Manager 7. Set nsslapd-idlistscanlimit: 100 8. Set nsslapd-lookthroughlimit: 100 9. Bind as test user 10. Search through added users with a simple paged control using page_size = 10 :expectedresults: 1. nsslapd-sizelimit and nsslapd-pagedsizelimit should be successfully set 2. nsslapd-idlistscanlimit should be successfully set 3. nsslapd-lookthroughlimit should be successfully set 4. Bind should be successful 5. No error happens, all users should be found 6. Bind should be successful 7. nsslapd-idlistscanlimit should be successfully set 8. nsslapd-lookthroughlimit should be successfully set 9. Bind should be successful 10. It should throw ADMINLIMIT_EXCEEDED exception """ users_num = 101 page_size = 10 users_list = add_users(topology_st, users_num, DEFAULT_SUFFIX) search_flt = r'(uid=test*)' searchreq_attrlist = ['dn', 'sn'] size_attr_bck = change_conf_attr(topology_st, DN_CONFIG, 'nsslapd-sizelimit', conf_attr_values[0]) pagedsize_attr_bck = change_conf_attr(topology_st, DN_CONFIG, 'nsslapd-pagedsizelimit', conf_attr_values[0]) idlistscan_attr_bck = change_conf_attr(topology_st, 'cn=config,%s' % DN_LDBM, 'nsslapd-idlistscanlimit', conf_attr_values[1]) lookthrough_attr_bck = change_conf_attr(topology_st, 'cn=config,%s' % DN_LDBM, 'nsslapd-lookthroughlimit', conf_attr_values[2]) try: log.info('Set user bind') conn = create_user.bind(TEST_USER_PWD) req_ctrl = SimplePagedResultsControl(True, size=page_size, cookie='') controls = [req_ctrl] if expected_rs == ldap.ADMINLIMIT_EXCEEDED: log.info('Expect to fail with ADMINLIMIT_EXCEEDED') with pytest.raises(expected_rs): all_results = paged_search(conn, DEFAULT_SUFFIX, controls, search_flt, searchreq_attrlist) elif expected_rs == 'PASS': log.info('Expect to pass') > all_results = paged_search(conn, DEFAULT_SUFFIX, controls, search_flt, searchreq_attrlist) suites/paged_results/paged_results_test.py:901: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ suites/paged_results/paged_results_test.py:200: in paged_search rtype, rdata, rmsgid, rctrls = conn.result3(msgid) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:764: in result3 resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4( /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:774: in result4 ldap_result = 
self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: in _ldap_call reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: in reraise raise exc_value _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c36a66a0> func = <built-in method result4 of LDAP object at 0x7f61c3725990> args = (12, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.ADMINLIMIT_EXCEEDED: {'msgtype': 100, 'msgid': 12, 'result': 11, 'desc': 'Administrative limit exceeded', 'ctrls': []} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: ADMINLIMIT_EXCEEDED -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:133 Adding 101 users [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:169 Set nsslapd-sizelimit to 5000. Previous value - b'2000'. Modified suffix - cn=config. [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:169 Set nsslapd-pagedsizelimit to 5000. Previous value - b'0'. Modified suffix - cn=config. [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:169 Set nsslapd-idlistscanlimit to 120. Previous value - b'4000'. Modified suffix - cn=config,cn=ldbm database,cn=plugins,cn=config. [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:169 Set nsslapd-lookthroughlimit to 122. Previous value - b'5000'. Modified suffix - cn=config,cn=ldbm database,cn=plugins,cn=config. [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:889 Set user bind [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:900 Expect to pass [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:191 Running simple paged result search with - search suffix: dc=example,dc=com; filter: (uid=test*); attr list ['dn', 'sn']; page_size = 10; controls: [<ldap.controls.libldap.SimplePagedResultsControl object at 0x7f61c3725820>]. 
[32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:199 Getting page 0 [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:199 Getting page 1 [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:199 Getting page 2 [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:199 Getting page 3 [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:199 Getting page 4 [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:199 Getting page 5 [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:199 Getting page 6 [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:199 Getting page 7 [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:199 Getting page 8 [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:199 Getting page 9 [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:155 Deleting 101 users [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:169 Set nsslapd-sizelimit to b'2000'. Previous value - b'5000'. Modified suffix - cn=config. [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:169 Set nsslapd-pagedsizelimit to b'0'. Previous value - b'5000'. Modified suffix - cn=config. [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:169 Set nsslapd-lookthroughlimit to b'5000'. Previous value - b'122'. Modified suffix - cn=config,cn=ldbm database,cn=plugins,cn=config. [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:169 Set nsslapd-idlistscanlimit to b'4000'. Previous value - b'120'. Modified suffix - cn=config,cn=ldbm database,cn=plugins,cn=config. | |||
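Note: the ADMINLIMIT_EXCEEDED above surfaces from conn.result3() inside the paged-search loop that paged_search() drives. For reference, that loop can be reproduced outside pytest with a short python-ldap sketch; the host URI, bind password and attribute list below are illustrative assumptions, while the bind DN is the one shown in the captured log.

    # Minimal paged-search sketch with python-ldap (illustrative values).
    import ldap
    from ldap.controls.libldap import SimplePagedResultsControl

    conn = ldap.initialize('ldap://localhost:389')                # assumed URI
    conn.simple_bind_s('uid=simplepaged_test,ou=People,dc=example,dc=com',
                       'secret')                                  # assumed password

    req_ctrl = SimplePagedResultsControl(True, size=10, cookie='')
    results = []
    while True:
        msgid = conn.search_ext('dc=example,dc=com', ldap.SCOPE_SUBTREE,
                                '(uid=test*)', ['sn'], serverctrls=[req_ctrl])
        # ADMINLIMIT_EXCEEDED is raised here when the candidate list trips
        # nsslapd-lookthroughlimit / nsslapd-idlistscanlimit.
        rtype, rdata, rmsgid, rctrls = conn.result3(msgid)
        results.extend(rdata)
        pctrls = [c for c in rctrls
                  if c.controlType == SimplePagedResultsControl.controlType]
        if not pctrls or not pctrls[0].cookie:
            break
        req_ctrl.cookie = pctrls[0].cookie
    conn.unbind_s()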
Failed | suites/paged_results/paged_results_test.py::test_search_paged_user_limits[conf_attr_values1-PASS] | 4.83 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61d1361310> create_user = <lib389.idm.user.UserAccount object at 0x7f61d0ef5970> conf_attr_values = ('1000', '120', '122'), expected_rs = 'PASS' @pytest.mark.parametrize('conf_attr_values,expected_rs', ((('1000', '100', '100'), ldap.ADMINLIMIT_EXCEEDED), (('1000', '120', '122'), 'PASS'))) def test_search_paged_user_limits(topology_st, create_user, conf_attr_values, expected_rs): """Verify that nsPagedIDListScanLimit and nsPagedLookthroughLimit override nsslapd-idlistscanlimit and nsslapd-lookthroughlimit while performing search with the simple paged results control. :id: 69e393e9-1ab8-4f4e-b4a1-06ca63dc7b1b :parametrized: yes :setup: Standalone instance, test user for binding, 10 users for the search base :steps: 1. Set nsslapd-idlistscanlimit: 1000 2. Set nsslapd-lookthroughlimit: 1000 3. Set nsPagedIDListScanLimit: 120 4. Set nsPagedLookthroughLimit: 122 5. Bind as test user 6. Search through added users with a simple paged control using page_size = 10 7. Bind as Directory Manager 8. Set nsPagedIDListScanLimit: 100 9. Set nsPagedLookthroughLimit: 100 10. Bind as test user 11. Search through added users with a simple paged control using page_size = 10 :expectedresults: 1. nsslapd-idlistscanlimit should be successfully set 2. nsslapd-lookthroughlimit should be successfully set 3. nsPagedIDListScanLimit should be successfully set 4. nsPagedLookthroughLimit should be successfully set 5. Bind should be successful 6. No error happens, all users should be found 7. Bind should be successful 8. nsPagedIDListScanLimit should be successfully set 9. nsPagedLookthroughLimit should be successfully set 10. Bind should be successful 11. It should throw ADMINLIMIT_EXCEEDED exception """ users_num = 101 page_size = 10 users_list = add_users(topology_st, users_num, DEFAULT_SUFFIX) search_flt = r'(uid=test*)' searchreq_attrlist = ['dn', 'sn'] lookthrough_attr_bck = change_conf_attr(topology_st, 'cn=config,%s' % DN_LDBM, 'nsslapd-lookthroughlimit', conf_attr_values[0]) idlistscan_attr_bck = change_conf_attr(topology_st, 'cn=config,%s' % DN_LDBM, 'nsslapd-idlistscanlimit', conf_attr_values[0]) user_idlistscan_attr_bck = change_conf_attr(topology_st, create_user.dn, 'nsPagedIDListScanLimit', conf_attr_values[1]) user_lookthrough_attr_bck = change_conf_attr(topology_st, create_user.dn, 'nsPagedLookthroughLimit', conf_attr_values[2]) try: log.info('Set user bind') conn = create_user.bind(TEST_USER_PWD) req_ctrl = SimplePagedResultsControl(True, size=page_size, cookie='') controls = [req_ctrl] if expected_rs == ldap.ADMINLIMIT_EXCEEDED: log.info('Expect to fail with ADMINLIMIT_EXCEEDED') with pytest.raises(expected_rs): all_results = paged_search(conn, DEFAULT_SUFFIX, controls, search_flt, searchreq_attrlist) elif expected_rs == 'PASS': log.info('Expect to pass') > all_results = paged_search(conn, DEFAULT_SUFFIX, controls, search_flt, searchreq_attrlist) suites/paged_results/paged_results_test.py:975: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ suites/paged_results/paged_results_test.py:200: in paged_search rtype, rdata, rmsgid, rctrls = conn.result3(msgid) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:764: in result3 resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4( /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) 
/usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:774: in result4 ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: in _ldap_call reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: in reraise raise exc_value _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61d04f9490> func = <built-in method result4 of LDAP object at 0x7f61c3717090> args = (12, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.ADMINLIMIT_EXCEEDED: {'msgtype': 100, 'msgid': 12, 'result': 11, 'desc': 'Administrative limit exceeded', 'ctrls': []} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: ADMINLIMIT_EXCEEDED -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:133 Adding 101 users [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:169 Set nsslapd-lookthroughlimit to 1000. Previous value - b'5000'. Modified suffix - cn=config,cn=ldbm database,cn=plugins,cn=config. [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:169 Set nsslapd-idlistscanlimit to 1000. Previous value - b'4000'. Modified suffix - cn=config,cn=ldbm database,cn=plugins,cn=config. [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:169 Set nsPagedIDListScanLimit to 120. Previous value - None. Modified suffix - uid=simplepaged_test,ou=People,dc=example,dc=com. [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:169 Set nsPagedLookthroughLimit to 122. Previous value - None. Modified suffix - uid=simplepaged_test,ou=People,dc=example,dc=com. [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:963 Set user bind [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:974 Expect to pass [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:191 Running simple paged result search with - search suffix: dc=example,dc=com; filter: (uid=test*); attr list ['dn', 'sn']; page_size = 10; controls: [<ldap.controls.libldap.SimplePagedResultsControl object at 0x7f61c3880370>]. 
[32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:199 Getting page 0 [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:199 Getting page 1 [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:199 Getting page 2 [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:199 Getting page 3 [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:199 Getting page 4 [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:199 Getting page 5 [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:199 Getting page 6 [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:199 Getting page 7 [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:199 Getting page 8 [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:199 Getting page 9 [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:155 Deleting 101 users [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:169 Set nsslapd-lookthroughlimit to b'5000'. Previous value - b'1000'. Modified suffix - cn=config,cn=ldbm database,cn=plugins,cn=config. [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:169 Set nsslapd-idlistscanlimit to b'4000'. Previous value - b'1000'. Modified suffix - cn=config,cn=ldbm database,cn=plugins,cn=config. [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:169 Set nsPagedIDListScanLimit to None. Previous value - b'120'. Modified suffix - uid=simplepaged_test,ou=People,dc=example,dc=com. [32mINFO [0m tests.suites.paged_results.paged_results_test:paged_results_test.py:169 Set nsPagedLookthroughLimit to None. Previous value - b'122'. Modified suffix - uid=simplepaged_test,ou=People,dc=example,dc=com. | |||
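Note: in this variant the limits are overridden per bind entry (nsPagedIDListScanLimit / nsPagedLookthroughLimit) rather than in cn=config. A hedged lib389 sketch of that override, assuming `user` is a UserAccount object for the bind DN shown in the log (uid=simplepaged_test,ou=People,dc=example,dc=com):

    # Set / clear the per-entry paged-search overrides exercised by this test.
    # `user` is assumed to be a lib389 UserAccount for the bind entry above.
    user.replace('nsPagedIDListScanLimit', '120')
    user.replace('nsPagedLookthroughLimit', '122')
    # ... run the paged search bound as that user ...
    user.remove_all('nsPagedIDListScanLimit')      # restore: drop the overrides
    user.remove_all('nsPagedLookthroughLimit')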
Failed | suites/replication/conflict_resolve_test.py::TestTwoMasters::test_complex_add_modify_modrdn_delete | 88.36 | |
self = <tests.suites.replication.conflict_resolve_test.TestTwoMasters object at 0x7f61c33aea00> topology_m2 = <lib389.topologies.TopologyMain object at 0x7f61c2f65f40> base_m2 = <lib389.idm.nscontainer.nsContainer object at 0x7f61c33ae760> def test_complex_add_modify_modrdn_delete(self, topology_m2, base_m2): """Check that conflict properly resolved for complex operations which involve add, modify, modrdn and delete :id: 77f09b18-03d1-45da-940b-1ad2c2908eb1 :setup: Two master replication, test container for entries, enable plugin logging, audit log, error log for replica and access log for internal :steps: 1. Add ten users to m1 and wait for replication to happen 2. Pause replication 3. Test add-del on m1 and add on m2 4. Test add-mod on m1 and add on m2 5. Test add-modrdn on m1 and add on m2 6. Test multiple add, modrdn 7. Test Add-del on both masters 8. Test modrdn-modrdn 9. Test modrdn-del 10. Resume replication 11. Check that the entries on both masters are the same and replication is working :expectedresults: 1. It should pass 2. It should pass 3. It should pass 4. It should pass 5. It should pass 6. It should pass 7. It should pass 8. It should pass 9. It should pass 10. It should pass 11. It should pass """ M1 = topology_m2.ms["master1"] M2 = topology_m2.ms["master2"] test_users_m1 = UserAccounts(M1, base_m2.dn, rdn=None) test_users_m2 = UserAccounts(M2, base_m2.dn, rdn=None) repl = ReplicationManager(SUFFIX) for user_num in range(1100, 1110): _create_user(test_users_m1, user_num) repl.test_replication(M1, M2) topology_m2.pause_all_replicas() log.info("Test add-del on M1 and add on M2") user_num += 1 _create_user(test_users_m1, user_num) _delete_user(test_users_m1, user_num, sleep=True) _create_user(test_users_m2, user_num, sleep=True) user_num += 1 _create_user(test_users_m1, user_num, sleep=True) _create_user(test_users_m2, user_num, sleep=True) _delete_user(test_users_m1, user_num, sleep=True) user_num += 1 _create_user(test_users_m2, user_num, sleep=True) _create_user(test_users_m1, user_num) _delete_user(test_users_m1, user_num) log.info("Test add-mod on M1 and add on M2") user_num += 1 _create_user(test_users_m1, user_num) _modify_user(test_users_m1, user_num, sleep=True) _create_user(test_users_m2, user_num, sleep=True) user_num += 1 _create_user(test_users_m1, user_num, sleep=True) _create_user(test_users_m2, user_num, sleep=True) _modify_user(test_users_m1, user_num, sleep=True) user_num += 1 _create_user(test_users_m2, user_num, sleep=True) _create_user(test_users_m1, user_num) _modify_user(test_users_m1, user_num) log.info("Test add-modrdn on M1 and add on M2") user_num += 1 _create_user(test_users_m1, user_num) _rename_user(test_users_m1, user_num, user_num+20, sleep=True) _create_user(test_users_m2, user_num, sleep=True) user_num += 1 _create_user(test_users_m1, user_num, sleep=True) _create_user(test_users_m2, user_num, sleep=True) _rename_user(test_users_m1, user_num, user_num+20, sleep=True) user_num += 1 _create_user(test_users_m2, user_num, sleep=True) _create_user(test_users_m1, user_num) _rename_user(test_users_m1, user_num, user_num+20) log.info("Test multiple add, modrdn") user_num += 1 _create_user(test_users_m1, user_num, sleep=True) _create_user(test_users_m2, user_num, sleep=True) _rename_user(test_users_m1, user_num, user_num+20) _create_user(test_users_m1, user_num, sleep=True) _modify_user(test_users_m2, user_num, sleep=True) log.info("Add - del on both masters") user_num += 1 _create_user(test_users_m1, user_num) _delete_user(test_users_m1, user_num, 
sleep=True) _create_user(test_users_m2, user_num) _delete_user(test_users_m2, user_num, sleep=True) log.info("Test modrdn - modrdn") user_num += 1 _rename_user(test_users_m1, 1109, 1129, sleep=True) _rename_user(test_users_m2, 1109, 1129, sleep=True) log.info("Test modrdn - del") user_num += 1 _rename_user(test_users_m1, 1100, 1120, sleep=True) _delete_user(test_users_m2, 1100) user_num += 1 _delete_user(test_users_m2, 1101, sleep=True) _rename_user(test_users_m1, 1101, 1121) topology_m2.resume_all_replicas() repl.test_replication_topology(topology_m2) time.sleep(30) user_dns_m1 = [user.dn for user in test_users_m1.list()] user_dns_m2 = [user.dn for user in test_users_m2.list()] > assert set(user_dns_m1) == set(user_dns_m2) E AssertionError: assert {'uid=test_us...,dc=com', ...} == {'uid=test_us...,dc=com', ...} E Extra items in the left set: E 'uid=test_user_1112,cn=test_container,dc=example,dc=com' E 'uid=test_user_1111,cn=test_container,dc=example,dc=com' E 'uid=test_user_1117,cn=test_container,dc=example,dc=com' E Full diff: E { E 'uid=test_user_1102,cn=test_container,dc=example,dc=com',... E E ...Full output truncated (24 lines hidden), use '-vv' to show suites/replication/conflict_resolve_test.py:369: AssertionError -------------------------------Captured log call-------------------------------- [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect e18329f5-f189-414f-9b66-9e7affe2a14b / got description=d01c440b-0b62-4d1c-96b4-cfee0878540b) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect e18329f5-f189-414f-9b66-9e7affe2a14b / got description=d01c440b-0b62-4d1c-96b4-cfee0878540b) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect e18329f5-f189-414f-9b66-9e7affe2a14b / got description=d01c440b-0b62-4d1c-96b4-cfee0878540b) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect e18329f5-f189-414f-9b66-9e7affe2a14b / got description=d01c440b-0b62-4d1c-96b4-cfee0878540b) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect e18329f5-f189-414f-9b66-9e7affe2a14b / got description=d01c440b-0b62-4d1c-96b4-cfee0878540b) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m tests.suites.replication.conflict_resolve_test:conflict_resolve_test.py:285 Test add-del on M1 and add on M2 [32mINFO [0m tests.suites.replication.conflict_resolve_test:conflict_resolve_test.py:301 Test add-mod on M1 and add on M2 [32mINFO [0m tests.suites.replication.conflict_resolve_test:conflict_resolve_test.py:317 Test add-modrdn on M1 and add on M2 [32mINFO [0m 
tests.suites.replication.conflict_resolve_test:conflict_resolve_test.py:333 Test multiple add, modrdn [32mINFO [0m tests.suites.replication.conflict_resolve_test:conflict_resolve_test.py:341 Add - del on both masters [32mINFO [0m tests.suites.replication.conflict_resolve_test:conflict_resolve_test.py:348 Test modrdn - modrdn [32mINFO [0m tests.suites.replication.conflict_resolve_test:conflict_resolve_test.py:353 Test modrdn - del [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 0c268b80-f2e8-41ad-9de5-522fb814eb39 / got description=e18329f5-f189-414f-9b66-9e7affe2a14b) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 0c268b80-f2e8-41ad-9de5-522fb814eb39 / got description=e18329f5-f189-414f-9b66-9e7affe2a14b) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 0c268b80-f2e8-41ad-9de5-522fb814eb39 / got description=e18329f5-f189-414f-9b66-9e7affe2a14b) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 0c268b80-f2e8-41ad-9de5-522fb814eb39 / got description=e18329f5-f189-414f-9b66-9e7affe2a14b) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 0c268b80-f2e8-41ad-9de5-522fb814eb39 / got description=e18329f5-f189-414f-9b66-9e7affe2a14b) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 0c268b80-f2e8-41ad-9de5-522fb814eb39 / got description=e18329f5-f189-414f-9b66-9e7affe2a14b) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 0c268b80-f2e8-41ad-9de5-522fb814eb39 / got description=e18329f5-f189-414f-9b66-9e7affe2a14b) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect f2e221f8-3b1d-4d9d-b66c-a3843bebe85a / got description=0c268b80-f2e8-41ad-9de5-522fb814eb39) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working | |||
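Note: the assertion that fails above is a plain set comparison of user DNs on both masters after replication is resumed. The same convergence check can be run standalone; M1, M2 and the container DN below are taken from the test itself.

    from lib389.idm.user import UserAccounts

    container_dn = 'cn=test_container,dc=example,dc=com'
    dns_m1 = {u.dn for u in UserAccounts(M1, container_dn, rdn=None).list()}
    dns_m2 = {u.dn for u in UserAccounts(M2, container_dn, rdn=None).list()}

    # Entries present on only one side point at an unresolved conflict,
    # e.g. uid=test_user_1111/1112/1117 in this run.
    print('only on M1:', sorted(dns_m1 - dns_m2))
    print('only on M2:', sorted(dns_m2 - dns_m1))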
Failed | suites/schema/schema_reload_test.py::test_schema_operation | 2.19 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61c35d4460> def test_schema_operation(topo): """Test that the cases in original schema are preserved. Test that duplicated schema except cases are not loaded Test to use a custom schema :id: e7448863-ac62-4b49-b013-4efa412c0455 :setup: Standalone instance :steps: 1. Create a test schema with cases 2. Run a schema_reload task 3. Check the attribute is present 4. Case 2: Check duplicated schema except cases are not loaded 5. Case 2-1: Use the custom schema :expectedresults: 1. Operation should be successful 2. Operation should be successful 3. Operation should be successful 4. Operation should be successful 5. Operation should be successful """ log.info('case 1: Test the cases in the original schema are preserved.') schema_filename = topo.standalone.schemadir + '/98test.ldif' try: with open(schema_filename, "w") as schema_file: schema_file.write("dn: cn=schema\n") schema_file.write("attributetypes: ( 8.9.10.11.12.13.14 NAME " + "'MoZiLLaaTTRiBuTe' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 " + " X-ORIGIN 'Mozilla Dummy Schema' )\n") schema_file.write("objectclasses: ( 1.2.3.4.5.6.7 NAME 'MozillaObject' " + "SUP top MUST ( objectclass $ cn ) MAY ( MoZiLLaaTTRiBuTe )" + " X-ORIGIN 'user defined' )')\n") except OSError as e: log.fatal("Failed to create schema file: " + "{} Error: {}".format(schema_filename, str(e))) # run the schema reload task with the default schemadir schema = Schema(topo.standalone) task = schema.reload(schema_dir=topo.standalone.schemadir) task.wait() subschema = topo.standalone.schema.get_subschema() at_obj = subschema.get_obj(ldap.schema.AttributeType, 'MoZiLLaaTTRiBuTe') > assert at_obj is not None, "The attribute was not found on server" E AssertionError: The attribute was not found on server E assert None is not None suites/schema/schema_reload_test.py:120: AssertionError -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.suites.schema.schema_reload_test:schema_reload_test.py:94 case 1: Test the cases in the original schema are preserved. | |||
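Note: the objectclasses line written to 98test.ldif ends with a stray ")'" sequence (visible in the test source above), so the reload task may be rejecting the file rather than silently dropping only the attribute. Checking the task exit code before querying the subschema makes that distinction explicit; a hedged sketch using the same lib389 calls as the test:

    import ldap.schema
    from lib389.schema import Schema

    schema = Schema(topo.standalone)
    task = schema.reload(schema_dir=topo.standalone.schemadir)
    task.wait()
    # A non-zero exit code means the LDIF itself was rejected by the task.
    assert task.get_exit_code() == 0, 'schema reload task failed'

    subschema = topo.standalone.schema.get_subschema()
    assert subschema.get_obj(ldap.schema.AttributeType, 'MoZiLLaaTTRiBuTe') is not None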
Failed | suites/schema/schema_reload_test.py::test_valid_schema | 2.02 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61c35d4460> def test_valid_schema(topo): """Test schema-reload task with valid schema :id: 2ab304c0-3e58-4d34-b23b-a14b5997c7a8 :setup: Standalone instance :steps: 1. Create schema file with valid schema 2. Run schema-reload.pl script 3. Run ldapsearch and check if schema was added :expectedresults: 1. File creation should work 2. The schema reload task should be successful 3. Searching the server should return the new schema """ log.info("Test schema-reload task with valid schema") # Step 1 - Create schema file log.info("Create valid schema file (99user.ldif)...") schema_filename = (topo.standalone.schemadir + "/99user.ldif") try: with open(schema_filename, 'w') as schema_file: schema_file.write("dn: cn=schema\n") schema_file.write("attributetypes: ( 8.9.10.11.12.13.13 NAME " + "'ValidAttribute' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15" + " X-ORIGIN 'Mozilla Dummy Schema' )\n") schema_file.write("objectclasses: ( 1.2.3.4.5.6.7.8 NAME 'TestObject' " + "SUP top MUST ( objectclass $ cn ) MAY ( givenName $ " + "sn $ ValidAttribute ) X-ORIGIN 'user defined' )')\n") except OSError as e: log.fatal("Failed to create schema file: " + "{} Error: {}".format(schema_filename, str(e))) # Step 2 - Run the schema-reload task log.info("Run the schema-reload task...") schema = Schema(topo.standalone) task = schema.reload(schema_dir=topo.standalone.schemadir) task.wait() > assert task.get_exit_code() == 0, "The schema reload task failed" E AssertionError: The schema reload task failed E assert 65 == 0 E +65 E -0 suites/schema/schema_reload_test.py:207: AssertionError -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.suites.schema.schema_reload_test:schema_reload_test.py:184 Test schema-reload task with valid schema [32mINFO [0m tests.suites.schema.schema_reload_test:schema_reload_test.py:187 Create valid schema file (99user.ldif)... [32mINFO [0m tests.suites.schema.schema_reload_test:schema_reload_test.py:203 Run the schema-reload task... | |||
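Note: the reload task exits with code 65 here, and the written objectclasses line again carries the trailing ")'" characters, so the "valid" 99user.ldif is likely not valid as written. A hedged version of the writer without those extra characters:

    # Write 99user.ldif without the trailing ")'" shown in the captured source.
    schema_filename = topo.standalone.schemadir + '/99user.ldif'
    with open(schema_filename, 'w') as schema_file:
        schema_file.write('dn: cn=schema\n')
        schema_file.write("attributetypes: ( 8.9.10.11.12.13.13 NAME 'ValidAttribute' "
                          'SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 '
                          "X-ORIGIN 'Mozilla Dummy Schema' )\n")
        schema_file.write("objectclasses: ( 1.2.3.4.5.6.7.8 NAME 'TestObject' "
                          'SUP top MUST ( objectclass $ cn ) '
                          "MAY ( givenName $ sn $ ValidAttribute ) "
                          "X-ORIGIN 'user defined' )\n")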
Failed | suites/syncrepl_plugin/basic_test.py::test_sync_repl_cookie | 0.00 | |
topology = <lib389.topologies.TopologyMain object at 0x7f61c31ae730> request = <FixtureRequest for <Function test_sync_repl_cookie>> def test_sync_repl_cookie(topology, request): """Test sync_repl cookie are progressing is an increasing order when there are nested updates :id: d7fbde25-5702-46ac-b38e-169d7a68e97c :setup: Standalone Instance :steps: 1.: enable retroCL 2.: configure retroCL to log nsuniqueid as targetUniqueId 3.: enable content_sync plugin 4.: enable automember 5.: create (2) groups. Few groups can help to reproduce the concurrent updates problem. 6.: configure automember to provision those groups with 'member' 7.: enable and configure memberof plugin 8.: enable plugin log level 9.: restart the server 10.: create a thread dedicated to run a sync repl client 11.: Create (9) users that will generate nested updates (automember/memberof) 12.: stop sync repl client and collect the list of cookie.change_no 13.: check that cookies.change_no are in increasing order :expectedresults: 1.: succeeds 2.: succeeds 3.: succeeds 4.: succeeds 5.: succeeds 6.: succeeds 7.: succeeds 8.: succeeds 9.: succeeds 10.: succeeds 11.: succeeds 12.: succeeds 13.: succeeds """ inst = topology[0] # Enable/configure retroCL plugin = RetroChangelogPlugin(inst) > plugin.disable() suites/syncrepl_plugin/basic_test.py:275: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/lib389/plugins.py:63: in disable self.set('nsslapd-pluginEnabled', 'off') _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.plugins.RetroChangelogPlugin object at 0x7f61c3137d90> key = 'nsslapd-pluginEnabled', value = 'off', action = 2 def set(self, key, value, action=ldap.MOD_REPLACE): """Perform a specified action on a key with value :param key: an attribute name :type key: str :param value: an attribute value :type value: str :param action: - ldap.MOD_REPLACE - by default - ldap.MOD_ADD - ldap.MOD_DELETE :type action: int :returns: result of modify_s operation :raises: ValueError - if instance is not online """ if action == ldap.MOD_ADD: action_txt = "ADD" elif action == ldap.MOD_REPLACE: action_txt = "REPLACE" elif action == ldap.MOD_DELETE: action_txt = "DELETE" else: # This should never happen (bug!) action_txt = "UNKNOWN" if value is None or len(value) < 512: self._log.debug("%s set %s: (%r, %r)" % (self._dn, action_txt, key, display_log_value(key, value))) else: self._log.debug("%s set %s: (%r, value too large)" % (self._dn, action_txt, key)) if self._instance.state != DIRSRV_STATE_ONLINE: > raise ValueError("Invalid state. Cannot set properties on instance that is not ONLINE.") E ValueError: Invalid state. Cannot set properties on instance that is not ONLINE. /usr/local/lib/python3.8/site-packages/lib389/_mapped_object.py:438: ValueError | |||
Failed | suites/syncrepl_plugin/basic_test.py::test_sync_repl_cookie_add_del | 0.00 | |
topology = <lib389.topologies.TopologyMain object at 0x7f61c31ae730> request = <FixtureRequest for <Function test_sync_repl_cookie_add_del>> def test_sync_repl_cookie_add_del(topology, request): """Test sync_repl cookie are progressing is an increasing order when there add and del :id: 83e11038-6ed0-4a5b-ac77-e44887ab11e3 :setup: Standalone Instance :steps: 1.: enable retroCL 2.: configure retroCL to log nsuniqueid as targetUniqueId 3.: enable content_sync plugin 4.: enable automember 5.: create (2) groups. Few groups can help to reproduce the concurrent updates problem. 6.: configure automember to provision those groups with 'member' 7.: enable and configure memberof plugin 8.: enable plugin log level 9.: restart the server 10.: create a thread dedicated to run a sync repl client 11.: Create (3) users that will generate nested updates (automember/memberof) 12.: Delete (3) users 13.: stop sync repl client and collect the list of cookie.change_no 14.: check that cookies.change_no are in increasing order :expectedresults: 1.: succeeds 2.: succeeds 3.: succeeds 4.: succeeds 5.: succeeds 6.: succeeds 7.: succeeds 8.: succeeds 9.: succeeds 10.: succeeds 11.: succeeds 12.: succeeds 13.: succeeds 14.: succeeds """ inst = topology[0] # Enable/configure retroCL plugin = RetroChangelogPlugin(inst) > plugin.disable() suites/syncrepl_plugin/basic_test.py:407: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/lib389/plugins.py:63: in disable self.set('nsslapd-pluginEnabled', 'off') _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.plugins.RetroChangelogPlugin object at 0x7f61c315f3d0> key = 'nsslapd-pluginEnabled', value = 'off', action = 2 def set(self, key, value, action=ldap.MOD_REPLACE): """Perform a specified action on a key with value :param key: an attribute name :type key: str :param value: an attribute value :type value: str :param action: - ldap.MOD_REPLACE - by default - ldap.MOD_ADD - ldap.MOD_DELETE :type action: int :returns: result of modify_s operation :raises: ValueError - if instance is not online """ if action == ldap.MOD_ADD: action_txt = "ADD" elif action == ldap.MOD_REPLACE: action_txt = "REPLACE" elif action == ldap.MOD_DELETE: action_txt = "DELETE" else: # This should never happen (bug!) action_txt = "UNKNOWN" if value is None or len(value) < 512: self._log.debug("%s set %s: (%r, %r)" % (self._dn, action_txt, key, display_log_value(key, value))) else: self._log.debug("%s set %s: (%r, value too large)" % (self._dn, action_txt, key)) if self._instance.state != DIRSRV_STATE_ONLINE: > raise ValueError("Invalid state. Cannot set properties on instance that is not ONLINE.") E ValueError: Invalid state. Cannot set properties on instance that is not ONLINE. /usr/local/lib/python3.8/site-packages/lib389/_mapped_object.py:438: ValueError | |||
Failed | suites/syncrepl_plugin/basic_test.py::test_sync_repl_cookie_with_failure | 0.00 | |
topology = <lib389.topologies.TopologyMain object at 0x7f61c31ae730> request = <FixtureRequest for <Function test_sync_repl_cookie_with_failure>> def test_sync_repl_cookie_with_failure(topology, request): """Test sync_repl cookie are progressing is the right order when there is a failure in nested updates :id: e0103448-170e-4080-8f22-c34606447ce2 :setup: Standalone Instance :steps: 1.: enable retroCL 2.: configure retroCL to log nsuniqueid as targetUniqueId 3.: enable content_sync plugin 4.: enable automember 5.: create (4) groups. make group2 groupOfUniqueNames so the automember will fail to add 'member' (uniqueMember expected) 6.: configure automember to provision those groups with 'member' 7.: enable and configure memberof plugin 8.: enable plugin log level 9.: restart the server 10.: create a thread dedicated to run a sync repl client 11.: Create a group that will be the only update received by sync repl client 12.: Create (9) users that will generate nested updates (automember/memberof) 13.: stop sync repl client and collect the list of cookie.change_no 14.: check that the list of cookie.change_no contains only the group 'step 11' :expectedresults: 1.: succeeds 2.: succeeds 3.: succeeds 4.: succeeds 5.: succeeds 6.: succeeds 7.: succeeds 8.: succeeds 9.: succeeds 10.: succeeds 11.: succeeds 12.: Fails (expected) 13.: succeeds 14.: succeeds """ inst = topology[0] # Enable/configure retroCL plugin = RetroChangelogPlugin(inst) > plugin.disable() suites/syncrepl_plugin/basic_test.py:539: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/lib389/plugins.py:63: in disable self.set('nsslapd-pluginEnabled', 'off') _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.plugins.RetroChangelogPlugin object at 0x7f61c321c370> key = 'nsslapd-pluginEnabled', value = 'off', action = 2 def set(self, key, value, action=ldap.MOD_REPLACE): """Perform a specified action on a key with value :param key: an attribute name :type key: str :param value: an attribute value :type value: str :param action: - ldap.MOD_REPLACE - by default - ldap.MOD_ADD - ldap.MOD_DELETE :type action: int :returns: result of modify_s operation :raises: ValueError - if instance is not online """ if action == ldap.MOD_ADD: action_txt = "ADD" elif action == ldap.MOD_REPLACE: action_txt = "REPLACE" elif action == ldap.MOD_DELETE: action_txt = "DELETE" else: # This should never happen (bug!) action_txt = "UNKNOWN" if value is None or len(value) < 512: self._log.debug("%s set %s: (%r, %r)" % (self._dn, action_txt, key, display_log_value(key, value))) else: self._log.debug("%s set %s: (%r, value too large)" % (self._dn, action_txt, key)) if self._instance.state != DIRSRV_STATE_ONLINE: > raise ValueError("Invalid state. Cannot set properties on instance that is not ONLINE.") E ValueError: Invalid state. Cannot set properties on instance that is not ONLINE. /usr/local/lib/python3.8/site-packages/lib389/_mapped_object.py:438: ValueError | |||
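Note: the three sync_repl cookie tests above fail in the same place, before any sync_repl logic runs, because RetroChangelogPlugin.disable() is called while the instance is not in the ONLINE state. Why the instance went offline is not visible in this report; a hedged guard for that setup step could look like the following sketch.

    # Ensure the instance is running and the connection is open before
    # touching plugin configuration, which is the state check that raises
    # the ValueError above.
    from lib389.plugins import RetroChangelogPlugin

    inst = topology[0]
    if not inst.status():    # ns-slapd process not running
        inst.start()
    inst.open()              # re-open the LDAP connection to the instance

    plugin = RetroChangelogPlugin(inst)
    plugin.disable()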
Failed | suites/vlv/regression_test.py::test_bulk_import_when_the_backend_with_vlv_was_recreated | 0.42 | |
self = <lib389.mappingTree.MappingTreeLegacy object at 0x7f61c33aad00> suffix = 'dc=example,dc=com', bename = 'userRoot', parent = None def create(self, suffix=None, bename=None, parent=None): ''' Create a mapping tree entry (under "cn=mapping tree,cn=config"), for the 'suffix' and that is stored in 'bename' backend. 'bename' backend must exist before creating the mapping tree entry. If a 'parent' is provided that means that we are creating a sub-suffix mapping tree. @param suffix - suffix mapped by this mapping tree entry. It will be the common name ('cn') of the entry @param benamebase - backend common name (e.g. 'userRoot') @param parent - if provided is a parent suffix of 'suffix' @return DN of the mapping tree entry @raise ldap.NO_SUCH_OBJECT - if the backend entry or parent mapping tree does not exist ValueError - if missing a parameter, ''' # Check suffix is provided if not suffix: raise ValueError("suffix is mandatory") else: nsuffix = normalizeDN(suffix) # Check backend name is provided if not bename: raise ValueError("backend name is mandatory") # Check that if the parent suffix is provided then # it exists a mapping tree for it if parent: nparent = normalizeDN(parent) filt = suffixfilt(parent) try: entry = self.conn.getEntry(DN_MAPPING_TREE, ldap.SCOPE_SUBTREE, filt) pass except NoSuchEntryError: raise ValueError("parent suffix has no mapping tree") else: nparent = "" # Check if suffix exists, return filt = suffixfilt(suffix) try: entry = self.conn.getEntry(DN_MAPPING_TREE, ldap.SCOPE_SUBTREE, filt) return entry except ldap.NO_SUCH_OBJECT: entry = None # # Now start the real work # # fix me when we can actually used escaped DNs dn = ','.join(('cn="%s"' % nsuffix, DN_MAPPING_TREE)) entry = Entry(dn) entry.update({ 'objectclass': ['top', 'extensibleObject', MT_OBJECTCLASS_VALUE], 'nsslapd-state': 'backend', # the value in the dn has to be DN escaped # internal code will add the quoted value - unquoted value is # useful for searching. MT_PROPNAME_TO_ATTRNAME[MT_SUFFIX]: nsuffix, MT_PROPNAME_TO_ATTRNAME[MT_BACKEND]: bename }) # possibly add the parent if parent: entry.setValues(MT_PROPNAME_TO_ATTRNAME[MT_PARENT_SUFFIX], nparent) try: self.log.debug("Creating entry: %s", entry.dn) self.log.info("Entry %r", entry) > self.conn.add_s(entry) /usr/local/lib/python3.8/site-packages/lib389/mappingTree.py:155: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (dn: cn="dc=example,dc=com",cn=mapping tree,cn=config cn: dc=example,dc=com nsslapd-backend: userRoot nsslapd-state: backend objectclass: top objectclass: extensibleObject objectclass: nsMappingTree ,) kwargs = {} c_stack = [FrameInfo(frame=<frame at 0x7f61c33c6040, file '/usr/local/lib/python3.8/site-packages/lib389/__init__.py', line 176,...mbda>', code_context=[' self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(\n'], index=0), ...] 
frame = FrameInfo(frame=<frame at 0x5576b77742c0, file '/usr/local/lib/python3.8/site-packages/lib389/mappingTree.py', line 15.../lib389/mappingTree.py', lineno=155, function='create', code_context=[' self.conn.add_s(entry)\n'], index=0) ent = dn: cn="dc=example,dc=com",cn=mapping tree,cn=config cn: dc=example,dc=com nsslapd-backend: userRoot nsslapd-state: backend objectclass: top objectclass: extensibleObject objectclass: nsMappingTree def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): > return f(ent.dn, ent.toTupleList(), *args[2:]) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:176: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c3376580> dn = 'cn="dc=example,dc=com",cn=mapping tree,cn=config' modlist = [('objectclass', [b'top', b'extensibleObject', b'nsMappingTree']), ('nsslapd-state', [b'backend']), ('cn', [b'dc=example,dc=com']), ('nsslapd-backend', [b'userRoot'])] def add_s(self,dn,modlist): > return self.add_ext_s(dn,modlist,None,None) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:439: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = ('cn="dc=example,dc=com",cn=mapping tree,cn=config', [('objectclass', [b'top', b'extensibleObject', b'nsMappingTree']), ('nsslapd-state', [b'backend']), ('cn', [b'dc=example,dc=com']), ('nsslapd-backend', [b'userRoot'])], None, None) kwargs = {}, ent = 'cn="dc=example,dc=com",cn=mapping tree,cn=config' def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:178: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c3376580> dn = 'cn="dc=example,dc=com",cn=mapping tree,cn=config' modlist = [('objectclass', [b'top', b'extensibleObject', b'nsMappingTree']), ('nsslapd-state', [b'backend']), ('cn', [b'dc=example,dc=com']), ('nsslapd-backend', [b'userRoot'])] serverctrls = None, clientctrls = None def add_ext_s(self,dn,modlist,serverctrls=None,clientctrls=None): msgid = self.add_ext(dn,modlist,serverctrls,clientctrls) > resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:425: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (76,), kwargs = {'all': 1, 'timeout': -1} def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c3376580>, msgid = 76, all = 1 timeout = -1, resp_ctrl_classes = None def result3(self,msgid=ldap.RES_ANY,all=1,timeout=None,resp_ctrl_classes=None): > resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4( msgid,all,timeout, add_ctrls=0,add_intermediates=0,add_extop=0, resp_ctrl_classes=resp_ctrl_classes ) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:764: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (76, 1, -1) kwargs = {'add_ctrls': 0, 'add_extop': 0, 'add_intermediates': 0, 'resp_ctrl_classes': None} def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c3376580>, msgid = 76, all = 1 timeout = -1, add_ctrls = 0, add_intermediates = 0, add_extop = 0 resp_ctrl_classes = None def result4(self,msgid=ldap.RES_ANY,all=1,timeout=None,add_ctrls=0,add_intermediates=0,add_extop=0,resp_ctrl_classes=None): if timeout is None: timeout = self.timeout > ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:774: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (<built-in method result4 of LDAP object at 0x7f61c35288d0>, 76, 1, -1, 0, 0, ...) kwargs = {} def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c3376580> func = <built-in method result4 of LDAP object at 0x7f61c35288d0> args = (76, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: result = func(*args,**kwargs) if __debug__ and self._trace_level>=2: if func.__name__!="unbind_ext": diagnostic_message_success = self._l.get_option(ldap.OPT_DIAGNOSTIC_MESSAGE) finally: self._ldap_object_lock.release() except LDAPError as e: exc_type,exc_value,exc_traceback = sys.exc_info() try: if 'info' not in e.args[0] and 'errno' in e.args[0]: e.args[0]['info'] = strerror(e.args[0]['errno']) except IndexError: pass if __debug__ and self._trace_level>=2: self._trace_file.write('=> LDAPError - %s: %s\n' % (e.__class__.__name__,str(e))) try: > reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ exc_type = <class 'ldap.UNWILLING_TO_PERFORM'> exc_value = UNWILLING_TO_PERFORM({'msgtype': 105, 'msgid': 76, 'result': 53, 'desc': 'Server is unwilling to perform', 'ctrls': []}) exc_traceback = <traceback object at 0x7f61c3272080> def reraise(exc_type, exc_value, exc_traceback): """Re-raise an exception given information from sys.exc_info() Note that unlike six.reraise, this does not support replacing the traceback. All arguments must come from a single sys.exc_info() call. """ # In Python 3, all exception info is contained in one object. 
> raise exc_value /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c3376580> func = <built-in method result4 of LDAP object at 0x7f61c35288d0> args = (76, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.UNWILLING_TO_PERFORM: {'msgtype': 105, 'msgid': 76, 'result': 53, 'desc': 'Server is unwilling to perform', 'ctrls': []} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: UNWILLING_TO_PERFORM During handling of the above exception, another exception occurred: topology_m2 = <lib389.topologies.TopologyMain object at 0x7f61c2f88340> @pytest.mark.DS47966 def test_bulk_import_when_the_backend_with_vlv_was_recreated(topology_m2): """ Testing bulk import when the backend with VLV was recreated. If the test passes without the server crash, 47966 is verified. :id: 512963fa-fe02-11e8-b1d3-8c16451d917b :setup: Replication with two masters. :steps: 1. Generate vlvSearch entry 2. Generate vlvIndex entry 3. Delete the backend instance on Master 2 4. Delete the agreement, replica, and mapping tree, too. 5. Recreate the backend and the VLV index on Master 2. 6. Recreating vlvSrchDn and vlvIndexDn on Master 2. :expectedresults: 1. Should Success. 2. Should Success. 3. Should Success. 4. Should Success. 5. Should Success. 6. Should Success. """ M1 = topology_m2.ms["master1"] M2 = topology_m2.ms["master2"] # generate vlvSearch entry properties_for_search = { "objectclass": ["top", "vlvSearch"], "cn": "vlvSrch", "vlvbase": DEFAULT_SUFFIX, "vlvfilter": "(|(objectclass=*)(objectclass=ldapsubentry))", "vlvscope": "2", } vlv_searches = VLVSearch(M2) userroot_vlvsearch = vlv_searches.create( basedn="cn=userRoot,cn=ldbm database,cn=plugins,cn=config", properties=properties_for_search, ) assert "cn=vlvSrch,cn=userRoot,cn=ldbm database,cn=plugins,cn=config" in M2.getEntry( "cn=vlvSrch,cn=userRoot,cn=ldbm database,cn=plugins,cn=config").dn # generate vlvIndex entry properties_for_index = { "objectclass": ["top", "vlvIndex"], "cn": "vlvIdx", "vlvsort": "cn ou sn", } vlv_index = VLVIndex(M2) userroot_index = vlv_index.create( basedn="cn=vlvSrch,cn=userRoot,cn=ldbm database,cn=plugins,cn=config", properties=properties_for_index, ) assert "cn=vlvIdx,cn=vlvSrch,cn=userRoot,cn=ldbm database,cn=plugins,cn=config" in M2.getEntry( "cn=vlvIdx,cn=vlvSrch,cn=userRoot,cn=ldbm database,cn=plugins,cn=config").dn # Delete the backend instance on Master 2." userroot_index.delete() userroot_vlvsearch.delete_all() # delete the agreement, replica, and mapping tree, too. repl = ReplicationManager(DEFAULT_SUFFIX) repl.remove_master(M2) MappingTrees(M2).list()[0].delete() Backends(M2).list()[0].delete() # Recreate the backend and the VLV index on Master 2. 
> M2.mappingtree.create(DEFAULT_SUFFIX, "userRoot") suites/vlv/regression_test.py:87: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.mappingTree.MappingTreeLegacy object at 0x7f61c33aad00> suffix = 'dc=example,dc=com', bename = 'userRoot', parent = None def create(self, suffix=None, bename=None, parent=None): ''' Create a mapping tree entry (under "cn=mapping tree,cn=config"), for the 'suffix' and that is stored in 'bename' backend. 'bename' backend must exist before creating the mapping tree entry. If a 'parent' is provided that means that we are creating a sub-suffix mapping tree. @param suffix - suffix mapped by this mapping tree entry. It will be the common name ('cn') of the entry @param benamebase - backend common name (e.g. 'userRoot') @param parent - if provided is a parent suffix of 'suffix' @return DN of the mapping tree entry @raise ldap.NO_SUCH_OBJECT - if the backend entry or parent mapping tree does not exist ValueError - if missing a parameter, ''' # Check suffix is provided if not suffix: raise ValueError("suffix is mandatory") else: nsuffix = normalizeDN(suffix) # Check backend name is provided if not bename: raise ValueError("backend name is mandatory") # Check that if the parent suffix is provided then # it exists a mapping tree for it if parent: nparent = normalizeDN(parent) filt = suffixfilt(parent) try: entry = self.conn.getEntry(DN_MAPPING_TREE, ldap.SCOPE_SUBTREE, filt) pass except NoSuchEntryError: raise ValueError("parent suffix has no mapping tree") else: nparent = "" # Check if suffix exists, return filt = suffixfilt(suffix) try: entry = self.conn.getEntry(DN_MAPPING_TREE, ldap.SCOPE_SUBTREE, filt) return entry except ldap.NO_SUCH_OBJECT: entry = None # # Now start the real work # # fix me when we can actually used escaped DNs dn = ','.join(('cn="%s"' % nsuffix, DN_MAPPING_TREE)) entry = Entry(dn) entry.update({ 'objectclass': ['top', 'extensibleObject', MT_OBJECTCLASS_VALUE], 'nsslapd-state': 'backend', # the value in the dn has to be DN escaped # internal code will add the quoted value - unquoted value is # useful for searching. MT_PROPNAME_TO_ATTRNAME[MT_SUFFIX]: nsuffix, MT_PROPNAME_TO_ATTRNAME[MT_BACKEND]: bename }) # possibly add the parent if parent: entry.setValues(MT_PROPNAME_TO_ATTRNAME[MT_PARENT_SUFFIX], nparent) try: self.log.debug("Creating entry: %s", entry.dn) self.log.info("Entry %r", entry) self.conn.add_s(entry) except ldap.LDAPError as e: > raise ldap.LDAPError("Error adding suffix entry " + dn, e) E ldap.LDAPError: ('Error adding suffix entry cn="dc=example,dc=com",cn=mapping tree,cn=config', UNWILLING_TO_PERFORM({'msgtype': 105, 'msgid': 76, 'result': 53, 'desc': 'Server is unwilling to perform', 'ctrls': []})) /usr/local/lib/python3.8/site-packages/lib389/mappingTree.py:157: LDAPError -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39001, 'ldap-secureport': 63701, 'server-id': 'master1', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... 
[32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master2 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39002, 'ldap-secureport': 63702, 'server-id': 'master2', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.topologies:topologies.py:142 Creating replication topology. [32mINFO [0m lib389.topologies:topologies.py:156 Joining master master2 to master1 ... [32mINFO [0m lib389.replica:replica.py:2084 SUCCESS: bootstrap to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 completed [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is was created [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is was created [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 78c65298-0dbf-4d53-984b-524f7cca4636 / got description=None) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect e75dc12f-6352-4b80-be1e-fc0db95634b6 / got description=78c65298-0dbf-4d53-984b-524f7cca4636) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2153 SUCCESS: joined master from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master1 to master2 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 already exists [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master2 to master1 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 already exists ------------------------------Captured stdout call------------------------------ deleting vlv search: cn=vlvSrch,cn=userRoot,cn=ldbm database,cn=plugins,cn=config deleting vlv search entry... -------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:mappingTree.py:154 Entry dn: cn="dc=example,dc=com",cn=mapping tree,cn=config cn: dc=example,dc=com nsslapd-backend: userRoot nsslapd-state: backend objectclass: top objectclass: extensibleObject objectclass: nsMappingTree | |||
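The UNWILLING_TO_PERFORM above comes from the legacy MappingTreeLegacy.create() path while recreating cn="dc=example,dc=com",cn=mapping tree,cn=config after the backend was removed. As a hedged sketch only (not the test's actual fix), the same recreation can go through the lib389 Backends collection the test already uses for deletion; the property names below are standard lib389 usage and an assumption, not taken from this report, and whether the mapping tree is created automatically depends on the lib389 version.

    # Sketch: recreate the userRoot backend for the suffix on M2 via the
    # Backends collection (the same API the test used to delete it).
    from lib389.backend import Backends

    def recreate_userroot(inst, suffix):
        backends = Backends(inst)
        return backends.create(properties={
            'cn': 'userRoot',           # backend name used elsewhere in this test
            'nsslapd-suffix': suffix,   # e.g. DEFAULT_SUFFIX
        })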
Failed | tickets/ticket47781_test.py::test_ticket47781 | 3.66 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c3179ac0> def test_ticket47781(topology_st): """ Testing for a deadlock after doing an online import of an LDIF with replication data. The replication agreement should be invalid. """ log.info('Testing Ticket 47781 - Testing for deadlock after importing LDIF with replication data') master = topology_st.standalone repl = ReplicationManager(DEFAULT_SUFFIX) repl.create_first_master(master) properties = {RA_NAME: r'meTo_$host:$port', RA_BINDDN: defaultProperties[REPLICATION_BIND_DN], RA_BINDPW: defaultProperties[REPLICATION_BIND_PW], RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD], RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]} # The agreement should point to a server that does NOT exist (invalid port) repl_agreement = master.agreement.create(suffix=DEFAULT_SUFFIX, host=master.host, port=5555, properties=properties) # # add two entries # log.info('Adding two entries...') master.add_s(Entry(('cn=entry1,dc=example,dc=com', { 'objectclass': 'top person'.split(), 'sn': 'user', 'cn': 'entry1'}))) master.add_s(Entry(('cn=entry2,dc=example,dc=com', { 'objectclass': 'top person'.split(), 'sn': 'user', 'cn': 'entry2'}))) # # export the replication ldif # log.info('Exporting replication ldif...') args = {EXPORT_REPL_INFO: True} exportTask = Tasks(master) exportTask.exportLDIF(DEFAULT_SUFFIX, None, "/tmp/export.ldif", args) # # Restart the server # log.info('Restarting server...') master.stop() master.start() # # Import the ldif # log.info('Import replication LDIF file...') importTask = Tasks(master) args = {TASK_WAIT: True} > importTask.importLDIF(DEFAULT_SUFFIX, None, "/tmp/export.ldif", args) tickets/ticket47781_test.py:85: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.tasks.Tasks object at 0x7f61c2e50a60> suffix = 'dc=example,dc=com', benamebase = None, input_file = '/tmp/export.ldif' args = {'wait': True} def importLDIF(self, suffix=None, benamebase=None, input_file=None, args=None): ''' Import from a LDIF format a given 'suffix' (or 'benamebase' that stores that suffix). It uses an internal task to acheive this request. If 'suffix' and 'benamebase' are specified, it uses 'benamebase' first else 'suffix'. If both 'suffix' and 'benamebase' are missing it raise ValueError 'input_file' is the ldif input file @param suffix - suffix of the backend @param benamebase - 'commonname'/'cn' of the backend (e.g. 'userRoot') @param ldif_input - file that will contain the entries in LDIF format to import @param args - is a dictionary that contains modifier of the import task wait: True/[False] - If True, 'export' waits for the completion of the task before to return @return None @raise ValueError ''' if self.conn.state != DIRSRV_STATE_ONLINE: raise ValueError("Invalid Server State %s! Must be online" % self.conn.state) # Checking the parameters if not benamebase and not suffix: raise ValueError("Specify either bename or suffix") if not input_file: raise ValueError("input_file is mandatory") if not os.path.exists(input_file): > raise ValueError("Import file (%s) does not exist" % input_file) E ValueError: Import file (/tmp/export.ldif) does not exist /usr/local/lib/python3.8/site-packages/lib389/tasks.py:473: ValueError -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... 
INFO lib389.SetupDs:setup.py:686 Completed installation for standalone1
INFO lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created.
-------------------------------Captured log call--------------------------------
INFO lib389:tasks.py:567 Export task export_10302020_234840 for file /tmp/export.ldif completed successfully |
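The export task above logs success for /tmp/export.ldif, yet importLDIF() cannot find the file after the restart; on systemd-managed instances with PrivateTmp this is a likely outcome, because the /tmp the server writes to is not the /tmp the test process reads. A hedged sketch of keeping the file in the instance's LDIF directory instead (get_ldif_dir() is the accessor used by ticket48005 later in this report); the constants are assumed to come from lib389.properties as in the original test.

    # Sketch, not the ticket's actual fix: export/import through a path both the
    # server and the test process can see.
    import os
    from lib389.tasks import Tasks
    from lib389.properties import EXPORT_REPL_INFO, TASK_WAIT  # assumption: same constants the test imports

    def export_then_import(master, suffix):
        ldif_file = os.path.join(master.get_ldif_dir(), "ticket47781_export.ldif")
        tasks = Tasks(master)
        tasks.exportLDIF(suffix, None, ldif_file, {EXPORT_REPL_INFO: True, TASK_WAIT: True})
        master.restart(timeout=10)
        tasks.importLDIF(suffix, None, ldif_file, {TASK_WAIT: True})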
Failed | tickets/ticket47988_test.py::test_ticket47988_init | 6.85 | |
topology_m2 = <lib389.topologies.TopologyMain object at 0x7f61c297d550> def test_ticket47988_init(topology_m2): """ It adds - Objectclass with MAY 'member' - an entry ('bind_entry') with which we bind to test the 'SELFDN' operation It deletes the anonymous aci """ _header(topology_m2, 'test_ticket47988_init') # enable acl error logging mod = [(ldap.MOD_REPLACE, 'nsslapd-errorlog-level', ensure_bytes(str(8192)))] # REPL topology_m2.ms["master1"].modify_s(DN_CONFIG, mod) topology_m2.ms["master2"].modify_s(DN_CONFIG, mod) mod = [(ldap.MOD_REPLACE, 'nsslapd-accesslog-level', ensure_bytes(str(260)))] # Internal op topology_m2.ms["master1"].modify_s(DN_CONFIG, mod) topology_m2.ms["master2"].modify_s(DN_CONFIG, mod) # add dummy entries for cpt in range(MAX_OTHERS): name = "%s%d" % (OTHER_NAME, cpt) topology_m2.ms["master1"].add_s(Entry(("cn=%s,%s" % (name, SUFFIX), { 'objectclass': "top person".split(), 'sn': name, 'cn': name}))) # check that entry 0 is replicated before loop = 0 entryDN = "cn=%s0,%s" % (OTHER_NAME, SUFFIX) while loop <= 10: try: ent = topology_m2.ms["master2"].getEntry(entryDN, ldap.SCOPE_BASE, "(objectclass=*)", ['telephonenumber']) break except ldap.NO_SUCH_OBJECT: time.sleep(1) loop += 1 assert (loop <= 10) topology_m2.ms["master1"].stop(timeout=10) topology_m2.ms["master2"].stop(timeout=10) # install the specific schema M1: ipa3.3, M2: ipa4.1 schema_file = os.path.join(topology_m2.ms["master1"].getDir(__file__, DATA_DIR), "ticket47988/schema_ipa3.3.tar.gz") _install_schema(topology_m2.ms["master1"], schema_file) schema_file = os.path.join(topology_m2.ms["master1"].getDir(__file__, DATA_DIR), "ticket47988/schema_ipa4.1.tar.gz") _install_schema(topology_m2.ms["master2"], schema_file) > topology_m2.ms["master1"].start(timeout=10) /export/tests/tickets/ticket47988_test.py:157: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/lib389/__init__.py:1079: in start subprocess.check_output(["systemctl", "start", "dirsrv@%s" % self.serverid], stderr=subprocess.STDOUT) /usr/lib64/python3.8/subprocess.py:411: in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = None, capture_output = False, timeout = None, check = True popenargs = (['systemctl', 'start', 'dirsrv@master1'],) kwargs = {'stderr': -2, 'stdout': -1} process = <subprocess.Popen object at 0x7f61c2936520> stdout = b'Job for dirsrv@master1.service failed because the control process exited with error code.\nSee "systemctl status dirsrv@master1.service" and "journalctl -xe" for details.\n' stderr = None, retcode = 1 def run(*popenargs, input=None, capture_output=False, timeout=None, check=False, **kwargs): """Run command with arguments and return a CompletedProcess instance. The returned instance will have attributes args, returncode, stdout and stderr. By default, stdout and stderr are not captured, and those attributes will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture them. If check is True and the exit code was non-zero, it raises a CalledProcessError. The CalledProcessError object will have the return code in the returncode attribute, and output & stderr attributes if those streams were captured. If timeout is given, and the process takes too long, a TimeoutExpired exception will be raised. There is an optional argument "input", allowing you to pass bytes or a string to the subprocess's stdin. 
If you use this argument you may not also use the Popen constructor's "stdin" argument, as it will be used internally. By default, all communication is in bytes, and therefore any "input" should be bytes, and the stdout and stderr will be bytes. If in text mode, any "input" should be a string, and stdout and stderr will be strings decoded according to locale encoding, or by "encoding" if set. Text mode is triggered by setting any of text, encoding, errors or universal_newlines. The other arguments are the same as for the Popen constructor. """ if input is not None: if kwargs.get('stdin') is not None: raise ValueError('stdin and input arguments may not both be used.') kwargs['stdin'] = PIPE if capture_output: if kwargs.get('stdout') is not None or kwargs.get('stderr') is not None: raise ValueError('stdout and stderr arguments may not be used ' 'with capture_output.') kwargs['stdout'] = PIPE kwargs['stderr'] = PIPE with Popen(*popenargs, **kwargs) as process: try: stdout, stderr = process.communicate(input, timeout=timeout) except TimeoutExpired as exc: process.kill() if _mswindows: # Windows accumulates the output in a single blocking # read() call run on child threads, with the timeout # being done in a join() on those threads. communicate() # _after_ kill() is required to collect that and add it # to the exception. exc.stdout, exc.stderr = process.communicate() else: # POSIX _communicate already populated the output so # far into the TimeoutExpired exception. process.wait() raise except: # Including KeyboardInterrupt, communicate handled that. process.kill() # We don't call process.wait() as .__exit__ does that for us. raise retcode = process.poll() if check and retcode: > raise CalledProcessError(retcode, process.args, output=stdout, stderr=stderr) E subprocess.CalledProcessError: Command '['systemctl', 'start', 'dirsrv@master1']' returned non-zero exit status 1. /usr/lib64/python3.8/subprocess.py:512: CalledProcessError -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39001, 'ldap-secureport': 63701, 'server-id': 'master1', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master2 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39002, 'ldap-secureport': 63702, 'server-id': 'master2', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.topologies:topologies.py:142 Creating replication topology. [32mINFO [0m lib389.topologies:topologies.py:156 Joining master master2 to master1 ... 
[32mINFO [0m lib389.replica:replica.py:2084 SUCCESS: bootstrap to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 completed [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is was created [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is was created [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect e48ab3df-de91-4756-bac3-704e8058a247 / got description=None) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 70a666e1-1277-440f-9a6c-eeaf77a1f458 / got description=e48ab3df-de91-4756-bac3-704e8058a247) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2153 SUCCESS: joined master from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master1 to master2 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 already exists [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master2 to master1 ... 
[32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 already exists -------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:ticket47988_test.py:64 ############################################### [32mINFO [0m lib389:ticket47988_test.py:65 ####### [32mINFO [0m lib389:ticket47988_test.py:66 ####### test_ticket47988_init [32mINFO [0m lib389:ticket47988_test.py:67 ####### [32mINFO [0m lib389:ticket47988_test.py:68 ################################################### [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/02common.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/50ns-admin.ldif [32mINFO [0m lib389:ticket47988_test.py:98 replace /etc/dirsrv/slapd-master1/schema/99user.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/60nss-ldap.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/60autofs.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/50ns-web.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/60samba.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/10dna-plugin.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/05rfc4523.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/60basev2.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/10automember-plugin.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/05rfc2927.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/10mep-plugin.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/60ipadns.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/10rfc2307.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/50ns-mail.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/05rfc4524.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/60trust.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/60ipaconfig.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/50ns-directory.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/60eduperson.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/60mozilla.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/65ipasudo.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/60rfc3712.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/60rfc2739.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/50ns-value.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/60acctpolicy.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/01core389.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/60sabayon.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add 
/etc/dirsrv/slapd-master1/schema/60pam-plugin.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/00core.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/25java-object.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/60sudo.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/70ipaotp.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/60pureftpd.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/61kerberos-ipav3.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/60kerberos.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/60basev3.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/06inetorgperson.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/30ns-common.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/28pilot.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/20subscriber.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/50ns-certificate.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master1/schema/60posix-winsync-plugin.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/02common.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/50ns-admin.ldif [32mINFO [0m lib389:ticket47988_test.py:98 replace /etc/dirsrv/slapd-master2/schema/99user.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/60nss-ldap.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/60autofs.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/50ns-web.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/60samba.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/10dna-plugin.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/05rfc4523.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/60basev2.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/10automember-plugin.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/05rfc2927.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/10mep-plugin.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/60ipadns.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/10rfc2307.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/50ns-mail.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/05rfc4524.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/60trust.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/60ipaconfig.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/50ns-directory.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/60eduperson.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add 
/etc/dirsrv/slapd-master2/schema/60mozilla.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/65ipasudo.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/60rfc3712.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/60rfc2739.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/50ns-value.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/60acctpolicy.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/01core389.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/60sabayon.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/60pam-plugin.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/00core.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/25java-object.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/60sudo.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/70ipaotp.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/60pureftpd.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/61kerberos-ipav3.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/60kerberos.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/60basev3.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/06inetorgperson.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/30ns-common.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/28pilot.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/20subscriber.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/50ns-certificate.ldif [32mINFO [0m lib389:ticket47988_test.py:102 add /etc/dirsrv/slapd-master2/schema/60posix-winsync-plugin.ldif | |||
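systemctl refuses to start dirsrv@master1 after the IPA schema tarballs are installed, and the CalledProcessError only carries the generic "control process exited with error code" message. A hedged sketch of collecting the diagnostics that message points at before re-raising; only standard systemctl/journalctl options are used, and inst.serverid / inst.log are the attributes visible in the traceback above.

    # Sketch: when DirSrv.start() fails inside systemctl, dump the unit status
    # and the last journal lines so the report shows why the start was refused.
    import subprocess

    def start_with_diagnostics(inst):
        try:
            inst.start(timeout=10)
        except subprocess.CalledProcessError:
            unit = "dirsrv@%s" % inst.serverid
            for cmd in (["systemctl", "status", unit, "--no-pager"],
                        ["journalctl", "-u", unit, "--no-pager", "-n", "50"]):
                out = subprocess.run(cmd, capture_output=True, text=True)
                inst.log.error("%s\n%s", " ".join(cmd), out.stdout)
            raise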
Failed | tickets/ticket47988_test.py::test_ticket47988_1 | 0.00 | |
topology_m2 = <lib389.topologies.TopologyMain object at 0x7f61c297d550> def test_ticket47988_1(topology_m2): ''' Check that replication is working and pause replication M2->M1 ''' _header(topology_m2, 'test_ticket47988_1') topology_m2.ms["master1"].log.debug("\n\nCheck that replication is working and pause replication M2->M1\n") > _do_update_entry(supplier=topology_m2.ms["master2"], consumer=topology_m2.ms["master1"], attempts=5) /export/tests/tickets/ticket47988_test.py:234: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /export/tests/tickets/ticket47988_test.py:184: in _do_update_entry supplier.modify_s(entryDN, mod) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:640: in modify_s return self.modify_ext_s(dn,modlist,None,None) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:613: in modify_ext_s resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:764: in result3 resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4( /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:774: in result4 ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: in _ldap_call reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: in reraise raise exc_value _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c2939460> func = <built-in method result4 of LDAP object at 0x7f61c28d8f30> args = (26, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.SERVER_DOWN: {'result': -1, 'desc': "Can't contact LDAP server", 'ctrls': []} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: SERVER_DOWN -------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:ticket47988_test.py:64 ############################################### [32mINFO [0m lib389:ticket47988_test.py:65 ####### [32mINFO [0m lib389:ticket47988_test.py:66 ####### test_ticket47988_1 [32mINFO [0m lib389:ticket47988_test.py:67 ####### [32mINFO [0m lib389:ticket47988_test.py:68 ################################################### | |||
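This SERVER_DOWN, and the ones in test_ticket47988_2 through _6 below, are follow-on failures: master1 never restarted in the init step, so every later sub-test hits a dead connection. A hedged sketch of short-circuiting the dependent steps; it assumes DirSrv.status() returns True only when the instance is running, as it is used in other lib389 suites.

    # Sketch: skip dependent sub-tests instead of accumulating SERVER_DOWN
    # tracebacks when the instance from the init step is down.
    import pytest

    def require_running(inst):
        if not inst.status():
            pytest.skip("%s is not running; the init step failed earlier" % inst.serverid)

For example, calling require_running(topology_m2.ms["master1"]) at the top of each sub-test would turn these follow-on failures into skips.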
Failed | tickets/ticket47988_test.py::test_ticket47988_2 | 0.00 | |
topology_m2 = <lib389.topologies.TopologyMain object at 0x7f61c297d550> def test_ticket47988_2(topology_m2): ''' Update M1 schema and trigger update M1->M2 So M1 should learn new/extended definitions that are in M2 schema ''' _header(topology_m2, 'test_ticket47988_2') topology_m2.ms["master1"].log.debug("\n\nUpdate M1 schema and an entry on M1\n") > master1_schema_csn = topology_m2.ms["master1"].schema.get_schema_csn() /export/tests/tickets/ticket47988_test.py:246: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/lib389/schema.py:604: in get_schema_csn ents = self.conn.search_s(DN_SCHEMA, ldap.SCOPE_BASE, /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:870: in search_s return self.search_ext_s(base,scope,filterstr,attrlist,attrsonly,None,None,timeout=self.timeout) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:864: in search_ext_s return self.result(msgid,all=1,timeout=timeout)[1] /usr/local/lib/python3.8/site-packages/lib389/__init__.py:148: in inner objtype, data = f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:756: in result resp_type, resp_data, resp_msgid = self.result2(msgid,all,timeout) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:760: in result2 resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all,timeout) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:764: in result3 resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4( /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:774: in result4 ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: in _ldap_call reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: in reraise raise exc_value _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c297d4f0> func = <built-in method result4 of LDAP object at 0x7f61c290f1b0> args = (62, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.SERVER_DOWN: {'result': -1, 'desc': "Can't contact LDAP server", 'ctrls': []} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: SERVER_DOWN 
-------------------------------Captured log call--------------------------------
INFO lib389:ticket47988_test.py:64 ###############################################
INFO lib389:ticket47988_test.py:65 #######
INFO lib389:ticket47988_test.py:66 ####### test_ticket47988_2
INFO lib389:ticket47988_test.py:67 #######
INFO lib389:ticket47988_test.py:68 ################################################### | |||
Failed | tickets/ticket47988_test.py::test_ticket47988_3 | 0.01 | |
topology_m2 = <lib389.topologies.TopologyMain object at 0x7f61c297d550> def test_ticket47988_3(topology_m2): ''' Resume replication M2->M1 and check replication is still working ''' _header(topology_m2, 'test_ticket47988_3') > _resume_M2_to_M1(topology_m2) /export/tests/tickets/ticket47988_test.py:283: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /export/tests/tickets/ticket47988_test.py:222: in _resume_M2_to_M1 ents = topology_m2.ms["master2"].agreement.list(suffix=SUFFIX) /usr/local/lib/python3.8/site-packages/lib389/agreement.py:905: in list replica_entries = self.conn.replica.list(suffix) /usr/local/lib/python3.8/site-packages/lib389/replica.py:178: in list ents = self.conn.search_s(base, ldap.SCOPE_SUBTREE, filtr) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:870: in search_s return self.search_ext_s(base,scope,filterstr,attrlist,attrsonly,None,None,timeout=self.timeout) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:863: in search_ext_s msgid = self.search_ext(base,scope,filterstr,attrlist,attrsonly,serverctrls,clientctrls,timeout,sizelimit) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:853: in search_ext return self._ldap_call( /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: in _ldap_call reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: in reraise raise exc_value _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c2939460> func = <built-in method search_ext of LDAP object at 0x7f61c28d8f30> args = ('cn=mapping tree,cn=config', 2, '(&(objectclass=nsds5Replica)(nsDS5ReplicaRoot=dc=example,dc=com))', None, 0, None, ...) kwargs = {}, diagnostic_message_success = None, exc_type = None exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.SERVER_DOWN: {'result': -1, 'desc': "Can't contact LDAP server", 'ctrls': []} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: SERVER_DOWN -------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:ticket47988_test.py:64 ############################################### [32mINFO [0m lib389:ticket47988_test.py:65 ####### [32mINFO [0m lib389:ticket47988_test.py:66 ####### test_ticket47988_3 [32mINFO [0m lib389:ticket47988_test.py:67 ####### [32mINFO [0m lib389:ticket47988_test.py:68 ################################################### [32mINFO [0m lib389:ticket47988_test.py:221 ######################### resume RA M2->M1 ###################### | |||
Failed | tickets/ticket47988_test.py::test_ticket47988_4 | 0.01 | |
topology_m2 = <lib389.topologies.TopologyMain object at 0x7f61c297d550> def test_ticket47988_4(topology_m2): ''' Check schemaCSN is identical on both server And save the nsschemaCSN to later check they do not change unexpectedly ''' _header(topology_m2, 'test_ticket47988_4') > master1_schema_csn = topology_m2.ms["master1"].schema.get_schema_csn() /export/tests/tickets/ticket47988_test.py:295: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/lib389/schema.py:604: in get_schema_csn ents = self.conn.search_s(DN_SCHEMA, ldap.SCOPE_BASE, /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:870: in search_s return self.search_ext_s(base,scope,filterstr,attrlist,attrsonly,None,None,timeout=self.timeout) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:863: in search_ext_s msgid = self.search_ext(base,scope,filterstr,attrlist,attrsonly,serverctrls,clientctrls,timeout,sizelimit) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:853: in search_ext return self._ldap_call( /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: in _ldap_call reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: in reraise raise exc_value _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c297d4f0> func = <built-in method search_ext of LDAP object at 0x7f61c290f1b0> args = ('cn=schema', 0, 'objectclass=*', ['nsSchemaCSN'], 0, None, ...) kwargs = {}, diagnostic_message_success = None, exc_type = None exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.SERVER_DOWN: {'result': -1, 'desc': "Can't contact LDAP server", 'ctrls': []} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: SERVER_DOWN -------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:ticket47988_test.py:64 ############################################### [32mINFO [0m lib389:ticket47988_test.py:65 ####### [32mINFO [0m lib389:ticket47988_test.py:66 ####### test_ticket47988_4 [32mINFO [0m lib389:ticket47988_test.py:67 ####### [32mINFO [0m lib389:ticket47988_test.py:68 ################################################### | |||
Failed | tickets/ticket47988_test.py::test_ticket47988_5 | 0.00 | |
topology_m2 = <lib389.topologies.TopologyMain object at 0x7f61c297d550> def test_ticket47988_5(topology_m2): ''' Check schemaCSN do not change unexpectedly ''' _header(topology_m2, 'test_ticket47988_5') > _do_update_entry(supplier=topology_m2.ms["master1"], consumer=topology_m2.ms["master2"], attempts=5) /export/tests/tickets/ticket47988_test.py:313: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /export/tests/tickets/ticket47988_test.py:184: in _do_update_entry supplier.modify_s(entryDN, mod) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:640: in modify_s return self.modify_ext_s(dn,modlist,None,None) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:612: in modify_ext_s msgid = self.modify_ext(dn,modlist,serverctrls,clientctrls) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:609: in modify_ext return self._ldap_call(self._l.modify_ext,dn,modlist,RequestControlTuples(serverctrls),RequestControlTuples(clientctrls)) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: in _ldap_call reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: in reraise raise exc_value _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c297d4f0> func = <built-in method modify_ext of LDAP object at 0x7f61c290f1b0> args = ('cn=other_entry0,dc=example,dc=com', [(2, 'telephonenumber', b'178')], None, None) kwargs = {}, diagnostic_message_success = None, exc_type = None exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.SERVER_DOWN: {'result': -1, 'desc': "Can't contact LDAP server", 'ctrls': []} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: SERVER_DOWN -------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:ticket47988_test.py:64 ############################################### [32mINFO [0m lib389:ticket47988_test.py:65 ####### [32mINFO [0m lib389:ticket47988_test.py:66 ####### test_ticket47988_5 [32mINFO [0m lib389:ticket47988_test.py:67 ####### [32mINFO [0m lib389:ticket47988_test.py:68 ################################################### | |||
Failed | tickets/ticket47988_test.py::test_ticket47988_6 | 0.00 | |
topology_m2 = <lib389.topologies.TopologyMain object at 0x7f61c297d550> def test_ticket47988_6(topology_m2): ''' Update M1 schema and trigger update M2->M1 So M2 should learn new/extended definitions that are in M1 schema ''' _header(topology_m2, 'test_ticket47988_6') topology_m2.ms["master1"].log.debug("\n\nUpdate M1 schema and an entry on M1\n") > master1_schema_csn = topology_m2.ms["master1"].schema.get_schema_csn() /export/tests/tickets/ticket47988_test.py:336: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/lib389/schema.py:604: in get_schema_csn ents = self.conn.search_s(DN_SCHEMA, ldap.SCOPE_BASE, /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:870: in search_s return self.search_ext_s(base,scope,filterstr,attrlist,attrsonly,None,None,timeout=self.timeout) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:863: in search_ext_s msgid = self.search_ext(base,scope,filterstr,attrlist,attrsonly,serverctrls,clientctrls,timeout,sizelimit) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:853: in search_ext return self._ldap_call( /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: in _ldap_call reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: in reraise raise exc_value _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c297d4f0> func = <built-in method search_ext of LDAP object at 0x7f61c290f1b0> args = ('cn=schema', 0, 'objectclass=*', ['nsSchemaCSN'], 0, None, ...) kwargs = {}, diagnostic_message_success = None, exc_type = None exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.SERVER_DOWN: {'result': -1, 'desc': "Can't contact LDAP server", 'ctrls': []} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: SERVER_DOWN -------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:ticket47988_test.py:64 ############################################### [32mINFO [0m lib389:ticket47988_test.py:65 ####### [32mINFO [0m lib389:ticket47988_test.py:66 ####### test_ticket47988_6 [32mINFO [0m lib389:ticket47988_test.py:67 ####### [32mINFO [0m lib389:ticket47988_test.py:68 ################################################### | |||
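Sub-tests 2, 4 and 6 all revolve around the same check: nsSchemaCSN must end up identical on both masters. Once the instances are reachable again, that comparison is small; the sketch below reuses schema.get_schema_csn() exactly as it appears in the tracebacks and is an illustration, not the ticket's verification logic.

    # Sketch: compare the schema CSNs of the two masters.
    def assert_schema_in_sync(topology_m2):
        m1_csn = topology_m2.ms["master1"].schema.get_schema_csn()
        m2_csn = topology_m2.ms["master2"].schema.get_schema_csn()
        assert m1_csn == m2_csn, "schemaCSN differs: %s vs %s" % (m1_csn, m2_csn)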
Failed | tickets/ticket48005_test.py::test_ticket48005_setup | 4.71 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c285ed60> def test_ticket48005_setup(topology_st): ''' allow dump core generate a test ldif file using dbgen.pl import the ldif ''' log.info("Ticket 48005 setup...") if hasattr(topology_st.standalone, 'prefix'): prefix = topology_st.standalone.prefix else: prefix = None sysconfig_dirsrv = os.path.join(topology_st.standalone.get_initconfig_dir(), 'dirsrv') cmdline = 'egrep "ulimit -c unlimited" %s' % sysconfig_dirsrv p = os.popen(cmdline, "r") ulimitc = p.readline() if ulimitc == "": log.info('No ulimit -c in %s' % sysconfig_dirsrv) log.info('Adding it') cmdline = 'echo "ulimit -c unlimited" >> %s' % sysconfig_dirsrv sysconfig_dirsrv_systemd = sysconfig_dirsrv + ".systemd" cmdline = 'egrep LimitCORE=infinity %s' % sysconfig_dirsrv_systemd p = os.popen(cmdline, "r") lcore = p.readline() if lcore == "": log.info('No LimitCORE in %s' % sysconfig_dirsrv_systemd) log.info('Adding it') cmdline = 'echo LimitCORE=infinity >> %s' % sysconfig_dirsrv_systemd topology_st.standalone.restart(timeout=10) ldif_file = topology_st.standalone.get_ldif_dir() + "/ticket48005.ldif" os.system('ls %s' % ldif_file) os.system('rm -f %s' % ldif_file) if hasattr(topology_st.standalone, 'prefix'): prefix = topology_st.standalone.prefix else: prefix = "" dbgen_prog = prefix + '/bin/dbgen.pl' log.info('dbgen_prog: %s' % dbgen_prog) os.system('%s -s %s -o %s -u -n 10000' % (dbgen_prog, SUFFIX, ldif_file)) cmdline = 'egrep dn: %s | wc -l' % ldif_file p = os.popen(cmdline, "r") dnnumstr = p.readline() num = int(dnnumstr) log.info("We have %d entries.\n", num) importTask = Tasks(topology_st.standalone) args = {TASK_WAIT: True} > importTask.importLDIF(SUFFIX, None, ldif_file, args) /export/tests/tickets/ticket48005_test.py:74: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.tasks.Tasks object at 0x7f61c28617c0> suffix = 'dc=example,dc=com', benamebase = None input_file = '/var/lib/dirsrv/slapd-standalone1/ldif/ticket48005.ldif' args = {'wait': True} def importLDIF(self, suffix=None, benamebase=None, input_file=None, args=None): ''' Import from a LDIF format a given 'suffix' (or 'benamebase' that stores that suffix). It uses an internal task to acheive this request. If 'suffix' and 'benamebase' are specified, it uses 'benamebase' first else 'suffix'. If both 'suffix' and 'benamebase' are missing it raise ValueError 'input_file' is the ldif input file @param suffix - suffix of the backend @param benamebase - 'commonname'/'cn' of the backend (e.g. 'userRoot') @param ldif_input - file that will contain the entries in LDIF format to import @param args - is a dictionary that contains modifier of the import task wait: True/[False] - If True, 'export' waits for the completion of the task before to return @return None @raise ValueError ''' if self.conn.state != DIRSRV_STATE_ONLINE: raise ValueError("Invalid Server State %s! 
Must be online" % self.conn.state) # Checking the parameters if not benamebase and not suffix: raise ValueError("Specify either bename or suffix") if not input_file: raise ValueError("input_file is mandatory") if not os.path.exists(input_file): > raise ValueError("Import file (%s) does not exist" % input_file) E ValueError: Import file (/var/lib/dirsrv/slapd-standalone1/ldif/ticket48005.ldif) does not exist /usr/local/lib/python3.8/site-packages/lib389/tasks.py:473: ValueError -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. ------------------------------Captured stderr call------------------------------ grep: /etc/sysconfig/dirsrv: No such file or directory grep: /etc/sysconfig/dirsrv.systemd: No such file or directory ls: cannot access '/var/lib/dirsrv/slapd-standalone1/ldif/ticket48005.ldif': No such file or directory sh: /bin/dbgen.pl: No such file or directory grep: /var/lib/dirsrv/slapd-standalone1/ldif/ticket48005.ldif: No such file or directory -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.tickets.ticket48005_test:ticket48005_test.py:31 Ticket 48005 setup... [32mINFO [0m tests.tickets.ticket48005_test:ticket48005_test.py:41 No ulimit -c in /etc/sysconfig/dirsrv [32mINFO [0m tests.tickets.ticket48005_test:ticket48005_test.py:42 Adding it [32mINFO [0m tests.tickets.ticket48005_test:ticket48005_test.py:50 No LimitCORE in /etc/sysconfig/dirsrv.systemd [32mINFO [0m tests.tickets.ticket48005_test:ticket48005_test.py:51 Adding it [32mINFO [0m tests.tickets.ticket48005_test:ticket48005_test.py:64 dbgen_prog: /bin/dbgen.pl [32mINFO [0m tests.tickets.ticket48005_test:ticket48005_test.py:70 We have 0 entries. | |||
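Two things go wrong in this setup: dbgen.pl is not shipped at /bin/dbgen.pl on this build, so no LDIF is ever generated, and importLDIF() then rejects the missing file. A hedged sketch that sidesteps the external dbgen.pl dependency by writing a small LDIF directly and importing it with the same Tasks call; the entry layout is a minimal assumption (root entry hard-coded for dc=example,dc=com), not the dbgen output format, and 'wait' is the literal value of TASK_WAIT shown in the traceback.

    # Sketch: generate a throwaway LDIF in the instance's ldif directory and
    # import it with the Tasks API the test already uses.
    import os
    from lib389.tasks import Tasks

    def make_and_import_ldif(inst, suffix, num_users=100):
        ldif_file = os.path.join(inst.get_ldif_dir(), "ticket48005.ldif")
        with open(ldif_file, "w") as f:
            # Root entry; assumes suffix is dc=example,dc=com as in this test.
            f.write("dn: %s\nobjectClass: top\nobjectClass: domain\ndc: example\n\n" % suffix)
            for i in range(num_users):
                f.write("dn: uid=user%d,%s\n" % (i, suffix))
                f.write("objectClass: top\nobjectClass: person\nobjectClass: inetOrgPerson\n")
                f.write("uid: user%d\ncn: user%d\nsn: user%d\n\n" % (i, i, i))
        Tasks(inst).importLDIF(suffix, None, ldif_file, {'wait': True})
        return ldif_file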
Failed | tickets/ticket48013_test.py::test_ticket48013 | 1.75 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c28d2760> def test_ticket48013(topology_st): ''' Content Synchonization: Test that invalid cookies are caught ''' cookies = ('#', '##', 'a#a#a', 'a#a#1') # Enable dynamic plugins try: topology_st.standalone.modify_s(DN_CONFIG, [(ldap.MOD_REPLACE, 'nsslapd-dynamic-plugins', b'on')]) except ldap.LDAPError as e: log.error('Failed to enable dynamic plugin! {}'.format(e.args[0]['desc'])) assert False # Enable retro changelog topology_st.standalone.plugins.enable(name=PLUGIN_RETRO_CHANGELOG) # Enbale content sync plugin > topology_st.standalone.plugins.enable(name=PLUGIN_REPL_SYNC) /export/tests/tickets/ticket48013_test.py:61: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/lib389/plugins.py:2105: in enable plugin.enable() /usr/local/lib/python3.8/site-packages/lib389/plugins.py:58: in enable self.set('nsslapd-pluginEnabled', 'on') /usr/local/lib/python3.8/site-packages/lib389/_mapped_object.py:446: in set return self._instance.modify_ext_s(self._dn, [(action, key, value)], /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:613: in modify_ext_s resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:764: in result3 resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4( /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:774: in result4 ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: in _ldap_call reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: in reraise raise exc_value _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c28d2730> func = <built-in method result4 of LDAP object at 0x7f61c28c3300> args = (7, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.SERVER_DOWN: {'result': -1, 'desc': "Can't contact LDAP server", 'ctrls': []} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: SERVER_DOWN -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... 
INFO lib389.SetupDs:setup.py:686 Completed installation for standalone1
INFO lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. | |||
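The connection drops while the content sync plugin is being switched on with nsslapd-dynamic-plugins enabled, so the invalid-cookie checks never run. A hedged sketch of the more conservative route: enable both plugins, then restart the instance instead of relying on dynamic plugin loading. The plugin name constants are the ones the test already uses, assumed to come from lib389._constants.

    # Sketch: enable retro changelog and content sync, then pick the change up
    # with a restart rather than nsslapd-dynamic-plugins.
    from lib389._constants import PLUGIN_RETRO_CHANGELOG, PLUGIN_REPL_SYNC

    def enable_sync_plugins(inst):
        inst.plugins.enable(name=PLUGIN_RETRO_CHANGELOG)
        inst.plugins.enable(name=PLUGIN_REPL_SYNC)
        inst.restart(timeout=10)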
Failed | tickets/ticket48194_test.py::test_run_1 | 7.36 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c2953a30> def test_run_1(topology_st): """ Check nsSSL3Ciphers: +all All ciphers are enabled except null. Note: default allowWeakCipher (i.e., off) for +all """ _header(topology_st, 'Test Case 2 - Check the ciphers availability for "+all" with default allowWeakCiphers') topology_st.standalone.simple_bind_s(DN_DM, PASSWORD) topology_st.standalone.modify_s(CONFIG_DN, [(ldap.MOD_REPLACE, 'nsslapd-errorlog-level', b'64')]) # Make sure allowWeakCipher is not set. topology_st.standalone.modify_s(ENCRYPTION_DN, [(ldap.MOD_DELETE, 'allowWeakCipher', None)]) log.info("\n######################### Restarting the server ######################\n") topology_st.standalone.stop(timeout=10) os.system('mv %s %s.48194_0' % (topology_st.standalone.errlog, topology_st.standalone.errlog)) os.system('touch %s' % (topology_st.standalone.errlog)) time.sleep(2) topology_st.standalone.start(timeout=120) > connectWithOpenssl(topology_st, 'DES-CBC3-SHA', False) /export/tests/tickets/ticket48194_test.py:158: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ topology_st = <lib389.topologies.TopologyMain object at 0x7f61c2953a30> cipher = 'DES-CBC3-SHA', expect = False def connectWithOpenssl(topology_st, cipher, expect): """ Connect with the given cipher Condition: If expect is True, the handshake should be successful. If expect is False, the handshake should be refused with access log: "Cannot communicate securely with peer: no common encryption algorithm(s)." """ log.info("Testing %s -- expect to handshake %s", cipher, "successfully" if expect else "failed") myurl = 'localhost:%s' % LDAPSPORT cmdline = ['/usr/bin/openssl', 's_client', '-connect', myurl, '-cipher', cipher] strcmdline = " ".join(cmdline) log.info("Running cmdline: %s", strcmdline) try: proc = subprocess.Popen(cmdline, stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT) except ValueError: log.info("%s failed: %s", cmdline, ValueError) proc.kill() while True: l = proc.stdout.readline() if l == b"": break if b'Cipher is' in l: log.info("Found: %s", l) if expect: if b'(NONE)' in l: assert False else: proc.stdin.close() assert True else: if b'(NONE)' in l: assert True else: proc.stdin.close() > assert False E assert False /export/tests/tickets/ticket48194_test.py:117: AssertionError -------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:ticket48194_test.py:40 ############################################### [32mINFO [0m lib389:ticket48194_test.py:41 ####### Test Case 2 - Check the ciphers availability for "+all" with default allowWeakCiphers [32mINFO [0m lib389:ticket48194_test.py:42 ############################################### [32mINFO [0m lib389.utils:ticket48194_test.py:151 ######################### Restarting the server ###################### [32mINFO [0m lib389.utils:ticket48194_test.py:86 Testing DES-CBC3-SHA -- expect to handshake failed [32mINFO [0m lib389.utils:ticket48194_test.py:92 Running cmdline: /usr/bin/openssl s_client -connect localhost:63601 -cipher DES-CBC3-SHA [32mINFO [0m lib389.utils:ticket48194_test.py:105 Found: b'New, TLSv1.3, Cipher is TLS_AES_128_GCM_SHA256\n' | |||
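The handshake here "succeeds" even though DES-CBC3-SHA was requested because openssl negotiates a TLS 1.3 suite (TLS_AES_128_GCM_SHA256 in the log above); s_client's -cipher option only constrains TLS 1.2 and earlier. A hedged sketch of pinning the probe to TLS 1.2 so the legacy cipher restriction is actually exercised; -tls1_2 is a standard s_client option and the subprocess handling mirrors connectWithOpenssl above.

    # Sketch: force TLS 1.2 so '-cipher' applies; under TLS 1.3 the ciphersuites
    # are negotiated separately and a handshake is always found.
    import subprocess

    def handshake_cipher(host_port, cipher):
        cmdline = ['/usr/bin/openssl', 's_client', '-connect', host_port,
                   '-tls1_2', '-cipher', cipher]
        proc = subprocess.run(cmdline, input=b'', stdout=subprocess.PIPE,
                              stderr=subprocess.STDOUT)
        for line in proc.stdout.splitlines():
            if b'Cipher is' in line:
                return line    # b'... Cipher is (NONE)' means the cipher was refused
        return None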
Failed | tickets/ticket48194_test.py::test_run_2 | 6.82 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c2953a30> def test_run_2(topology_st): """ Check nsSSL3Ciphers: +rsa_aes_128_sha,+rsa_aes_256_sha rsa_aes_128_sha, tls_rsa_aes_128_sha, rsa_aes_256_sha, tls_rsa_aes_256_sha are enabled. default allowWeakCipher """ _header(topology_st, 'Test Case 3 - Check the ciphers availability for "+rsa_aes_128_sha,+rsa_aes_256_sha" with default allowWeakCipher') topology_st.standalone.simple_bind_s(DN_DM, PASSWORD) topology_st.standalone.modify_s(ENCRYPTION_DN, [(ldap.MOD_REPLACE, 'nsSSL3Ciphers', b'+rsa_aes_128_sha,+rsa_aes_256_sha')]) log.info("\n######################### Restarting the server ######################\n") topology_st.standalone.stop(timeout=10) os.system('mv %s %s.48194_1' % (topology_st.standalone.errlog, topology_st.standalone.errlog)) os.system('touch %s' % (topology_st.standalone.errlog)) time.sleep(2) topology_st.standalone.start(timeout=120) connectWithOpenssl(topology_st, 'DES-CBC3-SHA', False) connectWithOpenssl(topology_st, 'AES256-SHA256', False) > connectWithOpenssl(topology_st, 'AES128-SHA', True) /export/tests/tickets/ticket48194_test.py:184: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ topology_st = <lib389.topologies.TopologyMain object at 0x7f61c2953a30> cipher = 'AES128-SHA', expect = True def connectWithOpenssl(topology_st, cipher, expect): """ Connect with the given cipher Condition: If expect is True, the handshake should be successful. If expect is False, the handshake should be refused with access log: "Cannot communicate securely with peer: no common encryption algorithm(s)." """ log.info("Testing %s -- expect to handshake %s", cipher, "successfully" if expect else "failed") myurl = 'localhost:%s' % LDAPSPORT cmdline = ['/usr/bin/openssl', 's_client', '-connect', myurl, '-cipher', cipher] strcmdline = " ".join(cmdline) log.info("Running cmdline: %s", strcmdline) try: proc = subprocess.Popen(cmdline, stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT) except ValueError: log.info("%s failed: %s", cmdline, ValueError) proc.kill() while True: l = proc.stdout.readline() if l == b"": break if b'Cipher is' in l: log.info("Found: %s", l) if expect: if b'(NONE)' in l: > assert False E assert False /export/tests/tickets/ticket48194_test.py:108: AssertionError -------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:ticket48194_test.py:40 ############################################### [32mINFO [0m lib389:ticket48194_test.py:41 ####### Test Case 3 - Check the ciphers availability for "+rsa_aes_128_sha,+rsa_aes_256_sha" with default allowWeakCipher [32mINFO [0m lib389:ticket48194_test.py:42 ############################################### [32mINFO [0m lib389.utils:ticket48194_test.py:175 ######################### Restarting the server ###################### [32mINFO [0m lib389.utils:ticket48194_test.py:86 Testing DES-CBC3-SHA -- expect to handshake failed [32mINFO [0m lib389.utils:ticket48194_test.py:92 Running cmdline: /usr/bin/openssl s_client -connect localhost:63601 -cipher DES-CBC3-SHA [32mINFO [0m lib389.utils:ticket48194_test.py:105 Found: b'New, (NONE), Cipher is (NONE)\n' [32mINFO [0m lib389.utils:ticket48194_test.py:86 Testing AES256-SHA256 -- expect to handshake failed [32mINFO [0m lib389.utils:ticket48194_test.py:92 Running cmdline: /usr/bin/openssl s_client -connect localhost:63601 -cipher AES256-SHA256 [32mINFO [0m lib389.utils:ticket48194_test.py:105 Found: b'New, (NONE), Cipher is (NONE)\n' [32mINFO 
lib389.utils:ticket48194_test.py:86 Testing AES128-SHA -- expect to handshake successfully INFO lib389.utils:ticket48194_test.py:92 Running cmdline: /usr/bin/openssl s_client -connect localhost:63601 -cipher AES128-SHA INFO lib389.utils:ticket48194_test.py:105 Found: b'New, (NONE), Cipher is (NONE)\n' | |||
Failed | tickets/ticket48194_test.py::test_run_4 | 7.25 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c2953a30> def test_run_4(topology_st): """ Check no nsSSL3Ciphers Default ciphers are enabled. default allowWeakCipher """ _header(topology_st, 'Test Case 5 - Check no nsSSL3Ciphers (-all) with default allowWeakCipher') topology_st.standalone.simple_bind_s(DN_DM, PASSWORD) topology_st.standalone.modify_s(ENCRYPTION_DN, [(ldap.MOD_DELETE, 'nsSSL3Ciphers', b'-all')]) log.info("\n######################### Restarting the server ######################\n") topology_st.standalone.stop(timeout=10) os.system('mv %s %s.48194_3' % (topology_st.standalone.errlog, topology_st.standalone.errlog)) os.system('touch %s' % (topology_st.standalone.errlog)) time.sleep(2) topology_st.standalone.start(timeout=120) > connectWithOpenssl(topology_st, 'DES-CBC3-SHA', False) /export/tests/tickets/ticket48194_test.py:228: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ topology_st = <lib389.topologies.TopologyMain object at 0x7f61c2953a30> cipher = 'DES-CBC3-SHA', expect = False def connectWithOpenssl(topology_st, cipher, expect): """ Connect with the given cipher Condition: If expect is True, the handshake should be successful. If expect is False, the handshake should be refused with access log: "Cannot communicate securely with peer: no common encryption algorithm(s)." """ log.info("Testing %s -- expect to handshake %s", cipher, "successfully" if expect else "failed") myurl = 'localhost:%s' % LDAPSPORT cmdline = ['/usr/bin/openssl', 's_client', '-connect', myurl, '-cipher', cipher] strcmdline = " ".join(cmdline) log.info("Running cmdline: %s", strcmdline) try: proc = subprocess.Popen(cmdline, stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT) except ValueError: log.info("%s failed: %s", cmdline, ValueError) proc.kill() while True: l = proc.stdout.readline() if l == b"": break if b'Cipher is' in l: log.info("Found: %s", l) if expect: if b'(NONE)' in l: assert False else: proc.stdin.close() assert True else: if b'(NONE)' in l: assert True else: proc.stdin.close() > assert False E assert False /export/tests/tickets/ticket48194_test.py:117: AssertionError -------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:ticket48194_test.py:40 ############################################### [32mINFO [0m lib389:ticket48194_test.py:41 ####### Test Case 5 - Check no nsSSL3Ciphers (-all) with default allowWeakCipher [32mINFO [0m lib389:ticket48194_test.py:42 ############################################### [32mINFO [0m lib389.utils:ticket48194_test.py:221 ######################### Restarting the server ###################### [32mINFO [0m lib389.utils:ticket48194_test.py:86 Testing DES-CBC3-SHA -- expect to handshake failed [32mINFO [0m lib389.utils:ticket48194_test.py:92 Running cmdline: /usr/bin/openssl s_client -connect localhost:63601 -cipher DES-CBC3-SHA [32mINFO [0m lib389.utils:ticket48194_test.py:105 Found: b'New, TLSv1.3, Cipher is TLS_AES_128_GCM_SHA256\n' | |||
Failed | tickets/ticket48194_test.py::test_run_5 | 7.10 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c2953a30> def test_run_5(topology_st): """ Check nsSSL3Ciphers: default Default ciphers are enabled. default allowWeakCipher """ _header(topology_st, 'Test Case 6 - Check default nsSSL3Ciphers (default setting) with default allowWeakCipher') topology_st.standalone.simple_bind_s(DN_DM, PASSWORD) topology_st.standalone.modify_s(ENCRYPTION_DN, [(ldap.MOD_REPLACE, 'nsSSL3Ciphers', b'default')]) log.info("\n######################### Restarting the server ######################\n") topology_st.standalone.stop(timeout=10) os.system('mv %s %s.48194_4' % (topology_st.standalone.errlog, topology_st.standalone.errlog)) os.system('touch %s' % (topology_st.standalone.errlog)) time.sleep(2) topology_st.standalone.start(timeout=120) > connectWithOpenssl(topology_st, 'DES-CBC3-SHA', False) /export/tests/tickets/ticket48194_test.py:250: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ topology_st = <lib389.topologies.TopologyMain object at 0x7f61c2953a30> cipher = 'DES-CBC3-SHA', expect = False def connectWithOpenssl(topology_st, cipher, expect): """ Connect with the given cipher Condition: If expect is True, the handshake should be successful. If expect is False, the handshake should be refused with access log: "Cannot communicate securely with peer: no common encryption algorithm(s)." """ log.info("Testing %s -- expect to handshake %s", cipher, "successfully" if expect else "failed") myurl = 'localhost:%s' % LDAPSPORT cmdline = ['/usr/bin/openssl', 's_client', '-connect', myurl, '-cipher', cipher] strcmdline = " ".join(cmdline) log.info("Running cmdline: %s", strcmdline) try: proc = subprocess.Popen(cmdline, stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT) except ValueError: log.info("%s failed: %s", cmdline, ValueError) proc.kill() while True: l = proc.stdout.readline() if l == b"": break if b'Cipher is' in l: log.info("Found: %s", l) if expect: if b'(NONE)' in l: assert False else: proc.stdin.close() assert True else: if b'(NONE)' in l: assert True else: proc.stdin.close() > assert False E assert False /export/tests/tickets/ticket48194_test.py:117: AssertionError -------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:ticket48194_test.py:40 ############################################### [32mINFO [0m lib389:ticket48194_test.py:41 ####### Test Case 6 - Check default nsSSL3Ciphers (default setting) with default allowWeakCipher [32mINFO [0m lib389:ticket48194_test.py:42 ############################################### [32mINFO [0m lib389.utils:ticket48194_test.py:243 ######################### Restarting the server ###################### [32mINFO [0m lib389.utils:ticket48194_test.py:86 Testing DES-CBC3-SHA -- expect to handshake failed [32mINFO [0m lib389.utils:ticket48194_test.py:92 Running cmdline: /usr/bin/openssl s_client -connect localhost:63601 -cipher DES-CBC3-SHA [32mINFO [0m lib389.utils:ticket48194_test.py:105 Found: b'New, TLSv1.3, Cipher is TLS_AES_128_GCM_SHA256\n' | |||
Failed | tickets/ticket48194_test.py::test_run_6 | 7.16 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c2953a30> def test_run_6(topology_st): """ Check nsSSL3Ciphers: +all,-TLS_RSA_WITH_AES_256_CBC_SHA256 All ciphers are disabled. default allowWeakCipher """ _header(topology_st, 'Test Case 7 - Check nsSSL3Ciphers: +all,-TLS_RSA_WITH_AES_256_CBC_SHA256 with default allowWeakCipher') topology_st.standalone.simple_bind_s(DN_DM, PASSWORD) topology_st.standalone.modify_s(ENCRYPTION_DN, [(ldap.MOD_REPLACE, 'nsSSL3Ciphers', b'+all,-TLS_RSA_WITH_AES_256_CBC_SHA256')]) log.info("\n######################### Restarting the server ######################\n") topology_st.standalone.stop(timeout=10) os.system('mv %s %s.48194_5' % (topology_st.standalone.errlog, topology_st.standalone.errlog)) os.system('touch %s' % (topology_st.standalone.errlog)) time.sleep(2) topology_st.standalone.start(timeout=120) > connectWithOpenssl(topology_st, 'DES-CBC3-SHA', False) /export/tests/tickets/ticket48194_test.py:274: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ topology_st = <lib389.topologies.TopologyMain object at 0x7f61c2953a30> cipher = 'DES-CBC3-SHA', expect = False def connectWithOpenssl(topology_st, cipher, expect): """ Connect with the given cipher Condition: If expect is True, the handshake should be successful. If expect is False, the handshake should be refused with access log: "Cannot communicate securely with peer: no common encryption algorithm(s)." """ log.info("Testing %s -- expect to handshake %s", cipher, "successfully" if expect else "failed") myurl = 'localhost:%s' % LDAPSPORT cmdline = ['/usr/bin/openssl', 's_client', '-connect', myurl, '-cipher', cipher] strcmdline = " ".join(cmdline) log.info("Running cmdline: %s", strcmdline) try: proc = subprocess.Popen(cmdline, stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT) except ValueError: log.info("%s failed: %s", cmdline, ValueError) proc.kill() while True: l = proc.stdout.readline() if l == b"": break if b'Cipher is' in l: log.info("Found: %s", l) if expect: if b'(NONE)' in l: assert False else: proc.stdin.close() assert True else: if b'(NONE)' in l: assert True else: proc.stdin.close() > assert False E assert False /export/tests/tickets/ticket48194_test.py:117: AssertionError -------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:ticket48194_test.py:40 ############################################### [32mINFO [0m lib389:ticket48194_test.py:41 ####### Test Case 7 - Check nsSSL3Ciphers: +all,-TLS_RSA_WITH_AES_256_CBC_SHA256 with default allowWeakCipher [32mINFO [0m lib389:ticket48194_test.py:42 ############################################### [32mINFO [0m lib389.utils:ticket48194_test.py:267 ######################### Restarting the server ###################### [32mINFO [0m lib389.utils:ticket48194_test.py:86 Testing DES-CBC3-SHA -- expect to handshake failed [32mINFO [0m lib389.utils:ticket48194_test.py:92 Running cmdline: /usr/bin/openssl s_client -connect localhost:63601 -cipher DES-CBC3-SHA [32mINFO [0m lib389.utils:ticket48194_test.py:105 Found: b'New, TLSv1.3, Cipher is TLS_AES_128_GCM_SHA256\n' | |||
Failed | tickets/ticket48194_test.py::test_run_8 | 7.50 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c2953a30> def test_run_8(topology_st): """ Check nsSSL3Ciphers: default + allowWeakCipher: off Strong Default ciphers are enabled. """ _header(topology_st, 'Test Case 9 - Check default nsSSL3Ciphers (default setting + allowWeakCipher: off)') topology_st.standalone.simple_bind_s(DN_DM, PASSWORD) topology_st.standalone.modify_s(ENCRYPTION_DN, [(ldap.MOD_REPLACE, 'nsSSL3Ciphers', b'default'), (ldap.MOD_REPLACE, 'allowWeakCipher', b'off')]) log.info("\n######################### Restarting the server ######################\n") topology_st.standalone.stop(timeout=10) os.system('mv %s %s.48194_7' % (topology_st.standalone.errlog, topology_st.standalone.errlog)) os.system('touch %s' % (topology_st.standalone.errlog)) time.sleep(2) topology_st.standalone.start(timeout=120) > connectWithOpenssl(topology_st, 'DES-CBC3-SHA', False) /export/tests/tickets/ticket48194_test.py:297: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ topology_st = <lib389.topologies.TopologyMain object at 0x7f61c2953a30> cipher = 'DES-CBC3-SHA', expect = False def connectWithOpenssl(topology_st, cipher, expect): """ Connect with the given cipher Condition: If expect is True, the handshake should be successful. If expect is False, the handshake should be refused with access log: "Cannot communicate securely with peer: no common encryption algorithm(s)." """ log.info("Testing %s -- expect to handshake %s", cipher, "successfully" if expect else "failed") myurl = 'localhost:%s' % LDAPSPORT cmdline = ['/usr/bin/openssl', 's_client', '-connect', myurl, '-cipher', cipher] strcmdline = " ".join(cmdline) log.info("Running cmdline: %s", strcmdline) try: proc = subprocess.Popen(cmdline, stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT) except ValueError: log.info("%s failed: %s", cmdline, ValueError) proc.kill() while True: l = proc.stdout.readline() if l == b"": break if b'Cipher is' in l: log.info("Found: %s", l) if expect: if b'(NONE)' in l: assert False else: proc.stdin.close() assert True else: if b'(NONE)' in l: assert True else: proc.stdin.close() > assert False E assert False /export/tests/tickets/ticket48194_test.py:117: AssertionError -------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:ticket48194_test.py:40 ############################################### [32mINFO [0m lib389:ticket48194_test.py:41 ####### Test Case 9 - Check default nsSSL3Ciphers (default setting + allowWeakCipher: off) [32mINFO [0m lib389:ticket48194_test.py:42 ############################################### [32mINFO [0m lib389.utils:ticket48194_test.py:290 ######################### Restarting the server ###################### [32mINFO [0m lib389.utils:ticket48194_test.py:86 Testing DES-CBC3-SHA -- expect to handshake failed [32mINFO [0m lib389.utils:ticket48194_test.py:92 Running cmdline: /usr/bin/openssl s_client -connect localhost:63601 -cipher DES-CBC3-SHA [32mINFO [0m lib389.utils:ticket48194_test.py:105 Found: b'New, TLSv1.3, Cipher is TLS_AES_128_GCM_SHA256\n' | |||
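Note on the ticket48194 group above: in every failing probe the captured log shows openssl negotiating the TLS 1.3 suite TLS_AES_128_GCM_SHA256 even though a specific legacy cipher was requested. With OpenSSL 1.1.1 and later, s_client's -cipher option only constrains TLS 1.2 and earlier, so the '(NONE)' check in connectWithOpenssl can never trigger once TLS 1.3 is negotiated. A minimal sketch (not part of the test suite; host and port are placeholders, OpenSSL >= 1.1.1 is assumed) of a probe that pins the handshake below TLS 1.3 so the requested cipher actually governs the negotiation:

    import subprocess

    def probe_cipher(host, port, cipher):
        """Report the 'Cipher is ...' line from an openssl s_client handshake,
        with TLS 1.3 disabled so the -cipher option is honoured."""
        cmdline = ['/usr/bin/openssl', 's_client',
                   '-connect', '%s:%s' % (host, port),
                   '-no_tls1_3',        # -cipher only governs TLS 1.2 and below
                   '-cipher', cipher]
        proc = subprocess.run(cmdline, input=b'', capture_output=True, timeout=30)
        for line in proc.stdout.splitlines():
            if b'Cipher is' in line:
                return line.decode(errors='replace')
        return None

    # probe_cipher('localhost', 63601, 'DES-CBC3-SHA') would then be expected to
    # report '(NONE)' when the server refuses the cipher, rather than a TLS 1.3 suite.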
Failed | tickets/ticket48228_test.py::test_ticket48228_test_global_policy | 1.40 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c2651040> user = 'uid=user1,dc=example,dc=com', passwd = 'password', times = 6 def update_passwd(topology_st, user, passwd, times): # Set the default value cpw = passwd for i in range(times): log.info(" Bind as {%s,%s}" % (user, cpw)) topology_st.standalone.simple_bind_s(user, cpw) # Now update the value for this iter. cpw = 'password%d' % i try: > topology_st.standalone.modify_s(user, [(ldap.MOD_REPLACE, 'userpassword', cpw.encode())]) /export/tests/tickets/ticket48228_test.py:136: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = ('uid=user1,dc=example,dc=com', [(2, 'userpassword', b'password0')]) kwargs = {} c_stack = [FrameInfo(frame=<frame at 0x7f61c2d2dc40, file '/usr/local/lib/python3.8/site-packages/lib389/__init__.py', line 180,...mbda>', code_context=[' self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(\n'], index=0), ...] frame = FrameInfo(frame=<frame at 0x5576b8b186a0, file '/export/tests/tickets/ticket48228_test.py', line 141, code update_pass...t=[" topology_st.standalone.modify_s(user, [(ldap.MOD_REPLACE, 'userpassword', cpw.encode())])\n"], index=0) def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c264afd0> dn = 'uid=user1,dc=example,dc=com' modlist = [(2, 'userpassword', b'password0')] def modify_s(self,dn,modlist): > return self.modify_ext_s(dn,modlist,None,None) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:640: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = ('uid=user1,dc=example,dc=com', [(2, 'userpassword', b'password0')], None, None) kwargs = {} def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c264afd0> dn = 'uid=user1,dc=example,dc=com' modlist = [(2, 'userpassword', b'password0')], serverctrls = None clientctrls = None def modify_ext_s(self,dn,modlist,serverctrls=None,clientctrls=None): msgid = self.modify_ext(dn,modlist,serverctrls,clientctrls) > resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:613: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (10,), kwargs = {'all': 1, 'timeout': -1} def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c264afd0>, msgid = 10, all = 1 timeout = -1, resp_ctrl_classes = None def result3(self,msgid=ldap.RES_ANY,all=1,timeout=None,resp_ctrl_classes=None): > resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4( msgid,all,timeout, add_ctrls=0,add_intermediates=0,add_extop=0, resp_ctrl_classes=resp_ctrl_classes ) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:764: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (10, 1, -1) kwargs = {'add_ctrls': 0, 'add_extop': 0, 'add_intermediates': 0, 'resp_ctrl_classes': None} def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c264afd0>, msgid = 10, all = 1 timeout = -1, add_ctrls = 0, add_intermediates = 0, add_extop = 0 resp_ctrl_classes = None def result4(self,msgid=ldap.RES_ANY,all=1,timeout=None,add_ctrls=0,add_intermediates=0,add_extop=0,resp_ctrl_classes=None): if timeout is None: timeout = self.timeout > ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:774: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (<built-in method result4 of LDAP object at 0x7f61c24e9bd0>, 10, 1, -1, 0, 0, ...) kwargs = {} def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c264afd0> func = <built-in method result4 of LDAP object at 0x7f61c24e9bd0> args = (10, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: result = func(*args,**kwargs) if __debug__ and self._trace_level>=2: if func.__name__!="unbind_ext": diagnostic_message_success = self._l.get_option(ldap.OPT_DIAGNOSTIC_MESSAGE) finally: self._ldap_object_lock.release() except LDAPError as e: exc_type,exc_value,exc_traceback = sys.exc_info() try: if 'info' not in e.args[0] and 'errno' in e.args[0]: e.args[0]['info'] = strerror(e.args[0]['errno']) except IndexError: pass if __debug__ and self._trace_level>=2: self._trace_file.write('=> LDAPError - %s: %s\n' % (e.__class__.__name__,str(e))) try: > reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ exc_type = <class 'ldap.INSUFFICIENT_ACCESS'> exc_value = INSUFFICIENT_ACCESS({'msgtype': 103, 'msgid': 10, 'result': 50, 'desc': 'Insufficient access', 'ctrls': [], 'info': "Insufficient 'write' privilege to the 'userPassword' attribute of entry 'uid=user1,dc=example,dc=com'.\n"}) exc_traceback = <traceback object at 0x7f61c267cc40> def reraise(exc_type, exc_value, exc_traceback): """Re-raise an exception given information from sys.exc_info() Note that unlike six.reraise, this does not support replacing the traceback. All arguments must come from a single sys.exc_info() call. """ # In Python 3, all exception info is contained in one object. 
> raise exc_value /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c264afd0> func = <built-in method result4 of LDAP object at 0x7f61c24e9bd0> args = (10, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.INSUFFICIENT_ACCESS: {'msgtype': 103, 'msgid': 10, 'result': 50, 'desc': 'Insufficient access', 'ctrls': [], 'info': "Insufficient 'write' privilege to the 'userPassword' attribute of entry 'uid=user1,dc=example,dc=com'.\n"} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: INSUFFICIENT_ACCESS During handling of the above exception, another exception occurred: topology_st = <lib389.topologies.TopologyMain object at 0x7f61c2651040> def test_ticket48228_test_global_policy(topology_st): """ Check global password policy """ log.info(' Set inhistory = 6') set_global_pwpolicy(topology_st, 6) log.info(' Bind as directory manager') log.info("Bind as %s" % DN_DM) topology_st.standalone.simple_bind_s(DN_DM, PASSWORD) log.info(' Add an entry' + USER1_DN) try: topology_st.standalone.add_s( Entry((USER1_DN, {'objectclass': "top person organizationalPerson inetOrgPerson".split(), 'sn': '1', 'cn': 'user 1', 'uid': 'user1', 'givenname': 'user', 'mail': 'user1@example.com', 'userpassword': 'password'}))) except ldap.LDAPError as e: log.fatal('test_ticket48228: Failed to add user' + USER1_DN + ': error ' + e.message['desc']) assert False log.info(' Update the password of ' + USER1_DN + ' 6 times') > update_passwd(topology_st, USER1_DN, 'password', 6) /export/tests/tickets/ticket48228_test.py:174: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ topology_st = <lib389.topologies.TopologyMain object at 0x7f61c2651040> user = 'uid=user1,dc=example,dc=com', passwd = 'password', times = 6 def update_passwd(topology_st, user, passwd, times): # Set the default value cpw = passwd for i in range(times): log.info(" Bind as {%s,%s}" % (user, cpw)) topology_st.standalone.simple_bind_s(user, cpw) # Now update the value for this iter. cpw = 'password%d' % i try: topology_st.standalone.modify_s(user, [(ldap.MOD_REPLACE, 'userpassword', cpw.encode())]) except ldap.LDAPError as e: log.fatal( > 'test_ticket48228: Failed to update the password ' + cpw + ' of user ' + user + ': error ' + e.message[ 'desc']) E AttributeError: 'INSUFFICIENT_ACCESS' object has no attribute 'message' /export/tests/tickets/ticket48228_test.py:139: AttributeError -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. | |||
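Two separate problems are visible in the ticket48228 failure above: the server rejects the self-service password change with INSUFFICIENT_ACCESS, and the test's error handler then crashes because python-ldap 3.x exceptions no longer carry a .message attribute; the diagnostic data lives in e.args[0] as a dict with 'desc' and 'info' keys. A minimal sketch, assuming only that the handler should report the server's message instead of raising AttributeError (conn, user_dn and log in the commented usage are placeholders for the test's own objects):

    import ldap

    def describe_ldap_error(e):
        """Return a readable description from a python-ldap 3.x exception.

        python-ldap no longer exposes e.message; the diagnostic dict is in
        e.args[0], typically with 'desc' and (when the server sends one) 'info'.
        """
        info = e.args[0] if e.args and isinstance(e.args[0], dict) else {}
        desc = info.get('desc', e.__class__.__name__)
        extra = info.get('info', '')
        return ('%s %s' % (desc, extra)).strip()

    # Usage inside the password-update loop of the test:
    #     try:
    #         conn.modify_s(user_dn, [(ldap.MOD_REPLACE, 'userPassword', b'newpassword')])
    #     except ldap.LDAPError as e:
    #         log.fatal('Failed to update the password of %s: %s' % (user_dn, describe_ldap_error(e)))
    #         raise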
Failed | tickets/ticket48234_test.py::test_ticket48234 | 0.61 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c2825ca0> def test_ticket48234(topology_st): """ Test aci which contains an extensible filter. shutdown """ log.info('Bind as root DN') try: topology_st.standalone.simple_bind_s(DN_DM, PASSWORD) except ldap.LDAPError as e: topology_st.standalone.log.error('Root DN failed to authenticate: ' + e.args[0]['desc']) assert False ouname = 'outest' username = 'admin' passwd = 'Password' deniedattr = 'telephonenumber' log.info('Add aci which contains extensible filter.') aci_text = ('(targetattr = "%s")' % (deniedattr) + '(target = "ldap:///%s")' % (DEFAULT_SUFFIX) + '(version 3.0;acl "admin-tel-matching-rule-outest";deny (all)' + '(userdn = "ldap:///%s??sub?(&(cn=%s)(ou:dn:=%s))");)' % (DEFAULT_SUFFIX, username, ouname)) try: topology_st.standalone.modify_s(DEFAULT_SUFFIX, [(ldap.MOD_ADD, 'aci', ensure_bytes(aci_text))]) except ldap.LDAPError as e: log.error('Failed to add aci: (%s) error %s' % (aci_text, e.args[0]['desc'])) assert False log.info('Add entries ...') for idx in range(0, 2): ou0 = 'OU%d' % idx log.info('adding %s under %s...' % (ou0, DEFAULT_SUFFIX)) add_ou_entry(topology_st.standalone, ou0, DEFAULT_SUFFIX) parent = 'ou=%s,%s' % (ou0, DEFAULT_SUFFIX) log.info('adding %s under %s...' % (ouname, parent)) add_ou_entry(topology_st.standalone, ouname, parent) for idx in range(0, 2): parent = 'ou=%s,ou=OU%d,%s' % (ouname, idx, DEFAULT_SUFFIX) log.info('adding %s under %s...' % (username, parent)) add_user_entry(topology_st.standalone, username, passwd, parent) binddn = 'cn=%s,%s' % (username, parent) log.info('Bind as user %s' % binddn) try: topology_st.standalone.simple_bind_s(binddn, passwd) except ldap.LDAPError as e: topology_st.standalone.log.error(bindn + ' failed to authenticate: ' + e.args[0]['desc']) assert False filter = '(cn=%s)' % username try: entries = topology_st.standalone.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, filter, [deniedattr, 'dn']) > assert 2 == len(entries) E assert 2 == 0 E +2 E -0 /export/tests/tickets/ticket48234_test.py:83: AssertionError -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.tickets.ticket48234_test:ticket48234_test.py:35 Bind as root DN [32mINFO [0m tests.tickets.ticket48234_test:ticket48234_test.py:46 Add aci which contains extensible filter. [32mINFO [0m tests.tickets.ticket48234_test:ticket48234_test.py:58 Add entries ... [32mINFO [0m tests.tickets.ticket48234_test:ticket48234_test.py:61 adding OU0 under dc=example,dc=com... [32mINFO [0m tests.tickets.ticket48234_test:ticket48234_test.py:64 adding outest under ou=OU0,dc=example,dc=com... [32mINFO [0m tests.tickets.ticket48234_test:ticket48234_test.py:61 adding OU1 under dc=example,dc=com... [32mINFO [0m tests.tickets.ticket48234_test:ticket48234_test.py:64 adding outest under ou=OU1,dc=example,dc=com... [32mINFO [0m tests.tickets.ticket48234_test:ticket48234_test.py:69 adding admin under ou=outest,ou=OU0,dc=example,dc=com... [32mINFO [0m tests.tickets.ticket48234_test:ticket48234_test.py:69 adding admin under ou=outest,ou=OU1,dc=example,dc=com... 
INFO tests.tickets.ticket48234_test:ticket48234_test.py:73 Bind as user cn=admin,ou=outest,ou=OU1,dc=example,dc=com | |||
Failed | tickets/ticket48266_test.py::test_ticket48266_count_csn_evaluation | 0.20 | |
self = <dateutil.parser._parser.parser object at 0x7f61d7f09d90> timestr = '2020-9-31 00-08-31 -0400' default = datetime.datetime(2020, 10, 31, 0, 0), ignoretz = False tzinfos = None, kwargs = {} res = _result(year=2020, month=9, day=31, hour=0, tzoffset=-14400) skipped_tokens = None def parse(self, timestr, default=None, ignoretz=False, tzinfos=None, **kwargs): """ Parse the date/time string into a :class:`datetime.datetime` object. :param timestr: Any date/time string using the supported formats. :param default: The default datetime object, if this is a datetime object and not ``None``, elements specified in ``timestr`` replace elements in the default object. :param ignoretz: If set ``True``, time zones in parsed strings are ignored and a naive :class:`datetime.datetime` object is returned. :param tzinfos: Additional time zone names / aliases which may be present in the string. This argument maps time zone names (and optionally offsets from those time zones) to time zones. This parameter can be a dictionary with timezone aliases mapping time zone names to time zones or a function taking two parameters (``tzname`` and ``tzoffset``) and returning a time zone. The timezones to which the names are mapped can be an integer offset from UTC in seconds or a :class:`tzinfo` object. .. doctest:: :options: +NORMALIZE_WHITESPACE >>> from dateutil.parser import parse >>> from dateutil.tz import gettz >>> tzinfos = {"BRST": -7200, "CST": gettz("America/Chicago")} >>> parse("2012-01-19 17:21:00 BRST", tzinfos=tzinfos) datetime.datetime(2012, 1, 19, 17, 21, tzinfo=tzoffset(u'BRST', -7200)) >>> parse("2012-01-19 17:21:00 CST", tzinfos=tzinfos) datetime.datetime(2012, 1, 19, 17, 21, tzinfo=tzfile('/usr/share/zoneinfo/America/Chicago')) This parameter is ignored if ``ignoretz`` is set. :param \\*\\*kwargs: Keyword arguments as passed to ``_parse()``. :return: Returns a :class:`datetime.datetime` object or, if the ``fuzzy_with_tokens`` option is ``True``, returns a tuple, the first element being a :class:`datetime.datetime` object, the second a tuple containing the fuzzy tokens. :raises ParserError: Raised for invalid or unknown string format, if the provided :class:`tzinfo` is not in a valid format, or if an invalid date would be created. :raises TypeError: Raised for non-string or character stream input. :raises OverflowError: Raised if the parsed date exceeds the largest valid C integer on your system. """ if default is None: default = datetime.datetime.now().replace(hour=0, minute=0, second=0, microsecond=0) res, skipped_tokens = self._parse(timestr, **kwargs) if res is None: raise ParserError("Unknown string format: %s", timestr) if len(res) == 0: raise ParserError("String does not contain a date: %s", timestr) try: > ret = self._build_naive(res, default) /usr/local/lib/python3.8/site-packages/dateutil/parser/_parser.py:655: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <dateutil.parser._parser.parser object at 0x7f61d7f09d90> res = _result(year=2020, month=9, day=31, hour=0, tzoffset=-14400) default = datetime.datetime(2020, 10, 31, 0, 0) def _build_naive(self, res, default): repl = {} for attr in ("year", "month", "day", "hour", "minute", "second", "microsecond"): value = getattr(res, attr) if value is not None: repl[attr] = value if 'day' not in repl: # If the default day exceeds the last day of the month, fall back # to the end of the month. 
cyear = default.year if res.year is None else res.year cmonth = default.month if res.month is None else res.month cday = default.day if res.day is None else res.day if cday > monthrange(cyear, cmonth)[1]: repl['day'] = monthrange(cyear, cmonth)[1] > naive = default.replace(**repl) E ValueError: day is out of range for month /usr/local/lib/python3.8/site-packages/dateutil/parser/_parser.py:1241: ValueError The above exception was the direct cause of the following exception: topology_m2 = <lib389.topologies.TopologyMain object at 0x7f61c2192790> entries = None def test_ticket48266_count_csn_evaluation(topology_m2, entries): ents = topology_m2.ms["master1"].agreement.list(suffix=SUFFIX) assert len(ents) == 1 > first_csn = _get_first_not_replicated_csn(topology_m2) /export/tests/tickets/ticket48266_test.py:176: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /export/tests/tickets/ticket48266_test.py:139: in _get_first_not_replicated_csn found_op = topology_m2.ms['master1'].ds_access_log.parse_line(found_ops[-1]) /usr/local/lib/python3.8/site-packages/lib389/dirsrv_log.py:293: in parse_line action['datetime'] = self.parse_timestamp(action['timestamp']) /usr/local/lib/python3.8/site-packages/lib389/dirsrv_log.py:150: in parse_timestamp dt = dt_parse(dt_str) /usr/local/lib/python3.8/site-packages/dateutil/parser/_parser.py:1374: in parse return DEFAULTPARSER.parse(timestr, **kwargs) /usr/local/lib/python3.8/site-packages/dateutil/parser/_parser.py:657: in parse six.raise_from(ParserError(e.args[0] + ": %s", timestr), e) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ value = None, from_value = ValueError('day is out of range for month') > ??? E dateutil.parser._parser.ParserError: day is out of range for month: 2020-9-31 00-08-31 -0400 <string>:3: ParserError -------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:ticket48266_test.py:125 dn: cn=new_account2,dc=example,dc=com [32mINFO [0m tests.tickets.ticket48266_test:ticket48266_test.py:134 ############# cn=new_account2,dc=example,dc=com | |||
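The ticket48266 failure never reaches the replication logic itself: lib389 reconstructs the access-log timestamp as '2020-9-31 00-08-31 -0400', and 31 September is not a valid calendar date, so dateutil raises 'day is out of range for month' while building the datetime. A small, self-contained reproduction with a defensive wrapper (the wrapper is a sketch, not lib389 code; dateutil's ParserError is a ValueError subclass, so catching ValueError covers both):

    from dateutil import parser

    def parse_log_timestamp(ts):
        """Parse an access-log style timestamp, returning None for values that
        do not form a valid calendar date instead of propagating the error."""
        try:
            return parser.parse(ts)
        except (ValueError, OverflowError):   # ParserError subclasses ValueError
            return None

    print(parse_log_timestamp('2020-10-31 00-08-31 -0400'))  # a real date parses
    print(parse_log_timestamp('2020-9-31 00-08-31 -0400'))   # 31 September -> None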
Failed | tickets/ticket48325_test.py::test_ticket48325 | 0.03 | |
topology_m1h1c1 = <lib389.topologies.TopologyMain object at 0x7f61c2401a90> def test_ticket48325(topology_m1h1c1): """ Test that the RUV element order is correctly maintained when promoting a hub or consumer. """ # # Promote consumer to master # C1 = topology_m1h1c1.cs["consumer1"] M1 = topology_m1h1c1.ms["master1"] H1 = topology_m1h1c1.hs["hub1"] repl = ReplicationManager(DEFAULT_SUFFIX) > repl._ensure_changelog(C1) /export/tests/tickets/ticket48325_test.py:53: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/lib389/replica.py:1928: in _ensure_changelog cl.create(properties={ /usr/local/lib/python3.8/site-packages/lib389/_mapped_object.py:971: in create return self._create(rdn, properties, basedn, ensure=False) /usr/local/lib/python3.8/site-packages/lib389/_mapped_object.py:946: in _create self._instance.add_ext_s(e, serverctrls=self._server_controls, clientctrls=self._client_controls, escapehatch='i am sure') /usr/local/lib/python3.8/site-packages/lib389/__init__.py:176: in inner return f(ent.dn, ent.toTupleList(), *args[2:]) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:425: in add_ext_s resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:764: in result3 resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4( /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:774: in result4 ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: in _ldap_call reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: in reraise raise exc_value _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c255d8b0> func = <built-in method result4 of LDAP object at 0x7f61c2401570> args = (15, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.UNWILLING_TO_PERFORM: {'msgtype': 105, 'msgid': 15, 'result': 53, 'desc': 'Server is unwilling to perform', 'ctrls': [], 'info': 'Changelog configuration is part of the backend configuration'} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: UNWILLING_TO_PERFORM -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... 
[32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39001, 'ldap-secureport': 63701, 'server-id': 'master1', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for hub1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39101, 'ldap-secureport': 63801, 'server-id': 'hub1', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for consumer1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39201, 'ldap-secureport': 63901, 'server-id': 'consumer1', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.topologies:topologies.py:524 Creating replication topology. [32mINFO [0m lib389.replica:replica.py:2084 SUCCESS: bootstrap to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39101 completed [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39101 is was created [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39101 is NOT working (expect ff2eddb1-7bc5-440b-9f34-db54d5bc1b21 / got description=None) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39101 is working [32mINFO [0m lib389.replica:replica.py:2211 SUCCESS: joined consumer from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39101 [32mINFO [0m lib389.replica:replica.py:2084 SUCCESS: bootstrap to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39201 completed [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39101 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39201 is was created [32mINFO [0m lib389.replica:replica.py:2268 SUCCESS: joined consumer from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39101 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39201 [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39201 is NOT working (expect 89a0a4ed-28ed-407e-8191-7484357ac6ce / got description=ff2eddb1-7bc5-440b-9f34-db54d5bc1b21) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39201 is working | |||
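The UNWILLING_TO_PERFORM in ticket48325 is the server refusing a global changelog entry: the 'info' field states that the changelog configuration is part of the backend configuration, which is how this 389-ds-base 2.0 build manages it, so the old-style changelog creation attempted by _ensure_changelog is rejected. A minimal sketch, assuming the test only needs to treat that refusal as already satisfied (the wrapper and its name are hypothetical, not lib389 API):

    import ldap

    def ensure_changelog_compat(repl, instance):
        """Run ReplicationManager._ensure_changelog, tolerating servers where the
        changelog is managed inside the backend configuration (389-ds-base 2.x)."""
        try:
            repl._ensure_changelog(instance)
        except ldap.UNWILLING_TO_PERFORM as e:
            info = e.args[0].get('info', '') if e.args and isinstance(e.args[0], dict) else ''
            if 'backend configuration' not in info:
                raise
            # Nothing to create: the backend already owns the changelog settings.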
Failed | tickets/ticket48342_test.py::test_ticket4026 | 93.85 | |
topology_m3 = <lib389.topologies.TopologyMain object at 0x7f61c2274af0> def test_ticket4026(topology_m3): """Write your replication testcase here. To access each DirSrv instance use: topology_m3.ms["master1"], topology_m3.ms["master2"], ..., topology_m3.hub1, ..., topology_m3.consumer1, ... Also, if you need any testcase initialization, please, write additional fixture for that(include finalizer). """ try: topology_m3.ms["master1"].add_s(Entry((PEOPLE_DN, { 'objectclass': "top extensibleObject".split(), 'ou': 'people'}))) except ldap.ALREADY_EXISTS: pass topology_m3.ms["master1"].add_s(Entry(('ou=ranges,' + SUFFIX, { 'objectclass': 'top organizationalunit'.split(), 'ou': 'ranges' }))) for cpt in range(MAX_ACCOUNTS): name = "user%d" % (cpt) topology_m3.ms["master1"].add_s(Entry(("uid=%s,%s" % (name, PEOPLE_DN), { 'objectclass': 'top posixAccount extensibleObject'.split(), 'uid': name, 'cn': name, 'uidNumber': '1', 'gidNumber': '1', 'homeDirectory': '/home/%s' % name }))) # make master3 having more free slots that master2 # so master1 will contact master3 _dna_config(topology_m3.ms["master1"], nextValue=100, maxValue=10) _dna_config(topology_m3.ms["master2"], nextValue=200, maxValue=10) _dna_config(topology_m3.ms["master3"], nextValue=300, maxValue=3000) # Turn on lots of error logging now. mod = [(ldap.MOD_REPLACE, 'nsslapd-errorlog-level', b'16384')] # mod = [(ldap.MOD_REPLACE, 'nsslapd-errorlog-level', '1')] topology_m3.ms["master1"].modify_s('cn=config', mod) topology_m3.ms["master2"].modify_s('cn=config', mod) topology_m3.ms["master3"].modify_s('cn=config', mod) # We need to wait for the event in dna.c to fire to start the servers # see dna.c line 899 time.sleep(60) # add on master1 users with description DNA for cpt in range(10): name = "user_with_desc1_%d" % (cpt) topology_m3.ms["master1"].add_s(Entry(("uid=%s,%s" % (name, PEOPLE_DN), { 'objectclass': 'top posixAccount extensibleObject'.split(), 'uid': name, 'cn': name, 'description': '-1', 'uidNumber': '1', 'gidNumber': '1', 'homeDirectory': '/home/%s' % name }))) # give time to negociate master1 <--> master3 time.sleep(10) # add on master1 users with description DNA for cpt in range(11, 20): name = "user_with_desc1_%d" % (cpt) > topology_m3.ms["master1"].add_s(Entry(("uid=%s,%s" % (name, PEOPLE_DN), { 'objectclass': 'top posixAccount extensibleObject'.split(), 'uid': name, 'cn': name, 'description': '-1', 'uidNumber': '1', 'gidNumber': '1', 'homeDirectory': '/home/%s' % name }))) /export/tests/tickets/ticket48342_test.py:118: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/lib389/__init__.py:176: in inner return f(ent.dn, ent.toTupleList(), *args[2:]) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:439: in add_s return self.add_ext_s(dn,modlist,None,None) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:178: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:425: in add_ext_s resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:764: in result3 resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4( /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:774: in result4 
ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: in _ldap_call reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: in reraise raise exc_value _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c22740d0> func = <built-in method result4 of LDAP object at 0x7f61d5759270> args = (15, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.OPERATIONS_ERROR: {'msgtype': 105, 'msgid': 15, 'result': 1, 'desc': 'Operations error', 'ctrls': [], 'info': 'Allocation of a new value for range cn=dna config,cn=distributed numeric assignment plugin,cn=plugins,cn=config failed! Unable to proceed.'} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: OPERATIONS_ERROR -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39001, 'ldap-secureport': 63701, 'server-id': 'master1', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master2 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39002, 'ldap-secureport': 63702, 'server-id': 'master2', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master3 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39003, 'ldap-secureport': 63703, 'server-id': 'master3', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.topologies:topologies.py:142 Creating replication topology. [32mINFO [0m lib389.topologies:topologies.py:156 Joining master master2 to master1 ... 
[32mINFO [0m lib389.replica:replica.py:2084 SUCCESS: bootstrap to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 completed [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is was created [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is was created [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 266c1f5b-b757-42ea-adf4-785f6eff810b / got description=None) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 550d249d-5cc1-4643-bcb1-e50a69fde336 / got description=266c1f5b-b757-42ea-adf4-785f6eff810b) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2153 SUCCESS: joined master from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 [32mINFO [0m lib389.topologies:topologies.py:156 Joining master master3 to master1 ... 
[32mINFO [0m lib389.replica:replica.py:2084 SUCCESS: bootstrap to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 completed [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is was created [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is was created [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 7a6652a3-a163-4550-9c3e-09085ebd1c53 / got description=58a2af9d-db2a-4014-a014-34df9045f59d) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 7a6652a3-a163-4550-9c3e-09085ebd1c53 / got description=58a2af9d-db2a-4014-a014-34df9045f59d) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 7a6652a3-a163-4550-9c3e-09085ebd1c53 / got description=58a2af9d-db2a-4014-a014-34df9045f59d) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 7a6652a3-a163-4550-9c3e-09085ebd1c53 / got description=58a2af9d-db2a-4014-a014-34df9045f59d) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2153 SUCCESS: joined master from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master1 to master2 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 already exists [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master1 to master3 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 already exists [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master2 to master1 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 already exists [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master2 to master3 ... 
[32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is was created [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master3 to master1 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 already exists [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master3 to master2 ... [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is was created -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.tickets.ticket48342_test:ticket48342_test.py:19 Add dna plugin config entry...ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 [32mINFO [0m tests.tickets.ticket48342_test:ticket48342_test.py:37 Enable the DNA plugin... [32mINFO [0m tests.tickets.ticket48342_test:ticket48342_test.py:44 Restarting the server... [32mINFO [0m tests.tickets.ticket48342_test:ticket48342_test.py:19 Add dna plugin config entry...ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 [32mINFO [0m tests.tickets.ticket48342_test:ticket48342_test.py:37 Enable the DNA plugin... [32mINFO [0m tests.tickets.ticket48342_test:ticket48342_test.py:44 Restarting the server... [32mINFO [0m tests.tickets.ticket48342_test:ticket48342_test.py:19 Add dna plugin config entry...ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 [32mINFO [0m tests.tickets.ticket48342_test:ticket48342_test.py:37 Enable the DNA plugin... [32mINFO [0m tests.tickets.ticket48342_test:ticket48342_test.py:44 Restarting the server... | |||
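The OPERATIONS_ERROR above is raised while DNA allocation is still unsettled after the fixed time.sleep(60). A minimal sketch of a bounded poll that could stand in for the blind sleep, assuming a plain python-ldap connection and that the shared range entries under ou=ranges expose a dnaRemainingValues attribute (both are assumptions, not taken from this report):

import time
import ldap

def wait_for_dna_shared_config(conn, ranges_dn, min_entries=1, timeout=60):
    # Poll the DNA shared-config container until at least min_entries
    # range entries are visible, or give up after `timeout` seconds.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            entries = conn.search_s(ranges_dn, ldap.SCOPE_ONELEVEL,
                                    '(objectclass=*)', ['dnaRemainingValues'])
            if len(entries) >= min_entries:
                return entries
        except ldap.NO_SUCH_OBJECT:
            pass  # container not created or not replicated yet
        time.sleep(2)
    raise AssertionError('DNA shared config never appeared under %s' % ranges_dn)

Usage in this test would look like wait_for_dna_shared_config(topology_m3.ms["master1"], 'ou=ranges,' + SUFFIX) in place of the fixed sleep.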
Failed | tickets/ticket48637_test.py::test_ticket48637 | 4.90 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c27fd1f0> def test_ticket48637(topology_st): """Test for entry cache corruption This requires automember and managed entry plugins to be configured. Then remove the group that automember would use to trigger a failure when adding a new entry. Automember fails, and then managed entry also fails. Make sure a base search on the entry returns error 32 """ if DEBUGGING: # Add debugging steps(if any)... pass # # Add our setup entries # try: topology_st.standalone.add_s(Entry((PEOPLE_OU, { 'objectclass': 'top organizationalunit'.split(), 'ou': 'people'}))) except ldap.ALREADY_EXISTS: pass except ldap.LDAPError as e: log.fatal('Failed to add people ou: ' + str(e)) assert False try: topology_st.standalone.add_s(Entry((GROUP_OU, { 'objectclass': 'top organizationalunit'.split(), 'ou': 'groups'}))) except ldap.ALREADY_EXISTS: pass except ldap.LDAPError as e: log.fatal('Failed to add groups ou: ' + str(e)) assert False try: topology_st.standalone.add_s(Entry((MEP_OU, { 'objectclass': 'top extensibleObject'.split(), 'ou': 'mep'}))) except ldap.LDAPError as e: log.fatal('Failed to add MEP ou: ' + str(e)) assert False try: topology_st.standalone.add_s(Entry((MEP_TEMPLATE, { 'objectclass': 'top mepTemplateEntry'.split(), 'cn': 'mep template', 'mepRDNAttr': 'cn', 'mepStaticAttr': 'objectclass: groupofuniquenames', 'mepMappedAttr': 'cn: $uid'}))) except ldap.LDAPError as e: log.fatal('Failed to add MEP ou: ' + str(e)) assert False # # Configure automember # try: topology_st.standalone.add_s(Entry((AUTO_DN, { 'cn': 'All Users', 'objectclass': ['top', 'autoMemberDefinition'], 'autoMemberScope': 'dc=example,dc=com', 'autoMemberFilter': 'objectclass=person', 'autoMemberDefaultGroup': GROUP_DN, 'autoMemberGroupingAttr': 'uniquemember:dn'}))) except ldap.LDAPError as e: log.fatal('Failed to configure automember plugin : ' + str(e)) assert False # # Configure managed entry plugin # try: topology_st.standalone.add_s(Entry((MEP_DN, { 'cn': 'MEP Definition', 'objectclass': ['top', 'extensibleObject'], 'originScope': 'ou=people,dc=example,dc=com', 'originFilter': 'objectclass=person', 'managedBase': 'ou=groups,dc=example,dc=com', 'managedTemplate': MEP_TEMPLATE}))) except ldap.LDAPError as e: log.fatal('Failed to configure managed entry plugin : ' + str(e)) assert False # # Restart DS # topology_st.standalone.restart(timeout=30) # # Add entry that should fail since the automember group does not exist # try: topology_st.standalone.add_s(Entry((USER_DN, { 'uid': 'test', 'objectclass': ['top', 'person', 'extensibleObject'], 'sn': 'test', 'cn': 'test'}))) except ldap.LDAPError as e: pass # # Search for the entry - it should not be returned # try: entry = topology_st.standalone.search_s(USER_DN, ldap.SCOPE_SUBTREE, 'objectclass=*') if entry: log.fatal('Entry was incorrectly returned') > assert False E assert False /export/tests/tickets/ticket48637_test.py:139: AssertionError -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. 
-------------------------------Captured log call-------------------------------- CRITICAL tests.tickets.ticket48637_test:ticket48637_test.py:138 Entry was incorrectly returned | |||
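The docstring above expects the half-created entry to come back as error 32 (noSuchObject). A minimal sketch, assuming a plain python-ldap connection, of stating that expectation directly with pytest.raises instead of inspecting the returned list (the helper name is illustrative):

import ldap
import pytest

def assert_entry_absent(conn, dn):
    # A base-scope search on a DN that does not exist raises NO_SUCH_OBJECT
    # (LDAP result code 32) in python-ldap.
    with pytest.raises(ldap.NO_SUCH_OBJECT):
        conn.search_s(dn, ldap.SCOPE_BASE, '(objectclass=*)')

Called as assert_entry_absent(topology_st.standalone, USER_DN), it replaces the manual "if entry: assert False" check at the end of the test.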
Failed | tickets/ticket48784_test.py::test_ticket48784 | 33.95 | |
Fixture "add_entry" called directly. Fixtures are not meant to be called directly, but are created automatically when test functions request them as parameters. See https://docs.pytest.org/en/latest/fixture.html for more information about fixtures, and https://docs.pytest.org/en/latest/deprecations.html#calling-fixtures-directly about how to update your code. -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39001, 'ldap-secureport': 63701, 'server-id': 'master1', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master2 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39002, 'ldap-secureport': 63702, 'server-id': 'master2', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.topologies:topologies.py:142 Creating replication topology. [32mINFO [0m lib389.topologies:topologies.py:156 Joining master master2 to master1 ... [32mINFO [0m lib389.replica:replica.py:2084 SUCCESS: bootstrap to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 completed [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is was created [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is was created [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect d5660df7-4c7a-4503-9690-a337acc1156a / got description=None) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect b363c5a2-8588-4890-a511-89800b1d3596 / got description=d5660df7-4c7a-4503-9690-a337acc1156a) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2153 SUCCESS: joined master from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master1 to master2 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 already exists [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master2 to master1 ... 
[32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 already exists -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.tickets.ticket48784_test:ticket48784_test.py:90 Ticket 48784 - Allow usage of OpenLDAP libraries that don't use NSS for crypto [32mINFO [0m tests.tickets.ticket48784_test:ticket48784_test.py:50 ######################### Configure SSL/TLS agreements ###################### [32mINFO [0m tests.tickets.ticket48784_test:ticket48784_test.py:51 ######################## master1 <-- startTLS -> master2 ##################### [32mINFO [0m tests.tickets.ticket48784_test:ticket48784_test.py:53 ##### Update the agreement of master1 [32mINFO [0m tests.tickets.ticket48784_test:ticket48784_test.py:58 ##### Update the agreement of master2 [32mINFO [0m tests.tickets.ticket48784_test:ticket48784_test.py:68 ######################### Configure SSL/TLS agreements Done ###################### | |||
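The failure above is pytest refusing a direct add_entry() call. The general pattern the error message points at is to request the fixture as a test parameter and let pytest invoke it; the fixture body below is an illustrative placeholder, not the real add_entry fixture from this suite:

import pytest

@pytest.fixture
def add_entry():
    # Create whatever entry the test needs and hand back something useful,
    # e.g. its DN (placeholder value for illustration only).
    return 'uid=test_user,dc=example,dc=com'

def test_uses_fixture(add_entry):
    # pytest calls the fixture and injects its return value here;
    # the test itself never calls add_entry() directly.
    assert add_entry.startswith('uid=')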
Failed | tickets/ticket48798_test.py::test_ticket48798 | 8.91 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c22b1be0> def test_ticket48798(topology_st): """ Test DH param sizes offered by DS. """ topology_st.standalone.enable_tls() # Confirm that we have a connection, and that it has DH # Open a socket to the port. # Check the security settings. > size = check_socket_dh_param_size(topology_st.standalone.host, topology_st.standalone.sslport) /export/tests/tickets/ticket48798_test.py:46: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /export/tests/tickets/ticket48798_test.py:23: in check_socket_dh_param_size output = check_output(cmd, shell=True) /usr/lib64/python3.8/subprocess.py:411: in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = None, capture_output = False, timeout = None, check = True popenargs = ('echo quit | openssl s_client -connect ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:63601 -msg -cipher DH | grep -A 1 ServerKeyExchange',) kwargs = {'shell': True, 'stdout': -1} process = <subprocess.Popen object at 0x7f61c22bcac0>, stdout = b'' stderr = None, retcode = 1 def run(*popenargs, input=None, capture_output=False, timeout=None, check=False, **kwargs): """Run command with arguments and return a CompletedProcess instance. The returned instance will have attributes args, returncode, stdout and stderr. By default, stdout and stderr are not captured, and those attributes will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture them. If check is True and the exit code was non-zero, it raises a CalledProcessError. The CalledProcessError object will have the return code in the returncode attribute, and output & stderr attributes if those streams were captured. If timeout is given, and the process takes too long, a TimeoutExpired exception will be raised. There is an optional argument "input", allowing you to pass bytes or a string to the subprocess's stdin. If you use this argument you may not also use the Popen constructor's "stdin" argument, as it will be used internally. By default, all communication is in bytes, and therefore any "input" should be bytes, and the stdout and stderr will be bytes. If in text mode, any "input" should be a string, and stdout and stderr will be strings decoded according to locale encoding, or by "encoding" if set. Text mode is triggered by setting any of text, encoding, errors or universal_newlines. The other arguments are the same as for the Popen constructor. """ if input is not None: if kwargs.get('stdin') is not None: raise ValueError('stdin and input arguments may not both be used.') kwargs['stdin'] = PIPE if capture_output: if kwargs.get('stdout') is not None or kwargs.get('stderr') is not None: raise ValueError('stdout and stderr arguments may not be used ' 'with capture_output.') kwargs['stdout'] = PIPE kwargs['stderr'] = PIPE with Popen(*popenargs, **kwargs) as process: try: stdout, stderr = process.communicate(input, timeout=timeout) except TimeoutExpired as exc: process.kill() if _mswindows: # Windows accumulates the output in a single blocking # read() call run on child threads, with the timeout # being done in a join() on those threads. communicate() # _after_ kill() is required to collect that and add it # to the exception. exc.stdout, exc.stderr = process.communicate() else: # POSIX _communicate already populated the output so # far into the TimeoutExpired exception. 
process.wait() raise except: # Including KeyboardInterrupt, communicate handled that. process.kill() # We don't call process.wait() as .__exit__ does that for us. raise retcode = process.poll() if check and retcode: > raise CalledProcessError(retcode, process.args, output=stdout, stderr=stderr) E subprocess.CalledProcessError: Command 'echo quit | openssl s_client -connect ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:63601 -msg -cipher DH | grep -A 1 ServerKeyExchange' returned non-zero exit status 1. /usr/lib64/python3.8/subprocess.py:512: CalledProcessError -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. ------------------------------Captured stderr call------------------------------ depth=1 C = AU, ST = Queensland, L = 389ds, O = testing, CN = ssca.389ds.example.com verify return:1 depth=0 C = AU, ST = Queensland, L = 389ds, O = testing, GN = 407c1ca6-1ff7-4ba7-87c7-979c63638741, CN = ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com verify return:1 DONE | |||
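check_output(..., check=True) turns grep's empty match into the CalledProcessError that fails this test. A minimal sketch, as an assumption rather than the suite's fix, of running the same openssl probe without check=True so a missing ServerKeyExchange message (for example when only non-DH ciphers or TLS 1.3 are negotiated) is reported as "no DH seen" instead of raising:

import subprocess

def probe_dh_key_exchange(host, port):
    cmd = ('echo quit | openssl s_client -connect %s:%s -msg -cipher DH '
           '| grep -A 1 ServerKeyExchange' % (host, port))
    result = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE,
                            stderr=subprocess.DEVNULL)
    # grep exits non-zero when no ServerKeyExchange message was captured.
    return result.stdout.decode() if result.returncode == 0 else None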
Failed | tickets/ticket48808_test.py::test_ticket48808 | 6.81 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c2441370> create_user = None def test_ticket48808(topology_st, create_user): log.info('Run multiple paging controls on a single connection') users_num = 100 page_size = 30 users_list = add_users(topology_st, users_num) search_flt = r'(uid=test*)' searchreq_attrlist = ['dn', 'sn'] log.info('Set user bind') topology_st.standalone.simple_bind_s(TEST_USER_DN, TEST_USER_PWD) log.info('Create simple paged results control instance') req_ctrl = SimplePagedResultsControl(True, size=page_size, cookie='') controls = [req_ctrl] for ii in range(3): log.info('Iteration %d' % ii) msgid = topology_st.standalone.search_ext(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, search_flt, searchreq_attrlist, serverctrls=controls) rtype, rdata, rmsgid, rctrls = topology_st.standalone.result3(msgid) pctrls = [ c for c in rctrls if c.controlType == SimplePagedResultsControl.controlType ] req_ctrl.cookie = pctrls[0].cookie msgid = topology_st.standalone.search_ext(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, search_flt, searchreq_attrlist, serverctrls=controls) log.info('Set Directory Manager bind back') topology_st.standalone.simple_bind_s(DN_DM, PASSWORD) del_users(topology_st, users_list) log.info('Abandon the search') users_num = 10 page_size = 0 users_list = add_users(topology_st, users_num) search_flt = r'(uid=test*)' searchreq_attrlist = ['dn', 'sn'] log.info('Set user bind') topology_st.standalone.simple_bind_s(TEST_USER_DN, TEST_USER_PWD) log.info('Create simple paged results control instance') req_ctrl = SimplePagedResultsControl(True, size=page_size, cookie='') controls = [req_ctrl] msgid = topology_st.standalone.search_ext(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, search_flt, searchreq_attrlist, serverctrls=controls) rtype, rdata, rmsgid, rctrls = topology_st.standalone.result3(msgid) pctrls = [ c for c in rctrls if c.controlType == SimplePagedResultsControl.controlType ] assert not pctrls[0].cookie log.info('Set Directory Manager bind back') topology_st.standalone.simple_bind_s(DN_DM, PASSWORD) del_users(topology_st, users_list) log.info("Search should fail with 'nsPagedSizeLimit = 5'" "and 'nsslapd-pagedsizelimit = 15' with 10 users") conf_attr = b'15' user_attr = b'5' expected_rs = ldap.SIZELIMIT_EXCEEDED users_num = 10 page_size = 10 users_list = add_users(topology_st, users_num) search_flt = r'(uid=test*)' searchreq_attrlist = ['dn', 'sn'] conf_attr_bck = change_conf_attr(topology_st, DN_CONFIG, 'nsslapd-pagedsizelimit', conf_attr) user_attr_bck = change_conf_attr(topology_st, TEST_USER_DN, 'nsPagedSizeLimit', user_attr) log.info('Set user bind') topology_st.standalone.simple_bind_s(TEST_USER_DN, TEST_USER_PWD) log.info('Create simple paged results control instance') req_ctrl = SimplePagedResultsControl(True, size=page_size, cookie='') controls = [req_ctrl] log.info('Expect to fail with SIZELIMIT_EXCEEDED') with pytest.raises(expected_rs): > all_results = paged_search(topology_st, controls, search_flt, searchreq_attrlist) E Failed: DID NOT RAISE <class 'ldap.SIZELIMIT_EXCEEDED'> /export/tests/tickets/ticket48808_test.py:252: Failed -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. 
-------------------------------Captured log call-------------------------------- [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:159 Run multiple paging controls on a single connection [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:48 Adding 100 users [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:166 Set user bind [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:169 Create simple paged results control instance [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:174 Iteration 0 [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:174 Iteration 1 [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:174 Iteration 2 [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:193 Set Directory Manager bind back [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:75 Deleting 100 users [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:197 Abandon the search [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:48 Adding 10 users [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:204 Set user bind [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:207 Create simple paged results control instance [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:224 Set Directory Manager bind back [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:75 Deleting 10 users [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:228 Search should fail with 'nsPagedSizeLimit = 5'and 'nsslapd-pagedsizelimit = 15' with 10 users [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:48 Adding 10 users [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:95 Set nsslapd-pagedsizelimit to b'15'. Previous value - [b'0']. Modified suffix - cn=config. [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:95 Set nsPagedSizeLimit to b'5'. Previous value - None. Modified suffix - uid=simplepaged_test,dc=example,dc=com. [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:243 Set user bind [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:246 Create simple paged results control instance [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:250 Expect to fail with SIZELIMIT_EXCEEDED [32mINFO [0m tests.tickets.ticket48808_test:ticket48808_test.py:130 Getting page 0 | |||
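The iterations above drive SimplePagedResultsControl by hand. For reference, a minimal sketch of the complete cookie-following loop the test is built around, assuming a plain python-ldap connection (the helper name is an assumption):

import ldap
from ldap.controls import SimplePagedResultsControl

def paged_search_all(conn, base, flt, attrs, page_size=30):
    req_ctrl = SimplePagedResultsControl(True, size=page_size, cookie='')
    results = []
    while True:
        msgid = conn.search_ext(base, ldap.SCOPE_SUBTREE, flt, attrs,
                                serverctrls=[req_ctrl])
        rtype, rdata, rmsgid, rctrls = conn.result3(msgid)
        results.extend(rdata)
        pctrls = [c for c in rctrls
                  if c.controlType == SimplePagedResultsControl.controlType]
        if not pctrls or not pctrls[0].cookie:
            break  # empty cookie: the server has no more pages
        req_ctrl.cookie = pctrls[0].cookie
    return results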
Failed | tickets/ticket48896_test.py::test_ticket48896 | 0.14 | |
server = <lib389.DirSrv object at 0x7f61c27db070>, curpw = 'password' newpw = 'Abcd012+', expstr = 'be ok', rc = 0 def replace_pw(server, curpw, newpw, expstr, rc): log.info('Binding as {%s, %s}' % (TESTDN, curpw)) server.simple_bind_s(TESTDN, curpw) hit = 0 log.info('Replacing password: %s -> %s, which should %s' % (curpw, newpw, expstr)) try: > server.modify_s(TESTDN, [(ldap.MOD_REPLACE, 'userPassword', ensure_bytes(newpw))]) /export/tests/tickets/ticket48896_test.py:53: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = ('uid=buser123,dc=example,dc=com', [(2, 'userPassword', b'Abcd012+')]) kwargs = {} c_stack = [FrameInfo(frame=<frame at 0x5576b8f405f0, file '/usr/local/lib/python3.8/site-packages/lib389/__init__.py', line 180,...mbda>', code_context=[' self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(\n'], index=0), ...] frame = FrameInfo(frame=<frame at 0x5576b8c26f40, file '/export/tests/tickets/ticket48896_test.py', line 57, code replace_pw>,...code_context=[" server.modify_s(TESTDN, [(ldap.MOD_REPLACE, 'userPassword', ensure_bytes(newpw))])\n"], index=0) def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c27db070> dn = 'uid=buser123,dc=example,dc=com' modlist = [(2, 'userPassword', b'Abcd012+')] def modify_s(self,dn,modlist): > return self.modify_ext_s(dn,modlist,None,None) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:640: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = ('uid=buser123,dc=example,dc=com', [(2, 'userPassword', b'Abcd012+')], None, None) kwargs = {} def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not 
('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c27db070> dn = 'uid=buser123,dc=example,dc=com' modlist = [(2, 'userPassword', b'Abcd012+')], serverctrls = None clientctrls = None def modify_ext_s(self,dn,modlist,serverctrls=None,clientctrls=None): msgid = self.modify_ext(dn,modlist,serverctrls,clientctrls) > resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:613: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (8,), kwargs = {'all': 1, 'timeout': -1} def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c27db070>, msgid = 8, all = 1 timeout = -1, resp_ctrl_classes = None def result3(self,msgid=ldap.RES_ANY,all=1,timeout=None,resp_ctrl_classes=None): > resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4( msgid,all,timeout, add_ctrls=0,add_intermediates=0,add_extop=0, resp_ctrl_classes=resp_ctrl_classes ) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:764: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (8, 1, -1) kwargs = {'add_ctrls': 0, 'add_extop': 0, 'add_intermediates': 0, 'resp_ctrl_classes': None} def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c27db070>, msgid = 8, all = 1 timeout = -1, add_ctrls = 0, add_intermediates = 0, add_extop = 0 resp_ctrl_classes = None def result4(self,msgid=ldap.RES_ANY,all=1,timeout=None,add_ctrls=0,add_intermediates=0,add_extop=0,resp_ctrl_classes=None): if timeout is None: timeout = self.timeout > ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:774: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (<built-in method result4 of LDAP object at 0x7f61c24e84b0>, 8, 1, -1, 0, 0, ...) kwargs = {} def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c27db070> func = <built-in method result4 of LDAP object at 0x7f61c24e84b0> args = (8, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: result = func(*args,**kwargs) if __debug__ and self._trace_level>=2: if func.__name__!="unbind_ext": diagnostic_message_success = self._l.get_option(ldap.OPT_DIAGNOSTIC_MESSAGE) finally: self._ldap_object_lock.release() except LDAPError as e: exc_type,exc_value,exc_traceback = sys.exc_info() try: if 'info' not in e.args[0] and 'errno' in e.args[0]: e.args[0]['info'] = strerror(e.args[0]['errno']) except IndexError: pass if __debug__ and self._trace_level>=2: self._trace_file.write('=> LDAPError - %s: %s\n' % (e.__class__.__name__,str(e))) try: > reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ exc_type = <class 'ldap.INSUFFICIENT_ACCESS'> exc_value = INSUFFICIENT_ACCESS({'msgtype': 103, 'msgid': 8, 'result': 50, 'desc': 'Insufficient access', 'ctrls': [], 'info': "Insufficient 'write' privilege to the 'userPassword' attribute of entry 'uid=buser123,dc=example,dc=com'.\n"}) exc_traceback = <traceback object at 0x7f61c2cb8f00> def reraise(exc_type, exc_value, exc_traceback): """Re-raise an exception given information from sys.exc_info() Note that unlike six.reraise, this does not support replacing the traceback. All arguments must come from a single sys.exc_info() call. """ # In Python 3, all exception info is contained in one object. 
> raise exc_value /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c27db070> func = <built-in method result4 of LDAP object at 0x7f61c24e84b0> args = (8, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.INSUFFICIENT_ACCESS: {'msgtype': 103, 'msgid': 8, 'result': 50, 'desc': 'Insufficient access', 'ctrls': [], 'info': "Insufficient 'write' privilege to the 'userPassword' attribute of entry 'uid=buser123,dc=example,dc=com'.\n"} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: INSUFFICIENT_ACCESS During handling of the above exception, another exception occurred: topology_st = <lib389.topologies.TopologyMain object at 0x7f61c27db340> def test_ticket48896(topology_st): """ """ log.info('Testing Ticket 48896 - Default Setting for passwordMinTokenLength does not work') log.info("Setting global password policy with password syntax.") topology_st.standalone.simple_bind_s(DN_DM, PASSWORD) topology_st.standalone.modify_s(CONFIG_DN, [(ldap.MOD_REPLACE, 'passwordCheckSyntax', b'on'), (ldap.MOD_REPLACE, 'nsslapd-pwpolicy-local', b'on')]) config = topology_st.standalone.search_s(CONFIG_DN, ldap.SCOPE_BASE, 'cn=*') mintokenlen = config[0].getValue('passwordMinTokenLength') history = config[0].getValue('passwordInHistory') log.info('Default passwordMinTokenLength == %s' % mintokenlen) log.info('Default passwordInHistory == %s' % history) log.info('Adding a user.') curpw = 'password' topology_st.standalone.add_s(Entry((TESTDN, {'objectclass': "top person organizationalPerson inetOrgPerson".split(), 'cn': 'test user', 'sn': 'user', 'userPassword': curpw}))) newpw = 'Abcd012+' exp = 'be ok' rc = 0 > replace_pw(topology_st.standalone, curpw, newpw, exp, rc) /export/tests/tickets/ticket48896_test.py:94: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ server = <lib389.DirSrv object at 0x7f61c27db070>, curpw = 'password' newpw = 'Abcd012+', expstr = 'be ok', rc = 0 def replace_pw(server, curpw, newpw, expstr, rc): log.info('Binding as {%s, %s}' % (TESTDN, curpw)) server.simple_bind_s(TESTDN, curpw) hit = 0 log.info('Replacing password: %s -> %s, which should %s' % (curpw, newpw, expstr)) try: server.modify_s(TESTDN, [(ldap.MOD_REPLACE, 'userPassword', ensure_bytes(newpw))]) except Exception as e: log.info("Exception (expected): %s" % type(e).__name__) hit = 1 > assert isinstance(e, rc) E TypeError: isinstance() arg 2 must be a type or tuple of types /export/tests/tickets/ticket48896_test.py:57: TypeError -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... 
[32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.tickets.ticket48896_test:ticket48896_test.py:69 Testing Ticket 48896 - Default Setting for passwordMinTokenLength does not work [32mINFO [0m tests.tickets.ticket48896_test:ticket48896_test.py:71 Setting global password policy with password syntax. [32mINFO [0m tests.tickets.ticket48896_test:ticket48896_test.py:80 Default passwordMinTokenLength == b'3' [32mINFO [0m tests.tickets.ticket48896_test:ticket48896_test.py:81 Default passwordInHistory == b'6' [32mINFO [0m tests.tickets.ticket48896_test:ticket48896_test.py:83 Adding a user. [32mINFO [0m tests.tickets.ticket48896_test:ticket48896_test.py:47 Binding as {uid=buser123,dc=example,dc=com, password} [32mINFO [0m tests.tickets.ticket48896_test:ticket48896_test.py:51 Replacing password: password -> Abcd012+, which should be ok [32mINFO [0m tests.tickets.ticket48896_test:ticket48896_test.py:55 Exception (expected): INSUFFICIENT_ACCESS | |||
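The TypeError above comes from passing rc=0 (an int) to isinstance() once the unexpected INSUFFICIENT_ACCESS is caught. A minimal sketch, as an assumption about the intent rather than the ticket's fix, of expressing the expectation as an exception class (or None for "must succeed"):

import ldap

def replace_pw_checked(server, dn, curpw, newpw, expected_exc=None):
    server.simple_bind_s(dn, curpw)
    try:
        server.modify_s(dn, [(ldap.MOD_REPLACE, 'userPassword',
                              newpw.encode('utf-8'))])
    except ldap.LDAPError as e:
        # Only an exception of the expected class is tolerated.
        assert expected_exc is not None and isinstance(e, expected_exc), \
            'unexpected failure: %s' % type(e).__name__
    else:
        assert expected_exc is None, \
            'change succeeded but %r was expected' % expected_exc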
Failed | tickets/ticket48916_test.py::test_ticket48916 | 51.30 | |
topology_m2 = <lib389.topologies.TopologyMain object at 0x7f61c21deca0> def test_ticket48916(topology_m2): """ https://bugzilla.redhat.com/show_bug.cgi?id=1353629 This is an issue with ID exhaustion in DNA causing a crash. To access each DirSrv instance use: topology_m2.ms["master1"], topology_m2.ms["master2"], ..., topology_m2.hub1, ..., topology_m2.consumer1,... """ if DEBUGGING: # Add debugging steps(if any)... pass # Enable the plugin on both servers dna_m1 = topology_m2.ms["master1"].plugins.get('Distributed Numeric Assignment Plugin') dna_m2 = topology_m2.ms["master2"].plugins.get('Distributed Numeric Assignment Plugin') # Configure it # Create the container for the ranges to go into. topology_m2.ms["master1"].add_s(Entry( ('ou=Ranges,%s' % DEFAULT_SUFFIX, { 'objectClass': 'top organizationalUnit'.split(' '), 'ou': 'Ranges', }) )) # Create the dnaAdmin? # For now we just pinch the dn from the dna_m* types, and add the relevant child config # but in the future, this could be a better plugin template type from lib389 config_dn = dna_m1.dn topology_m2.ms["master1"].add_s(Entry( ('cn=uids,%s' % config_dn, { 'objectClass': 'top dnaPluginConfig'.split(' '), 'cn': 'uids', 'dnatype': 'uidNumber gidNumber'.split(' '), 'dnafilter': '(objectclass=posixAccount)', 'dnascope': '%s' % DEFAULT_SUFFIX, 'dnaNextValue': '1', 'dnaMaxValue': '50', 'dnasharedcfgdn': 'ou=Ranges,%s' % DEFAULT_SUFFIX, 'dnaThreshold': '0', 'dnaRangeRequestTimeout': '60', 'dnaMagicRegen': '-1', 'dnaRemoteBindDN': 'uid=dnaAdmin,ou=People,%s' % DEFAULT_SUFFIX, 'dnaRemoteBindCred': 'secret123', 'dnaNextRange': '80-90' }) )) topology_m2.ms["master2"].add_s(Entry( ('cn=uids,%s' % config_dn, { 'objectClass': 'top dnaPluginConfig'.split(' '), 'cn': 'uids', 'dnatype': 'uidNumber gidNumber'.split(' '), 'dnafilter': '(objectclass=posixAccount)', 'dnascope': '%s' % DEFAULT_SUFFIX, 'dnaNextValue': '61', 'dnaMaxValue': '70', 'dnasharedcfgdn': 'ou=Ranges,%s' % DEFAULT_SUFFIX, 'dnaThreshold': '2', 'dnaRangeRequestTimeout': '60', 'dnaMagicRegen': '-1', 'dnaRemoteBindDN': 'uid=dnaAdmin,ou=People,%s' % DEFAULT_SUFFIX, 'dnaRemoteBindCred': 'secret123', }) )) # Enable the plugins dna_m1.enable() dna_m2.enable() # Restart the instances topology_m2.ms["master1"].restart(60) topology_m2.ms["master2"].restart(60) # Wait for a replication ..... 
time.sleep(40) # Allocate the 10 members to exhaust for i in range(1, 11): _create_user(topology_m2.ms["master2"], i) # Allocate the 11th > _create_user(topology_m2.ms["master2"], 11) /export/tests/tickets/ticket48916_test.py:126: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /export/tests/tickets/ticket48916_test.py:21: in _create_user inst.add_s(Entry( /usr/local/lib/python3.8/site-packages/lib389/__init__.py:176: in inner return f(ent.dn, ent.toTupleList(), *args[2:]) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:439: in add_s return self.add_ext_s(dn,modlist,None,None) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:178: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:425: in add_ext_s resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:764: in result3 resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4( /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:774: in result4 ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: in _ldap_call reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: in reraise raise exc_value _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c21c9220> func = <built-in method result4 of LDAP object at 0x7f61c2180990> args = (13, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.OPERATIONS_ERROR: {'msgtype': 105, 'msgid': 13, 'result': 1, 'desc': 'Operations error', 'ctrls': [], 'info': 'Allocation of a new value for range cn=uids,cn=distributed numeric assignment plugin,cn=plugins,cn=config failed! Unable to proceed.'} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: OPERATIONS_ERROR -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39001, 'ldap-secureport': 63701, 'server-id': 'master1', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... 
[32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master2 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39002, 'ldap-secureport': 63702, 'server-id': 'master2', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.topologies:topologies.py:142 Creating replication topology. [32mINFO [0m lib389.topologies:topologies.py:156 Joining master master2 to master1 ... [32mINFO [0m lib389.replica:replica.py:2084 SUCCESS: bootstrap to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 completed [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is was created [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is was created [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 4c91a235-664a-4b61-a5d3-97d6a7dab280 / got description=None) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 0e8149bd-3a06-4e79-9a1b-88cc84959ada / got description=4c91a235-664a-4b61-a5d3-97d6a7dab280) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2153 SUCCESS: joined master from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master1 to master2 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 already exists [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master2 to master1 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 already exists | |||
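Master2's range above holds exactly ten values (dnaNextValue 61 through dnaMaxValue 70, i.e. 70 - 61 + 1 = 10), so the eleventh add is the one that must trigger a range request and is where the OPERATIONS_ERROR appears. A minimal sketch, assuming a plain python-ldap connection, of reading the config entry to compute the remaining slots before exhausting them (the helper is illustrative):

import ldap

def dna_values_remaining(conn, dna_config_dn):
    # Plain python-ldap returns (dn, attrs) tuples with bytes values.
    dn, attrs = conn.search_s(dna_config_dn, ldap.SCOPE_BASE,
                              '(objectclass=*)',
                              ['dnaNextValue', 'dnaMaxValue'])[0]
    next_value = int(attrs['dnaNextValue'][0])
    max_value = int(attrs['dnaMaxValue'][0])
    return max_value - next_value + 1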
Failed | tickets/ticket48956_test.py::test_ticket48956 | 6.54 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c2417fa0> def test_ticket48956(topology_st): """Write your testcase here... Also, if you need any testcase initialization, please, write additional fixture for that(include finalizer). """ topology_st.standalone.modify_s(ACCT_POLICY_PLUGIN_DN, [(ldap.MOD_REPLACE, 'nsslapd-pluginarg0', ensure_bytes(ACCT_POLICY_CONFIG_DN))]) topology_st.standalone.modify_s(ACCT_POLICY_CONFIG_DN, [(ldap.MOD_REPLACE, 'alwaysrecordlogin', b'yes'), (ldap.MOD_REPLACE, 'stateattrname', b'lastLoginTime'), (ldap.MOD_REPLACE, 'altstateattrname', b'createTimestamp'), (ldap.MOD_REPLACE, 'specattrname', b'acctPolicySubentry'), (ldap.MOD_REPLACE, 'limitattrname', b'accountInactivityLimit')]) # Enable the plugins topology_st.standalone.plugins.enable(name=PLUGIN_ACCT_POLICY) topology_st.standalone.restart(timeout=10) # Check inactivity on standard suffix (short) > _check_inactivity(topology_st, SUFFIX) /export/tests/tickets/ticket48956_test.py:107: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /export/tests/tickets/ticket48956_test.py:78: in _check_inactivity assert (_check_status(topology_st, TEST_USER_DN, b'- activated')) /export/tests/tickets/ticket48956_test.py:39: in _check_status output = subprocess.check_output([nsaccountstatus, '-Z', topology_st.standalone.serverid, /usr/lib64/python3.8/subprocess.py:411: in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, /usr/lib64/python3.8/subprocess.py:489: in run with Popen(*popenargs, **kwargs) as process: /usr/lib64/python3.8/subprocess.py:854: in __init__ self._execute_child(args, executable, preexec_fn, close_fds, _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <subprocess.Popen object at 0x7f61c2457340> args = ['/usr/sbin/ns-accountstatus.pl', '-Z', 'standalone1', '-D', 'cn=Directory Manager', '-w', ...] executable = b'/usr/sbin/ns-accountstatus.pl', preexec_fn = None close_fds = True, pass_fds = (), cwd = None, env = None, startupinfo = None creationflags = 0, shell = False, p2cread = -1, p2cwrite = -1, c2pread = 44 c2pwrite = 48, errread = -1, errwrite = -1, restore_signals = True start_new_session = False def _execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, restore_signals, start_new_session): """Execute program (POSIX version)""" if isinstance(args, (str, bytes)): args = [args] elif isinstance(args, os.PathLike): if shell: raise TypeError('path-like args is not allowed when ' 'shell is true') args = [args] else: args = list(args) if shell: # On Android the default shell is at '/system/bin/sh'. unix_shell = ('/system/bin/sh' if hasattr(sys, 'getandroidapilevel') else '/bin/sh') args = [unix_shell, "-c"] + args if executable: args[0] = executable if executable is None: executable = args[0] sys.audit("subprocess.Popen", executable, args, cwd, env) if (_USE_POSIX_SPAWN and os.path.dirname(executable) and preexec_fn is None and not close_fds and not pass_fds and cwd is None and (p2cread == -1 or p2cread > 2) and (c2pwrite == -1 or c2pwrite > 2) and (errwrite == -1 or errwrite > 2) and not start_new_session): self._posix_spawn(args, executable, env, restore_signals, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) return orig_executable = executable # For transferring possible exec failure from child to parent. 
# Data format: "exception name:hex errno:description" # Pickle is not used; it is complex and involves memory allocation. errpipe_read, errpipe_write = os.pipe() # errpipe_write must not be in the standard io 0, 1, or 2 fd range. low_fds_to_close = [] while errpipe_write < 3: low_fds_to_close.append(errpipe_write) errpipe_write = os.dup(errpipe_write) for low_fd in low_fds_to_close: os.close(low_fd) try: try: # We must avoid complex work that could involve # malloc or free in the child process to avoid # potential deadlocks, thus we do all this here. # and pass it to fork_exec() if env is not None: env_list = [] for k, v in env.items(): k = os.fsencode(k) if b'=' in k: raise ValueError("illegal environment variable name") env_list.append(k + b'=' + os.fsencode(v)) else: env_list = None # Use execv instead of execve. executable = os.fsencode(executable) if os.path.dirname(executable): executable_list = (executable,) else: # This matches the behavior of os._execvpe(). executable_list = tuple( os.path.join(os.fsencode(dir), executable) for dir in os.get_exec_path(env)) fds_to_keep = set(pass_fds) fds_to_keep.add(errpipe_write) self.pid = _posixsubprocess.fork_exec( args, executable_list, close_fds, tuple(sorted(map(int, fds_to_keep))), cwd, env_list, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, errpipe_read, errpipe_write, restore_signals, start_new_session, preexec_fn) self._child_created = True finally: # be sure the FD is closed no matter what os.close(errpipe_write) self._close_pipe_fds(p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) # Wait for exec to fail or succeed; possibly raising an # exception (limited in size) errpipe_data = bytearray() while True: part = os.read(errpipe_read, 50000) errpipe_data += part if not part or len(errpipe_data) > 50000: break finally: # be sure the FD is closed no matter what os.close(errpipe_read) if errpipe_data: try: pid, sts = os.waitpid(self.pid, 0) if pid == self.pid: self._handle_exitstatus(sts) else: self.returncode = sys.maxsize except ChildProcessError: pass try: exception_name, hex_errno, err_msg = ( errpipe_data.split(b':', 2)) # The encoding here should match the encoding # written in by the subprocess implementations # like _posixsubprocess err_msg = err_msg.decode() except ValueError: exception_name = b'SubprocessError' hex_errno = b'0' err_msg = 'Bad exception data from child: {!r}'.format( bytes(errpipe_data)) child_exception_type = getattr( builtins, exception_name.decode('ascii'), SubprocessError) if issubclass(child_exception_type, OSError) and hex_errno: errno_num = int(hex_errno, 16) child_exec_never_called = (err_msg == "noexec") if child_exec_never_called: err_msg = "" # The error must be from chdir(cwd). err_filename = cwd else: err_filename = orig_executable if errno_num != 0: err_msg = os.strerror(errno_num) > raise child_exception_type(errno_num, err_msg, err_filename) E FileNotFoundError: [Errno 2] No such file or directory: '/usr/sbin/ns-accountstatus.pl' /usr/lib64/python3.8/subprocess.py:1702: FileNotFoundError -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. 
-------------------------------Captured log call-------------------------------- [32mINFO [0m tests.tickets.ticket48956_test:ticket48956_test.py:54 ######################### Adding Account Policy entry: cn=Account Inactivation Policy,dc=example,dc=com ###################### [32mINFO [0m tests.tickets.ticket48956_test:ticket48956_test.py:61 ######################### Adding Test User entry: uid=ticket48956user,dc=example,dc=com ###################### | |||
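The failure here is not in the account-policy logic itself: subprocess.Popen raises FileNotFoundError because /usr/sbin/ns-accountstatus.pl is not installed on this build (the legacy Perl tools are gone). A minimal sketch of checking the activation state over LDAP instead of shelling out to the Perl script, assuming lib389's Account object and its is_locked() helper are available in this version (the helper name is an assumption, not the test's original code):

from lib389.idm.account import Account

def user_is_active(inst, user_dn):
    # Query the entry directly instead of running ns-accountstatus.pl;
    # is_locked() is assumed to report the inactivated/locked state.
    account = Account(inst, dn=user_dn)
    return not account.is_locked()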
Failed | tickets/ticket48961_test.py::test_ticket48961_storagescheme | 0.02 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c213ebe0> def test_ticket48961_storagescheme(topology_st): """ Test deleting of the storage scheme. """ default = topology_st.standalone.config.get_attr_val('passwordStorageScheme') # Change it topology_st.standalone.config.set('passwordStorageScheme', 'CLEAR') # Now delete it > topology_st.standalone.config.remove('passwordStorageScheme', None) /export/tests/tickets/ticket48961_test.py:28: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/lib389/_mapped_object.py:316: in remove self.set(key, value, action=ldap.MOD_DELETE) /usr/local/lib/python3.8/site-packages/lib389/_mapped_object.py:446: in set return self._instance.modify_ext_s(self._dn, [(action, key, value)], /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:613: in modify_ext_s resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:764: in result3 resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4( /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:774: in result4 ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: in _ldap_call reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: in reraise raise exc_value _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c213eb80> func = <built-in method result4 of LDAP object at 0x7f61c245fbd0> args = (5, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.OPERATIONS_ERROR: {'msgtype': 103, 'msgid': 5, 'result': 1, 'desc': 'Operations error', 'ctrls': [], 'info': 'passwordStorageScheme: deleting the value is not allowed.'} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: OPERATIONS_ERROR -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. | |||
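The server answers OPERATIONS_ERROR with "passwordStorageScheme: deleting the value is not allowed.", so the delete is rejected by design rather than failing by accident. If the intent is to verify that behaviour, a sketch under that assumption is to assert on the expected exception instead of letting it propagate:

import ldap
import pytest

def test_delete_storagescheme_rejected(topology_st):
    cfg = topology_st.standalone.config
    cfg.set('passwordStorageScheme', 'CLEAR')
    # cn=config refuses to delete this attribute, so expect OPERATIONS_ERROR.
    with pytest.raises(ldap.OPERATIONS_ERROR):
        cfg.remove('passwordStorageScheme', None)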
Failed | tickets/ticket48961_test.py::test_ticket48961_deleteall | 0.00 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c213ebe0> def test_ticket48961_deleteall(topology_st): """ Test that we can delete all valid attrs, and that a few are rejected. """ attr_to_test = { 'nsslapd-listenhost': 'localhost', 'nsslapd-securelistenhost': 'localhost', 'nsslapd-allowed-sasl-mechanisms': 'GSSAPI', 'nsslapd-svrtab': 'Some bogus data', # This one could reset? } attr_to_fail = { # These are the values that should always be dn dse.ldif too 'nsslapd-localuser': 'dirsrv', 'nsslapd-defaultnamingcontext': 'dc=example,dc=com', # Can't delete 'nsslapd-accesslog': '/opt/dirsrv/var/log/dirsrv/slapd-standalone/access', 'nsslapd-auditlog': '/opt/dirsrv/var/log/dirsrv/slapd-standalone/audit', 'nsslapd-errorlog': '/opt/dirsrv/var/log/dirsrv/slapd-standalone/errors', 'nsslapd-tmpdir': '/tmp', 'nsslapd-rundir': '/opt/dirsrv/var/run/dirsrv', 'nsslapd-bakdir': '/opt/dirsrv/var/lib/dirsrv/slapd-standalone/bak', 'nsslapd-certdir': '/opt/dirsrv/etc/dirsrv/slapd-standalone', 'nsslapd-instancedir': '/opt/dirsrv/lib/dirsrv/slapd-standalone', 'nsslapd-ldifdir': '/opt/dirsrv/var/lib/dirsrv/slapd-standalone/ldif', 'nsslapd-lockdir': '/opt/dirsrv/var/lock/dirsrv/slapd-standalone', 'nsslapd-schemadir': '/opt/dirsrv/etc/dirsrv/slapd-standalone/schema', 'nsslapd-workingdir': '/opt/dirsrv/var/log/dirsrv/slapd-standalone', 'nsslapd-localhost': 'localhost.localdomain', # These can't be reset, but might be in dse.ldif. Probably in libglobs. 'nsslapd-certmap-basedn': 'cn=certmap,cn=config', 'nsslapd-port': '38931', # Can't delete 'nsslapd-secureport': '636', # Can't delete 'nsslapd-conntablesize': '1048576', 'nsslapd-rootpw': '{SSHA512}...', # These are hardcoded server magic. 'nsslapd-hash-filters': 'off', # Can't delete 'nsslapd-requiresrestart': 'cn=config:nsslapd-port', # Can't change 'nsslapd-plugin': 'cn=case ignore string syntax,cn=plugins,cn=config', # Can't change 'nsslapd-privatenamespaces': 'cn=schema', # Can't change 'nsslapd-allowed-to-delete-attrs': 'None', # Can't delete 'nsslapd-accesslog-list': 'List!', # Can't delete 'nsslapd-auditfaillog-list': 'List!', 'nsslapd-auditlog-list': 'List!', 'nsslapd-errorlog-list': 'List!', 'nsslapd-config': 'cn=config', 'nsslapd-versionstring': '389-Directory/1.3.6.0', 'objectclass': '', 'cn': '', # These are the odd values 'nsslapd-backendconfig': 'cn=config,cn=userRoot,cn=ldbm database,cn=plugins,cn=config', # Doesn't exist? 'nsslapd-betype': 'ldbm database', # Doesn't exist? 'nsslapd-connection-buffer': 1, # Has an ldap problem 'nsslapd-malloc-mmap-threshold': '-10', # Defunct anyway 'nsslapd-malloc-mxfast': '-10', 'nsslapd-malloc-trim-threshold': '-10', 'nsslapd-referralmode': '', 'nsslapd-saslpath': '', 'passwordadmindn': '', } > config_entry = topology_st.standalone.config.raw_entry() /export/tests/tickets/ticket48961_test.py:101: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.config.Config object at 0x7f61c213aeb0>, name = 'raw_entry' def __getattr__(self, name): """This enables a bit of magic to allow us to wrap any function ending with _json to it's form without json, then transformed. It means your function *must* return it's values as a dict of: { attr : [val, val, ...], attr : [], ... } to be supported. 
""" if (name.endswith('_json')): int_name = name.replace('_json', '') pfunc = partial(self._jsonify, getattr(self, int_name)) return pfunc else: > raise AttributeError("'%s' object has no attribute '%s'" % (self.__class__.__name__, name)) E AttributeError: 'Config' object has no attribute 'raw_entry' /usr/local/lib/python3.8/site-packages/lib389/_mapped_object.py:199: AttributeError | |||
Failed | tickets/ticket49039_test.py::test_ticket49039 | 12.33 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61c20f9070> def test_ticket49039(topo): """Test "password must change" verses "password min age". Min age should not block password update if the password was reset. """ # Setup SSL (for ldappasswd test) topo.standalone.enable_tls() # Configure password policy try: policy = PwPolicyManager(topo.standalone) policy.set_global_policy(properties={'nsslapd-pwpolicy-local': 'on', 'passwordMustChange': 'on', 'passwordExp': 'on', 'passwordMaxAge': '86400000', 'passwordMinAge': '8640000', 'passwordChange': 'on'}) except ldap.LDAPError as e: log.fatal('Failed to set password policy: ' + str(e)) # Add user, bind, and set password try: topo.standalone.add_s(Entry((USER_DN, { 'objectclass': 'top extensibleObject'.split(), 'uid': 'user1', 'userpassword': PASSWORD }))) except ldap.LDAPError as e: log.fatal('Failed to add user: error ' + e.args[0]['desc']) assert False # Reset password as RootDN try: topo.standalone.modify_s(USER_DN, [(ldap.MOD_REPLACE, 'userpassword', ensure_bytes(PASSWORD))]) except ldap.LDAPError as e: log.fatal('Failed to bind: error ' + e.args[0]['desc']) assert False time.sleep(1) # Reset password as user try: topo.standalone.simple_bind_s(USER_DN, PASSWORD) except ldap.LDAPError as e: log.fatal('Failed to bind: error ' + e.args[0]['desc']) assert False try: > topo.standalone.modify_s(USER_DN, [(ldap.MOD_REPLACE, 'userpassword', ensure_bytes(PASSWORD))]) /export/tests/tickets/ticket49039_test.py:75: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = ('uid=user,dc=example,dc=com', [(2, 'userpassword', b'password')]) kwargs = {} c_stack = [FrameInfo(frame=<frame at 0x7f61c30cb640, file '/usr/local/lib/python3.8/site-packages/lib389/__init__.py', line 180,...93, function='_hookexec', code_context=[' return self._inner_hookexec(hook, methods, kwargs)\n'], index=0), ...] frame = FrameInfo(frame=<frame at 0x5576b8f272d0, file '/export/tests/tickets/ticket49039_test.py', line 78, code test_ticket4...[" topo.standalone.modify_s(USER_DN, [(ldap.MOD_REPLACE, 'userpassword', ensure_bytes(PASSWORD))])\n"], index=0) def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c20f3fd0> dn = 'uid=user,dc=example,dc=com', modlist = [(2, 'userpassword', b'password')] def modify_s(self,dn,modlist): > return self.modify_ext_s(dn,modlist,None,None) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:640: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = ('uid=user,dc=example,dc=com', [(2, 'userpassword', b'password')], None, None) kwargs = {} def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c20f3fd0> dn = 'uid=user,dc=example,dc=com', modlist = [(2, 'userpassword', b'password')] serverctrls = None, clientctrls = None def modify_ext_s(self,dn,modlist,serverctrls=None,clientctrls=None): msgid = self.modify_ext(dn,modlist,serverctrls,clientctrls) > resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:613: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (7,), kwargs = {'all': 1, 'timeout': -1} def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c20f3fd0>, msgid = 7, all = 1 timeout = -1, resp_ctrl_classes = None def result3(self,msgid=ldap.RES_ANY,all=1,timeout=None,resp_ctrl_classes=None): > resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4( msgid,all,timeout, add_ctrls=0,add_intermediates=0,add_extop=0, resp_ctrl_classes=resp_ctrl_classes ) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:764: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (7, 1, -1) kwargs = {'add_ctrls': 0, 'add_extop': 0, 'add_intermediates': 0, 'resp_ctrl_classes': None} def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c20f3fd0>, msgid = 7, all = 1 timeout = -1, add_ctrls = 0, add_intermediates = 0, add_extop = 0 resp_ctrl_classes = None def result4(self,msgid=ldap.RES_ANY,all=1,timeout=None,add_ctrls=0,add_intermediates=0,add_extop=0,resp_ctrl_classes=None): if timeout is None: timeout = self.timeout > ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:774: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (<built-in method result4 of LDAP object at 0x7f61c20f9120>, 7, 1, -1, 0, 0, ...) kwargs = {} def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c20f3fd0> func = <built-in method result4 of LDAP object at 0x7f61c20f9120> args = (7, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: result = func(*args,**kwargs) if __debug__ and self._trace_level>=2: if func.__name__!="unbind_ext": diagnostic_message_success = self._l.get_option(ldap.OPT_DIAGNOSTIC_MESSAGE) finally: self._ldap_object_lock.release() except LDAPError as e: exc_type,exc_value,exc_traceback = sys.exc_info() try: if 'info' not in e.args[0] and 'errno' in e.args[0]: e.args[0]['info'] = strerror(e.args[0]['errno']) except IndexError: pass if __debug__ and self._trace_level>=2: self._trace_file.write('=> LDAPError - %s: %s\n' % (e.__class__.__name__,str(e))) try: > reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ exc_type = <class 'ldap.INSUFFICIENT_ACCESS'> exc_value = INSUFFICIENT_ACCESS({'msgtype': 103, 'msgid': 7, 'result': 50, 'desc': 'Insufficient access', 'ctrls': [], 'info': "Insufficient 'write' privilege to the 'userPassword' attribute of entry 'uid=user,dc=example,dc=com'.\n"}) exc_traceback = <traceback object at 0x7f61c2723180> def reraise(exc_type, exc_value, exc_traceback): """Re-raise an exception given information from sys.exc_info() Note that unlike six.reraise, this does not support replacing the traceback. All arguments must come from a single sys.exc_info() call. """ # In Python 3, all exception info is contained in one object. 
> raise exc_value /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c20f3fd0> func = <built-in method result4 of LDAP object at 0x7f61c20f9120> args = (7, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.INSUFFICIENT_ACCESS: {'msgtype': 103, 'msgid': 7, 'result': 50, 'desc': 'Insufficient access', 'ctrls': [], 'info': "Insufficient 'write' privilege to the 'userPassword' attribute of entry 'uid=user,dc=example,dc=com'.\n"} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: INSUFFICIENT_ACCESS During handling of the above exception, another exception occurred: topo = <lib389.topologies.TopologyMain object at 0x7f61c20f9070> def test_ticket49039(topo): """Test "password must change" verses "password min age". Min age should not block password update if the password was reset. """ # Setup SSL (for ldappasswd test) topo.standalone.enable_tls() # Configure password policy try: policy = PwPolicyManager(topo.standalone) policy.set_global_policy(properties={'nsslapd-pwpolicy-local': 'on', 'passwordMustChange': 'on', 'passwordExp': 'on', 'passwordMaxAge': '86400000', 'passwordMinAge': '8640000', 'passwordChange': 'on'}) except ldap.LDAPError as e: log.fatal('Failed to set password policy: ' + str(e)) # Add user, bind, and set password try: topo.standalone.add_s(Entry((USER_DN, { 'objectclass': 'top extensibleObject'.split(), 'uid': 'user1', 'userpassword': PASSWORD }))) except ldap.LDAPError as e: log.fatal('Failed to add user: error ' + e.args[0]['desc']) assert False # Reset password as RootDN try: topo.standalone.modify_s(USER_DN, [(ldap.MOD_REPLACE, 'userpassword', ensure_bytes(PASSWORD))]) except ldap.LDAPError as e: log.fatal('Failed to bind: error ' + e.args[0]['desc']) assert False time.sleep(1) # Reset password as user try: topo.standalone.simple_bind_s(USER_DN, PASSWORD) except ldap.LDAPError as e: log.fatal('Failed to bind: error ' + e.args[0]['desc']) assert False try: topo.standalone.modify_s(USER_DN, [(ldap.MOD_REPLACE, 'userpassword', ensure_bytes(PASSWORD))]) except ldap.LDAPError as e: log.fatal('Failed to change password: error ' + e.args[0]['desc']) > assert False E assert False /export/tests/tickets/ticket49039_test.py:78: AssertionError -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. 
-------------------------------Captured log call-------------------------------- [31mCRITICAL[0m tests.tickets.ticket49039_test:ticket49039_test.py:77 Failed to change password: error Insufficient access | |||
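The user bind is rejected with INSUFFICIENT_ACCESS on userPassword, so the modify never reaches the min-age/must-change logic the ticket is about. One possible cause is that the suffix in this topology lacks a self-write ACI for userPassword; that is an assumption, since the default ACIs are not shown in the log. A sketch of granting self-write before the user-bound change, with an illustrative ACI string:

import ldap
from lib389.utils import ensure_bytes

SELF_PW_ACI = ('(targetattr="userPassword")(version 3.0; '
               'acl "selfwrite-pw"; allow (write) userdn="ldap:///self";)')

def allow_self_password_change(inst, suffix):
    # Without a self-write ACI, a user bind cannot replace its own
    # userPassword and the server returns INSUFFICIENT_ACCESS.
    inst.modify_s(suffix, [(ldap.MOD_ADD, 'aci', ensure_bytes(SELF_PW_ACI))])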
Failed | tickets/ticket49072_test.py::test_ticket49072_basedn | 4.67 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61c1fb1ac0> def test_ticket49072_basedn(topo): """memberOf fixup task does not validate args :id: dce9b898-119d-42b8-a236-1130e59bfe18 :feature: memberOf :setup: Standalone instance, with memberOf plugin :steps: 1. Run fixup-memberOf.pl with invalid DN entry 2. Check if error log reports "Failed to get be backend" :expectedresults: Fixup-memberOf.pl task should complete, but errors logged. """ log.info("Ticket 49072 memberof fixup task with invalid basedn...") topo.standalone.plugins.enable(name=PLUGIN_MEMBER_OF) topo.standalone.restart(timeout=10) if ds_is_older('1.3'): inst_dir = topo.standalone.get_inst_dir() memof_task = os.path.join(inst_dir, FIXUP_MEMOF) try: output = subprocess.check_output([memof_task, '-D', DN_DM, '-w', PASSWORD, '-b', TEST_BASEDN, '-f', FILTER]) except subprocess.CalledProcessError as err: output = err.output else: sbin_dir = topo.standalone.get_sbin_dir() memof_task = os.path.join(sbin_dir, FIXUP_MEMOF) try: > output = subprocess.check_output( [memof_task, '-D', DN_DM, '-w', PASSWORD, '-b', TEST_BASEDN, '-Z', SERVERID_STANDALONE, '-f', FILTER]) /export/tests/tickets/ticket49072_test.py:55: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/lib64/python3.8/subprocess.py:411: in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, /usr/lib64/python3.8/subprocess.py:489: in run with Popen(*popenargs, **kwargs) as process: /usr/lib64/python3.8/subprocess.py:854: in __init__ self._execute_child(args, executable, preexec_fn, close_fds, _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <subprocess.Popen object at 0x7f61c1f84cd0> args = ['/usr/sbin/fixup-memberof.pl', '-D', 'cn=Directory Manager', '-w', 'password', '-b', ...] executable = b'/usr/sbin/fixup-memberof.pl', preexec_fn = None, close_fds = True pass_fds = (), cwd = None, env = None, startupinfo = None, creationflags = 0 shell = False, p2cread = -1, p2cwrite = -1, c2pread = 51, c2pwrite = 52 errread = -1, errwrite = -1, restore_signals = True, start_new_session = False def _execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, restore_signals, start_new_session): """Execute program (POSIX version)""" if isinstance(args, (str, bytes)): args = [args] elif isinstance(args, os.PathLike): if shell: raise TypeError('path-like args is not allowed when ' 'shell is true') args = [args] else: args = list(args) if shell: # On Android the default shell is at '/system/bin/sh'. unix_shell = ('/system/bin/sh' if hasattr(sys, 'getandroidapilevel') else '/bin/sh') args = [unix_shell, "-c"] + args if executable: args[0] = executable if executable is None: executable = args[0] sys.audit("subprocess.Popen", executable, args, cwd, env) if (_USE_POSIX_SPAWN and os.path.dirname(executable) and preexec_fn is None and not close_fds and not pass_fds and cwd is None and (p2cread == -1 or p2cread > 2) and (c2pwrite == -1 or c2pwrite > 2) and (errwrite == -1 or errwrite > 2) and not start_new_session): self._posix_spawn(args, executable, env, restore_signals, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) return orig_executable = executable # For transferring possible exec failure from child to parent. # Data format: "exception name:hex errno:description" # Pickle is not used; it is complex and involves memory allocation. 
errpipe_read, errpipe_write = os.pipe() # errpipe_write must not be in the standard io 0, 1, or 2 fd range. low_fds_to_close = [] while errpipe_write < 3: low_fds_to_close.append(errpipe_write) errpipe_write = os.dup(errpipe_write) for low_fd in low_fds_to_close: os.close(low_fd) try: try: # We must avoid complex work that could involve # malloc or free in the child process to avoid # potential deadlocks, thus we do all this here. # and pass it to fork_exec() if env is not None: env_list = [] for k, v in env.items(): k = os.fsencode(k) if b'=' in k: raise ValueError("illegal environment variable name") env_list.append(k + b'=' + os.fsencode(v)) else: env_list = None # Use execv instead of execve. executable = os.fsencode(executable) if os.path.dirname(executable): executable_list = (executable,) else: # This matches the behavior of os._execvpe(). executable_list = tuple( os.path.join(os.fsencode(dir), executable) for dir in os.get_exec_path(env)) fds_to_keep = set(pass_fds) fds_to_keep.add(errpipe_write) self.pid = _posixsubprocess.fork_exec( args, executable_list, close_fds, tuple(sorted(map(int, fds_to_keep))), cwd, env_list, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, errpipe_read, errpipe_write, restore_signals, start_new_session, preexec_fn) self._child_created = True finally: # be sure the FD is closed no matter what os.close(errpipe_write) self._close_pipe_fds(p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) # Wait for exec to fail or succeed; possibly raising an # exception (limited in size) errpipe_data = bytearray() while True: part = os.read(errpipe_read, 50000) errpipe_data += part if not part or len(errpipe_data) > 50000: break finally: # be sure the FD is closed no matter what os.close(errpipe_read) if errpipe_data: try: pid, sts = os.waitpid(self.pid, 0) if pid == self.pid: self._handle_exitstatus(sts) else: self.returncode = sys.maxsize except ChildProcessError: pass try: exception_name, hex_errno, err_msg = ( errpipe_data.split(b':', 2)) # The encoding here should match the encoding # written in by the subprocess implementations # like _posixsubprocess err_msg = err_msg.decode() except ValueError: exception_name = b'SubprocessError' hex_errno = b'0' err_msg = 'Bad exception data from child: {!r}'.format( bytes(errpipe_data)) child_exception_type = getattr( builtins, exception_name.decode('ascii'), SubprocessError) if issubclass(child_exception_type, OSError) and hex_errno: errno_num = int(hex_errno, 16) child_exec_never_called = (err_msg == "noexec") if child_exec_never_called: err_msg = "" # The error must be from chdir(cwd). err_filename = cwd else: err_filename = orig_executable if errno_num != 0: err_msg = os.strerror(errno_num) > raise child_exception_type(errno_num, err_msg, err_filename) E FileNotFoundError: [Errno 2] No such file or directory: '/usr/sbin/fixup-memberof.pl' /usr/lib64/python3.8/subprocess.py:1702: FileNotFoundError -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.tickets.ticket49072_test:ticket49072_test.py:40 Ticket 49072 memberof fixup task with invalid basedn... 
| |||
Failed | tickets/ticket49072_test.py::test_ticket49072_filter | 10.04 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61c1fb1ac0> def test_ticket49072_filter(topo): """memberOf fixup task does not validate args :id: dde9e893-119d-42c8-a236-1190e56bfe98 :feature: memberOf :setup: Standalone instance, with memberOf plugin :steps: 1. Run fixup-memberOf.pl with invalid filter 2. Check if error log reports "Bad search filter" :expectedresults: Fixup-memberOf.pl task should complete, but errors logged. """ log.info("Ticket 49072 memberof fixup task with invalid filter...") log.info('Wait for 10 secs and check if task is completed') time.sleep(10) task_memof = 'cn=memberOf task,cn=tasks,cn=config' if topo.standalone.search_s(task_memof, ldap.SCOPE_SUBTREE, 'cn=memberOf_fixup*', ['dn:']): log.info('memberof task is still running, wait for +10 secs') time.sleep(10) if ds_is_older('1.3'): inst_dir = topo.standalone.get_inst_dir() memof_task = os.path.join(inst_dir, FIXUP_MEMOF) try: output = subprocess.check_output([memof_task, '-D', DN_DM, '-w', PASSWORD, '-b', SUFFIX, '-f', TEST_FILTER]) except subprocess.CalledProcessError as err: output = err.output else: sbin_dir = topo.standalone.get_sbin_dir() memof_task = os.path.join(sbin_dir, FIXUP_MEMOF) try: > output = subprocess.check_output( [memof_task, '-D', DN_DM, '-w', PASSWORD, '-b', SUFFIX, '-Z', SERVERID_STANDALONE, '-f', TEST_FILTER]) /export/tests/tickets/ticket49072_test.py:96: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/lib64/python3.8/subprocess.py:411: in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, /usr/lib64/python3.8/subprocess.py:489: in run with Popen(*popenargs, **kwargs) as process: /usr/lib64/python3.8/subprocess.py:854: in __init__ self._execute_child(args, executable, preexec_fn, close_fds, _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <subprocess.Popen object at 0x7f61c1fb1850> args = ['/usr/sbin/fixup-memberof.pl', '-D', 'cn=Directory Manager', '-w', 'password', '-b', ...] executable = b'/usr/sbin/fixup-memberof.pl', preexec_fn = None, close_fds = True pass_fds = (), cwd = None, env = None, startupinfo = None, creationflags = 0 shell = False, p2cread = -1, p2cwrite = -1, c2pread = 48, c2pwrite = 51 errread = -1, errwrite = -1, restore_signals = True, start_new_session = False def _execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, restore_signals, start_new_session): """Execute program (POSIX version)""" if isinstance(args, (str, bytes)): args = [args] elif isinstance(args, os.PathLike): if shell: raise TypeError('path-like args is not allowed when ' 'shell is true') args = [args] else: args = list(args) if shell: # On Android the default shell is at '/system/bin/sh'. 
unix_shell = ('/system/bin/sh' if hasattr(sys, 'getandroidapilevel') else '/bin/sh') args = [unix_shell, "-c"] + args if executable: args[0] = executable if executable is None: executable = args[0] sys.audit("subprocess.Popen", executable, args, cwd, env) if (_USE_POSIX_SPAWN and os.path.dirname(executable) and preexec_fn is None and not close_fds and not pass_fds and cwd is None and (p2cread == -1 or p2cread > 2) and (c2pwrite == -1 or c2pwrite > 2) and (errwrite == -1 or errwrite > 2) and not start_new_session): self._posix_spawn(args, executable, env, restore_signals, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) return orig_executable = executable # For transferring possible exec failure from child to parent. # Data format: "exception name:hex errno:description" # Pickle is not used; it is complex and involves memory allocation. errpipe_read, errpipe_write = os.pipe() # errpipe_write must not be in the standard io 0, 1, or 2 fd range. low_fds_to_close = [] while errpipe_write < 3: low_fds_to_close.append(errpipe_write) errpipe_write = os.dup(errpipe_write) for low_fd in low_fds_to_close: os.close(low_fd) try: try: # We must avoid complex work that could involve # malloc or free in the child process to avoid # potential deadlocks, thus we do all this here. # and pass it to fork_exec() if env is not None: env_list = [] for k, v in env.items(): k = os.fsencode(k) if b'=' in k: raise ValueError("illegal environment variable name") env_list.append(k + b'=' + os.fsencode(v)) else: env_list = None # Use execv instead of execve. executable = os.fsencode(executable) if os.path.dirname(executable): executable_list = (executable,) else: # This matches the behavior of os._execvpe(). executable_list = tuple( os.path.join(os.fsencode(dir), executable) for dir in os.get_exec_path(env)) fds_to_keep = set(pass_fds) fds_to_keep.add(errpipe_write) self.pid = _posixsubprocess.fork_exec( args, executable_list, close_fds, tuple(sorted(map(int, fds_to_keep))), cwd, env_list, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, errpipe_read, errpipe_write, restore_signals, start_new_session, preexec_fn) self._child_created = True finally: # be sure the FD is closed no matter what os.close(errpipe_write) self._close_pipe_fds(p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) # Wait for exec to fail or succeed; possibly raising an # exception (limited in size) errpipe_data = bytearray() while True: part = os.read(errpipe_read, 50000) errpipe_data += part if not part or len(errpipe_data) > 50000: break finally: # be sure the FD is closed no matter what os.close(errpipe_read) if errpipe_data: try: pid, sts = os.waitpid(self.pid, 0) if pid == self.pid: self._handle_exitstatus(sts) else: self.returncode = sys.maxsize except ChildProcessError: pass try: exception_name, hex_errno, err_msg = ( errpipe_data.split(b':', 2)) # The encoding here should match the encoding # written in by the subprocess implementations # like _posixsubprocess err_msg = err_msg.decode() except ValueError: exception_name = b'SubprocessError' hex_errno = b'0' err_msg = 'Bad exception data from child: {!r}'.format( bytes(errpipe_data)) child_exception_type = getattr( builtins, exception_name.decode('ascii'), SubprocessError) if issubclass(child_exception_type, OSError) and hex_errno: errno_num = int(hex_errno, 16) child_exec_never_called = (err_msg == "noexec") if child_exec_never_called: err_msg = "" # The error must be from chdir(cwd). 
err_filename = cwd else: err_filename = orig_executable if errno_num != 0: err_msg = os.strerror(errno_num) > raise child_exception_type(errno_num, err_msg, err_filename) E FileNotFoundError: [Errno 2] No such file or directory: '/usr/sbin/fixup-memberof.pl' /usr/lib64/python3.8/subprocess.py:1702: FileNotFoundError -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.tickets.ticket49072_test:ticket49072_test.py:77 Ticket 49072 memberof fixup task with invalid filter... [32mINFO [0m tests.tickets.ticket49072_test:ticket49072_test.py:78 Wait for 10 secs and check if task is completed | |||
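Both ticket49072 cases fail for the same reason: /usr/sbin/fixup-memberof.pl does not exist, so Popen raises FileNotFoundError before any fixup runs. A sketch of driving the fixup through the cn=tasks entry instead of the Perl wrapper; Tasks.fixupMemberOf and its argument names are assumptions about this lib389 version and may need adjusting (the invalid-basedn and bad-filter variants would still be asserted against the server error log, as the tests intend):

from lib389.tasks import Tasks

def run_memberof_fixup(inst, basedn):
    # Adds a task under cn=memberOf task,cn=tasks,cn=config and waits for it;
    # the 'wait' args key is an assumption.
    tasks = Tasks(inst)
    return tasks.fixupMemberOf(suffix=basedn, args={'wait': True})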
Failed | tickets/ticket49073_test.py::test_ticket49073 | 8.35 | |
topology_m2 = <lib389.topologies.TopologyMain object at 0x7f61c1e03430> def test_ticket49073(topology_m2): """Write your replication test here. To access each DirSrv instance use: topology_m2.ms["master1"], topology_m2.ms["master2"], ..., topology_m2.hub1, ..., topology_m2.consumer1,... Also, if you need any testcase initialization, please, write additional fixture for that(include finalizer). """ topology_m2.ms["master1"].plugins.enable(name=PLUGIN_MEMBER_OF) topology_m2.ms["master1"].restart(timeout=10) topology_m2.ms["master2"].plugins.enable(name=PLUGIN_MEMBER_OF) topology_m2.ms["master2"].restart(timeout=10) # Configure fractional to prevent total init to send memberof ents = topology_m2.ms["master1"].agreement.list(suffix=SUFFIX) assert len(ents) == 1 log.info('update %s to add nsDS5ReplicatedAttributeListTotal' % ents[0].dn) > topology_m2.ms["master1"].modify_s(ents[0].dn, [(ldap.MOD_REPLACE, 'nsDS5ReplicatedAttributeListTotal', '(objectclass=*) $ EXCLUDE '), (ldap.MOD_REPLACE, 'nsDS5ReplicatedAttributeList', '(objectclass=*) $ EXCLUDE memberOf')]) /export/tests/tickets/ticket49073_test.py:97: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:640: in modify_s return self.modify_ext_s(dn,modlist,None,None) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:612: in modify_ext_s msgid = self.modify_ext(dn,modlist,serverctrls,clientctrls) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:609: in modify_ext return self._ldap_call(self._l.modify_ext,dn,modlist,RequestControlTuples(serverctrls),RequestControlTuples(clientctrls)) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c1e033d0> func = <built-in method modify_ext of LDAP object at 0x7f61c1fa4570> args = ('cn=002,cn=replica,cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config', [(2, 'nsDS5ReplicatedAttributeListTotal', '(objectclass=*) $ EXCLUDE '), (2, 'nsDS5ReplicatedAttributeList', '(objectclass=*) $ EXCLUDE memberOf')], None, None) kwargs = {}, diagnostic_message_success = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E TypeError: ('Tuple_to_LDAPMod(): expected a byte string in the list', '(') /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: TypeError -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... 
[32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39001, 'ldap-secureport': 63701, 'server-id': 'master1', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master2 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39002, 'ldap-secureport': 63702, 'server-id': 'master2', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.topologies:topologies.py:142 Creating replication topology. [32mINFO [0m lib389.topologies:topologies.py:156 Joining master master2 to master1 ... [32mINFO [0m lib389.replica:replica.py:2084 SUCCESS: bootstrap to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 completed [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is was created [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is was created [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 16530668-4b00-4173-a9bc-d746ced92fb5 / got description=None) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect c0efb6ec-e970-442a-a48a-baa60107173f / got description=16530668-4b00-4173-a9bc-d746ced92fb5) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2153 SUCCESS: joined master from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master1 to master2 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 already exists [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master2 to master1 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 already exists -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.tickets.ticket49073_test:ticket49073_test.py:96 update cn=002,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config to add nsDS5ReplicatedAttributeListTotal | |||
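The TypeError comes from python-ldap's Tuple_to_LDAPMod: with python-ldap 3.x, attribute values in a modlist must be bytes, and the test passes str. A minimal sketch of the same fractional-replication modify with byte values, using lib389's ensure_bytes helper:

import ldap
from lib389.utils import ensure_bytes

mods = [
    (ldap.MOD_REPLACE, 'nsDS5ReplicatedAttributeListTotal',
     ensure_bytes('(objectclass=*) $ EXCLUDE ')),
    (ldap.MOD_REPLACE, 'nsDS5ReplicatedAttributeList',
     ensure_bytes('(objectclass=*) $ EXCLUDE memberOf')),
]
# ents[0].dn is the agreement DN returned by
# topology_m2.ms["master1"].agreement.list(suffix=SUFFIX)
topology_m2.ms["master1"].modify_s(ents[0].dn, mods)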
Failed | tickets/ticket49104_test.py::test_ticket49104_setup | 0.01 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c2145250> def test_ticket49104_setup(topology_st): """ Generate an ldif file having 10K entries and import it. """ # Generate a test ldif (100k entries) ldif_dir = topology_st.standalone.get_ldif_dir() import_ldif = ldif_dir + '/49104.ldif' try: > topology_st.standalone.buildLDIF(100000, import_ldif) /export/tests/tickets/ticket49104_test.py:30: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c2145670>, num = 100000 ldif_file = '/var/lib/dirsrv/slapd-standalone1/ldif/49104.ldif' suffix = 'dc=example,dc=com' def buildLDIF(self, num, ldif_file, suffix='dc=example,dc=com'): """Generate a simple ldif file using the dbgen.pl script, and set the ownership and permissions to match the user that the server runs as. @param num - number of entries to create @param ldif_file - ldif file name(including the path) @suffix - DN of the parent entry in the ldif file @return - nothing @raise - OSError """ > raise Exception("Perl tools disabled on this system. Try dbgen py module.") E Exception: Perl tools disabled on this system. Try dbgen py module. /usr/local/lib/python3.8/site-packages/lib389/__init__.py:3236: Exception -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. | |||
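buildLDIF() refuses to run because the dbgen.pl Perl tooling is disabled; the exception text itself points at the dbgen Python module. A sketch using lib389.dbgen, under the assumption that dbgen_users takes (instance, number, ldif_file, suffix) in this version:

from lib389.dbgen import dbgen_users

ldif_dir = topology_st.standalone.get_ldif_dir()
import_ldif = ldif_dir + '/49104.ldif'
# Generate 100k user entries for the import test without the Perl wrapper.
dbgen_users(topology_st.standalone, 100000, import_ldif, 'dc=example,dc=com')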
Failed | tickets/ticket49192_test.py::test_ticket49192 | 0.00 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61c1fecd30> def test_ticket49192(topo): """Trigger deadlock when removing suffix """ # # Create a second suffix/backend # log.info('Creating second backend...') > topo.standalone.backends.create(None, properties={ BACKEND_NAME: "Second_Backend", 'suffix': "o=hang.com", }) /export/tests/tickets/ticket49192_test.py:35: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/lib389/_mapped_object.py:1169: in create return co.create(rdn, properties, self._basedn) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.backend.Backend object at 0x7f61c25e7df0>, dn = None properties = {'name': 'Second_Backend', 'suffix': 'o=hang.com'} basedn = 'cn=ldbm database,cn=plugins,cn=config' def create(self, dn=None, properties=None, basedn=DN_LDBM): """Add a new backend entry, create mapping tree, and, if requested, sample entries :param dn: DN of the new entry :type dn: str :param properties: Attributes and parameters for the new entry :type properties: dict :param basedn: Base DN of the new entry :type basedn: str :returns: DSLdapObject of the created entry """ sample_entries = False parent_suffix = False # normalize suffix (remove spaces between comps) if dn is not None: dn_comps = ldap.dn.explode_dn(dn.lower()) dn = ",".join(dn_comps) if properties is not None: > suffix_dn = properties['nsslapd-suffix'].lower() E KeyError: 'nsslapd-suffix' /usr/local/lib/python3.8/site-packages/lib389/backend.py:609: KeyError -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.tickets.ticket49192_test:ticket49192_test.py:34 Creating second backend... | |||
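The KeyError is raised client-side: Backend.create() reads properties['nsslapd-suffix'], so a dict keyed by BACKEND_NAME and 'suffix' never reaches the server. A minimal sketch of the properties the current Backends API expects ('cn' carries the backend name here):

topo.standalone.backends.create(properties={
    'cn': 'Second_Backend',
    'nsslapd-suffix': 'o=hang.com',
})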
Failed | tickets/ticket49287_test.py::test_ticket49287 | 16.11 | |
self = <lib389.mappingTree.MappingTreeLegacy object at 0x7f61c1d3bca0> suffix = 'dc=test,dc=com', bename = 'test', parent = None def create(self, suffix=None, bename=None, parent=None): ''' Create a mapping tree entry (under "cn=mapping tree,cn=config"), for the 'suffix' and that is stored in 'bename' backend. 'bename' backend must exist before creating the mapping tree entry. If a 'parent' is provided that means that we are creating a sub-suffix mapping tree. @param suffix - suffix mapped by this mapping tree entry. It will be the common name ('cn') of the entry @param benamebase - backend common name (e.g. 'userRoot') @param parent - if provided is a parent suffix of 'suffix' @return DN of the mapping tree entry @raise ldap.NO_SUCH_OBJECT - if the backend entry or parent mapping tree does not exist ValueError - if missing a parameter, ''' # Check suffix is provided if not suffix: raise ValueError("suffix is mandatory") else: nsuffix = normalizeDN(suffix) # Check backend name is provided if not bename: raise ValueError("backend name is mandatory") # Check that if the parent suffix is provided then # it exists a mapping tree for it if parent: nparent = normalizeDN(parent) filt = suffixfilt(parent) try: entry = self.conn.getEntry(DN_MAPPING_TREE, ldap.SCOPE_SUBTREE, filt) pass except NoSuchEntryError: raise ValueError("parent suffix has no mapping tree") else: nparent = "" # Check if suffix exists, return filt = suffixfilt(suffix) try: entry = self.conn.getEntry(DN_MAPPING_TREE, ldap.SCOPE_SUBTREE, filt) return entry except ldap.NO_SUCH_OBJECT: entry = None # # Now start the real work # # fix me when we can actually used escaped DNs dn = ','.join(('cn="%s"' % nsuffix, DN_MAPPING_TREE)) entry = Entry(dn) entry.update({ 'objectclass': ['top', 'extensibleObject', MT_OBJECTCLASS_VALUE], 'nsslapd-state': 'backend', # the value in the dn has to be DN escaped # internal code will add the quoted value - unquoted value is # useful for searching. MT_PROPNAME_TO_ATTRNAME[MT_SUFFIX]: nsuffix, MT_PROPNAME_TO_ATTRNAME[MT_BACKEND]: bename }) # possibly add the parent if parent: entry.setValues(MT_PROPNAME_TO_ATTRNAME[MT_PARENT_SUFFIX], nparent) try: self.log.debug("Creating entry: %s", entry.dn) self.log.info("Entry %r", entry) > self.conn.add_s(entry) /usr/local/lib/python3.8/site-packages/lib389/mappingTree.py:155: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (dn: cn="dc=test,dc=com",cn=mapping tree,cn=config cn: dc=test,dc=com nsslapd-backend: test nsslapd-state: backend objectclass: top objectclass: extensibleObject objectclass: nsMappingTree ,) kwargs = {} c_stack = [FrameInfo(frame=<frame at 0x7f61c2f95c40, file '/usr/local/lib/python3.8/site-packages/lib389/__init__.py', line 176,...neno=187, function='_multicall', code_context=[' res = hook_impl.function(*args)\n'], index=0), ...] 
frame = FrameInfo(frame=<frame at 0x5576b77742c0, file '/usr/local/lib/python3.8/site-packages/lib389/mappingTree.py', line 15.../lib389/mappingTree.py', lineno=155, function='create', code_context=[' self.conn.add_s(entry)\n'], index=0) ent = dn: cn="dc=test,dc=com",cn=mapping tree,cn=config cn: dc=test,dc=com nsslapd-backend: test nsslapd-state: backend objectclass: top objectclass: extensibleObject objectclass: nsMappingTree def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): > return f(ent.dn, ent.toTupleList(), *args[2:]) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:176: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c2360b50> dn = 'cn="dc=test,dc=com",cn=mapping tree,cn=config' modlist = [('objectclass', [b'top', b'extensibleObject', b'nsMappingTree']), ('nsslapd-state', [b'backend']), ('cn', [b'dc=test,dc=com']), ('nsslapd-backend', [b'test'])] def add_s(self,dn,modlist): > return self.add_ext_s(dn,modlist,None,None) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:439: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = ('cn="dc=test,dc=com",cn=mapping tree,cn=config', [('objectclass', [b'top', b'extensibleObject', b'nsMappingTree']), ('nsslapd-state', [b'backend']), ('cn', [b'dc=test,dc=com']), ('nsslapd-backend', [b'test'])], None, None) kwargs = {}, ent = 'cn="dc=test,dc=com",cn=mapping tree,cn=config' def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:178: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c2360b50> dn = 'cn="dc=test,dc=com",cn=mapping tree,cn=config' modlist = [('objectclass', [b'top', b'extensibleObject', b'nsMappingTree']), ('nsslapd-state', [b'backend']), ('cn', [b'dc=test,dc=com']), ('nsslapd-backend', [b'test'])] serverctrls = None, clientctrls = None def add_ext_s(self,dn,modlist,serverctrls=None,clientctrls=None): msgid = self.add_ext(dn,modlist,serverctrls,clientctrls) > resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:425: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (4,), kwargs = {'all': 1, 'timeout': -1} def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c2360b50>, msgid = 4, all = 1 timeout = -1, resp_ctrl_classes = None def result3(self,msgid=ldap.RES_ANY,all=1,timeout=None,resp_ctrl_classes=None): > resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4( msgid,all,timeout, add_ctrls=0,add_intermediates=0,add_extop=0, resp_ctrl_classes=resp_ctrl_classes ) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:764: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (4, 1, -1) kwargs = {'add_ctrls': 0, 'add_extop': 0, 'add_intermediates': 0, 'resp_ctrl_classes': None} def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c2360b50>, msgid = 4, all = 1 timeout = -1, add_ctrls = 0, add_intermediates = 0, add_extop = 0 resp_ctrl_classes = None def result4(self,msgid=ldap.RES_ANY,all=1,timeout=None,add_ctrls=0,add_intermediates=0,add_extop=0,resp_ctrl_classes=None): if timeout is None: timeout = self.timeout > ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:774: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (<built-in method result4 of LDAP object at 0x7f61c247da20>, 4, 1, -1, 0, 0, ...) kwargs = {} def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c2360b50> func = <built-in method result4 of LDAP object at 0x7f61c247da20> args = (4, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: result = func(*args,**kwargs) if __debug__ and self._trace_level>=2: if func.__name__!="unbind_ext": diagnostic_message_success = self._l.get_option(ldap.OPT_DIAGNOSTIC_MESSAGE) finally: self._ldap_object_lock.release() except LDAPError as e: exc_type,exc_value,exc_traceback = sys.exc_info() try: if 'info' not in e.args[0] and 'errno' in e.args[0]: e.args[0]['info'] = strerror(e.args[0]['errno']) except IndexError: pass if __debug__ and self._trace_level>=2: self._trace_file.write('=> LDAPError - %s: %s\n' % (e.__class__.__name__,str(e))) try: > reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ exc_type = <class 'ldap.UNWILLING_TO_PERFORM'> exc_value = UNWILLING_TO_PERFORM({'msgtype': 105, 'msgid': 4, 'result': 53, 'desc': 'Server is unwilling to perform', 'ctrls': []}) exc_traceback = <traceback object at 0x7f61c2348480> def reraise(exc_type, exc_value, exc_traceback): """Re-raise an exception given information from sys.exc_info() Note that unlike six.reraise, this does not support replacing the traceback. All arguments must come from a single sys.exc_info() call. """ # In Python 3, all exception info is contained in one object. 
> raise exc_value /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c2360b50> func = <built-in method result4 of LDAP object at 0x7f61c247da20> args = (4, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.UNWILLING_TO_PERFORM: {'msgtype': 105, 'msgid': 4, 'result': 53, 'desc': 'Server is unwilling to perform', 'ctrls': []} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: UNWILLING_TO_PERFORM During handling of the above exception, another exception occurred: topology_m2 = <lib389.topologies.TopologyMain object at 0x7f61c2360c10> def test_ticket49287(topology_m2): """ test case for memberof and conflict entries """ # return M1 = topology_m2.ms["master1"] M2 = topology_m2.ms["master2"] config_memberof(M1) config_memberof(M2) _enable_spec_logging(M1) _enable_spec_logging(M2) _disable_nunc_stans(M1) _disable_nunc_stans(M2) M1.restart(timeout=10) M2.restart(timeout=10) testbase = 'dc=test,dc=com' bename = 'test' > create_backend(M1, M2, testbase, bename) /export/tests/tickets/ticket49287_test.py:282: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /export/tests/tickets/ticket49287_test.py:204: in create_backend s1.mappingtree.create(beSuffix, beName) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.mappingTree.MappingTreeLegacy object at 0x7f61c1d3bca0> suffix = 'dc=test,dc=com', bename = 'test', parent = None def create(self, suffix=None, bename=None, parent=None): ''' Create a mapping tree entry (under "cn=mapping tree,cn=config"), for the 'suffix' and that is stored in 'bename' backend. 'bename' backend must exist before creating the mapping tree entry. If a 'parent' is provided that means that we are creating a sub-suffix mapping tree. @param suffix - suffix mapped by this mapping tree entry. It will be the common name ('cn') of the entry @param benamebase - backend common name (e.g. 
'userRoot') @param parent - if provided is a parent suffix of 'suffix' @return DN of the mapping tree entry @raise ldap.NO_SUCH_OBJECT - if the backend entry or parent mapping tree does not exist ValueError - if missing a parameter, ''' # Check suffix is provided if not suffix: raise ValueError("suffix is mandatory") else: nsuffix = normalizeDN(suffix) # Check backend name is provided if not bename: raise ValueError("backend name is mandatory") # Check that if the parent suffix is provided then # it exists a mapping tree for it if parent: nparent = normalizeDN(parent) filt = suffixfilt(parent) try: entry = self.conn.getEntry(DN_MAPPING_TREE, ldap.SCOPE_SUBTREE, filt) pass except NoSuchEntryError: raise ValueError("parent suffix has no mapping tree") else: nparent = "" # Check if suffix exists, return filt = suffixfilt(suffix) try: entry = self.conn.getEntry(DN_MAPPING_TREE, ldap.SCOPE_SUBTREE, filt) return entry except ldap.NO_SUCH_OBJECT: entry = None # # Now start the real work # # fix me when we can actually used escaped DNs dn = ','.join(('cn="%s"' % nsuffix, DN_MAPPING_TREE)) entry = Entry(dn) entry.update({ 'objectclass': ['top', 'extensibleObject', MT_OBJECTCLASS_VALUE], 'nsslapd-state': 'backend', # the value in the dn has to be DN escaped # internal code will add the quoted value - unquoted value is # useful for searching. MT_PROPNAME_TO_ATTRNAME[MT_SUFFIX]: nsuffix, MT_PROPNAME_TO_ATTRNAME[MT_BACKEND]: bename }) # possibly add the parent if parent: entry.setValues(MT_PROPNAME_TO_ATTRNAME[MT_PARENT_SUFFIX], nparent) try: self.log.debug("Creating entry: %s", entry.dn) self.log.info("Entry %r", entry) self.conn.add_s(entry) except ldap.LDAPError as e: > raise ldap.LDAPError("Error adding suffix entry " + dn, e) E ldap.LDAPError: ('Error adding suffix entry cn="dc=test,dc=com",cn=mapping tree,cn=config', UNWILLING_TO_PERFORM({'msgtype': 105, 'msgid': 4, 'result': 53, 'desc': 'Server is unwilling to perform', 'ctrls': []})) /usr/local/lib/python3.8/site-packages/lib389/mappingTree.py:157: LDAPError -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39001, 'ldap-secureport': 63701, 'server-id': 'master1', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master2 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39002, 'ldap-secureport': 63702, 'server-id': 'master2', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.topologies:topologies.py:142 Creating replication topology. [32mINFO [0m lib389.topologies:topologies.py:156 Joining master master2 to master1 ... 
[32mINFO [0m lib389.replica:replica.py:2084 SUCCESS: bootstrap to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 completed [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is was created [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is was created [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect e34eebf6-f09d-4c3c-b847-ce27989e9334 / got description=None) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 1addb0ca-5bfd-4516-b13b-c3ba7770e997 / got description=e34eebf6-f09d-4c3c-b847-ce27989e9334) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2153 SUCCESS: joined master from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master1 to master2 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 already exists [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master2 to master1 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 already exists -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.tickets.ticket49287_test:ticket49287_test.py:77 update cn=002,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config to add nsDS5ReplicatedAttributeListTotal [32mINFO [0m tests.tickets.ticket49287_test:ticket49287_test.py:77 update cn=001,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config to add nsDS5ReplicatedAttributeListTotal [32mINFO [0m lib389:mappingTree.py:154 Entry dn: cn="dc=test,dc=com",cn=mapping tree,cn=config cn: dc=test,dc=com nsslapd-backend: test nsslapd-state: backend objectclass: top objectclass: extensibleObject objectclass: nsMappingTree | |||
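Editor's note: here the add of cn="dc=test,dc=com",cn=mapping tree,cn=config goes through the legacy MappingTreeLegacy path and the deprecated raw add_s wrapper, and the server answers UNWILLING_TO_PERFORM (err=53). A minimal sketch, assuming the intent is only "give both masters a dc=test,dc=com suffix served by a 'test' backend": it uses the non-legacy Backends API, whose create() (quoted in the ticket49192 trace above) adds the backend entry and its mapping tree together, so no hand-built mapping-tree entry is needed. The property keys follow that same create() code; treat this as an illustration, not the test's own create_backend() helper.

from lib389.backend import Backends

def create_test_backend(inst, suffix='dc=test,dc=com', bename='test'):
    # Sketch: Backends.create() builds both the backend entry and the
    # matching mapping tree, avoiding the raw add_s call flagged above.
    return Backends(inst).create(properties={
        'cn': bename,
        'nsslapd-suffix': suffix,
    })

# e.g. create_test_backend(M1); create_test_backend(M2)  # the two masters from the test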
Failed | tickets/ticket49303_test.py::test_ticket49303 | 17.15 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61c1d10460>

    def test_ticket49303(topo):
        """
        Test the nsTLSAllowClientRenegotiation setting.
        """
        sslport = SECUREPORT_STANDALONE1
        log.info("Ticket 49303 - Allow disabling of SSL renegotiation")

        # No value set, defaults to reneg allowed
        enable_ssl(topo.standalone, sslport)
>       assert try_reneg(HOST_STANDALONE1, sslport) is True
E       AssertionError: assert False is True
E       + where False = try_reneg('LOCALHOST', 63601)

/export/tests/tickets/ticket49303_test.py:88: AssertionError
-------------------------------Captured log setup-------------------------------
INFO  lib389.SetupDs:setup.py:658 Starting installation...
INFO  lib389.SetupDs:setup.py:686 Completed installation for standalone1
INFO  lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created.
-------------------------------Captured log call--------------------------------
INFO  tests.tickets.ticket49303_test:ticket49303_test.py:84 Ticket 49303 - Allow disabling of SSL renegotiation | |||
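Editor's note: try_reneg() returning False with no nsTLSAllowClientRenegotiation set means the client-initiated renegotiation itself failed. Below is a rough sketch of such a probe, assuming (this is an assumption, not the test's actual helper) that it drives openssl s_client and sends the 'R' command. Note that TLS 1.3 removed renegotiation entirely, so a probe that lets the connection negotiate TLS 1.3 will report False regardless of the server setting — one plausible reading of the assertion failure above.

import subprocess

def try_reneg(host, port, timeout=10):
    # Sketch (assumed behaviour, not the test's own implementation):
    # 'R' asks openssl s_client for a client-initiated renegotiation.
    # Pinning -tls1_2 matters because TLS 1.3 has no renegotiation at all.
    cmd = ['openssl', 's_client', '-connect', f'{host}:{port}', '-tls1_2']
    proc = subprocess.run(cmd, input=b'R\nQ\n', capture_output=True, timeout=timeout)
    out = proc.stdout + proc.stderr
    return b'RENEGOTIATING' in out and b'handshake failure' not in out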
Failed | tickets/ticket49412_test.py::test_ticket49412 | 0.00 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61c2357730> def test_ticket49412(topo): """Specify a test case purpose or name here :id: 4c7681ff-0511-4256-9589-bdcad84c13e6 :setup: Fill in set up configuration here :steps: 1. Fill in test case steps here 2. And indent them like this (RST format requirement) :expectedresults: 1. Fill in the result that is expected 2. For each test step """ M1 = topo.ms["master1"] # wrong call with invalid value (should be str(60) # that create replace with NULL value # it should fail with UNWILLING_TO_PERFORM try: > M1.modify_s(CHANGELOG, [(ldap.MOD_REPLACE, MAXAGE_ATTR, 60), (ldap.MOD_REPLACE, TRIMINTERVAL, 10)]) /export/tests/tickets/ticket49412_test.py:44: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:640: in modify_s return self.modify_ext_s(dn,modlist,None,None) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:613: in modify_ext_s resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:764: in result3 resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4( /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:774: in result4 ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: in _ldap_call reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: in reraise raise exc_value _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c237be20> func = <built-in method result4 of LDAP object at 0x7f61c2357a80> args = (39, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.NO_SUCH_OBJECT: {'msgtype': 103, 'msgid': 39, 'result': 32, 'desc': 'No such object', 'ctrls': []} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: NO_SUCH_OBJECT -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... 
[32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39001, 'ldap-secureport': 63701, 'server-id': 'master1', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for consumer1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39201, 'ldap-secureport': 63901, 'server-id': 'consumer1', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.topologies:topologies.py:142 Creating replication topology. [32mINFO [0m lib389.topologies:topologies.py:169 Joining consumer consumer1 from master1 ... [32mINFO [0m lib389.replica:replica.py:2084 SUCCESS: bootstrap to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39201 completed [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39201 is was created [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39201 is NOT working (expect 2df3687c-fe15-4455-b29a-bf8ba4690a1e / got description=None) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39201 is working [32mINFO [0m lib389.replica:replica.py:2268 SUCCESS: joined consumer from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39201 [32mINFO [0m lib389.topologies:topologies.py:174 Ensuring consumer consumer1 from master1 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39201 already exists | |||
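Editor's note: the test's own comment says it deliberately sends an invalid value (the integer 60 instead of str(60)) and expects UNWILLING_TO_PERFORM, but the server answers NO_SUCH_OBJECT (err=32) instead — the CHANGELOG entry being modified does not exist on this build. That is plausible on a 2.0.0 snapshot, where the standalone changelog entry was retired in favour of a per-backend changelog, though the trace does not show which DN the CHANGELOG constant points at. A small sketch of a pre-check that distinguishes "entry missing" from "value rejected"; the DN shown is the legacy location and is an assumption.

import ldap

LEGACY_CHANGELOG = 'cn=changelog5,cn=config'  # assumed legacy DN, not taken from the trace

def changelog_entry_exists(inst, dn=LEGACY_CHANGELOG):
    # Sketch: probe the target first, so a missing entry surfaces as a skip
    # or diagnostic instead of NO_SUCH_OBJECT in the middle of the test.
    # escapehatch avoids the DeprecationWarning from the wrapper shown above.
    try:
        inst.search_ext_s(dn, ldap.SCOPE_BASE, '(objectClass=*)', ['cn'],
                          escapehatch='i am sure')
        return True
    except ldap.NO_SUCH_OBJECT:
        return False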
Failed | tickets/ticket49463_test.py::test_ticket_49463 | 277.40 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61c25b1a90> def test_ticket_49463(topo): """Specify a test case purpose or name here :id: 2a68e8be-387d-4ac7-9452-1439e8483c13 :setup: Fill in set up configuration here :steps: 1. Enable fractional replication 2. Enable replication logging 3. Check that replication is working fine 4. Generate skipped updates to create keep alive entries 5. Remove M3 from the topology 6. issue cleanAllRuv FORCE that will run on M1 then propagated M2 and M4 7. Check that Number DEL keep alive '3' is <= 1 8. Check M1 is the originator of cleanAllRuv and M2/M4 the propagated ones 9. Check replication M1,M2 and M4 can recover 10. Remove M4 from the topology 11. Issue cleanAllRuv not force while M2 is stopped (that hangs the cleanAllRuv) 12. Check that nsds5ReplicaCleanRUV is correctly encoded on M1 (last value: 1) 13. Check that nsds5ReplicaCleanRUV encoding survives M1 restart 14. Check that nsds5ReplicaCleanRUV encoding is valid on M2 (last value: 0) 15. Check that (for M4 cleanAllRUV) M1 is Originator and M2 propagation :expectedresults: 1. No report of failure when the RUV is updated """ # Step 1 - Configure fractional (skip telephonenumber) replication M1 = topo.ms["master1"] M2 = topo.ms["master2"] M3 = topo.ms["master3"] M4 = topo.ms["master4"] repl = ReplicationManager(DEFAULT_SUFFIX) fractional_server_to_replica(M1, M2) fractional_server_to_replica(M1, M3) fractional_server_to_replica(M1, M4) fractional_server_to_replica(M2, M1) fractional_server_to_replica(M2, M3) fractional_server_to_replica(M2, M4) fractional_server_to_replica(M3, M1) fractional_server_to_replica(M3, M2) fractional_server_to_replica(M3, M4) fractional_server_to_replica(M4, M1) fractional_server_to_replica(M4, M2) fractional_server_to_replica(M4, M3) # Step 2 - enable internal op logging and replication debug for i in (M1, M2, M3, M4): i.config.loglevel(vals=[256 + 4], service='access') i.config.loglevel(vals=[LOG_REPLICA, LOG_DEFAULT], service='error') # Step 3 - Check that replication is working fine add_user(M1, 11, desc="add to M1") add_user(M2, 21, desc="add to M2") add_user(M3, 31, desc="add to M3") add_user(M4, 41, desc="add to M4") for i in (M1, M2, M3, M4): for j in (M1, M2, M3, M4): if i == j: continue repl.wait_for_replication(i, j) # Step 4 - Generate skipped updates to create keep alive entries for i in (M1, M2, M3, M4): cn = '%s_%d' % (USER_CN, 11) dn = 'uid=%s,ou=People,%s' % (cn, SUFFIX) users = UserAccount(i, dn) for j in range(110): users.set('telephoneNumber', str(j)) # Step 5 - Remove M3 from the topology M3.stop() M1.agreement.delete(suffix=SUFFIX, consumer_host=M3.host, consumer_port=M3.port) M2.agreement.delete(suffix=SUFFIX, consumer_host=M3.host, consumer_port=M3.port) M4.agreement.delete(suffix=SUFFIX, consumer_host=M3.host, consumer_port=M3.port) # Step 6 - Then issue cleanAllRuv FORCE that will run on M1, M2 and M4 M1.tasks.cleanAllRUV(suffix=SUFFIX, replicaid='3', force=True, args={TASK_WAIT: True}) # Step 7 - Count the number of received DEL of the keep alive 3 for i in (M1, M2, M4): i.restart() regex = re.compile(".*DEL dn=.cn=repl keep alive 3.*") for i in (M1, M2, M4): count = count_pattern_accesslog(M1, regex) log.debug("count on %s = %d" % (i, count)) # check that DEL is replicated once (If DEL is kept in the fix) # check that DEL is is not replicated (If DEL is finally no long done in the fix) assert ((count == 1) or (count == 0)) # Step 8 - Check that M1 is Originator of cleanAllRuv and M2, M4 propagation regex = re.compile(".*Original task 
deletes Keep alive entry .3.*") assert pattern_errorlog(M1, regex) regex = re.compile(".*Propagated task does not delete Keep alive entry .3.*") assert pattern_errorlog(M2, regex) assert pattern_errorlog(M4, regex) # Step 9 - Check replication M1,M2 and M4 can recover add_user(M1, 12, desc="add to M1") add_user(M2, 22, desc="add to M2") for i in (M1, M2, M4): for j in (M1, M2, M4): if i == j: continue repl.wait_for_replication(i, j) # Step 10 - Remove M4 from the topology M4.stop() M1.agreement.delete(suffix=SUFFIX, consumer_host=M4.host, consumer_port=M4.port) M2.agreement.delete(suffix=SUFFIX, consumer_host=M4.host, consumer_port=M4.port) # Step 11 - Issue cleanAllRuv not force while M2 is stopped (that hangs the cleanAllRuv) M2.stop() M1.tasks.cleanAllRUV(suffix=SUFFIX, replicaid='4', force=False, args={TASK_WAIT: False}) # Step 12 # CleanAllRuv is hanging waiting for M2 to restart # Check that nsds5ReplicaCleanRUV is correctly encoded on M1 replicas = Replicas(M1) replica = replicas.list()[0] time.sleep(0.5) replica.present('nsds5ReplicaCleanRUV') log.info("M1: nsds5ReplicaCleanRUV=%s" % replica.get_attr_val_utf8('nsds5replicacleanruv')) regex = re.compile("^4:.*:no:1$") > assert regex.match(replica.get_attr_val_utf8('nsds5replicacleanruv')) E AssertionError: assert None E + where None = <built-in method match of re.Pattern object at 0x7f61c2cd2650>('4:no:1:dc=example,dc=com') E + where <built-in method match of re.Pattern object at 0x7f61c2cd2650> = re.compile('^4:.*:no:1$').match E + and '4:no:1:dc=example,dc=com' = <bound method DSLdapObject.get_attr_val_utf8 of <lib389.replica.Replica object at 0x7f61c25ab4c0>>('nsds5replicacleanruv') E + where <bound method DSLdapObject.get_attr_val_utf8 of <lib389.replica.Replica object at 0x7f61c25ab4c0>> = <lib389.replica.Replica object at 0x7f61c25ab4c0>.get_attr_val_utf8 /export/tests/tickets/ticket49463_test.py:188: AssertionError -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39001, 'ldap-secureport': 63701, 'server-id': 'master1', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master2 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39002, 'ldap-secureport': 63702, 'server-id': 'master2', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master3 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39003, 'ldap-secureport': 63703, 'server-id': 'master3', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master4 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39004, 'ldap-secureport': 63704, 'server-id': 'master4', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.topologies:topologies.py:142 Creating replication topology. [32mINFO [0m lib389.topologies:topologies.py:156 Joining master master2 to master1 ... 
[32mINFO [0m lib389.replica:replica.py:2084 SUCCESS: bootstrap to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 completed [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is was created [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is was created [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect c4699a07-f205-4259-ad9a-a80918358b6c / got description=None) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect df454208-a46e-4ed8-847e-438057b48158 / got description=c4699a07-f205-4259-ad9a-a80918358b6c) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2153 SUCCESS: joined master from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 [32mINFO [0m lib389.topologies:topologies.py:156 Joining master master3 to master1 ... 
[32mINFO [0m lib389.replica:replica.py:2084 SUCCESS: bootstrap to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 completed [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is was created [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is was created [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is NOT working (expect 11d6075a-dc06-4f32-a927-f4a882df0ced / got description=df454208-a46e-4ed8-847e-438057b48158) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 258edeae-e6e6-47fd-ae26-909f2b52da6d / got description=11d6075a-dc06-4f32-a927-f4a882df0ced) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 258edeae-e6e6-47fd-ae26-909f2b52da6d / got description=11d6075a-dc06-4f32-a927-f4a882df0ced) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 258edeae-e6e6-47fd-ae26-909f2b52da6d / got description=11d6075a-dc06-4f32-a927-f4a882df0ced) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2153 SUCCESS: joined master from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 [32mINFO [0m lib389.topologies:topologies.py:156 Joining master master4 to master1 ... 
[32mINFO [0m lib389.replica:replica.py:2084 SUCCESS: bootstrap to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 completed [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is was created [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is was created [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect d0a9924b-4331-4b59-84dd-9d9673867be2 / got description=258edeae-e6e6-47fd-ae26-909f2b52da6d) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 0ec0de1d-797d-4862-ae52-4c00fd1e1aa1 / got description=d0a9924b-4331-4b59-84dd-9d9673867be2) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2153 SUCCESS: joined master from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master1 to master2 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 already exists [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master1 to master3 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 already exists [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master1 to master4 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 already exists [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master2 to master1 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 already exists [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master2 to master3 ... [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is was created [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master2 to master4 ... 
[32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is was created [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master3 to master1 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 already exists [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master3 to master2 ... [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is was created [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master3 to master4 ... [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is was created [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master4 to master1 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 already exists [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master4 to master2 ... [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is was created [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master4 to master3 ... 
[32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is was created -------------------------------Captured log call-------------------------------- [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 already exists [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 already exists [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 already exists [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 already exists [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 already exists [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 already exists [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 already exists [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 already exists [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 already exists [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 already exists [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 already exists [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 already exists [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect a644ee8c-42a2-4dfd-90ce-c27331ea2ac6 / got description=0ec0de1d-797d-4862-ae52-4c00fd1e1aa1) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to 
ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is NOT working (expect 0cc64442-2f62-4e8b-a8a7-dcf8097580bb / got description=a644ee8c-42a2-4dfd-90ce-c27331ea2ac6) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect 20340fab-e346-4f4f-8f49-68e5035978ed / got description=0cc64442-2f62-4e8b-a8a7-dcf8097580bb) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect 20340fab-e346-4f4f-8f49-68e5035978ed / got description=0cc64442-2f62-4e8b-a8a7-dcf8097580bb) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect ff9d328e-1863-443a-9460-119804cf4769 / got description=20340fab-e346-4f4f-8f49-68e5035978ed) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect ff9d328e-1863-443a-9460-119804cf4769 / got description=20340fab-e346-4f4f-8f49-68e5035978ed) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect ff9d328e-1863-443a-9460-119804cf4769 / got description=20340fab-e346-4f4f-8f49-68e5035978ed) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect ff9d328e-1863-443a-9460-119804cf4769 / got description=20340fab-e346-4f4f-8f49-68e5035978ed) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect ff9d328e-1863-443a-9460-119804cf4769 / got description=20340fab-e346-4f4f-8f49-68e5035978ed) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is NOT working (expect 6a26a30b-4a70-4e27-92fa-18c5f3f6d8d8 / got description=ff9d328e-1863-443a-9460-119804cf4769) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working [32mINFO [0m 
lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect 7a182060-700e-41da-8fe4-123b056429c7 / got description=6a26a30b-4a70-4e27-92fa-18c5f3f6d8d8) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect c5198d23-5cb8-4986-9f78-01dacfde8458 / got description=7a182060-700e-41da-8fe4-123b056429c7) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 4459fd3b-dc2f-44b2-af46-b7a36efa6719 / got description=c5198d23-5cb8-4986-9f78-01dacfde8458) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect 91e6baa7-64c2-4abe-8259-5d7a19f6beee / got description=c5198d23-5cb8-4986-9f78-01dacfde8458) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect 91e6baa7-64c2-4abe-8259-5d7a19f6beee / got description=4459fd3b-dc2f-44b2-af46-b7a36efa6719) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 97bd007d-1190-4a5c-b617-2499aa6f7157 / got description=91e6baa7-64c2-4abe-8259-5d7a19f6beee) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect d912da07-83c4-48a9-a454-1c562ab33738 / got description=97bd007d-1190-4a5c-b617-2499aa6f7157) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect d912da07-83c4-48a9-a454-1c562ab33738 / got description=97bd007d-1190-4a5c-b617-2499aa6f7157) [32mINFO [0m 
lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect d912da07-83c4-48a9-a454-1c562ab33738 / got description=97bd007d-1190-4a5c-b617-2499aa6f7157) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is NOT working (expect 5984cc20-d507-4882-b0ad-345405d47688 / got description=d912da07-83c4-48a9-a454-1c562ab33738) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working [32mINFO [0m lib389:agreement.py:1095 Agreement (cn=003,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config) was successfully removed [32mINFO [0m lib389:agreement.py:1095 Agreement (cn=003,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config) was successfully removed [32mINFO [0m lib389:agreement.py:1095 Agreement (cn=003,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config) was successfully removed [32mINFO [0m lib389:tasks.py:1400 cleanAllRUV task (task-10312020_004732) completed successfully [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 27b3a275-12e7-4b6d-ac6e-ef0514b38e13 / got description=5984cc20-d507-4882-b0ad-345405d47688) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect 128f3ea2-935c-4b39-a2f0-06400b861b3c / got description=27b3a275-12e7-4b6d-ac6e-ef0514b38e13) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect da43998b-5c61-4e16-81a2-6617693d4c98 / got description=128f3ea2-935c-4b39-a2f0-06400b861b3c) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect da43998b-5c61-4e16-81a2-6617693d4c98 / got description=128f3ea2-935c-4b39-a2f0-06400b861b3c) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect da43998b-5c61-4e16-81a2-6617693d4c98 / got description=128f3ea2-935c-4b39-a2f0-06400b861b3c) 
[32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect da43998b-5c61-4e16-81a2-6617693d4c98 / got description=128f3ea2-935c-4b39-a2f0-06400b861b3c) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is NOT working (expect 24c6ad35-d4ed-41a2-8c29-eb3d36b1d6f7 / got description=da43998b-5c61-4e16-81a2-6617693d4c98) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 1e49be48-a775-4d2d-a0b7-ff027540922d / got description=da43998b-5c61-4e16-81a2-6617693d4c98) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 1e49be48-a775-4d2d-a0b7-ff027540922d / got description=da43998b-5c61-4e16-81a2-6617693d4c98) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 1e49be48-a775-4d2d-a0b7-ff027540922d / got description=da43998b-5c61-4e16-81a2-6617693d4c98) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect e6481198-70de-47ab-a487-3163b555f679 / got description=1e49be48-a775-4d2d-a0b7-ff027540922d) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39004 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389:agreement.py:1095 Agreement (cn=004,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config) was successfully removed [32mINFO [0m lib389:agreement.py:1095 Agreement (cn=004,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config) was successfully removed [32mINFO [0m lib389:tasks.py:1400 cleanAllRUV task (task-10312020_004825) completed successfully [32mINFO [0m lib389.utils:ticket49463_test.py:186 M1: nsds5ReplicaCleanRUV=4:no:1:dc=example,dc=com | |||
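The teardown log above ends with master1 still carrying nsds5ReplicaCleanRUV=4:no:1:dc=example,dc=com, which suggests the cleanAllRUV work for rid 4 was still pending when the topology-wide replication check gave up. A minimal sketch, assuming the lib389 Replicas API and the DEFAULT_SUFFIX constant already used throughout this suite, of polling that marker before asserting topology health:

    import time
    from lib389._constants import DEFAULT_SUFFIX
    from lib389.replica import Replicas

    def wait_for_ruv_cleanup(inst, rid='4', timeout=30):
        # Poll the replica entry until no pending cleanAllRUV marker for this rid remains.
        replica = Replicas(inst).get(DEFAULT_SUFFIX)
        for _ in range(timeout):
            pending = replica.get_attr_vals_utf8('nsds5ReplicaCleanRUV')
            if not any(val.startswith(rid + ':') for val in pending):
                return True
            time.sleep(1)
        return False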
Failed | tickets/ticket50232_test.py::test_ticket50232_normal | 0.72 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c1ad7ac0> def test_ticket50232_normal(topology_st): """ The fix for ticket 50232 The test sequence is: - create suffix - add suffix entry and some child entries - "normally" done after populating suffix: enable replication - get RUV and database generation - export -r - import - get RUV and database generation - assert database generation has not changed """ log.info('Testing Ticket 50232 - export creates not imprtable ldif file, normal creation order') topology_st.standalone.backend.create(NORMAL_SUFFIX, {BACKEND_NAME: NORMAL_BACKEND_NAME}) topology_st.standalone.mappingtree.create(NORMAL_SUFFIX, bename=NORMAL_BACKEND_NAME, parent=None) _populate_suffix(topology_st.standalone, NORMAL_BACKEND_NAME) repl = ReplicationManager(DEFAULT_SUFFIX) > repl._ensure_changelog(topology_st.standalone) /export/tests/tickets/ticket50232_test.py:113: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/lib389/replica.py:1928: in _ensure_changelog cl.create(properties={ /usr/local/lib/python3.8/site-packages/lib389/_mapped_object.py:971: in create return self._create(rdn, properties, basedn, ensure=False) /usr/local/lib/python3.8/site-packages/lib389/_mapped_object.py:946: in _create self._instance.add_ext_s(e, serverctrls=self._server_controls, clientctrls=self._client_controls, escapehatch='i am sure') /usr/local/lib/python3.8/site-packages/lib389/__init__.py:176: in inner return f(ent.dn, ent.toTupleList(), *args[2:]) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:425: in add_ext_s resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:764: in result3 resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4( /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:774: in result4 ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: in _ldap_call reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: in reraise raise exc_value _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c1ad75b0> func = <built-in method result4 of LDAP object at 0x7f61c18730f0> args = (13, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.UNWILLING_TO_PERFORM: {'msgtype': 105, 'msgid': 13, 'result': 53, 'desc': 'Server is unwilling to perform', 'ctrls': 
[], 'info': 'Changelog configuration is part of the backend configuration'} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: UNWILLING_TO_PERFORM -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. -------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:backend.py:80 List backend with suffix=o=normal [32mINFO [0m lib389:backend.py:290 Creating a local backend [32mINFO [0m lib389:backend.py:76 List backend cn=normal,cn=ldbm database,cn=plugins,cn=config [32mINFO [0m lib389:__init__.py:1713 Found entry dn: cn=normal,cn=ldbm database,cn=plugins,cn=config cn: normal nsslapd-cachememsize: 512000 nsslapd-cachesize: -1 nsslapd-directory: /var/lib/dirsrv/slapd-standalone1/db/normal nsslapd-dncachememsize: 16777216 nsslapd-readonly: off nsslapd-require-index: off nsslapd-require-internalop-index: off nsslapd-suffix: o=normal objectClass: top objectClass: extensibleObject objectClass: nsBackendInstance [32mINFO [0m lib389:mappingTree.py:154 Entry dn: cn="o=normal",cn=mapping tree,cn=config cn: o=normal nsslapd-backend: normal nsslapd-state: backend objectclass: top objectclass: extensibleObject objectclass: nsMappingTree [32mINFO [0m lib389:__init__.py:1713 Found entry dn: cn=o\3Dnormal,cn=mapping tree,cn=config cn: o=normal nsslapd-backend: normal nsslapd-state: backend objectClass: top objectClass: extensibleObject objectClass: nsMappingTree | |||
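The server's refusal above ("Changelog configuration is part of the backend configuration") indicates that on this 389-ds-base build the replication changelog is configured as part of the backend rather than as a separate global entry, so the direct changelog add performed by the test is rejected. A minimal sketch, relying only on the private _ensure_changelog() helper the test already calls, of a compatibility guard that tolerates the new layout:

    import ldap

    def ensure_changelog_compat(repl, instance):
        # repl is the ReplicationManager the test builds; on servers where the changelog
        # lives in the backend configuration, the legacy changelog add is simply skipped.
        try:
            repl._ensure_changelog(instance)
        except ldap.UNWILLING_TO_PERFORM as e:
            info = e.args[0].get('info', '')
            if 'Changelog configuration is part of the backend configuration' not in info:
                raise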
Failed | tickets/ticket50232_test.py::test_ticket50232_reverse | 0.31 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c1ad7ac0> def test_ticket50232_reverse(topology_st): """ The fix for ticket 50232 The test sequence is: - create suffix - enable replication before suffix enztry is added - add suffix entry and some child entries - get RUV and database generation - export -r - import - get RUV and database generation - assert database generation has not changed """ log.info('Testing Ticket 50232 - export creates not imprtable ldif file, normal creation order') # # Setup Replication # log.info('Setting up replication...') repl = ReplicationManager(DEFAULT_SUFFIX) # repl.create_first_master(topology_st.standalone) # # enable dynamic plugins, memberof and retro cl plugin # topology_st.standalone.backend.create(REVERSE_SUFFIX, {BACKEND_NAME: REVERSE_BACKEND_NAME}) topology_st.standalone.mappingtree.create(REVERSE_SUFFIX, bename=REVERSE_BACKEND_NAME, parent=None) > _enable_replica(topology_st.standalone, REVERSE_SUFFIX) /export/tests/tickets/ticket50232_test.py:155: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /export/tests/tickets/ticket50232_test.py:35: in _enable_replica repl._ensure_changelog(instance) /usr/local/lib/python3.8/site-packages/lib389/replica.py:1928: in _ensure_changelog cl.create(properties={ /usr/local/lib/python3.8/site-packages/lib389/_mapped_object.py:971: in create return self._create(rdn, properties, basedn, ensure=False) /usr/local/lib/python3.8/site-packages/lib389/_mapped_object.py:946: in _create self._instance.add_ext_s(e, serverctrls=self._server_controls, clientctrls=self._client_controls, escapehatch='i am sure') /usr/local/lib/python3.8/site-packages/lib389/__init__.py:176: in inner return f(ent.dn, ent.toTupleList(), *args[2:]) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:425: in add_ext_s resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:764: in result3 resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4( /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:774: in result4 ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: in _ldap_call reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: in reraise raise exc_value _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c1ad75b0> func = <built-in method result4 of LDAP object at 0x7f61c18730f0> args = (22, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: 
traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.UNWILLING_TO_PERFORM: {'msgtype': 105, 'msgid': 22, 'result': 53, 'desc': 'Server is unwilling to perform', 'ctrls': [], 'info': 'Changelog configuration is part of the backend configuration'} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: UNWILLING_TO_PERFORM -------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:backend.py:80 List backend with suffix=o=reverse [32mINFO [0m lib389:backend.py:290 Creating a local backend [32mINFO [0m lib389:backend.py:76 List backend cn=reverse,cn=ldbm database,cn=plugins,cn=config [32mINFO [0m lib389:__init__.py:1713 Found entry dn: cn=reverse,cn=ldbm database,cn=plugins,cn=config cn: reverse nsslapd-cachememsize: 512000 nsslapd-cachesize: -1 nsslapd-directory: /var/lib/dirsrv/slapd-standalone1/db/reverse nsslapd-dncachememsize: 16777216 nsslapd-readonly: off nsslapd-require-index: off nsslapd-require-internalop-index: off nsslapd-suffix: o=reverse objectClass: top objectClass: extensibleObject objectClass: nsBackendInstance [32mINFO [0m lib389:mappingTree.py:154 Entry dn: cn="o=reverse",cn=mapping tree,cn=config cn: o=reverse nsslapd-backend: reverse nsslapd-state: backend objectclass: top objectclass: extensibleObject objectclass: nsMappingTree [32mINFO [0m lib389:__init__.py:1713 Found entry dn: cn=o\3Dreverse,cn=mapping tree,cn=config cn: o=reverse nsslapd-backend: reverse nsslapd-state: backend objectClass: top objectClass: extensibleObject objectClass: nsMappingTree | |||
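test_ticket50232_reverse fails at the same point, inside _enable_replica(), for the same reason as the previous case. A minimal sketch, under the assumption that the create_first_master() helper (already referenced, commented out, in the test body) performs the changelog setup appropriate for this server layout:

    from lib389.replica import ReplicationManager

    def enable_replica_compat(instance, suffix):
        # Let ReplicationManager drive replica enablement instead of adding the changelog
        # entry by hand; whether this fully covers the per-backend changelog layout
        # reported above should be verified against the installed lib389 version.
        repl = ReplicationManager(suffix)
        repl.create_first_master(instance)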
Failed | tickets/ticket548_test.py::test_ticket548_test_with_no_policy | 0.10 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c1c70f10> def test_ticket548_test_with_no_policy(topology_st): """ Check shadowAccount under no password policy """ log.info("Case 1. No password policy") log.info("Bind as %s" % DN_DM) topology_st.standalone.simple_bind_s(DN_DM, PASSWORD) log.info('Add an entry' + USER1_DN) try: topology_st.standalone.add_s( Entry((USER1_DN, {'objectclass': "top person organizationalPerson inetOrgPerson shadowAccount".split(), 'sn': '1', 'cn': 'user 1', 'uid': 'user1', 'givenname': 'user', 'mail': 'user1@' + DEFAULT_SUFFIX, 'userpassword': USER_PW}))) except ldap.LDAPError as e: log.fatal('test_ticket548: Failed to add user' + USER1_DN + ': error ' + e.message['desc']) assert False edate = int(time.time() / (60 * 60 * 24)) log.info('Search entry %s' % USER1_DN) log.info("Bind as %s" % USER1_DN) topology_st.standalone.simple_bind_s(USER1_DN, USER_PW) > entry = topology_st.standalone.getEntry(USER1_DN, ldap.SCOPE_BASE, "(objectclass=*)", ['shadowLastChange']) /export/tests/tickets/ticket548_test.py:211: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c1c70d30> args = ('uid=user1,dc=example,dc=com', 0, '(objectclass=*)', ['shadowLastChange']) kwargs = {}, res = 6, restype = 101, obj = [] def getEntry(self, *args, **kwargs): """Wrapper around SimpleLDAPObject.search. It is common to just get one entry. @param - entry dn @param - search scope, in ldap.SCOPE_BASE (default), ldap.SCOPE_SUB, ldap.SCOPE_ONE @param filterstr - filterstr, default '(objectClass=*)' from SimpleLDAPObject @param attrlist - list of attributes to retrieve. eg ['cn', 'uid'] @oaram attrsonly - default None from SimpleLDAPObject eg. getEntry(dn, scope, filter, attributes) XXX This cannot return None """ self.log.debug("Retrieving entry with %r", [args]) if len(args) == 1 and 'scope' not in kwargs: args += (ldap.SCOPE_BASE, ) res = self.search(*args, **kwargs) restype, obj = self.result(res) # TODO: why not test restype? if not obj: > raise NoSuchEntryError("no such entry for %r", [args]) E lib389.exceptions.NoSuchEntryError: ('no such entry for %r', [('uid=user1,dc=example,dc=com', 0, '(objectclass=*)', ['shadowLastChange'])]) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:1700: NoSuchEntryError -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. | |||
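The NoSuchEntryError above is raised because the search, performed after binding as uid=user1, returns no entry; that can mean either that the add itself failed or that the user simply cannot read its own entry. A minimal sketch, assuming only the python-ldap calls the test already uses, that separates the two cases before asserting:

    import ldap

    def entry_visible(inst, user_dn, user_pw, dm_dn, dm_pw):
        # Compare what Directory Manager sees with what the user sees for the same entry.
        inst.simple_bind_s(dm_dn, dm_pw)
        as_dm = inst.search_s(user_dn, ldap.SCOPE_BASE, '(objectclass=*)', ['shadowLastChange'])
        inst.simple_bind_s(user_dn, user_pw)
        as_self = inst.search_s(user_dn, ldap.SCOPE_BASE, '(objectclass=*)', ['shadowLastChange'])
        return bool(as_dm), bool(as_self)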
Failed | tickets/ticket548_test.py::test_ticket548_test_global_policy | 0.16 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c1c70f10> def test_ticket548_test_global_policy(topology_st): """ Check shadowAccount with global password policy """ log.info("Case 2. Check shadowAccount with global password policy") log.info("Bind as %s" % DN_DM) topology_st.standalone.simple_bind_s(DN_DM, PASSWORD) set_global_pwpolicy(topology_st) log.info('Add an entry' + USER2_DN) try: topology_st.standalone.add_s( Entry((USER2_DN, {'objectclass': "top person organizationalPerson inetOrgPerson shadowAccount".split(), 'sn': '2', 'cn': 'user 2', 'uid': 'user2', 'givenname': 'user', 'mail': 'user2@' + DEFAULT_SUFFIX, 'userpassword': USER_PW}))) except ldap.LDAPError as e: log.fatal('test_ticket548: Failed to add user' + USER2_DN + ': error ' + e.message['desc']) assert False edate = int(time.time() / (60 * 60 * 24)) log.info("Bind as %s" % USER1_DN) topology_st.standalone.simple_bind_s(USER1_DN, USER_PW) log.info('Search entry %s' % USER1_DN) > entry = topology_st.standalone.getEntry(USER1_DN, ldap.SCOPE_BASE, "(objectclass=*)") /export/tests/tickets/ticket548_test.py:249: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c1c70d30> args = ('uid=user1,dc=example,dc=com', 0, '(objectclass=*)'), kwargs = {} res = 15, restype = 101, obj = [] def getEntry(self, *args, **kwargs): """Wrapper around SimpleLDAPObject.search. It is common to just get one entry. @param - entry dn @param - search scope, in ldap.SCOPE_BASE (default), ldap.SCOPE_SUB, ldap.SCOPE_ONE @param filterstr - filterstr, default '(objectClass=*)' from SimpleLDAPObject @param attrlist - list of attributes to retrieve. eg ['cn', 'uid'] @oaram attrsonly - default None from SimpleLDAPObject eg. getEntry(dn, scope, filter, attributes) XXX This cannot return None """ self.log.debug("Retrieving entry with %r", [args]) if len(args) == 1 and 'scope' not in kwargs: args += (ldap.SCOPE_BASE, ) res = self.search(*args, **kwargs) restype, obj = self.result(res) # TODO: why not test restype? if not obj: > raise NoSuchEntryError("no such entry for %r", [args]) E lib389.exceptions.NoSuchEntryError: ('no such entry for %r', [('uid=user1,dc=example,dc=com', 0, '(objectclass=*)')]) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:1700: NoSuchEntryError | |||
Failed | tickets/ticket548_test.py::test_ticket548_test_subtree_policy | 2.21 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c1c70f10> user = 'uid=user3,dc=example,dc=com', passwd = 'password' newpasswd = 'password0' def update_passwd(topology_st, user, passwd, newpasswd): log.info(" Bind as {%s,%s}" % (user, passwd)) topology_st.standalone.simple_bind_s(user, passwd) try: > topology_st.standalone.modify_s(user, [(ldap.MOD_REPLACE, 'userpassword', newpasswd.encode())]) /export/tests/tickets/ticket548_test.py:160: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = ('uid=user3,dc=example,dc=com', [(2, 'userpassword', b'password0')]) kwargs = {} c_stack = [FrameInfo(frame=<frame at 0x7f61c2d2d440, file '/usr/local/lib/python3.8/site-packages/lib389/__init__.py', line 180,...mbda>', code_context=[' self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(\n'], index=0), ...] frame = FrameInfo(frame=<frame at 0x5576b8e65520, file '/export/tests/tickets/ticket548_test.py', line 164, code update_passwd...[" topology_st.standalone.modify_s(user, [(ldap.MOD_REPLACE, 'userpassword', newpasswd.encode())])\n"], index=0) def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c1c70d30> dn = 'uid=user3,dc=example,dc=com' modlist = [(2, 'userpassword', b'password0')] def modify_s(self,dn,modlist): > return self.modify_ext_s(dn,modlist,None,None) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:640: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = ('uid=user3,dc=example,dc=com', [(2, 'userpassword', b'password0')], None, None) kwargs = {} def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and 
kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c1c70d30> dn = 'uid=user3,dc=example,dc=com' modlist = [(2, 'userpassword', b'password0')], serverctrls = None clientctrls = None def modify_ext_s(self,dn,modlist,serverctrls=None,clientctrls=None): msgid = self.modify_ext(dn,modlist,serverctrls,clientctrls) > resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:613: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (34,), kwargs = {'all': 1, 'timeout': -1} def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c1c70d30>, msgid = 34, all = 1 timeout = -1, resp_ctrl_classes = None def result3(self,msgid=ldap.RES_ANY,all=1,timeout=None,resp_ctrl_classes=None): > resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4( msgid,all,timeout, add_ctrls=0,add_intermediates=0,add_extop=0, resp_ctrl_classes=resp_ctrl_classes ) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:764: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (34, 1, -1) kwargs = {'add_ctrls': 0, 'add_extop': 0, 'add_intermediates': 0, 'resp_ctrl_classes': None} def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c1c70d30>, msgid = 34, all = 1 timeout = -1, add_ctrls = 0, add_intermediates = 0, add_extop = 0 resp_ctrl_classes = None def result4(self,msgid=ldap.RES_ANY,all=1,timeout=None,add_ctrls=0,add_intermediates=0,add_extop=0,resp_ctrl_classes=None): if timeout is None: timeout = self.timeout > ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:774: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (<built-in method result4 of LDAP object at 0x7f61c1ac2d20>, 34, 1, -1, 0, 0, ...) kwargs = {} def inner(*args, **kwargs): if name in [ 'add_s', 'bind_s', 'delete_s', 'modify_s', 'modrdn_s', 'rename_s', 'sasl_interactive_bind_s', 'search_s', 'search_ext_s', 'simple_bind_s', 'unbind_s', 'getEntry', ] and not ('escapehatch' in kwargs and kwargs['escapehatch'] == 'i am sure'): c_stack = inspect.stack() frame = c_stack[1] warnings.warn(DeprecationWarning("Use of raw ldap function %s. This will be removed in a future release. " "Found in: %s:%s" % (name, frame.filename, frame.lineno))) # Later, we will add a sleep here to make it even more painful. # Finally, it will raise an exception. 
elif 'escapehatch' in kwargs: kwargs.pop('escapehatch') if name == 'result': objtype, data = f(*args, **kwargs) # data is either a 2-tuple or a list of 2-tuples # print data if data: if isinstance(data, tuple): return objtype, Entry(data) elif isinstance(data, list): # AD sends back these search references # if objtype == ldap.RES_SEARCH_RESULT and \ # isinstance(data[-1],tuple) and \ # not data[-1][0]: # print "Received search reference: " # pprint.pprint(data[-1][1]) # data.pop() # remove the last non-entry element return objtype, [Entry(x) for x in data] else: raise TypeError("unknown data type %s returned by result" % type(data)) else: return objtype, data elif name.startswith('add'): # the first arg is self # the second and third arg are the dn and the data to send # We need to convert the Entry into the format used by # python-ldap ent = args[0] if isinstance(ent, Entry): return f(ent.dn, ent.toTupleList(), *args[2:]) else: return f(*args, **kwargs) else: > return f(*args, **kwargs) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c1c70d30> func = <built-in method result4 of LDAP object at 0x7f61c1ac2d20> args = (34, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: result = func(*args,**kwargs) if __debug__ and self._trace_level>=2: if func.__name__!="unbind_ext": diagnostic_message_success = self._l.get_option(ldap.OPT_DIAGNOSTIC_MESSAGE) finally: self._ldap_object_lock.release() except LDAPError as e: exc_type,exc_value,exc_traceback = sys.exc_info() try: if 'info' not in e.args[0] and 'errno' in e.args[0]: e.args[0]['info'] = strerror(e.args[0]['errno']) except IndexError: pass if __debug__ and self._trace_level>=2: self._trace_file.write('=> LDAPError - %s: %s\n' % (e.__class__.__name__,str(e))) try: > reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ exc_type = <class 'ldap.INSUFFICIENT_ACCESS'> exc_value = INSUFFICIENT_ACCESS({'msgtype': 103, 'msgid': 34, 'result': 50, 'desc': 'Insufficient access', 'ctrls': [], 'info': "Insufficient 'write' privilege to the 'userPassword' attribute of entry 'uid=user3,dc=example,dc=com'.\n"}) exc_traceback = <traceback object at 0x7f61c1b9f7c0> def reraise(exc_type, exc_value, exc_traceback): """Re-raise an exception given information from sys.exc_info() Note that unlike six.reraise, this does not support replacing the traceback. All arguments must come from a single sys.exc_info() call. """ # In Python 3, all exception info is contained in one object. 
> raise exc_value /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61c1c70d30> func = <built-in method result4 of LDAP object at 0x7f61c1ac2d20> args = (34, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - %s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.INSUFFICIENT_ACCESS: {'msgtype': 103, 'msgid': 34, 'result': 50, 'desc': 'Insufficient access', 'ctrls': [], 'info': "Insufficient 'write' privilege to the 'userPassword' attribute of entry 'uid=user3,dc=example,dc=com'.\n"} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: INSUFFICIENT_ACCESS During handling of the above exception, another exception occurred: topology_st = <lib389.topologies.TopologyMain object at 0x7f61c1c70f10> def test_ticket548_test_subtree_policy(topology_st): """ Check shadowAccount with subtree level password policy """ log.info("Case 3. Check shadowAccount with subtree level password policy") log.info("Bind as %s" % DN_DM) topology_st.standalone.simple_bind_s(DN_DM, PASSWORD) # Check the global policy values set_subtree_pwpolicy(topology_st, 2, 20, 6) log.info('Add an entry' + USER3_DN) try: topology_st.standalone.add_s( Entry((USER3_DN, {'objectclass': "top person organizationalPerson inetOrgPerson shadowAccount".split(), 'sn': '3', 'cn': 'user 3', 'uid': 'user3', 'givenname': 'user', 'mail': 'user3@' + DEFAULT_SUFFIX, 'userpassword': USER_PW}))) except ldap.LDAPError as e: log.fatal('test_ticket548: Failed to add user' + USER3_DN + ': error ' + e.message['desc']) assert False log.info('Search entry %s' % USER3_DN) entry0 = topology_st.standalone.getEntry(USER3_DN, ldap.SCOPE_BASE, "(objectclass=*)") log.info('Expecting shadowLastChange 0 since passwordMustChange is on') check_shadow_attr_value(entry0, 'shadowLastChange', 0, USER3_DN) # passwordMinAge -- 2 day check_shadow_attr_value(entry0, 'shadowMin', 2, USER3_DN) # passwordMaxAge -- 20 days check_shadow_attr_value(entry0, 'shadowMax', 20, USER3_DN) # passwordWarning -- 6 days check_shadow_attr_value(entry0, 'shadowWarning', 6, USER3_DN) log.info("Bind as %s" % USER3_DN) topology_st.standalone.simple_bind_s(USER3_DN, USER_PW) log.info('Search entry %s' % USER3_DN) try: entry1 = topology_st.standalone.getEntry(USER3_DN, ldap.SCOPE_BASE, "(objectclass=*)") except ldap.UNWILLING_TO_PERFORM: log.info('test_ticket548: Search by' + USER3_DN + ' failed by UNWILLING_TO_PERFORM as expected') except ldap.LDAPError as e: log.fatal('test_ticket548: Failed to serch user' + USER3_DN + ' by self: error ' + e.message['desc']) assert False log.info("Bind as %s and updating the password with a new one" % USER3_DN) topology_st.standalone.simple_bind_s(USER3_DN, USER_PW) # Bind as DM again, change policy log.info("Bind as %s" % DN_DM) topology_st.standalone.simple_bind_s(DN_DM, PASSWORD) set_subtree_pwpolicy(topology_st, 4, 40, 12) newpasswd = USER_PW + '0' > update_passwd(topology_st, 
USER3_DN, USER_PW, newpasswd) /export/tests/tickets/ticket548_test.py:372: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ topology_st = <lib389.topologies.TopologyMain object at 0x7f61c1c70f10> user = 'uid=user3,dc=example,dc=com', passwd = 'password' newpasswd = 'password0' def update_passwd(topology_st, user, passwd, newpasswd): log.info(" Bind as {%s,%s}" % (user, passwd)) topology_st.standalone.simple_bind_s(user, passwd) try: topology_st.standalone.modify_s(user, [(ldap.MOD_REPLACE, 'userpassword', newpasswd.encode())]) except ldap.LDAPError as e: > log.fatal('test_ticket548: Failed to update the password ' + cpw + ' of user ' + user + ': error ' + e.message[ 'desc']) E NameError: name 'cpw' is not defined /export/tests/tickets/ticket548_test.py:162: NameError | |||
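The final NameError comes from the test's own exception handler: it references an undefined name cpw (and the Python 2 style e.message), which masks the underlying INSUFFICIENT_ACCESS on userPassword that actually aborted the case. A minimal corrected sketch of that handler, assuming the test module's logger and the same arguments:

    import ldap
    import logging

    log = logging.getLogger(__name__)

    def update_passwd(topology_st, user, passwd, newpasswd):
        log.info(" Bind as {%s,%s}" % (user, passwd))
        topology_st.standalone.simple_bind_s(user, passwd)
        try:
            topology_st.standalone.modify_s(user, [(ldap.MOD_REPLACE, 'userpassword', newpasswd.encode())])
        except ldap.LDAPError as e:
            # Report the new password and the real LDAP diagnostic instead of the undefined 'cpw'.
            log.fatal('test_ticket548: Failed to update the password %s of user %s: error %s'
                      % (newpasswd, user, e.args[0].get('desc', str(e))))
            assert False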
XFailed | suites/acl/syntax_test.py::test_aci_invalid_syntax_fail[test_targattrfilters_18] | 0.01 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61d4f190d0> real_value = '(target = ldap:///cn=Jeff Vedder,ou=Product Development,dc=example,dc=com)(targetattr="*")(version 3.0; acl "Name of ...3123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123123";)' @pytest.mark.xfail(reason='https://bugzilla.redhat.com/show_bug.cgi?id=1691473') @pytest.mark.parametrize("real_value", [a[1] for a in FAILED], ids=[a[0] for a in FAILED]) def test_aci_invalid_syntax_fail(topo, real_value): """ Try to set wrong ACI syntax. :id: 83c40784-fff5-49c8-9535-7064c9c19e7e :parametrized: yes :setup: Standalone Instance :steps: 1. Create ACI 2. Try to setup the ACI with Instance :expectedresults: 1. It should pass 2. It should not pass """ domain = Domain(topo.standalone, DEFAULT_SUFFIX) with pytest.raises(ldap.INVALID_SYNTAX): > domain.add("aci", real_value) E Failed: DID NOT RAISE <class 'ldap.INVALID_SYNTAX'> suites/acl/syntax_test.py:213: Failed -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. | |||
XFailed | suites/acl/syntax_test.py::test_aci_invalid_syntax_fail[test_targattrfilters_20] | 0.02 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61d4f190d0> real_value = '(target = ldap:///cn=Jeff Vedder,ou=Product Development,dc=example,dc=com)(targetattr="*")(version 3.0; acl "Name of the ACI"; deny(write)userdns="ldap:///anyone";)' @pytest.mark.xfail(reason='https://bugzilla.redhat.com/show_bug.cgi?id=1691473') @pytest.mark.parametrize("real_value", [a[1] for a in FAILED], ids=[a[0] for a in FAILED]) def test_aci_invalid_syntax_fail(topo, real_value): """ Try to set wrong ACI syntax. :id: 83c40784-fff5-49c8-9535-7064c9c19e7e :parametrized: yes :setup: Standalone Instance :steps: 1. Create ACI 2. Try to setup the ACI with Instance :expectedresults: 1. It should pass 2. It should not pass """ domain = Domain(topo.standalone, DEFAULT_SUFFIX) with pytest.raises(ldap.INVALID_SYNTAX): > domain.add("aci", real_value) E Failed: DID NOT RAISE <class 'ldap.INVALID_SYNTAX'> suites/acl/syntax_test.py:213: Failed | |||
XFailed | suites/acl/syntax_test.py::test_aci_invalid_syntax_fail[test_bind_rule_set_with_more_than_three] | 0.01 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61d4f190d0> real_value = '(target = ldap:///dc=example,dc=com)(targetattr="*")(version 3.0; acl "Name of the ACI"; deny absolute (all)userdn="ldap:////////anyone";)' @pytest.mark.xfail(reason='https://bugzilla.redhat.com/show_bug.cgi?id=1691473') @pytest.mark.parametrize("real_value", [a[1] for a in FAILED], ids=[a[0] for a in FAILED]) def test_aci_invalid_syntax_fail(topo, real_value): """ Try to set wrong ACI syntax. :id: 83c40784-fff5-49c8-9535-7064c9c19e7e :parametrized: yes :setup: Standalone Instance :steps: 1. Create ACI 2. Try to setup the ACI with Instance :expectedresults: 1. It should pass 2. It should not pass """ domain = Domain(topo.standalone, DEFAULT_SUFFIX) with pytest.raises(ldap.INVALID_SYNTAX): > domain.add("aci", real_value) E Failed: DID NOT RAISE <class 'ldap.INVALID_SYNTAX'> suites/acl/syntax_test.py:213: Failed | |||
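In each of these xfailed cases the server accepts an ACI value the test expects to be syntactically invalid (the open bug referenced in the xfail marker), so pytest.raises reports DID NOT RAISE. A minimal sketch, assuming the same Domain object the test builds, for removing an unexpectedly accepted ACI so it cannot influence the remaining parametrized cases:

    from lib389.idm.domain import Domain

    def drop_aci_if_present(instance, suffix, aci_value):
        # If the server accepted the ACI instead of rejecting it, delete that single value.
        domain = Domain(instance, suffix)
        if aci_value in domain.get_attr_vals_utf8('aci'):
            domain.remove('aci', aci_value)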
XFailed | suites/acl/userattr_test.py::test_mod_see_also_positive[(LEVEL_3, CHILDREN)] | 0.07 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61d6b6a5e0> _add_user = None, user = 'uid=Grandparent,ou=Inheritance,dc=example,dc=com' entry = 'ou=CHILDREN,ou=PARENTS,ou=GRANDPARENTS,ou=ANCESTORS,ou=Inheritance,dc=example,dc=com' @pytest.mark.parametrize("user,entry", [ (CAN, ROLEDNACCESS), (CAN, USERDNACCESS), (CAN, GROUPDNACCESS), (CAN, LDAPURLACCESS), (CAN, ATTRNAMEACCESS), (LEVEL_0, OU_2), (LEVEL_1, ANCESTORS), (LEVEL_2, GRANDPARENTS), (LEVEL_4, OU_2), (LEVEL_4, ANCESTORS), (LEVEL_4, GRANDPARENTS), (LEVEL_4, PARENTS), (LEVEL_4, CHILDREN), pytest.param(LEVEL_3, CHILDREN, marks=pytest.mark.xfail(reason="May be some bug")), ], ids=[ "(CAN,ROLEDNACCESS)", "(CAN,USERDNACCESS)", "(CAN,GROUPDNACCESS)", "(CAN,LDAPURLACCESS)", "(CAN,ATTRNAMEACCESS)", "(LEVEL_0, OU_2)", "(LEVEL_1,ANCESTORS)", "(LEVEL_2,GRANDPARENTS)", "(LEVEL_4,OU_2)", "(LEVEL_4, ANCESTORS)", "(LEVEL_4,GRANDPARENTS)", "(LEVEL_4,PARENTS)", "(LEVEL_4,CHILDREN)", "(LEVEL_3, CHILDREN)" ]) def test_mod_see_also_positive(topo, _add_user, user, entry): """ Try to set seeAlso on entry with binding specific user, it will success as per the ACI. :id: 65745426-7a01-11e8-8ac2-8c16451d917b :parametrized: yes :setup: Standalone Instance :steps: 1. Add test entry 2. Add ACI 3. User should follow ACI role :expectedresults: 1. Entry should be added 2. Operation should succeed 3. Operation should succeed """ conn = UserAccount(topo.standalone, user).bind(PW_DM) > UserAccount(conn, entry).replace('seeAlso', 'cn=1') suites/acl/userattr_test.py:216: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/lib389/_mapped_object.py:280: in replace self.set(key, value, action=ldap.MOD_REPLACE) /usr/local/lib/python3.8/site-packages/lib389/_mapped_object.py:446: in set return self._instance.modify_ext_s(self._dn, [(action, key, value)], /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:613: in modify_ext_s resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:764: in result3 resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4( /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:774: in result4 ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop) /usr/local/lib/python3.8/site-packages/lib389/__init__.py:180: in inner return f(*args, **kwargs) /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:340: in _ldap_call reraise(exc_type, exc_value, exc_traceback) /usr/local/lib64/python3.8/site-packages/ldap/compat.py:46: in reraise raise exc_value _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <lib389.DirSrv object at 0x7f61d3dc4d60> func = <built-in method result4 of LDAP object at 0x7f61d5223450> args = (5, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None exc_type = None, exc_value = None, exc_traceback = None def _ldap_call(self,func,*args,**kwargs): """ Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs """ self._ldap_object_lock.acquire() if __debug__: if self._trace_level>=1: self._trace_file.write('*** %s %s - 
%s\n%s\n' % ( repr(self), self._uri, '.'.join((self.__class__.__name__,func.__name__)), pprint.pformat((args,kwargs)) )) if self._trace_level>=9: traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file) diagnostic_message_success = None try: try: > result = func(*args,**kwargs) E ldap.INSUFFICIENT_ACCESS: {'msgtype': 103, 'msgid': 5, 'result': 50, 'desc': 'Insufficient access', 'ctrls': [], 'info': "Insufficient 'write' privilege to the 'seeAlso' attribute of entry 'ou=children,ou=parents,ou=grandparents,ou=ancestors,ou=inheritance,dc=example,dc=com'.\n"} /usr/local/lib64/python3.8/site-packages/ldap/ldapobject.py:324: INSUFFICIENT_ACCESS | |||
XFailed | suites/config/config_test.py::test_defaultnamingcontext_1 | 0.31 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61d38f0fa0> @pytest.mark.xfail(reason="This may fail due to bug 1610234") def test_defaultnamingcontext_1(topo): """This test case should be part of function test_defaultnamingcontext Please move it back after we have a fix for bug 1610234 """ log.info("Remove the original suffix which is currently nsslapd-defaultnamingcontext" "and check nsslapd-defaultnamingcontext become empty.") """ Please remove these declarations after moving the test to function test_defaultnamingcontext """ backends = Backends(topo.standalone) test_db2 = 'test2_db' test_suffix2 = 'dc=test2,dc=com' b2 = backends.create(properties={'cn': test_db2, 'nsslapd-suffix': test_suffix2}) b2.delete() > assert topo.standalone.config.get_attr_val_utf8('nsslapd-defaultnamingcontext') == ' ' E AssertionError: assert 'dc=example,dc=com' == ' ' E Strings contain only whitespace, escaping them using repr() E - ' ' E + 'dc=example,dc=com' suites/config/config_test.py:280: AssertionError -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.suites.config.config_test:config_test.py:268 Remove the original suffix which is currently nsslapd-defaultnamingcontextand check nsslapd-defaultnamingcontext become empty. | |||
XFailed | suites/export/export_test.py::test_dbtasks_db2ldif_with_non_accessible_ldif_file_path_output | 3.62 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61d159fa30> @pytest.mark.bz1860291 @pytest.mark.xfail(reason="bug 1860291") @pytest.mark.skipif(ds_is_older("1.3.10", "1.4.2"), reason="Not implemented") def test_dbtasks_db2ldif_with_non_accessible_ldif_file_path_output(topo): """Export with db2ldif, giving a ldif file path which can't be accessed by the user (dirsrv by default) :id: fcc63387-e650-40a7-b643-baa68c190037 :setup: Standalone Instance - entries imported in the db :steps: 1. Stop the server 2. Launch db2ldif with a non accessible ldif file path 3. check the error reported in the command output :expected results: 1. Operation successful 2. Operation properly fails 3. An clear error message is reported as output of the cli """ export_ldif = '/tmp/nonexistent/export.ldif' log.info("Stopping the instance...") topo.standalone.stop() log.info("Performing an offline export to a non accessible ldif file path - should fail and output a clear error message") expected_output="No such file or directory" > run_db2ldif_and_clear_logs(topo, topo.standalone, DEFAULT_BENAME, export_ldif, expected_output) suites/export/export_test.py:150: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ topology = <lib389.topologies.TopologyMain object at 0x7f61d159fa30> instance = <lib389.DirSrv object at 0x7f61d15b03a0>, backend = 'userRoot' ldif = '/tmp/nonexistent/export.ldif', output_msg = 'No such file or directory' encrypt = False, repl = False def run_db2ldif_and_clear_logs(topology, instance, backend, ldif, output_msg, encrypt=False, repl=False): args = FakeArgs() args.instance = instance.serverid args.backend = backend args.encrypted = encrypt args.replication = repl args.ldif = ldif dbtasks_db2ldif(instance, topology.logcap.log, args) log.info('checking output msg') if not topology.logcap.contains(output_msg): log.error('The output message is not the expected one') > assert False E assert False suites/export/export_test.py:36: AssertionError ------------------------------Captured stderr call------------------------------ ldiffile: /tmp/nonexistent/export.ldif -------------------------------Captured log call-------------------------------- [32mINFO [0m lib389.utils:export_test.py:145 Stopping the instance... [32mINFO [0m lib389.utils:export_test.py:148 Performing an offline export to a non accessible ldif file path - should fail and output a clear error message [31mCRITICAL[0m LogCapture:dbtasks.py:40 db2ldif failed [32mINFO [0m lib389.utils:export_test.py:33 checking output msg [31m[1mERROR [0m lib389.utils:export_test.py:35 The output message is not the expected one | |||
XFailed | suites/healthcheck/healthcheck_test.py::test_healthcheck_unable_to_query_backend | 1.81 | |
topology_st = <lib389.topologies.TopologyMain object at 0x7f61c3f5ce50> @pytest.mark.ds50873 @pytest.mark.bz1796343 @pytest.mark.skipif(ds_is_older("1.4.1"), reason="Not implemented") @pytest.mark.xfail(reason="Will fail because of bz1837315. Set proper version after bug is fixed") def test_healthcheck_unable_to_query_backend(topology_st): """Check if HealthCheck returns DSBLE0002 code :id: 716b1ff1-94bd-4780-98b8-96ff8ef21e30 :setup: Standalone instance :steps: 1. Create DS instance 2. Create a new root suffix and database 3. Disable new suffix 4. Use HealthCheck without --json option 5. Use HealthCheck with --json option :expectedresults: 1. Success 2. Success 3. Success 4. HealthCheck should return code DSBLE0002 5. HealthCheck should return code DSBLE0002 """ RET_CODE = 'DSBLE0002' NEW_SUFFIX = 'dc=test,dc=com' NEW_BACKEND = 'userData' standalone = topology_st.standalone log.info('Create new suffix') backends = Backends(standalone) backends.create(properties={ 'cn': NEW_BACKEND, 'nsslapd-suffix': NEW_SUFFIX, }) log.info('Disable the newly created suffix') mts = MappingTrees(standalone) mt_new = mts.get(NEW_SUFFIX) mt_new.replace('nsslapd-state', 'disabled') run_healthcheck_and_flush_log(topology_st, standalone, RET_CODE, json=False) run_healthcheck_and_flush_log(topology_st, standalone, RET_CODE, json=True) log.info('Enable the suffix again and check if nothing is broken') mt_new.replace('nsslapd-state', 'backend') > run_healthcheck_and_flush_log(topology_st, standalone, RET_CODE, json=False) suites/healthcheck/healthcheck_test.py:453: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ topology = <lib389.topologies.TopologyMain object at 0x7f61c3f5ce50> instance = <lib389.DirSrv object at 0x7f61d0a79850>, searched_code = 'DSBLE0002' json = False, searched_code2 = None, list_checks = False, list_errors = False check = None, searched_list = None def run_healthcheck_and_flush_log(topology, instance, searched_code=None, json=False, searched_code2=None, list_checks=False, list_errors=False, check=None, searched_list=None): args = FakeArgs() args.instance = instance.serverid args.verbose = instance.verbose args.list_errors = list_errors args.list_checks = list_checks args.check = check args.dry_run = False args.json = json log.info('Use healthcheck with --json == {} option'.format(json)) health_check_run(instance, topology.logcap.log, args) if searched_list is not None: for item in searched_list: assert topology.logcap.contains(item) log.info('Healthcheck returned searched item: %s' % item) else: > assert topology.logcap.contains(searched_code) E AssertionError: assert False E + where False = <bound method LogCapture.contains of <LogCapture (NOTSET)>>('DSBLE0002') E + where <bound method LogCapture.contains of <LogCapture (NOTSET)>> = <LogCapture (NOTSET)>.contains E + where <LogCapture (NOTSET)> = <lib389.topologies.TopologyMain object at 0x7f61c3f5ce50>.logcap suites/healthcheck/healthcheck_test.py:49: AssertionError -------------------------------Captured log call-------------------------------- [32mINFO [0m LogCapture:health.py:94 Beginning lint report, this could take a while ... [32mINFO [0m LogCapture:health.py:99 Checking config:hr_timestamp ... [32mINFO [0m LogCapture:health.py:99 Checking config:passwordscheme ... [32mINFO [0m LogCapture:health.py:99 Checking backends:userdata:cl_trimming ... [32mINFO [0m LogCapture:health.py:99 Checking backends:userdata:mappingtree ... [32mINFO [0m LogCapture:health.py:99 Checking backends:userdata:search ... 
[32mINFO [0m LogCapture:health.py:99 Checking backends:userdata:virt_attrs ... [32mINFO [0m LogCapture:health.py:99 Checking backends:userroot:cl_trimming ... [32mINFO [0m LogCapture:health.py:99 Checking backends:userroot:mappingtree ... [32mINFO [0m LogCapture:health.py:99 Checking backends:userroot:search ... [32mINFO [0m LogCapture:health.py:99 Checking backends:userroot:virt_attrs ... [32mINFO [0m LogCapture:health.py:99 Checking encryption:check_tls_version ... [32mINFO [0m LogCapture:health.py:99 Checking fschecks:file_perms ... [32mINFO [0m LogCapture:health.py:99 Checking refint:attr_indexes ... [32mINFO [0m LogCapture:health.py:99 Checking refint:update_delay ... [32mINFO [0m LogCapture:health.py:99 Checking monitor-disk-space:disk_space ... [32mINFO [0m LogCapture:health.py:99 Checking replication:agmts_status ... [32mINFO [0m LogCapture:health.py:99 Checking replication:conflicts ... [32mINFO [0m LogCapture:health.py:99 Checking dseldif:nsstate ... [32mINFO [0m LogCapture:health.py:99 Checking tls:certificate_expiration ... [32mINFO [0m LogCapture:health.py:99 Checking logs:notes ... [32mINFO [0m LogCapture:health.py:106 Healthcheck complete. [32mINFO [0m LogCapture:health.py:119 4 Issues found! Generating report ... [32mINFO [0m LogCapture:health.py:45 [1] DS Lint Error: DSBLE0001 [32mINFO [0m LogCapture:health.py:46 -------------------------------------------------------------------------------- [32mINFO [0m LogCapture:health.py:47 Severity: MEDIUM [32mINFO [0m LogCapture:health.py:49 Check: backends:userdata:mappingtree [32mINFO [0m LogCapture:health.py:50 Affects: [32mINFO [0m LogCapture:health.py:52 -- userdata [32mINFO [0m LogCapture:health.py:53 Details: [32mINFO [0m LogCapture:health.py:54 ----------- [32mINFO [0m LogCapture:health.py:55 This backend may be missing the correct mapping tree references. Mapping Trees allow the directory server to determine which backend an operation is routed to in the abscence of other information. This is extremely important for correct functioning of LDAP ADD for example. A correct Mapping tree for this backend must contain the suffix name, the database name and be a backend type. IE: cn=o3Dexample,cn=mapping tree,cn=config cn: o=example nsslapd-backend: userRoot nsslapd-state: backend objectClass: top objectClass: extensibleObject objectClass: nsMappingTree [32mINFO [0m LogCapture:health.py:56 Resolution: [32mINFO [0m LogCapture:health.py:57 ----------- [32mINFO [0m LogCapture:health.py:58 Either you need to create the mapping tree, or you need to repair the related mapping tree. You will need to do this by hand by editing cn=config, or stopping the instance and editing dse.ldif. [32mINFO [0m LogCapture:health.py:45 [2] DS Lint Error: DSBLE0002 [32mINFO [0m LogCapture:health.py:46 -------------------------------------------------------------------------------- [32mINFO [0m LogCapture:health.py:47 Severity: HIGH [32mINFO [0m LogCapture:health.py:49 Check: backends:userdata:search [32mINFO [0m LogCapture:health.py:50 Affects: [32mINFO [0m LogCapture:health.py:52 -- dc=test,dc=com [32mINFO [0m LogCapture:health.py:53 Details: [32mINFO [0m LogCapture:health.py:54 ----------- [32mINFO [0m LogCapture:health.py:55 Unable to query the backend. 
LDAP error ({'msgtype': 101, 'msgid': 26, 'result': 1, 'desc': 'Operations error', 'ctrls': [], 'info': 'Warning: Operation attempted on a disabled node : dc=example,dc=com\n'}) [32mINFO [0m LogCapture:health.py:56 Resolution: [32mINFO [0m LogCapture:health.py:57 ----------- [32mINFO [0m LogCapture:health.py:58 Check the server's error and access logs for more information. [32mINFO [0m LogCapture:health.py:45 [3] DS Lint Error: DSBLE0001 [32mINFO [0m LogCapture:health.py:46 -------------------------------------------------------------------------------- [32mINFO [0m LogCapture:health.py:47 Severity: MEDIUM [32mINFO [0m LogCapture:health.py:49 Check: backends:userdata:mappingtree [32mINFO [0m LogCapture:health.py:50 Affects: [32mINFO [0m LogCapture:health.py:52 -- userdata [32mINFO [0m LogCapture:health.py:53 Details: [32mINFO [0m LogCapture:health.py:54 ----------- [32mINFO [0m LogCapture:health.py:55 This backend may be missing the correct mapping tree references. Mapping Trees allow the directory server to determine which backend an operation is routed to in the abscence of other information. This is extremely important for correct functioning of LDAP ADD for example. A correct Mapping tree for this backend must contain the suffix name, the database name and be a backend type. IE: cn=o3Dexample,cn=mapping tree,cn=config cn: o=example nsslapd-backend: userRoot nsslapd-state: backend objectClass: top objectClass: extensibleObject objectClass: nsMappingTree [32mINFO [0m LogCapture:health.py:56 Resolution: [32mINFO [0m LogCapture:health.py:57 ----------- [32mINFO [0m LogCapture:health.py:58 Either you need to create the mapping tree, or you need to repair the related mapping tree. You will need to do this by hand by editing cn=config, or stopping the instance and editing dse.ldif. [32mINFO [0m LogCapture:health.py:45 [4] DS Lint Error: DSBLE0002 [32mINFO [0m LogCapture:health.py:46 -------------------------------------------------------------------------------- [32mINFO [0m LogCapture:health.py:47 Severity: HIGH [32mINFO [0m LogCapture:health.py:49 Check: backends:userdata:search [32mINFO [0m LogCapture:health.py:50 Affects: [32mINFO [0m LogCapture:health.py:52 -- dc=test,dc=com [32mINFO [0m LogCapture:health.py:53 Details: [32mINFO [0m LogCapture:health.py:54 ----------- [32mINFO [0m LogCapture:health.py:55 Unable to query the backend. LDAP error ({'msgtype': 101, 'msgid': 26, 'result': 1, 'desc': 'Operations error', 'ctrls': [], 'info': 'Warning: Operation attempted on a disabled node : dc=example,dc=com\n'}) [32mINFO [0m LogCapture:health.py:56 Resolution: [32mINFO [0m LogCapture:health.py:57 ----------- [32mINFO [0m LogCapture:health.py:58 Check the server's error and access logs for more information. [32mINFO [0m LogCapture:health.py:124 ===== End Of Report (4 Issues found) ===== [32mINFO [0m LogCapture:health.py:126 [ { "dsle": "DSBLE0001", "severity": "MEDIUM", "description": "Possibly incorrect mapping tree.", "items": [ "userdata" ], "detail": "This backend may be missing the correct mapping tree references. Mapping Trees allow\nthe directory server to determine which backend an operation is routed to in the\nabscence of other information. This is extremely important for correct functioning\nof LDAP ADD for example.\n\nA correct Mapping tree for this backend must contain the suffix name, the database name\nand be a backend type. 
IE:\n\ncn=o3Dexample,cn=mapping tree,cn=config\ncn: o=example\nnsslapd-backend: userRoot\nnsslapd-state: backend\nobjectClass: top\nobjectClass: extensibleObject\nobjectClass: nsMappingTree\n\n", "fix": "Either you need to create the mapping tree, or you need to repair the related\nmapping tree. You will need to do this by hand by editing cn=config, or stopping\nthe instance and editing dse.ldif.\n", "check": "backends:userdata:mappingtree" }, { "dsle": "DSBLE0002", "severity": "HIGH", "description": "Unable to query backend.", "items": [ "dc=test,dc=com" ], "detail": "Unable to query the backend. LDAP error ({'msgtype': 101, 'msgid': 26, 'result': 1, 'desc': 'Operations error', 'ctrls': [], 'info': 'Warning: Operation attempted on a disabled node : dc=example,dc=com\\n'})", "fix": "Check the server's error and access logs for more information.", "check": "backends:userdata:search" }, { "dsle": "DSBLE0001", "severity": "MEDIUM", "description": "Possibly incorrect mapping tree.", "items": [ "userdata" ], "detail": "This backend may be missing the correct mapping tree references. Mapping Trees allow\nthe directory server to determine which backend an operation is routed to in the\nabscence of other information. This is extremely important for correct functioning\nof LDAP ADD for example.\n\nA correct Mapping tree for this backend must contain the suffix name, the database name\nand be a backend type. IE:\n\ncn=o3Dexample,cn=mapping tree,cn=config\ncn: o=example\nnsslapd-backend: userRoot\nnsslapd-state: backend\nobjectClass: top\nobjectClass: extensibleObject\nobjectClass: nsMappingTree\n\n", "fix": "Either you need to create the mapping tree, or you need to repair the related\nmapping tree. You will need to do this by hand by editing cn=config, or stopping\nthe instance and editing dse.ldif.\n", "check": "backends:userdata:mappingtree" }, { "dsle": "DSBLE0002", "severity": "HIGH", "description": "Unable to query backend.", "items": [ "dc=test,dc=com" ], "detail": "Unable to query the backend. LDAP error ({'msgtype': 101, 'msgid': 26, 'result': 1, 'desc': 'Operations error', 'ctrls': [], 'info': 'Warning: Operation attempted on a disabled node : dc=example,dc=com\\n'})", "fix": "Check the server's error and access logs for more information.", "check": "backends:userdata:search" } ] [32mINFO [0m LogCapture:health.py:94 Beginning lint report, this could take a while ... [32mINFO [0m LogCapture:health.py:99 Checking config:hr_timestamp ... [32mINFO [0m LogCapture:health.py:99 Checking config:passwordscheme ... [32mINFO [0m LogCapture:health.py:99 Checking backends:userdata:cl_trimming ... [32mINFO [0m LogCapture:health.py:99 Checking backends:userdata:mappingtree ... [32mINFO [0m LogCapture:health.py:99 Checking backends:userdata:search ... [32mINFO [0m LogCapture:health.py:99 Checking backends:userdata:virt_attrs ... [32mINFO [0m LogCapture:health.py:99 Checking backends:userroot:cl_trimming ... [32mINFO [0m LogCapture:health.py:99 Checking backends:userroot:mappingtree ... [32mINFO [0m LogCapture:health.py:99 Checking backends:userroot:search ... [32mINFO [0m LogCapture:health.py:99 Checking backends:userroot:virt_attrs ... [32mINFO [0m LogCapture:health.py:99 Checking encryption:check_tls_version ... [32mINFO [0m LogCapture:health.py:99 Checking fschecks:file_perms ... [32mINFO [0m LogCapture:health.py:99 Checking refint:attr_indexes ... [32mINFO [0m LogCapture:health.py:99 Checking refint:update_delay ... [32mINFO [0m LogCapture:health.py:99 Checking monitor-disk-space:disk_space ... 
[32mINFO [0m LogCapture:health.py:99 Checking replication:agmts_status ... [32mINFO [0m LogCapture:health.py:99 Checking replication:conflicts ... [32mINFO [0m LogCapture:health.py:99 Checking dseldif:nsstate ... [32mINFO [0m LogCapture:health.py:99 Checking tls:certificate_expiration ... [32mINFO [0m LogCapture:health.py:99 Checking logs:notes ... [32mINFO [0m LogCapture:health.py:106 Healthcheck complete. [32mINFO [0m LogCapture:health.py:119 2 Issues found! Generating report ... [32mINFO [0m LogCapture:health.py:45 [1] DS Lint Error: DSBLE0003 [32mINFO [0m LogCapture:health.py:46 -------------------------------------------------------------------------------- [32mINFO [0m LogCapture:health.py:47 Severity: LOW [32mINFO [0m LogCapture:health.py:49 Check: backends:userdata:search [32mINFO [0m LogCapture:health.py:50 Affects: [32mINFO [0m LogCapture:health.py:52 -- dc=test,dc=com [32mINFO [0m LogCapture:health.py:53 Details: [32mINFO [0m LogCapture:health.py:54 ----------- [32mINFO [0m LogCapture:health.py:55 The backend database has not been initialized yet [32mINFO [0m LogCapture:health.py:56 Resolution: [32mINFO [0m LogCapture:health.py:57 ----------- [32mINFO [0m LogCapture:health.py:58 You need to import an LDIF file, or create the suffix entry, in order to initialize the database. [32mINFO [0m LogCapture:health.py:45 [2] DS Lint Error: DSBLE0003 [32mINFO [0m LogCapture:health.py:46 -------------------------------------------------------------------------------- [32mINFO [0m LogCapture:health.py:47 Severity: LOW [32mINFO [0m LogCapture:health.py:49 Check: backends:userdata:search [32mINFO [0m LogCapture:health.py:50 Affects: [32mINFO [0m LogCapture:health.py:52 -- dc=test,dc=com [32mINFO [0m LogCapture:health.py:53 Details: [32mINFO [0m LogCapture:health.py:54 ----------- [32mINFO [0m LogCapture:health.py:55 The backend database has not been initialized yet [32mINFO [0m LogCapture:health.py:56 Resolution: [32mINFO [0m LogCapture:health.py:57 ----------- [32mINFO [0m LogCapture:health.py:58 You need to import an LDIF file, or create the suffix entry, in order to initialize the database. [32mINFO [0m LogCapture:health.py:124 ===== End Of Report (2 Issues found) ===== | |||
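The DSBLE0001/DSBLE0003 findings above describe a backend ("userdata") whose mapping tree entry is missing or broken and whose suffix (dc=test,dc=com) has not been initialized. As a rough illustration of the suggested DSBLE0001 fix, a minimal lib389 sketch might look like the following; `inst` (a connected DirSrv handle) is an assumption, the backend name and suffix are taken from the report, and the property names mirror the mapping-tree entry quoted in the lint detail. The exact helper behaviour can differ between lib389 releases.

    from lib389.mappingTree import MappingTrees

    SUFFIX = 'dc=test,dc=com'

    # Recreate the mapping tree entry described by DSBLE0001 so operations
    # against the suffix are routed to the right backend.
    mts = MappingTrees(inst)
    mts.create(properties={
        'cn': SUFFIX,
        'nsslapd-state': 'backend',
        'nsslapd-backend': 'userdata',
    })

For DSBLE0003 the report's own resolution applies unchanged: import an LDIF file into the backend or create the suffix entry so the database is initialized.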
XFailed | suites/replication/conflict_resolve_test.py::TestTwoMasters::test_memberof_groups | 0.00 | |
self = <tests.suites.replication.conflict_resolve_test.TestTwoMasters object at 0x7f61c3073670> topology_m2 = <lib389.topologies.TopologyMain object at 0x7f61c2f65f40> base_m2 = <lib389.idm.nscontainer.nsContainer object at 0x7f61c33d6d00> def test_memberof_groups(self, topology_m2, base_m2): """Check that conflict properly resolved for operations with memberOf and groups :id: 77f09b18-03d1-45da-940b-1ad2c2908eb3 :setup: Two master replication, test container for entries, enable plugin logging, audit log, error log for replica and access log for internal :steps: 1. Enable memberOf plugin 2. Add 30 users to m1 and wait for replication to happen 3. Pause replication 4. Create a group on m1 and m2 5. Create a group on m1 and m2, delete from m1 6. Create a group on m1, delete from m1, and create on m2, 7. Create a group on m2 and m1, delete from m1 8. Create two different groups on m2 9. Resume replication 10. Check that the entries on both masters are the same and replication is working :expectedresults: 1. It should pass 2. It should pass 3. It should pass 4. It should pass 5. It should pass 6. It should pass 7. It should pass 8. It should pass 9. It should pass 10. It should pass """ > pytest.xfail("Issue 49591 - work in progress") E _pytest.outcomes.XFailed: Issue 49591 - work in progress suites/replication/conflict_resolve_test.py:402: XFailed | |||
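These conflict_resolve_test cases call pytest.xfail() imperatively at the top of the test body, which is why they appear as XFailed with a 0.00s duration: execution stops at that call instead of running the scenario to a real failure. A minimal, self-contained sketch of the two ways to mark an expected failure (function names here are illustrative, not from the suite):

    import pytest

    def test_conflict_scenario_imperative():
        # Aborts the test body right here and records the result as xfailed,
        # which is why the rows above show 0.00s durations.
        pytest.xfail("Issue 49591 - work in progress")

    @pytest.mark.xfail(reason="Issue 49591 - work in progress")
    def test_conflict_scenario_marker():
        # With the marker form the body still runs: a failure is reported as
        # XFailed, while a pass is reported as an unexpected pass (XPassed).
        assert True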
XFailed | suites/replication/conflict_resolve_test.py::TestTwoMasters::test_managed_entries | 0.00 | |
self = <tests.suites.replication.conflict_resolve_test.TestTwoMasters object at 0x7f61c33c73d0> topology_m2 = <lib389.topologies.TopologyMain object at 0x7f61c2f65f40> def test_managed_entries(self, topology_m2): """Check that conflict properly resolved for operations with managed entries :id: 77f09b18-03d1-45da-940b-1ad2c2908eb4 :setup: Two master replication, test container for entries, enable plugin logging, audit log, error log for replica and access log for internal :steps: 1. Create ou=managed_users and ou=managed_groups under test container 2. Configure managed entries plugin and add a template to test container 3. Add a user to m1 and wait for replication to happen 4. Pause replication 5. Create a user on m1 and m2 with a same group ID on both master 6. Create a user on m1 and m2 with a different group ID on both master 7. Resume replication 8. Check that the entries on both masters are the same and replication is working :expectedresults: 1. It should pass 2. It should pass 3. It should pass 4. It should pass 5. It should pass 6. It should pass 7. It should pass 8. It should pass """ > pytest.xfail("Issue 49591 - work in progress") E _pytest.outcomes.XFailed: Issue 49591 - work in progress suites/replication/conflict_resolve_test.py:493: XFailed | |||
XFailed | suites/replication/conflict_resolve_test.py::TestTwoMasters::test_nested_entries_with_children | 0.00 | |
self = <tests.suites.replication.conflict_resolve_test.TestTwoMasters object at 0x7f61c32a3d60> topology_m2 = <lib389.topologies.TopologyMain object at 0x7f61c2f65f40> base_m2 = <lib389.idm.nscontainer.nsContainer object at 0x7f61c3344250> def test_nested_entries_with_children(self, topology_m2, base_m2): """Check that conflict properly resolved for operations with nested entries with children :id: 77f09b18-03d1-45da-940b-1ad2c2908eb5 :setup: Two master replication, test container for entries, enable plugin logging, audit log, error log for replica and access log for internal :steps: 1. Add 15 containers to m1 and wait for replication to happen 2. Pause replication 3. Create parent-child on master2 and master1 4. Create parent-child on master1 and master2 5. Create parent-child on master1 and master2 different child rdn 6. Create parent-child on master1 and delete parent on master2 7. Create parent on master1, delete it and parent-child on master2, delete them 8. Create parent on master1, delete it and parent-two children on master2 9. Create parent-two children on master1 and parent-child on master2, delete them 10. Create three subsets inside existing container entry, applying only part of changes on m2 11. Create more combinations of the subset with parent-child on m1 and parent on m2 12. Delete container on m1, modify user1 on m1, create parent on m2 and modify user2 on m2 13. Resume replication 14. Check that the entries on both masters are the same and replication is working :expectedresults: 1. It should pass 2. It should pass 3. It should pass 4. It should pass 5. It should pass 6. It should pass 7. It should pass 8. It should pass 9. It should pass 10. It should pass 11. It should pass 12. It should pass 13. It should pass 14. It should pass """ > pytest.xfail("Issue 49591 - work in progress") E _pytest.outcomes.XFailed: Issue 49591 - work in progress suites/replication/conflict_resolve_test.py:584: XFailed | |||
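The "Pause replication ... Resume replication" steps in these scenarios are normally driven through lib389's topology helpers, so conflicting changes can be applied to each master in isolation and then replayed. A minimal sketch, assuming the pause_all_replicas/resume_all_replicas helpers available on lib389 TopologyMain objects in recent releases:

    # Stop replication traffic so divergent changes can be made on each master
    topology_m2.pause_all_replicas()

    # ... apply conflicting adds/deletes on master1 and master2 here ...

    # Let the masters replay their changes and resolve the conflicts
    topology_m2.resume_all_replicas()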
XFailed | suites/replication/conflict_resolve_test.py::TestThreeMasters::test_nested_entries | 0.00 | |
self = <tests.suites.replication.conflict_resolve_test.TestThreeMasters object at 0x7f61c33767f0> topology_m3 = <lib389.topologies.TopologyMain object at 0x7f61c33766d0> base_m3 = <lib389.idm.nscontainer.nsContainer object at 0x7f61c32eabb0> def test_nested_entries(self, topology_m3, base_m3): """Check that conflict properly resolved for operations with nested entries with children :id: 77f09b18-03d1-45da-940b-1ad2c2908eb6 :setup: Three master replication, test container for entries, enable plugin logging, audit log, error log for replica and access log for internal :steps: 1. Add 15 containers to m1 and wait for replication to happen 2. Pause replication 3. Create two child entries under each of two entries 4. Create three child entries under each of three entries 5. Create two parents on m1 and m2, then on m1 - create a child and delete one parent, on m2 - delete one parent and create a child 6. Test a few more parent-child combinations with three instances 7. Resume replication 8. Check that the entries on both masters are the same and replication is working :expectedresults: 1. It should pass 2. It should pass 3. It should pass 4. It should pass 5. It should pass 6. It should pass 7. It should pass 8. It should pass """ > pytest.xfail("Issue 49591 - work in progress") E _pytest.outcomes.XFailed: Issue 49591 - work in progress suites/replication/conflict_resolve_test.py:968: XFailed -------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39001, 'ldap-secureport': 63701, 'server-id': 'master1', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master2 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39002, 'ldap-secureport': 63702, 'server-id': 'master2', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master3 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39003, 'ldap-secureport': 63703, 'server-id': 'master3', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.topologies:topologies.py:142 Creating replication topology. [32mINFO [0m lib389.topologies:topologies.py:156 Joining master master2 to master1 ... 
[32mINFO [0m lib389.replica:replica.py:2084 SUCCESS: bootstrap to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 completed [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is was created [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is was created [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 8a67e0d4-54e5-475c-bc2a-942534ef82c3 / got description=None) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 0edc2c81-d24c-4816-9e80-107a17289396 / got description=8a67e0d4-54e5-475c-bc2a-942534ef82c3) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2153 SUCCESS: joined master from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 [32mINFO [0m lib389.topologies:topologies.py:156 Joining master master3 to master1 ... 
[32mINFO [0m lib389.replica:replica.py:2084 SUCCESS: bootstrap to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 completed [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is was created [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is was created [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is NOT working (expect 630bcc7f-d015-4183-b201-424c587b90e4 / got description=0edc2c81-d24c-4816-9e80-107a17289396) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 27ba1062-0c24-47d9-ae6f-3f4d501edf4e / got description=630bcc7f-d015-4183-b201-424c587b90e4) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 27ba1062-0c24-47d9-ae6f-3f4d501edf4e / got description=630bcc7f-d015-4183-b201-424c587b90e4) [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect 27ba1062-0c24-47d9-ae6f-3f4d501edf4e / got description=630bcc7f-d015-4183-b201-424c587b90e4) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2153 SUCCESS: joined master from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master1 to master2 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 already exists [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master1 to master3 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 already exists [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master2 to master1 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 already exists [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master2 to master3 ... 
[32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 is was created [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master3 to master1 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 already exists [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master3 to master2 ... [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39003 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is was created | |||
XFailed | suites/replication/replica_config_test.py::test_agmt_num_add[nsds5ReplicaPort-0-65535-9999999999999999999999999999999999999999999999999999999999999999999-invalid-389] | 0.12 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61c2fbd1c0> attr = 'nsds5ReplicaPort', too_small = '0', too_big = '65535' overflow = '9999999999999999999999999999999999999999999999999999999999999999999' notnum = 'invalid', valid = '389' @pytest.mark.xfail(reason="Agreement validation current does not work.") @pytest.mark.parametrize("attr, too_small, too_big, overflow, notnum, valid", agmt_attrs) def test_agmt_num_add(topo, attr, too_small, too_big, overflow, notnum, valid): """Test all the number values you can set for a replica config entry :id: a8b47d4a-a089-4d70-8070-e6181209bf94 :parametrized: yes :setup: standalone instance :steps: 1. Use a value that is too small 2. Use a value that is too big 3. Use a value that overflows the int 4. Use a value with character value (not a number) 5. Use a valid value :expectedresults: 1. Add is rejected 2. Add is rejected 3. Add is rejected 4. Add is rejected 5. Add is allowed """ agmt_reset(topo) replica = replica_setup(topo) agmts = Agreements(topo.standalone, basedn=replica.dn) # Test too small perform_invalid_create(agmts, agmt_dict, attr, too_small) # Test too big > perform_invalid_create(agmts, agmt_dict, attr, too_big) suites/replication/replica_config_test.py:217: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ many = <lib389.agreement.Agreements object at 0x7f61c2f89fa0> properties = {'cn': 'test_agreement', 'nsDS5ReplicaBindDN': 'uid=tester', 'nsDS5ReplicaBindMethod': 'SIMPLE', 'nsDS5ReplicaHost': 'localhost.localdomain', ...} attr = 'nsds5ReplicaPort', value = '65535' def perform_invalid_create(many, properties, attr, value): my_properties = copy.deepcopy(properties) my_properties[attr] = value with pytest.raises(ldap.LDAPError) as ei: > many.create(properties=my_properties) E Failed: DID NOT RAISE <class 'ldap.LDAPError'> suites/replication/replica_config_test.py:108: Failed | |||
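The xfail here comes from pytest.raises not observing an error: the helper expects the server to reject an out-of-range nsds5ReplicaPort value, but the add succeeds, pytest reports "DID NOT RAISE", and the xfail marker turns that failure into XFailed. A condensed sketch of the validation pattern used by perform_invalid_create (names copied from the captured source; the contents of agmt_dict are not reproduced here):

    import copy
    import ldap
    import pytest

    def perform_invalid_create(many, properties, attr, value):
        my_properties = copy.deepcopy(properties)
        my_properties[attr] = value
        # The check only passes if the server refuses the add with an LDAP error
        with pytest.raises(ldap.LDAPError):
            many.create(properties=my_properties)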
XFailed | suites/replication/replica_config_test.py::test_agmt_num_modify[nsds5ReplicaPort-0-65535-9999999999999999999999999999999999999999999999999999999999999999999-invalid-389] | 0.20 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61c2fbd1c0> attr = 'nsds5ReplicaPort', too_small = '0', too_big = '65535' overflow = '9999999999999999999999999999999999999999999999999999999999999999999' notnum = 'invalid', valid = '389' @pytest.mark.xfail(reason="Agreement validation current does not work.") @pytest.mark.parametrize("attr, too_small, too_big, overflow, notnum, valid", agmt_attrs) def test_agmt_num_modify(topo, attr, too_small, too_big, overflow, notnum, valid): """Test all the number values you can set for a replica config entry :id: a8b47d4a-a089-4d70-8070-e6181209bf95 :parametrized: yes :setup: standalone instance :steps: 1. Replace a value that is too small 2. Replace a value that is too big 3. Replace a value that overflows the int 4. Replace a value with character value (not a number) 5. Replace a vlue with a valid value :expectedresults: 1. Value is rejected 2. Value is rejected 3. Value is rejected 4. Value is rejected 5. Value is allowed """ agmt = agmt_setup(topo) # Value too small > perform_invalid_modify(agmt, attr, too_small) suites/replication/replica_config_test.py:253: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ o = <lib389.agreement.Agreement object at 0x7f61c3069d00> attr = 'nsds5ReplicaPort', value = '0' def perform_invalid_modify(o, attr, value): with pytest.raises(ldap.LDAPError) as ei: > o.replace(attr, value) E Failed: DID NOT RAISE <class 'ldap.LDAPError'> suites/replication/replica_config_test.py:113: Failed | |||
XFailed | suites/replication/replica_config_test.py::test_agmt_num_modify[nsds5ReplicaTimeout--1-9223372036854775807-9999999999999999999999999999999999999999999999999999999999999999999-invalid-6] | 0.21 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61c2fbd1c0> attr = 'nsds5ReplicaTimeout', too_small = '-1', too_big = '9223372036854775807' overflow = '9999999999999999999999999999999999999999999999999999999999999999999' notnum = 'invalid', valid = '6' @pytest.mark.xfail(reason="Agreement validation current does not work.") @pytest.mark.parametrize("attr, too_small, too_big, overflow, notnum, valid", agmt_attrs) def test_agmt_num_modify(topo, attr, too_small, too_big, overflow, notnum, valid): """Test all the number values you can set for a replica config entry :id: a8b47d4a-a089-4d70-8070-e6181209bf95 :parametrized: yes :setup: standalone instance :steps: 1. Replace a value that is too small 2. Replace a value that is too big 3. Replace a value that overflows the int 4. Replace a value with character value (not a number) 5. Replace a vlue with a valid value :expectedresults: 1. Value is rejected 2. Value is rejected 3. Value is rejected 4. Value is rejected 5. Value is allowed """ agmt = agmt_setup(topo) # Value too small perform_invalid_modify(agmt, attr, too_small) # Value too big > perform_invalid_modify(agmt, attr, too_big) suites/replication/replica_config_test.py:255: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ o = <lib389.agreement.Agreement object at 0x7f61c3267190> attr = 'nsds5ReplicaTimeout', value = '9223372036854775807' def perform_invalid_modify(o, attr, value): with pytest.raises(ldap.LDAPError) as ei: > o.replace(attr, value) E Failed: DID NOT RAISE <class 'ldap.LDAPError'> suites/replication/replica_config_test.py:113: Failed | |||
XFailed | suites/replication/replica_config_test.py::test_agmt_num_modify[nsds5ReplicaBusyWaitTime--1-9223372036854775807-9999999999999999999999999999999999999999999999999999999999999999999-invalid-6] | 0.20 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61c2fbd1c0> attr = 'nsds5ReplicaBusyWaitTime', too_small = '-1' too_big = '9223372036854775807' overflow = '9999999999999999999999999999999999999999999999999999999999999999999' notnum = 'invalid', valid = '6' @pytest.mark.xfail(reason="Agreement validation current does not work.") @pytest.mark.parametrize("attr, too_small, too_big, overflow, notnum, valid", agmt_attrs) def test_agmt_num_modify(topo, attr, too_small, too_big, overflow, notnum, valid): """Test all the number values you can set for a replica config entry :id: a8b47d4a-a089-4d70-8070-e6181209bf95 :parametrized: yes :setup: standalone instance :steps: 1. Replace a value that is too small 2. Replace a value that is too big 3. Replace a value that overflows the int 4. Replace a value with character value (not a number) 5. Replace a vlue with a valid value :expectedresults: 1. Value is rejected 2. Value is rejected 3. Value is rejected 4. Value is rejected 5. Value is allowed """ agmt = agmt_setup(topo) # Value too small perform_invalid_modify(agmt, attr, too_small) # Value too big > perform_invalid_modify(agmt, attr, too_big) suites/replication/replica_config_test.py:255: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ o = <lib389.agreement.Agreement object at 0x7f61c2f43550> attr = 'nsds5ReplicaBusyWaitTime', value = '9223372036854775807' def perform_invalid_modify(o, attr, value): with pytest.raises(ldap.LDAPError) as ei: > o.replace(attr, value) E Failed: DID NOT RAISE <class 'ldap.LDAPError'> suites/replication/replica_config_test.py:113: Failed | |||
XFailed | suites/replication/replica_config_test.py::test_agmt_num_modify[nsds5ReplicaSessionPauseTime--1-9223372036854775807-9999999999999999999999999999999999999999999999999999999999999999999-invalid-6] | 0.20 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61c2fbd1c0> attr = 'nsds5ReplicaSessionPauseTime', too_small = '-1' too_big = '9223372036854775807' overflow = '9999999999999999999999999999999999999999999999999999999999999999999' notnum = 'invalid', valid = '6' @pytest.mark.xfail(reason="Agreement validation current does not work.") @pytest.mark.parametrize("attr, too_small, too_big, overflow, notnum, valid", agmt_attrs) def test_agmt_num_modify(topo, attr, too_small, too_big, overflow, notnum, valid): """Test all the number values you can set for a replica config entry :id: a8b47d4a-a089-4d70-8070-e6181209bf95 :parametrized: yes :setup: standalone instance :steps: 1. Replace a value that is too small 2. Replace a value that is too big 3. Replace a value that overflows the int 4. Replace a value with character value (not a number) 5. Replace a vlue with a valid value :expectedresults: 1. Value is rejected 2. Value is rejected 3. Value is rejected 4. Value is rejected 5. Value is allowed """ agmt = agmt_setup(topo) # Value too small perform_invalid_modify(agmt, attr, too_small) # Value too big > perform_invalid_modify(agmt, attr, too_big) suites/replication/replica_config_test.py:255: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ o = <lib389.agreement.Agreement object at 0x7f61c324daf0> attr = 'nsds5ReplicaSessionPauseTime', value = '9223372036854775807' def perform_invalid_modify(o, attr, value): with pytest.raises(ldap.LDAPError) as ei: > o.replace(attr, value) E Failed: DID NOT RAISE <class 'ldap.LDAPError'> suites/replication/replica_config_test.py:113: Failed | |||
XFailed | suites/replication/replica_config_test.py::test_agmt_num_modify[nsds5ReplicaFlowControlWindow--1-9223372036854775807-9999999999999999999999999999999999999999999999999999999999999999999-invalid-6] | 0.22 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61c2fbd1c0> attr = 'nsds5ReplicaFlowControlWindow', too_small = '-1' too_big = '9223372036854775807' overflow = '9999999999999999999999999999999999999999999999999999999999999999999' notnum = 'invalid', valid = '6' @pytest.mark.xfail(reason="Agreement validation current does not work.") @pytest.mark.parametrize("attr, too_small, too_big, overflow, notnum, valid", agmt_attrs) def test_agmt_num_modify(topo, attr, too_small, too_big, overflow, notnum, valid): """Test all the number values you can set for a replica config entry :id: a8b47d4a-a089-4d70-8070-e6181209bf95 :parametrized: yes :setup: standalone instance :steps: 1. Replace a value that is too small 2. Replace a value that is too big 3. Replace a value that overflows the int 4. Replace a value with character value (not a number) 5. Replace a vlue with a valid value :expectedresults: 1. Value is rejected 2. Value is rejected 3. Value is rejected 4. Value is rejected 5. Value is allowed """ agmt = agmt_setup(topo) # Value too small perform_invalid_modify(agmt, attr, too_small) # Value too big > perform_invalid_modify(agmt, attr, too_big) suites/replication/replica_config_test.py:255: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ o = <lib389.agreement.Agreement object at 0x7f61c3279a60> attr = 'nsds5ReplicaFlowControlWindow', value = '9223372036854775807' def perform_invalid_modify(o, attr, value): with pytest.raises(ldap.LDAPError) as ei: > o.replace(attr, value) E Failed: DID NOT RAISE <class 'ldap.LDAPError'> suites/replication/replica_config_test.py:113: Failed | |||
XFailed | suites/replication/replica_config_test.py::test_agmt_num_modify[nsds5ReplicaFlowControlPause--1-9223372036854775807-9999999999999999999999999999999999999999999999999999999999999999999-invalid-6] | 0.21 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61c2fbd1c0> attr = 'nsds5ReplicaFlowControlPause', too_small = '-1' too_big = '9223372036854775807' overflow = '9999999999999999999999999999999999999999999999999999999999999999999' notnum = 'invalid', valid = '6' @pytest.mark.xfail(reason="Agreement validation current does not work.") @pytest.mark.parametrize("attr, too_small, too_big, overflow, notnum, valid", agmt_attrs) def test_agmt_num_modify(topo, attr, too_small, too_big, overflow, notnum, valid): """Test all the number values you can set for a replica config entry :id: a8b47d4a-a089-4d70-8070-e6181209bf95 :parametrized: yes :setup: standalone instance :steps: 1. Replace a value that is too small 2. Replace a value that is too big 3. Replace a value that overflows the int 4. Replace a value with character value (not a number) 5. Replace a vlue with a valid value :expectedresults: 1. Value is rejected 2. Value is rejected 3. Value is rejected 4. Value is rejected 5. Value is allowed """ agmt = agmt_setup(topo) # Value too small perform_invalid_modify(agmt, attr, too_small) # Value too big > perform_invalid_modify(agmt, attr, too_big) suites/replication/replica_config_test.py:255: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ o = <lib389.agreement.Agreement object at 0x7f61c326e7c0> attr = 'nsds5ReplicaFlowControlPause', value = '9223372036854775807' def perform_invalid_modify(o, attr, value): with pytest.raises(ldap.LDAPError) as ei: > o.replace(attr, value) E Failed: DID NOT RAISE <class 'ldap.LDAPError'> suites/replication/replica_config_test.py:113: Failed | |||
XFailed | suites/replication/replica_config_test.py::test_agmt_num_modify[nsds5ReplicaProtocolTimeout--1-9223372036854775807-9999999999999999999999999999999999999999999999999999999999999999999-invalid-6] | 0.23 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61c2fbd1c0> attr = 'nsds5ReplicaProtocolTimeout', too_small = '-1' too_big = '9223372036854775807' overflow = '9999999999999999999999999999999999999999999999999999999999999999999' notnum = 'invalid', valid = '6' @pytest.mark.xfail(reason="Agreement validation current does not work.") @pytest.mark.parametrize("attr, too_small, too_big, overflow, notnum, valid", agmt_attrs) def test_agmt_num_modify(topo, attr, too_small, too_big, overflow, notnum, valid): """Test all the number values you can set for a replica config entry :id: a8b47d4a-a089-4d70-8070-e6181209bf95 :parametrized: yes :setup: standalone instance :steps: 1. Replace a value that is too small 2. Replace a value that is too big 3. Replace a value that overflows the int 4. Replace a value with character value (not a number) 5. Replace a vlue with a valid value :expectedresults: 1. Value is rejected 2. Value is rejected 3. Value is rejected 4. Value is rejected 5. Value is allowed """ agmt = agmt_setup(topo) # Value too small perform_invalid_modify(agmt, attr, too_small) # Value too big > perform_invalid_modify(agmt, attr, too_big) suites/replication/replica_config_test.py:255: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ o = <lib389.agreement.Agreement object at 0x7f61c32750a0> attr = 'nsds5ReplicaProtocolTimeout', value = '9223372036854775807' def perform_invalid_modify(o, attr, value): with pytest.raises(ldap.LDAPError) as ei: > o.replace(attr, value) E Failed: DID NOT RAISE <class 'ldap.LDAPError'> suites/replication/replica_config_test.py:113: Failed | |||
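Each of these rows is one expansion of the same parametrized test: the values in the test id (attribute name, too-small, too-big, overflow, not-a-number, valid) come straight from the agmt_attrs table handed to @pytest.mark.parametrize. A minimal sketch of how such a table drives the cases; the two rows shown are copied from the ids above, and the overflow length is illustrative:

    import pytest

    OVERFLOW = '9' * 67  # a value far beyond 64-bit range; exact length is not significant

    agmt_attrs = [
        ('nsds5ReplicaPort', '0', '65535', OVERFLOW, 'invalid', '389'),
        ('nsds5ReplicaTimeout', '-1', '9223372036854775807', OVERFLOW, 'invalid', '6'),
    ]

    @pytest.mark.xfail(reason="Agreement validation current does not work.")
    @pytest.mark.parametrize("attr, too_small, too_big, overflow, notnum, valid", agmt_attrs)
    def test_agmt_num_modify(topo, attr, too_small, too_big, overflow, notnum, valid):
        agmt = agmt_setup(topo)                     # helper from the captured test module
        perform_invalid_modify(agmt, attr, too_small)
        perform_invalid_modify(agmt, attr, too_big)  # this is the step that currently fails to raise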
XFailed | suites/replication/ruvstore_test.py::test_memoryruv_sync_with_databaseruv | 0.27 | |
topo = <lib389.topologies.TopologyMain object at 0x7f61d5689dc0> @pytest.mark.xfail(reason="No method to safety access DB ruv currently exists online.") def test_memoryruv_sync_with_databaseruv(topo): """Check if memory ruv and database ruv are synced :id: 5f38ac5f-6353-460d-bf60-49cafffda5b3 :setup: Replication with two masters. :steps: 1. Add user to server and compare memory ruv and database ruv. 2. Modify description of user and compare memory ruv and database ruv. 3. Modrdn of user and compare memory ruv and database ruv. 4. Delete user and compare memory ruv and database ruv. :expectedresults: 1. For add user, the memory ruv and database ruv should be the same. 2. For modify operation, the memory ruv and database ruv should be the same. 3. For modrdn operation, the memory ruv and database ruv should be the same. 4. For delete operation, the memory ruv and database ruv should be the same. """ log.info('Adding user: {} to master1'.format(TEST_ENTRY_NAME)) users = UserAccounts(topo.ms['master1'], DEFAULT_SUFFIX) tuser = users.create(properties=USER_PROPERTIES) > _compare_memoryruv_and_databaseruv(topo, 'add') suites/replication/ruvstore_test.py:139: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ topo = <lib389.topologies.TopologyMain object at 0x7f61d5689dc0> operation_type = 'add' def _compare_memoryruv_and_databaseruv(topo, operation_type): """Compare the memoryruv and databaseruv for ldap operations""" log.info('Checking memory ruv for ldap: {} operation'.format(operation_type)) replicas = Replicas(topo.ms['master1']) replica = replicas.list()[0] memory_ruv = replica.get_attr_val_utf8('nsds50ruv') log.info('Checking database ruv for ldap: {} operation'.format(operation_type)) > entry = replicas.get_ruv_entry(DEFAULT_SUFFIX) E AttributeError: 'Replicas' object has no attribute 'get_ruv_entry' suites/replication/ruvstore_test.py:81: AttributeError -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.suites.replication.ruvstore_test:ruvstore_test.py:136 Adding user: rep2lusr to master1 [32mINFO [0m tests.suites.replication.ruvstore_test:ruvstore_test.py:75 Checking memory ruv for ldap: add operation [32mINFO [0m tests.suites.replication.ruvstore_test:ruvstore_test.py:80 Checking database ruv for ldap: add operation | |||
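The AttributeError above is the gap the xfail reason describes: the in-memory RUV can be read from the replica configuration entry, but the Replicas collection offers no get_ruv_entry helper for the copy stored in the database, so the comparison cannot be completed online. The memory side of the check, using only the calls shown in the traceback, reduces to:

    from lib389.replica import Replicas

    replicas = Replicas(topo.ms['master1'])
    replica = replicas.list()[0]
    # In-memory RUV, exposed as an attribute of the replica configuration entry
    memory_ruv = replica.get_attr_val_utf8('nsds50ruv')

A database-side accessor would be needed before this test can pass, which is what the xfail reason refers to.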
XPassed | suites/acl/syntax_test.py::test_aci_invalid_syntax_fail[test_Use_double_equal_instead_of_equal_in_the_targetattr] | 0.04 | |
No log output captured. | |||
XPassed | suites/acl/syntax_test.py::test_aci_invalid_syntax_fail[test_Use_double_equal_instead_of_equal_in_the_targetfilter] | 0.03 | |
No log output captured. | |||
XPassed | suites/replication/replica_config_test.py::test_agmt_num_add[nsds5ReplicaTimeout--1-9223372036854775807-9999999999999999999999999999999999999999999999999999999999999999999-invalid-6] | 0.27 | |
No log output captured. | |||
XPassed | suites/replication/replica_config_test.py::test_agmt_num_add[nsds5ReplicaBusyWaitTime--1-9223372036854775807-9999999999999999999999999999999999999999999999999999999999999999999-invalid-6] | 0.30 | |
No log output captured. | |||
XPassed | suites/replication/replica_config_test.py::test_agmt_num_add[nsds5ReplicaSessionPauseTime--1-9223372036854775807-9999999999999999999999999999999999999999999999999999999999999999999-invalid-6] | 0.25 | |
No log output captured. | |||
XPassed | suites/replication/replica_config_test.py::test_agmt_num_add[nsds5ReplicaFlowControlWindow--1-9223372036854775807-9999999999999999999999999999999999999999999999999999999999999999999-invalid-6] | 0.26 | |
No log output captured. | |||
XPassed | suites/replication/replica_config_test.py::test_agmt_num_add[nsds5ReplicaFlowControlPause--1-9223372036854775807-9999999999999999999999999999999999999999999999999999999999999999999-invalid-6] | 0.31 | |
No log output captured. | |||
XPassed | suites/replication/replica_config_test.py::test_agmt_num_add[nsds5ReplicaProtocolTimeout--1-9223372036854775807-9999999999999999999999999999999999999999999999999999999999999999999-invalid-6] | 0.26 | |
No log output captured. | |||
Skipped | suites/auth_token/basic_auth_test.py::test_ldap_auth_token_config::setup | 0.00 | |
('suites/auth_token/basic_auth_test.py', 28, 'Skipped: Auth tokens are not available in older versions') | |||
Skipped | suites/auth_token/basic_auth_test.py::test_ldap_auth_token_nsuser::setup | 0.00 | |
('suites/auth_token/basic_auth_test.py', 75, 'Skipped: Auth tokens are not available in older versions') | |||
Skipped | suites/auth_token/basic_auth_test.py::test_ldap_auth_token_disabled::setup | 0.00 | |
('suites/auth_token/basic_auth_test.py', 144, 'Skipped: Auth tokens are not available in older versions') | |||
Skipped | suites/auth_token/basic_auth_test.py::test_ldap_auth_token_directory_manager::setup | 0.00 | |
('suites/auth_token/basic_auth_test.py', 194, 'Skipped: Auth tokens are not available in older versions') | |||
Skipped | suites/auth_token/basic_auth_test.py::test_ldap_auth_token_anonymous::setup | 0.00 | |
('suites/auth_token/basic_auth_test.py', 217, 'Skipped: Auth tokens are not available in older versions') | |||
Skipped | suites/config/regression_test.py::test_set_cachememsize_to_custom_value::setup | 0.00 | |
('suites/config/regression_test.py', 34, 'Skipped: available memory is too low') | |||
Skipped | suites/ds_logs/ds_logs_test.py::test_etime_at_border_of_second::setup | 0.00 | |
('suites/ds_logs/ds_logs_test.py', 735, 'Skipped: rsearch was removed') | |||
Skipped | suites/entryuuid/basic_test.py::test_entryuuid_indexed_import_and_search::setup | 0.00 | |
('suites/entryuuid/basic_test.py', 73, 'Skipped: Entryuuid is not available in older versions') | |||
Skipped | suites/entryuuid/basic_test.py::test_entryuuid_unindexed_import_and_search::setup | 0.00 | |
('suites/entryuuid/basic_test.py', 113, 'Skipped: Entryuuid is not available in older versions') | |||
Skipped | suites/entryuuid/basic_test.py::test_entryuuid_generation_on_add::setup | 0.00 | |
('suites/entryuuid/basic_test.py', 155, 'Skipped: Entryuuid is not available in older versions') | |||
Skipped | suites/entryuuid/basic_test.py::test_entryuuid_fixup_task::setup | 0.00 | |
('suites/entryuuid/basic_test.py', 179, 'Skipped: Entryuuid is not available in older versions') | |||
Skipped | suites/memory_leaks/MMR_double_free_test.py::test_MMR_double_free::setup | 0.00 | |
('suites/memory_leaks/MMR_double_free_test.py', 67, "Skipped: Don't run if ASAN is not enabled") | |||
Skipped | suites/memory_leaks/range_search_test.py::test_range_search::setup | 0.00 | |
('suites/memory_leaks/range_search_test.py', 24, "Skipped: Don't run if ASAN is not enabled") | |||
Skipped | suites/migration/export_data_test.py::test_export_data_from_source_host::setup | 0.00 | |
('suites/migration/export_data_test.py', 24, 'Skipped: This test is meant to execute in specific test environment') | |||
Skipped | suites/migration/import_data_test.py::test_import_data_to_target_host::setup | 0.00 | |
('suites/migration/import_data_test.py', 24, 'Skipped: This test is meant to execute in specific test environment') | |||
Skipped | suites/replication/changelog_test.py::test_cldump_files_removed::setup | 0.00 | |
('suites/replication/changelog_test.py', 235, 'Skipped: does not work for prefix builds') | |||
Skipped | suites/replication/changelog_test.py::test_changelog_compactdbinterval::setup | 0.00 | |
('suites/replication/changelog_test.py', 630, 'Skipped: changelog compaction is done by the backend itself, with id2entry as well, nsslapd-changelogcompactdb-interval is no longer supported') | |||
Skipped | suites/rewriters/adfilter_test.py::test_adfilter_objectSid::setup | 0.00 | |
('suites/rewriters/adfilter_test.py', 90, 'Skipped: It is missing samba python bindings') | |||
Skipped | tickets/ticket47462_test.py::test_ticket47462::setup | 0.00 | |
('tickets/ticket47462_test.py', 39, 'Skipped: Upgrade scripts are supported only on versions < 1.4.x') | |||
Skipped | tickets/ticket47815_test.py::test_ticket47815::setup | 0.00 | |
('tickets/ticket47815_test.py', 26, 'Skipped: Not implemented, or invalid by nsMemberOf') | |||
Skipped | tickets/ticket49121_test.py::test_ticket49121::setup | 0.00 | |
('tickets/ticket49121_test.py', 32, "Skipped: Don't run if ASAN is not enabled") | |||
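Each Skipped row records the (file, line, reason) tuple pytest stores when a skip fires during setup. The version-gated entries (auth tokens, entryuuid) typically come from a skipif marker built on lib389's version helper; a minimal sketch, with the version string and fixture name illustrative rather than taken from the suite:

    import pytest
    from lib389.utils import ds_is_older

    @pytest.mark.skipif(ds_is_older('1.4.3'),
                        reason="Auth tokens are not available in older versions")
    def test_ldap_auth_token_config(topology):
        ...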
Passed | suites/acl/acivattr_test.py::test_positive[(ENG_USER, ENG_MANAGER, REAL_EQ_ACI)] | 0.05 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. | |||
Passed | suites/acl/acivattr_test.py::test_positive[(ENG_USER, ENG_MANAGER, REAL_PRES_ACI)] | 0.04 | |
No log output captured. | |||
Passed | suites/acl/acivattr_test.py::test_positive[(ENG_USER, ENG_MANAGER, REAL_SUB_ACI)] | 0.05 | |
No log output captured. | |||
Passed | suites/acl/acivattr_test.py::test_positive[(ENG_USER, ENG_MANAGER, ROLE_PRES_ACI)] | 0.05 | |
No log output captured. | |||
Passed | suites/acl/acivattr_test.py::test_positive[(ENG_USER, ENG_MANAGER, ROLE_SUB_ACI)] | 0.05 | |
No log output captured. | |||
Passed | suites/acl/acivattr_test.py::test_positive[(ENG_USER, ENG_MANAGER, COS_EQ_ACI)] | 0.05 | |
No log output captured. | |||
Passed | suites/acl/acivattr_test.py::test_positive[(ENG_USER, ENG_MANAGER, COS_PRES_ACI)] | 0.05 | |
No log output captured. | |||
Passed | suites/acl/acivattr_test.py::test_positive[(ENG_USER, ENG_MANAGER, COS_SUB_ACI)] | 0.05 | |
No log output captured. | |||
Passed | suites/acl/acivattr_test.py::test_positive[(ENG_USER, ENG_MANAGER, LDAPURL_ACI)] | 0.35 | |
No log output captured. | |||
Passed | suites/acl/acivattr_test.py::test_negative[(ENG_USER, SALES_MANAGER, REAL_EQ_ACI)] | 0.06 | |
No log output captured. | |||
Passed | suites/acl/acivattr_test.py::test_negative[(ENG_USER, SALES_OU, REAL_PRES_ACI)] | 0.05 | |
No log output captured. | |||
Passed | suites/acl/acivattr_test.py::test_negative[(ENG_USER, SALES_MANAGER, REAL_SUB_ACI)] | 0.06 | |
No log output captured. | |||
Passed | suites/acl/acivattr_test.py::test_negative[(ENG_USER, SALES_MANAGER, ROLE_EQ_ACI)] | 0.06 | |
No log output captured. | |||
Passed | suites/acl/acivattr_test.py::test_negative[(ENG_USER, SALES_MANAGER, ROLE_PRES_ACI)] | 0.06 | |
No log output captured. | |||
Passed | suites/acl/acivattr_test.py::test_negative[(ENG_USER, SALES_MANAGER, ROLE_SUB_ACI)] | 0.28 | |
No log output captured. | |||
Passed | suites/acl/acivattr_test.py::test_negative[(ENG_USER, SALES_MANAGER, COS_EQ_ACI)] | 0.07 | |
No log output captured. | |||
Passed | suites/acl/acivattr_test.py::test_negative[(ENG_USER, SALES_MANAGER, COS_PRES_ACI)] | 0.06 | |
No log output captured. | |||
Passed | suites/acl/acivattr_test.py::test_negative[(ENG_USER, SALES_MANAGER, COS_SUB_ACI)] | 0.06 | |
No log output captured. | |||
Passed | suites/acl/acivattr_test.py::test_negative[(SALES_UESER, SALES_MANAGER, LDAPURL_ACI)] | 0.06 | |
No log output captured. | |||
Passed | suites/acl/acivattr_test.py::test_negative[(ENG_USER, ENG_MANAGER, ROLE_EQ_ACI)] | 0.06 | |
No log output captured. | |||
Passed | suites/acl/acl_deny_test.py::test_multi_deny_aci | 11.71 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389:acl_deny_test.py:47 Add uid=tuser1,ou=People,dc=example,dc=com [32mINFO [0m lib389:acl_deny_test.py:58 Add uid=tuser,ou=People,dc=example,dc=com -------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_deny_test.py:90 Pass 1 [32mINFO [0m lib389:acl_deny_test.py:93 Testing two searches behave the same... [32mINFO [0m lib389:acl_deny_test.py:136 Testing search does not return any entries... [32mINFO [0m lib389:acl_deny_test.py:90 Pass 2 [32mINFO [0m lib389:acl_deny_test.py:93 Testing two searches behave the same... [32mINFO [0m lib389:acl_deny_test.py:136 Testing search does not return any entries... [32mINFO [0m lib389:acl_deny_test.py:200 Test PASSED | |||
Passed | suites/acl/acl_test.py::test_aci_attr_subtype_targetattr[lang-ja] | 0.01 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39001, 'ldap-secureport': 63701, 'server-id': 'master1', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for master2 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 39002, 'ldap-secureport': 63702, 'server-id': 'master2', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m lib389.topologies:topologies.py:142 Creating replication topology. [32mINFO [0m lib389.topologies:topologies.py:156 Joining master master2 to master1 ... [32mINFO [0m lib389.replica:replica.py:2084 SUCCESS: bootstrap to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 completed [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is was created [32mINFO [0m lib389.replica:replica.py:2365 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is was created [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is NOT working (expect 32822820-46e5-494a-b22b-607804f0350c / got description=None) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 is working [32mINFO [0m lib389.replica:replica.py:2498 Retry: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is NOT working (expect ef0e84ff-2c1a-434b-a7d5-983e407e6274 / got description=32822820-46e5-494a-b22b-607804f0350c) [32mINFO [0m lib389.replica:replica.py:2496 SUCCESS: Replication from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 is working [32mINFO [0m lib389.replica:replica.py:2153 SUCCESS: joined master from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master1 to master2 ... [32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 already exists [32mINFO [0m lib389.topologies:topologies.py:164 Ensuring master master2 to master1 ... 
[32mINFO [0m lib389.replica:replica.py:2338 SUCCESS: Agreement from ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39002 to ldap://ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:39001 already exists [32mINFO [0m tests.suites.acl.acl_test:acl_test.py:76 ========Executing test with 'lang-ja' subtype======== [32mINFO [0m tests.suites.acl.acl_test:acl_test.py:77 Add a target attribute [32mINFO [0m tests.suites.acl.acl_test:acl_test.py:80 Add a user attribute [32mINFO [0m tests.suites.acl.acl_test:acl_test.py:88 Add an ACI with attribute subtype -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.suites.acl.acl_test:acl_test.py:118 Search for the added attribute [32mINFO [0m tests.suites.acl.acl_test:acl_test.py:125 The added attribute was found | |||
Passed | suites/acl/acl_test.py::test_aci_attr_subtype_targetattr[binary] | 0.00 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m tests.suites.acl.acl_test:acl_test.py:76 ========Executing test with 'binary' subtype======== [32mINFO [0m tests.suites.acl.acl_test:acl_test.py:77 Add a target attribute [32mINFO [0m tests.suites.acl.acl_test:acl_test.py:80 Add a user attribute [32mINFO [0m tests.suites.acl.acl_test:acl_test.py:88 Add an ACI with attribute subtype -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.suites.acl.acl_test:acl_test.py:118 Search for the added attribute [32mINFO [0m tests.suites.acl.acl_test:acl_test.py:125 The added attribute was found | |||
Passed | suites/acl/acl_test.py::test_aci_attr_subtype_targetattr[phonetic] | 0.00 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m tests.suites.acl.acl_test:acl_test.py:76 ========Executing test with 'phonetic' subtype======== [32mINFO [0m tests.suites.acl.acl_test:acl_test.py:77 Add a target attribute [32mINFO [0m tests.suites.acl.acl_test:acl_test.py:80 Add a user attribute [32mINFO [0m tests.suites.acl.acl_test:acl_test.py:88 Add an ACI with attribute subtype -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.suites.acl.acl_test:acl_test.py:118 Search for the added attribute [32mINFO [0m tests.suites.acl.acl_test:acl_test.py:125 The added attribute was found | |||
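These three cases add an ACI whose targetattr carries an attribute subtype (lang-ja, binary, phonetic) and then verify the subtyped value is still readable by the bound user. A rough sketch of the shape of such an ACI and one way it might be attached with lib389; the attribute name, permissions, and bind rule are illustrative, and only the subtype usage mirrors the test ids:

    from lib389.idm.domain import Domain

    ACI_SUBTYPE = ('(targetattr = "cn;lang-ja")'
                   '(version 3.0; acl "subtype targetattr"; '
                   'allow (read, search, compare) userdn = "ldap:///anyone";)')

    suffix = Domain(topo.ms['master1'], 'dc=example,dc=com')
    suffix.add('aci', ACI_SUBTYPE)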
Passed | suites/acl/acl_test.py::test_mode_default_add_deny | 0.03 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m lib389:acl_test.py:233 ######## INITIALIZATION ######## [32mINFO [0m lib389:acl_test.py:236 Add uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:254 Add cn=staged user,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:258 Add cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:262 Add cn=excepts,cn=accounts,dc=example,dc=com -------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_test.py:294 ######## mode moddn_aci : ADD (should fail) ######## [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:302 Try to add cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:311 Exception (expected): INSUFFICIENT_ACCESS | |||
Passed | suites/acl/acl_test.py::test_mode_default_delete_deny | 0.02 | |
-------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_test.py:329 ######## DELETE (should fail) ######## [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:336 Try to delete cn=staged user,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:341 Exception (expected): INSUFFICIENT_ACCESS | |||
Passed | suites/acl/acl_test.py::test_moddn_staging_prod[0-cn=staged user,dc=example,dc=com-cn=accounts,dc=example,dc=com-False] | 0.23 | |
-------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_test.py:376 ######## MOVE staging -> Prod (0) ######## [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:388 Try to MODDN uid=new_account0,cn=staged user,dc=example,dc=com -> uid=new_account0,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:395 Exception (expected): INSUFFICIENT_ACCESS [32mINFO [0m lib389:acl_test.py:399 ######## MOVE to and from equality filter ######## [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:406 Try to MODDN uid=new_account0,cn=staged user,dc=example,dc=com -> uid=new_account0,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com | |||
Passed | suites/acl/acl_test.py::test_moddn_staging_prod[1-cn=staged user,dc=example,dc=com-cn=accounts,dc=example,dc=com-False] | 0.17 | |
-------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_test.py:376 ######## MOVE staging -> Prod (1) ######## [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:388 Try to MODDN uid=new_account1,cn=staged user,dc=example,dc=com -> uid=new_account1,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:395 Exception (expected): INSUFFICIENT_ACCESS [32mINFO [0m lib389:acl_test.py:399 ######## MOVE to and from equality filter ######## [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:406 Try to MODDN uid=new_account1,cn=staged user,dc=example,dc=com -> uid=new_account1,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com | |||
Passed | suites/acl/acl_test.py::test_moddn_staging_prod[2-cn=staged user,dc=example,dc=com-cn=bad*,dc=example,dc=com-True] | 0.17 | |
-------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_test.py:376 ######## MOVE staging -> Prod (2) ######## [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:388 Try to MODDN uid=new_account2,cn=staged user,dc=example,dc=com -> uid=new_account2,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:395 Exception (expected): INSUFFICIENT_ACCESS [32mINFO [0m lib389:acl_test.py:399 ######## MOVE to and from equality filter ######## [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:406 Try to MODDN uid=new_account2,cn=staged user,dc=example,dc=com -> uid=new_account2,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:409 Exception (expected): INSUFFICIENT_ACCESS [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com | |||
Passed | suites/acl/acl_test.py::test_moddn_staging_prod[3-cn=st*,dc=example,dc=com-cn=accounts,dc=example,dc=com-False] | 0.18 | |
-------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_test.py:376 ######## MOVE staging -> Prod (3) ######## [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:388 Try to MODDN uid=new_account3,cn=staged user,dc=example,dc=com -> uid=new_account3,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:395 Exception (expected): INSUFFICIENT_ACCESS [32mINFO [0m lib389:acl_test.py:399 ######## MOVE to and from equality filter ######## [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:406 Try to MODDN uid=new_account3,cn=staged user,dc=example,dc=com -> uid=new_account3,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com | |||
Passed | suites/acl/acl_test.py::test_moddn_staging_prod[4-cn=bad*,dc=example,dc=com-cn=accounts,dc=example,dc=com-True] | 0.17 | |
-------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_test.py:376 ######## MOVE staging -> Prod (4) ######## [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:388 Try to MODDN uid=new_account4,cn=staged user,dc=example,dc=com -> uid=new_account4,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:395 Exception (expected): INSUFFICIENT_ACCESS [32mINFO [0m lib389:acl_test.py:399 ######## MOVE to and from equality filter ######## [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:406 Try to MODDN uid=new_account4,cn=staged user,dc=example,dc=com -> uid=new_account4,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:409 Exception (expected): INSUFFICIENT_ACCESS [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com | |||
Passed | suites/acl/acl_test.py::test_moddn_staging_prod[5-cn=st*,dc=example,dc=com-cn=ac*,dc=example,dc=com-False] | 0.17 | |
-------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_test.py:376 ######## MOVE staging -> Prod (5) ######## [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:388 Try to MODDN uid=new_account5,cn=staged user,dc=example,dc=com -> uid=new_account5,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:395 Exception (expected): INSUFFICIENT_ACCESS [32mINFO [0m lib389:acl_test.py:399 ######## MOVE to and from equality filter ######## [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:406 Try to MODDN uid=new_account5,cn=staged user,dc=example,dc=com -> uid=new_account5,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com | |||
Passed | suites/acl/acl_test.py::test_moddn_staging_prod[6-None-cn=ac*,dc=example,dc=com-False] | 0.17 | |
-------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_test.py:376 ######## MOVE staging -> Prod (6) ######## [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:388 Try to MODDN uid=new_account6,cn=staged user,dc=example,dc=com -> uid=new_account6,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:395 Exception (expected): INSUFFICIENT_ACCESS [32mINFO [0m lib389:acl_test.py:399 ######## MOVE to and from equality filter ######## [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:406 Try to MODDN uid=new_account6,cn=staged user,dc=example,dc=com -> uid=new_account6,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com | |||
Passed | suites/acl/acl_test.py::test_moddn_staging_prod[7-cn=st*,dc=example,dc=com-None-False] | 0.18 | |
-------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_test.py:376 ######## MOVE staging -> Prod (7) ######## [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:388 Try to MODDN uid=new_account7,cn=staged user,dc=example,dc=com -> uid=new_account7,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:395 Exception (expected): INSUFFICIENT_ACCESS [32mINFO [0m lib389:acl_test.py:399 ######## MOVE to and from equality filter ######## [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:406 Try to MODDN uid=new_account7,cn=staged user,dc=example,dc=com -> uid=new_account7,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com | |||
Passed | suites/acl/acl_test.py::test_moddn_staging_prod[8-None-None-False] | 0.16 | |
-------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_test.py:376 ######## MOVE staging -> Prod (8) ######## [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:388 Try to MODDN uid=new_account8,cn=staged user,dc=example,dc=com -> uid=new_account8,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:395 Exception (expected): INSUFFICIENT_ACCESS [32mINFO [0m lib389:acl_test.py:399 ######## MOVE to and from equality filter ######## [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:406 Try to MODDN uid=new_account8,cn=staged user,dc=example,dc=com -> uid=new_account8,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com | |||
Passed | suites/acl/acl_test.py::test_moddn_staging_prod_9 | 0.71 | |
-------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_test.py:453 ######## MOVE staging -> Prod (9) ######## [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:466 Try to MODDN uid=new_account9,cn=staged user,dc=example,dc=com -> uid=new_account9,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:473 Exception (expected): INSUFFICIENT_ACCESS [32mINFO [0m lib389:acl_test.py:479 Disable the moddn right [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:484 ######## MOVE to and from equality filter ######## [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:492 Try to MODDN uid=new_account9,cn=staged user,dc=example,dc=com -> uid=new_account9,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:499 Exception (expected): INSUFFICIENT_ACCESS [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:521 Try to MODDN uid=new_account9,cn=staged user,dc=example,dc=com -> uid=new_account9,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:532 Enable the moddn right [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:536 ######## MOVE staging -> Prod (10) ######## [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:548 Try to MODDN uid=new_account10,cn=staged user,dc=example,dc=com -> uid=new_account10,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:555 Exception (expected): INSUFFICIENT_ACCESS [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:572 Try to MODDN uid=new_account10,cn=staged user,dc=example,dc=com -> uid=new_account10,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:579 Exception (expected): INSUFFICIENT_ACCESS [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:588 ######## MOVE to and from equality filter ######## [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:594 Try to MODDN uid=new_account10,cn=staged user,dc=example,dc=com -> uid=new_account10,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com | |||
Passed | suites/acl/acl_test.py::test_moddn_prod_staging | 0.32 | |
-------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_test.py:623 ######## MOVE staging -> Prod (11) ######## [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:636 Try to MODDN uid=new_account11,cn=staged user,dc=example,dc=com -> uid=new_account11,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:643 Exception (expected): INSUFFICIENT_ACCESS [32mINFO [0m lib389:acl_test.py:647 ######## MOVE to and from equality filter ######## [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:653 Try to MODDN uid=new_account11,cn=staged user,dc=example,dc=com -> uid=new_account11,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:668 Try to move back MODDN uid=new_account11,cn=accounts,dc=example,dc=com -> uid=new_account11,cn=staged user,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:675 Exception (expected): INSUFFICIENT_ACCESS [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com | |||
Passed | suites/acl/acl_test.py::test_check_repl_M2_to_M1 | 1.04 | |
-------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_test.py:705 Bind as cn=Directory Manager (M2) [32mINFO [0m lib389:acl_test.py:725 Update (M2) uid=new_account12,cn=staged user,dc=example,dc=com (description) [32mINFO [0m lib389:acl_test.py:738 Update uid=new_account12,cn=staged user,dc=example,dc=com (description) replicated on M1 | |||
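test_check_repl_M2_to_M1 confirms the M2 -> M1 agreement by replacing the description of the staged entry on M2 and waiting for the same value to appear on M1. A rough equivalent in plain python-ldap, assuming placeholder URIs and credentials:

import time
import uuid
import ldap

ENTRY = "uid=new_account12,cn=staged user,dc=example,dc=com"

# Placeholder URIs/credentials; the CI instances listen on their own ports.
m1 = ldap.initialize("ldap://localhost:39001")
m2 = ldap.initialize("ldap://localhost:39002")
for conn in (m1, m2):
    conn.simple_bind_s("cn=Directory Manager", "password")

# Write a unique marker on M2, then poll M1 until it shows up or we give up.
marker = str(uuid.uuid4())
m2.modify_s(ENTRY, [(ldap.MOD_REPLACE, "description", marker.encode())])

for _ in range(20):
    _dn, attrs = m1.search_s(ENTRY, ldap.SCOPE_BASE, attrlist=["description"])[0]
    if attrs.get("description", [b""])[0].decode() == marker:
        print("replicated M2 -> M1")
        break
    time.sleep(1)
else:
    raise RuntimeError("change did not replicate in time")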
Passed | suites/acl/acl_test.py::test_moddn_staging_prod_except | 0.41 | |
-------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_test.py:763 ######## MOVE staging -> Prod (13) ######## [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:775 Try to MODDN uid=new_account13,cn=staged user,dc=example,dc=com -> uid=new_account13,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:782 Exception (expected): INSUFFICIENT_ACCESS [32mINFO [0m lib389:acl_test.py:786 ######## MOVE to and from equality filter ######## [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:160 Add a DENY aci under cn=excepts,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:793 Try to MODDN uid=new_account13,cn=staged user,dc=example,dc=com -> uid=new_account13,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:799 ######## MOVE staging -> Prod/Except (14) ######## [32mINFO [0m lib389:acl_test.py:805 Try to MODDN uid=new_account14,cn=staged user,dc=example,dc=com -> uid=new_account14,cn=excepts,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:812 Exception (expected): INSUFFICIENT_ACCESS [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:160 Add a DENY aci under cn=excepts,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com | |||
Passed | suites/acl/acl_test.py::test_mode_default_ger_no_moddn | 0.01 | |
-------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_test.py:839 ######## mode moddn_aci : GER no moddn ######## [32mINFO [0m lib389:acl_test.py:850 dn: cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:850 dn: cn=excepts,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:850 dn: uid=new_account0,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:850 dn: uid=new_account1,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:850 dn: uid=new_account3,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:850 dn: uid=new_account5,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:850 dn: uid=new_account6,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:850 dn: uid=new_account7,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:850 dn: uid=new_account8,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:850 dn: uid=new_account9,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:850 dn: uid=new_account10,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:850 dn: uid=new_account11,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:850 dn: uid=new_account13,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:853 ######## entryLevelRights: b'v' | |||
Passed | suites/acl/acl_test.py::test_mode_default_ger_with_moddn | 0.16 | |
-------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_test.py:877 ######## mode moddn_aci: GER with moddn ######## [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:895 dn: cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:895 dn: cn=excepts,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:895 dn: uid=new_account0,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:895 dn: uid=new_account1,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:895 dn: uid=new_account3,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:895 dn: uid=new_account5,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:895 dn: uid=new_account6,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:895 dn: uid=new_account7,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:895 dn: uid=new_account8,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:895 dn: uid=new_account9,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:895 dn: uid=new_account10,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:895 dn: uid=new_account11,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:895 dn: uid=new_account13,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:898 ######## entryLevelRights: b'vn' [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com | |||
Passed | suites/acl/acl_test.py::test_mode_legacy_ger_no_moddn1 | 0.04 | |
-------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_test.py:928 ######## Disable the moddn aci mod ######## [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:932 ######## mode legacy 1: GER no moddn ######## [32mINFO [0m lib389:acl_test.py:942 dn: cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:942 dn: cn=excepts,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:942 dn: uid=new_account0,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:942 dn: uid=new_account1,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:942 dn: uid=new_account3,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:942 dn: uid=new_account5,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:942 dn: uid=new_account6,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:942 dn: uid=new_account7,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:942 dn: uid=new_account8,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:942 dn: uid=new_account9,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:942 dn: uid=new_account10,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:942 dn: uid=new_account11,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:942 dn: uid=new_account13,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:945 ######## entryLevelRights: b'v' | |||
Passed | suites/acl/acl_test.py::test_mode_legacy_ger_no_moddn2 | 0.34 | |
-------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_test.py:971 ######## Disable the moddn aci mod ######## [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:975 ######## mode legacy 2: GER no moddn ######## [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:992 dn: cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:992 dn: cn=excepts,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:992 dn: uid=new_account0,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:992 dn: uid=new_account1,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:992 dn: uid=new_account3,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:992 dn: uid=new_account5,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:992 dn: uid=new_account6,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:992 dn: uid=new_account7,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:992 dn: uid=new_account8,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:992 dn: uid=new_account9,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:992 dn: uid=new_account10,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:992 dn: uid=new_account11,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:992 dn: uid=new_account13,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:995 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com | |||
Passed | suites/acl/acl_test.py::test_mode_legacy_ger_with_moddn | 0.11 | |
-------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_test.py:1031 ######## Disable the moddn aci mod ######## [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:1035 ######## mode legacy : GER with moddn ######## [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager [32mINFO [0m lib389:acl_test.py:139 Bind as uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1057 dn: cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1057 dn: cn=excepts,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1057 dn: uid=new_account0,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1057 dn: uid=new_account1,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1057 dn: uid=new_account3,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1057 dn: uid=new_account5,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1057 dn: uid=new_account6,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1057 dn: uid=new_account7,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1057 dn: uid=new_account8,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1057 dn: uid=new_account9,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1057 dn: uid=new_account10,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1057 dn: uid=new_account11,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1057 dn: uid=new_account13,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1060 ######## entryLevelRights: b'vn' [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager | |||
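In the get effective rights (GER) outputs above, entryLevelRights: b'v' means the evaluated identity may only view the entry, while b'vn' in the with-moddn modes adds the rename (modrdn) right. A rough sketch of requesting the same data with python-ldap, assuming the GER request control (OID 1.3.6.1.4.1.42.2.27.9.5.2) accepts the plain "dn: <bind DN>" authorization string as its value — verify against the server documentation before relying on it:

import ldap
from ldap.controls import LDAPControl

conn = ldap.initialize("ldap://localhost:389")   # placeholder URI
conn.simple_bind_s("cn=Directory Manager", "password")

# Ask which rights uid=bind_entry would have on each returned entry (assumed value format).
ger = LDAPControl("1.3.6.1.4.1.42.2.27.9.5.2", True,
                  b"dn: uid=bind_entry,dc=example,dc=com")

results = conn.search_ext_s("dc=example,dc=com", ldap.SCOPE_SUBTREE,
                            "(objectClass=*)",
                            attrlist=["entryLevelRights", "attributeLevelRights"],
                            serverctrls=[ger])
for dn, attrs in results:
    print(dn, attrs.get("entryLevelRights"))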
Passed | suites/acl/acl_test.py::test_rdn_write_get_ger | 0.01 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m lib389:acl_test.py:1071 ######## Add entry tuser ######## -------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_test.py:1097 ######## GER rights for anonymous ######## [32mINFO [0m lib389:acl_test.py:1107 dn: dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: ou=groups,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: ou=people,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: ou=permissions,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: ou=services,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: uid=demo_user,ou=people,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: cn=demo_group,ou=groups,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: cn=group_admin,ou=permissions,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: cn=group_modify,ou=permissions,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: cn=user_admin,ou=permissions,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: cn=user_modify,ou=permissions,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: cn=user_passwd_reset,ou=permissions,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: cn=user_private_read,ou=permissions,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: cn=replication_managers,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: cn=ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:63701,ou=services,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: cn=ci-vm-10-0-139-146.hosted.upshift.rdu2.redhat.com:63702,ou=services,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: uid=bind_entry,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: cn=staged user,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: cn=excepts,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: uid=new_account0,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' 
[32mINFO [0m lib389:acl_test.py:1107 dn: uid=new_account1,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: uid=new_account2,cn=staged user,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: uid=new_account3,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: uid=new_account4,cn=staged user,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: uid=new_account5,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: uid=new_account6,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: uid=new_account7,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: uid=new_account8,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: uid=new_account9,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: uid=new_account10,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: uid=new_account11,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: uid=new_account12,cn=staged user,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: uid=new_account13,cn=accounts,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: uid=new_account14,cn=staged user,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: uid=new_account15,cn=staged user,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: uid=new_account16,cn=staged user,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: uid=new_account17,cn=staged user,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: uid=new_account18,cn=staged user,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: uid=new_account19,cn=staged user,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' [32mINFO [0m lib389:acl_test.py:1107 dn: cn=tuser,dc=example,dc=com [32mINFO [0m lib389:acl_test.py:1109 ######## entryLevelRights: b'v' | |||
Passed | suites/acl/acl_test.py::test_rdn_write_modrdn_anonymous | 0.06 | |
-------------------------------Captured log call-------------------------------- [32mINFO [0m lib389:acl_test.py:1136 dn: [32mINFO [0m lib389:acl_test.py:1138 ######## 'objectClass': [b'top'] [32mINFO [0m lib389:acl_test.py:1138 ######## 'defaultnamingcontext': [b'dc=example,dc=com'] [32mINFO [0m lib389:acl_test.py:1138 ######## 'dataversion': [b'020201031001729'] [32mINFO [0m lib389:acl_test.py:1138 ######## 'netscapemdsuffix': [b'cn=ldap://dc=localhost,dc=localdomain:39001'] [32mINFO [0m lib389:acl_test.py:1143 Exception (expected): INSUFFICIENT_ACCESS [32mINFO [0m lib389:acl_test.py:1150 The entry was not renamed (expected) [32mINFO [0m lib389:acl_test.py:133 Bind as cn=Directory Manager | |||
Passed | suites/acl/deladd_test.py::test_allow_delete_access_to_groupdn | 0.33 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. | |||
Passed | suites/acl/deladd_test.py::test_allow_add_access_to_anyone | 0.06 | |
No log output captured. | |||
Passed | suites/acl/deladd_test.py::test_allow_delete_access_to_anyone | 0.05 | |
No log output captured. | |||
Passed | suites/acl/deladd_test.py::test_allow_delete_access_not_to_userdn | 0.06 | |
No log output captured. | |||
Passed | suites/acl/deladd_test.py::test_allow_delete_access_not_to_group | 0.29 | |
No log output captured. | |||
Passed | suites/acl/deladd_test.py::test_allow_add_access_to_parent | 0.07 | |
No log output captured. | |||
Passed | suites/acl/deladd_test.py::test_allow_delete_access_to_parent | 0.08 | |
No log output captured. | |||
Passed | suites/acl/deladd_test.py::test_allow_delete_access_to_dynamic_group | 0.05 | |
No log output captured. | |||
Passed | suites/acl/deladd_test.py::test_allow_delete_access_to_dynamic_group_uid | 0.05 | |
No log output captured. | |||
Passed | suites/acl/deladd_test.py::test_allow_delete_access_not_to_dynamic_group | 0.09 | |
No log output captured. | |||
Passed | suites/acl/enhanced_aci_modrnd_test.py::test_enhanced_aci_modrnd | 0.27 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. [32mINFO [0m tests.suites.acl.enhanced_aci_modrnd_test:enhanced_aci_modrnd_test.py:32 Add a container: ou=test_ou_1,dc=example,dc=com [32mINFO [0m tests.suites.acl.enhanced_aci_modrnd_test:enhanced_aci_modrnd_test.py:38 Add a container: ou=test_ou_2,dc=example,dc=com [32mINFO [0m tests.suites.acl.enhanced_aci_modrnd_test:enhanced_aci_modrnd_test.py:44 Add a user: cn=test_user,ou=test_ou_1,dc=example,dc=com [32mINFO [0m tests.suites.acl.enhanced_aci_modrnd_test:enhanced_aci_modrnd_test.py:58 Add an ACI 'allow (all)' by cn=test_user,ou=test_ou_1,dc=example,dc=com to the ou=test_ou_1,dc=example,dc=com [32mINFO [0m tests.suites.acl.enhanced_aci_modrnd_test:enhanced_aci_modrnd_test.py:62 Add an ACI 'allow (all)' by cn=test_user,ou=test_ou_1,dc=example,dc=com to the ou=test_ou_2,dc=example,dc=com -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.suites.acl.enhanced_aci_modrnd_test:enhanced_aci_modrnd_test.py:93 Bind as cn=test_user,ou=test_ou_1,dc=example,dc=com [32mINFO [0m tests.suites.acl.enhanced_aci_modrnd_test:enhanced_aci_modrnd_test.py:97 User MODRDN operation from ou=test_ou_1,dc=example,dc=com to ou=test_ou_2,dc=example,dc=com [32mINFO [0m tests.suites.acl.enhanced_aci_modrnd_test:enhanced_aci_modrnd_test.py:103 Check there is no user in ou=test_ou_1,dc=example,dc=com [32mINFO [0m tests.suites.acl.enhanced_aci_modrnd_test:enhanced_aci_modrnd_test.py:109 Check there is our user in ou=test_ou_2,dc=example,dc=com | |||
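The enhanced_aci_modrnd setup grants cn=test_user full rights on both containers so the cross-container MODRDN can succeed. An ACI of that shape can be added as an aci value on each container; a sketch with python-ldap (the ACI name and connection details are illustrative, not the exact strings used by the test):

import ldap

ACI = ('(targetattr="*")(version 3.0; acl "allow_all_test_user"; allow (all) '
       'userdn="ldap:///cn=test_user,ou=test_ou_1,dc=example,dc=com";)')

conn = ldap.initialize("ldap://localhost:389")   # placeholder URI
conn.simple_bind_s("cn=Directory Manager", "password")

# Grant the test user full rights on both containers so a MODRDN may cross them.
for container in ("ou=test_ou_1,dc=example,dc=com",
                  "ou=test_ou_2,dc=example,dc=com"):
    conn.modify_s(container, [(ldap.MOD_ADD, "aci", ACI.encode())])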
Passed | suites/acl/globalgroup_part2_test.py::test_undefined_in_group_eval_five | 0.29 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. | |||
Passed | suites/acl/globalgroup_part2_test.py::test_undefined_in_group_eval_six | 0.06 | |
No log output captured. | |||
Passed | suites/acl/globalgroup_part2_test.py::test_undefined_in_group_eval_seven | 0.04 | |
No log output captured. | |||
Passed | suites/acl/globalgroup_part2_test.py::test_undefined_in_group_eval_eight | 0.03 | |
No log output captured. | |||
Passed | suites/acl/globalgroup_part2_test.py::test_undefined_in_group_eval_nine | 0.04 | |
No log output captured. | |||
Passed | suites/acl/globalgroup_part2_test.py::test_undefined_in_group_eval_ten | 0.05 | |
No log output captured. | |||
Passed | suites/acl/globalgroup_part2_test.py::test_undefined_in_group_eval_eleven | 0.04 | |
No log output captured. | |||
Passed | suites/acl/globalgroup_part2_test.py::test_undefined_in_group_eval_twelve | 0.03 | |
No log output captured. | |||
Passed | suites/acl/globalgroup_part2_test.py::test_undefined_in_group_eval_fourteen | 0.07 | |
No log output captured. | |||
Passed | suites/acl/globalgroup_part2_test.py::test_undefined_in_group_eval_fifteen | 0.05 | |
No log output captured. | |||
Passed | suites/acl/globalgroup_part2_test.py::test_undefined_in_group_eval_sixteen | 0.03 | |
No log output captured. | |||
Passed | suites/acl/globalgroup_part2_test.py::test_undefined_in_group_eval_seventeen | 0.02 | |
No log output captured. | |||
Passed | suites/acl/globalgroup_part2_test.py::test_undefined_in_group_eval_eighteen | 0.03 | |
No log output captured. | |||
Passed | suites/acl/globalgroup_test.py::test_caching_changes | 0.31 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. | |||
Passed | suites/acl/globalgroup_test.py::test_deny_group_member_all_rights_to_user | 0.07 | |
No log output captured. | |||
Passed | suites/acl/globalgroup_test.py::test_deny_group_member_all_rights_to_group_members | 0.03 | |
No log output captured. | |||
Passed | suites/acl/globalgroup_test.py::test_deeply_nested_groups_aci_denial | 0.06 | |
No log output captured. | |||
Passed | suites/acl/globalgroup_test.py::test_deeply_nested_groups_aci_denial_two | 0.02 | |
No log output captured. | |||
Passed | suites/acl/globalgroup_test.py::test_deeply_nested_groups_aci_allow | 0.02 | |
No log output captured. | |||
Passed | suites/acl/globalgroup_test.py::test_deeply_nested_groups_aci_allow_two | 0.04 | |
No log output captured. | |||
Passed | suites/acl/globalgroup_test.py::test_undefined_in_group_eval | 0.04 | |
No log output captured. | |||
Passed | suites/acl/globalgroup_test.py::test_undefined_in_group_eval_two | 0.03 | |
No log output captured. | |||
Passed | suites/acl/globalgroup_test.py::test_undefined_in_group_eval_three | 0.03 | |
No log output captured. | |||
Passed | suites/acl/globalgroup_test.py::test_undefined_in_group_eval_four | 0.06 | |
No log output captured. | |||
Passed | suites/acl/keywords_part2_test.py::test_ip_keyword_test_noip_cannot | 0.11 | |
No log output captured. | |||
Passed | suites/acl/keywords_part2_test.py::test_user_can_access_the_data_at_any_time | 0.09 | |
No log output captured. | |||
Passed | suites/acl/keywords_part2_test.py::test_user_can_access_the_data_only_in_the_morning | 0.12 | |
No log output captured. | |||
Passed | suites/acl/keywords_part2_test.py::test_user_can_access_the_data_only_in_the_afternoon | 0.11 | |
No log output captured. | |||
Passed | suites/acl/keywords_part2_test.py::test_timeofday_keyword | 1.17 | |
No log output captured. | |||
Passed | suites/acl/keywords_part2_test.py::test_dayofweek_keyword_test_everyday_can_access | 0.10 | |
No log output captured. | |||
Passed | suites/acl/keywords_part2_test.py::test_dayofweek_keyword_today_can_access | 0.08 | |
No log output captured. | |||
Passed | suites/acl/keywords_part2_test.py::test_user_cannot_access_the_data_at_all | 0.09 | |
No log output captured. | |||
Passed | suites/acl/keywords_test.py::test_user_binds_with_a_password_and_can_access_the_data | 0.05 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. | |||
Passed | suites/acl/keywords_test.py::test_user_binds_with_a_bad_password_and_cannot_access_the_data | 0.01 | |
No log output captured. | |||
Passed | suites/acl/keywords_test.py::test_anonymous_user_cannot_access_the_data | 0.04 | |
No log output captured. | |||
Passed | suites/acl/keywords_test.py::test_authenticated_but_has_no_rigth_on_the_data | 0.06 | |
No log output captured. | |||
Passed | suites/acl/keywords_test.py::test_the_bind_client_is_accessing_the_directory | 0.01 | |
No log output captured. | |||
Passed | suites/acl/keywords_test.py::test_users_binds_with_a_password_and_can_access_the_data | 0.01 | |
No log output captured. | |||
Passed | suites/acl/keywords_test.py::test_user_binds_without_any_password_and_cannot_access_the_data | 0.02 | |
No log output captured. | |||
Passed | suites/acl/keywords_test.py::test_user_can_access_the_data_when_connecting_from_any_machine | 0.05 | |
No log output captured. | |||
Passed | suites/acl/keywords_test.py::test_user_can_access_the_data_when_connecting_from_internal_ds_network_only | 0.04 | |
No log output captured. | |||
Passed | suites/acl/keywords_test.py::test_user_can_access_the_data_when_connecting_from_some_network_only | 0.05 | |
No log output captured. | |||
Passed | suites/acl/keywords_test.py::test_from_an_unauthorized_network | 0.04 | |
No log output captured. | |||
Passed | suites/acl/keywords_test.py::test_user_cannot_access_the_data_when_connecting_from_an_unauthorized_network_2 | 0.03 | |
No log output captured. | |||
Passed | suites/acl/keywords_test.py::test_user_cannot_access_the_data_if_not_from_a_certain_domain | 0.07 | |
No log output captured. | |||
Passed | suites/acl/keywords_test.py::test_dnsalias_keyword_test_nodns_cannot | 0.25 | |
No log output captured. | |||
Passed | suites/acl/keywords_test.py::test_user_can_access_from_ipv4_or_ipv6_address[127.0.0.1] | 0.04 | |
No log output captured. | |||
Passed | suites/acl/keywords_test.py::test_user_can_access_from_ipv4_or_ipv6_address[[::1]] | 0.02 | |
No log output captured. | |||
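The keywords suites above exercise ACI bind rules that restrict access by client address, DNS name, time of day, and day of week. A sketch of an ACI combining such keywords (the syntax follows the 389-ds bind-rule grammar; the target, values, and connection details are illustrative, not taken from these tests):

import ldap

# Illustrative ACI: allow read/search only on weekdays, during office hours,
# and only from the loopback address.
ACI = ('(targetattr="*")(version 3.0; acl "office hours from localhost"; '
       'allow (read, search, compare) '
       'userdn="ldap:///uid=test_user,ou=people,dc=example,dc=com" and '
       'ip="127.0.0.1" and '
       'dayofweek="Mon,Tue,Wed,Thu,Fri" and '
       'timeofday>="0800" and timeofday<="1700";)')

conn = ldap.initialize("ldap://localhost:389")   # placeholder URI
conn.simple_bind_s("cn=Directory Manager", "password")
conn.modify_s("dc=example,dc=com", [(ldap.MOD_ADD, "aci", ACI.encode())])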
Passed | suites/acl/misc_test.py::test_accept_aci_in_addition_to_acl | 0.33 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. | |||
Passed | suites/acl/misc_test.py::test_more_then_40_acl_will_crash_slapd | 0.32 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m lib389:misc_test.py:76 Exception (expected): ALREADY_EXISTS | |||
Passed | suites/acl/misc_test.py::test_search_access_should_not_include_read_access | 0.01 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m lib389:misc_test.py:76 Exception (expected): ALREADY_EXISTS | |||
Passed | suites/acl/misc_test.py::test_only_allow_some_targetattr | 0.06 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m lib389:misc_test.py:76 Exception (expected): ALREADY_EXISTS | |||
Passed | suites/acl/misc_test.py::test_only_allow_some_targetattr_two | 0.34 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m lib389:misc_test.py:76 Exception (expected): ALREADY_EXISTS | |||
Passed | suites/acl/misc_test.py::test_memberurl_needs_to_be_normalized | 0.13 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m lib389:misc_test.py:76 Exception (expected): ALREADY_EXISTS | |||
Passed | suites/acl/misc_test.py::test_greater_than_200_acls_can_be_created | 4.97 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m lib389:misc_test.py:76 Exception (expected): ALREADY_EXISTS | |||
Passed | suites/acl/misc_test.py::test_server_bahaves_properly_with_very_long_attribute_names | 0.06 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m lib389:misc_test.py:76 Exception (expected): ALREADY_EXISTS | |||
Passed | suites/acl/misc_test.py::test_do_bind_as_201_distinct_users | 172.23 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m lib389:misc_test.py:76 Exception (expected): ALREADY_EXISTS | |||
Passed | suites/acl/modify_test.py::test_allow_write_access_to_targetattr_with_a_single_attribute | 0.83 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. | |||
Passed | suites/acl/modify_test.py::test_allow_write_access_to_targetattr_with_multiple_attibutes | 0.07 | |
No log output captured. | |||
Passed | suites/acl/modify_test.py::test_allow_write_access_to_userdn_all | 0.11 | |
No log output captured. | |||
Passed | suites/acl/modify_test.py::test_allow_write_access_to_userdn_with_wildcards_in_dn | 0.06 | |
No log output captured. | |||
Passed | suites/acl/modify_test.py::test_allow_write_access_to_userdn_with_multiple_dns | 0.22 | |
No log output captured. | |||
Passed | suites/acl/modify_test.py::test_allow_write_access_to_target_with_wildcards | 0.19 | |
No log output captured. | |||
Passed | suites/acl/modify_test.py::test_allow_write_access_to_userdnattr | 0.10 | |
No log output captured. | |||
Passed | suites/acl/modify_test.py::test_allow_selfwrite_access_to_anyone | 0.09 | |
No log output captured. | |||
Passed | suites/acl/modify_test.py::test_uniquemember_should_also_be_the_owner | 0.24 | |
No log output captured. | |||
Passed | suites/acl/modify_test.py::test_aci_with_both_allow_and_deny | 0.16 | |
No log output captured. | |||
Passed | suites/acl/modify_test.py::test_allow_owner_to_modify_entry | 0.11 | |
No log output captured. | |||
Passed | suites/acl/modrdn_test.py::test_allow_write_privilege_to_anyone | 0.03 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. | |||
Passed | suites/acl/modrdn_test.py::test_allow_write_privilege_to_dynamic_group_with_scope_set_to_base_in_ldap_url | 0.03 | |
No log output captured. | |||
Passed | suites/acl/modrdn_test.py::test_write_access_to_naming_atributes | 0.04 | |
No log output captured. | |||
Passed | suites/acl/modrdn_test.py::test_write_access_to_naming_atributes_two | 0.11 | |
No log output captured. | |||
Passed | suites/acl/modrdn_test.py::test_access_aci_list_contains_any_deny_rule | 0.12 | |
No log output captured. | |||
Passed | suites/acl/modrdn_test.py::test_renaming_target_entry | 0.08 | |
No log output captured. | |||
Passed | suites/acl/repeated_ldap_add_test.py::test_repeated_ldap_add | 31.71 | |
-------------------------------Captured log setup------------------------------- [32mINFO [0m lib389.SetupDs:setup.py:658 Starting installation... [32mINFO [0m lib389.SetupDs:setup.py:686 Completed installation for standalone1 [32mINFO [0m lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. ------------------------------Captured stdout call------------------------------ Entry uid=buser123,ou=BOU,dc=example,dc=com is locked -------------------------------Captured log call-------------------------------- [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:184 Testing Bug 1347760 - Information disclosure via repeated use of LDAP ADD operation, etc. [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:186 Disabling accesslog logbuffering [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:189 Bind as {cn=Directory Manager,password} [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:192 Adding ou=BOU a bind user belongs to. [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:197 Adding a bind user. [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:204 Adding a test user. [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:211 Deleting aci in dc=example,dc=com. [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:214 While binding as DM, acquire an access log path and instance dir [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:220 Bind case 1. the bind user has no rights to read the entry itself, bind should be successful. [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:221 Bind as {uid=buser123,ou=BOU,dc=example,dc=com,buser123} who has no access rights. [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:229 Access log path: /var/log/dirsrv/slapd-standalone1/access [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:231 Bind case 2-1. the bind user does not exist, bind should fail with error INVALID_CREDENTIALS [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:233 Bind as {uid=bogus,dc=example,dc=com,bogus} who does not exist. [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:237 Exception (expected): INVALID_CREDENTIALS [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:238 Desc Invalid credentials [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:246 Cause found - [30/Oct/2020:20:23:12.068704733 -0400] conn=1 op=11 RESULT err=49 tag=97 nentries=0 wtime=0.000101639 optime=0.008046312 etime=0.008145765 - No such entry [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:249 Bind case 2-2. the bind user's suffix does not exist, bind should fail with error INVALID_CREDENTIALS [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:251 Bind as {uid=bogus,ou=people,dc=bogus,bogus} who does not exist. 
[32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:260 Cause found - [30/Oct/2020:20:23:13.077522648 -0400] conn=1 op=12 RESULT err=49 tag=97 nentries=0 wtime=0.000153497 optime=0.004257464 etime=0.004402734 - No suffix for bind dn found [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:263 Bind case 2-3. the bind user's password is wrong, bind should fail with error INVALID_CREDENTIALS [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:265 Bind as {uid=buser123,ou=BOU,dc=example,dc=com,bogus} who does not exist. [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:269 Exception (expected): INVALID_CREDENTIALS [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:270 Desc Invalid credentials [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:278 Cause found - [30/Oct/2020:20:23:14.114496391 -0400] conn=1 op=13 RESULT err=49 tag=97 nentries=0 wtime=0.000180453 optime=0.033104121 etime=0.033277037 - Invalid credentials [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:281 Adding aci for uid=buser123,ou=BOU,dc=example,dc=com to ou=BOU,dc=example,dc=com. [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:283 aci: (targetattr="*")(version 3.0; acl "buser123"; allow(all) userdn = "ldap:///uid=buser123,ou=BOU,dc=example,dc=com";) [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:284 Bind as {cn=Directory Manager,password} [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:289 Bind case 3. the bind user has the right to read the entry itself, bind should be successful. [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:290 Bind as {uid=buser123,ou=BOU,dc=example,dc=com,buser123} which should be ok. [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:293 The following operations are against the subtree the bind user uid=buser123,ou=BOU,dc=example,dc=com has no rights. [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:297 Search case 1. the bind user has no rights to read the search entry, it should return no search results with <class 'ldap.SUCCESS'> [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Searching existing entry uid=tuser0,ou=people,dc=example,dc=com, which should be ok. [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:141 Search should return none [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:303 Search case 2-1. the search entry does not exist, the search should return no search results with SUCCESS [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Searching non-existing entry uid=bogus,dc=example,dc=com, which should be ok. [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:141 Search should return none [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED [32mINFO [0m tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:309 Search case 2-2. 
the search entry does not exist, the search should return no search results with SUCCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Searching non-existing entry uid=bogus,ou=people,dc=example,dc=com, which should be ok.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:141 Search should return none
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:316 Add case 1. the bind user has no rights AND the adding entry exists, it should fail with INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Adding existing entry uid=tuser0,ou=people,dc=example,dc=com, which should fail with INSUFFICIENT_ACCESS.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:131 Exception (expected): INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:132 Desc Insufficient access
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:322 Add case 2-1. the bind user has no rights AND the adding entry does not exist, it should fail with INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Adding non-existing entry uid=bogus,dc=example,dc=com, which should fail with INSUFFICIENT_ACCESS.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:131 Exception (expected): INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:132 Desc Insufficient access
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:328 Add case 2-2. the bind user has no rights AND the adding entry does not exist, it should fail with INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Adding non-existing entry uid=bogus,ou=people,dc=example,dc=com, which should fail with INSUFFICIENT_ACCESS.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:131 Exception (expected): INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:132 Desc Insufficient access
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:335 Modify case 1. the bind user has no rights AND the modifying entry exists, it should fail with INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Modifying existing entry uid=tuser0,ou=people,dc=example,dc=com, which should fail with INSUFFICIENT_ACCESS.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:131 Exception (expected): INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:132 Desc Insufficient access
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:341 Modify case 2-1. the bind user has no rights AND the modifying entry does not exist, it should fail with INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Modifying non-existing entry uid=bogus,dc=example,dc=com, which should fail with INSUFFICIENT_ACCESS.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:131 Exception (expected): INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:132 Desc Insufficient access
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:347 Modify case 2-2. the bind user has no rights AND the modifying entry does not exist, it should fail with INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Modifying non-existing entry uid=bogus,ou=people,dc=example,dc=com, which should fail with INSUFFICIENT_ACCESS.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:131 Exception (expected): INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:132 Desc Insufficient access
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:354 Modrdn case 1. the bind user has no rights AND the renaming entry exists, it should fail with INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Renaming existing entry uid=tuser0,ou=people,dc=example,dc=com, which should fail with INSUFFICIENT_ACCESS.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:131 Exception (expected): INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:132 Desc Insufficient access
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:360 Modrdn case 2-1. the bind user has no rights AND the renaming entry does not exist, it should fail with INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Renaming non-existing entry uid=bogus,dc=example,dc=com, which should fail with INSUFFICIENT_ACCESS.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:131 Exception (expected): INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:132 Desc Insufficient access
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:366 Modrdn case 2-2. the bind user has no rights AND the renaming entry does not exist, it should fail with INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Renaming non-existing entry uid=bogus,ou=people,dc=example,dc=com, which should fail with INSUFFICIENT_ACCESS.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:131 Exception (expected): INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:132 Desc Insufficient access
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:372 Modrdn case 3. the bind user has no rights AND the node moving an entry to exists, it should fail with INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Moving to existing superior ou=groups,dc=example,dc=com, which should fail with INSUFFICIENT_ACCESS.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:131 Exception (expected): INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:132 Desc Insufficient access
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:378 Modrdn case 4-1. the bind user has no rights AND the node moving an entry to does not, it should fail with INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Moving to non-existing superior ou=OU,dc=example,dc=com, which should fail with INSUFFICIENT_ACCESS.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:131 Exception (expected): INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:132 Desc Insufficient access
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:384 Modrdn case 4-2. the bind user has no rights AND the node moving an entry to does not, it should fail with INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Moving to non-existing superior ou=OU,dc=example,dc=com, which should fail with INSUFFICIENT_ACCESS.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:131 Exception (expected): INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:132 Desc Insufficient access
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:391 Delete case 1. the bind user has no rights AND the deleting entry exists, it should fail with INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Deleting existing entry uid=tuser0,ou=people,dc=example,dc=com, which should fail with INSUFFICIENT_ACCESS.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:131 Exception (expected): INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:132 Desc Insufficient access
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:397 Delete case 2-1. the bind user has no rights AND the deleting entry does not exist, it should fail with INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Deleting non-existing entry uid=bogus,dc=example,dc=com, which should fail with INSUFFICIENT_ACCESS.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:131 Exception (expected): INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:132 Desc Insufficient access
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:403 Delete case 2-2. the bind user has no rights AND the deleting entry does not exist, it should fail with INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Deleting non-existing entry uid=bogus,ou=people,dc=example,dc=com, which should fail with INSUFFICIENT_ACCESS.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:131 Exception (expected): INSUFFICIENT_ACCESS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:132 Desc Insufficient access
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:407 EXTRA: Check no regressions
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:408 Adding aci for uid=buser123,ou=BOU,dc=example,dc=com to dc=example,dc=com.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:410 Bind as {cn=Directory Manager,password}
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:415 Bind as {uid=buser123,ou=BOU,dc=example,dc=com,buser123}.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:425 Search case. the search entry does not exist, the search should fail with NO_SUCH_OBJECT
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Searching non-existing entry uid=bogus,ou=people,dc=example,dc=com, which should fail with NO_SUCH_OBJECT.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:131 Exception (expected): NO_SUCH_OBJECT
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:132 Desc No such object
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:431 Add case. the adding entry already exists, it should fail with ALREADY_EXISTS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Adding existing entry uid=tuser0,ou=people,dc=example,dc=com, which should fail with ALREADY_EXISTS.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:131 Exception (expected): ALREADY_EXISTS
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:132 Desc Already exists
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:436 Modify case. the modifying entry does not exist, it should fail with NO_SUCH_OBJECT
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Modifying non-existing entry uid=bogus,dc=example,dc=com, which should fail with NO_SUCH_OBJECT.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:131 Exception (expected): NO_SUCH_OBJECT
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:132 Desc No such object
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:441 Modrdn case 1. the renaming entry does not exist, it should fail with NO_SUCH_OBJECT
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Renaming non-existing entry uid=bogus,dc=example,dc=com, which should fail with NO_SUCH_OBJECT.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:131 Exception (expected): NO_SUCH_OBJECT
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:132 Desc No such object
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:446 Modrdn case 2. the node moving an entry to does not, it should fail with NO_SUCH_OBJECT
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Moving to non-existing superior ou=OU,dc=example,dc=com, which should fail with NO_SUCH_OBJECT.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:131 Exception (expected): NO_SUCH_OBJECT
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:132 Desc No such object
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:451 Delete case. the deleting entry does not exist, it should fail with NO_SUCH_OBJECT
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:108 Deleting non-existing entry uid=bogus,dc=example,dc=com, which should fail with NO_SUCH_OBJECT.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:131 Exception (expected): NO_SUCH_OBJECT
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:132 Desc No such object
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:148 PASSED
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:454 Inactivate uid=buser123,ou=BOU,dc=example,dc=com
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:461 ['/usr/sbin/dsidm', 'standalone1', '-b', 'dc=example,dc=com', 'account', 'lock', 'uid=buser123,ou=BOU,dc=example,dc=com']
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:465 Bind as {uid=buser123,ou=BOU,dc=example,dc=com,buser123} which should fail with UNWILLING_TO_PERFORM.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:469 Exception (expected): UNWILLING_TO_PERFORM
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:470 Desc Server is unwilling to perform
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:473 Bind as {uid=buser123,ou=BOU,dc=example,dc=com,bogus} which should fail with UNWILLING_TO_PERFORM.
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:477 Exception (expected): UNWILLING_TO_PERFORM
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:478 Desc Server is unwilling to perform
INFO tests.suites.acl.repeated_ldap_add_test:repeated_ldap_add_test.py:481 SUCCESS | |||
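The cases logged above all follow one pattern: bound as a user with no rights, every add/modify/modrdn/delete must fail with INSUFFICIENT_ACCESS whether or not the target entry exists, so the result code never reveals whether an entry is present. A minimal sketch of that check with python-ldap follows; the server URI, bind DN and password are placeholders, not the suite's fixtures.
import ldap
from ldap.modlist import addModlist

conn = ldap.initialize("ldap://localhost:389")  # placeholder server URI
conn.simple_bind_s("uid=buser123,ou=BOU,dc=example,dc=com", "buser123")  # unprivileged bind

for target in ("uid=tuser0,ou=people,dc=example,dc=com",   # entry exists
               "uid=bogus,ou=people,dc=example,dc=com"):   # entry does not exist
    try:
        conn.add_s(target, addModlist({"objectClass": [b"top", b"person"],
                                       "cn": [b"x"], "sn": [b"x"]}))
        raise AssertionError("add unexpectedly succeeded")
    except ldap.INSUFFICIENT_ACCESS:
        pass  # expected in both cases; ALREADY_EXISTS or NO_SUCH_OBJECT would leak existence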
Passed | suites/acl/roledn_test.py::test_mod_seealso_positive[(STEVE_ROLE, NESTED_ROLE_TESTER)] | 0.07 | |
-------------------------------Captured log setup-------------------------------
INFO lib389.SetupDs:setup.py:658 Starting installation...
INFO lib389.SetupDs:setup.py:686 Completed installation for standalone1
INFO lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. | |||
Passed | suites/acl/roledn_test.py::test_mod_seealso_positive[(HARRY_ROLE, NESTED_ROLE_TESTER)] | 0.04 | |
No log output captured. | |||
Passed | suites/acl/roledn_test.py::test_mod_seealso_positive[(MARY_ROLE, NOT_RULE_ACCESS)] | 0.04 | |
No log output captured. | |||
Passed | suites/acl/roledn_test.py::test_mod_seealso_positive[(STEVE_ROLE, OR_RULE_ACCESS)] | 0.04 | |
No log output captured. | |||
Passed | suites/acl/roledn_test.py::test_mod_seealso_positive[(HARRY_ROLE, OR_RULE_ACCESS)] | 0.04 | |
No log output captured. | |||
Passed | suites/acl/roledn_test.py::test_mod_seealso_positive[(STEVE_ROLE, ALL_ACCESS)] | 0.04 | |
No log output captured. | |||
Passed | suites/acl/roledn_test.py::test_mod_seealso_positive[(HARRY_ROLE, ALL_ACCESS)] | 0.04 | |
No log output captured. | |||
Passed | suites/acl/roledn_test.py::test_mod_seealso_positive[(MARY_ROLE, ALL_ACCESS)] | 0.04 | |
No log output captured. | |||
Passed | suites/acl/roledn_test.py::test_mod_seealso_negative[(MARY_ROLE, NESTED_ROLE_TESTER)] | 0.29 | |
No log output captured. | |||
Passed | suites/acl/roledn_test.py::test_mod_seealso_negative[(STEVE_ROLE, NOT_RULE_ACCESS)] | 0.06 | |
No log output captured. | |||
Passed | suites/acl/roledn_test.py::test_mod_seealso_negative[(HARRY_ROLE, NOT_RULE_ACCESS)] | 0.28 | |
No log output captured. | |||
Passed | suites/acl/roledn_test.py::test_mod_seealso_negative[(MARY_ROLE , OR_RULE_ACCESS)] | 0.06 | |
No log output captured. | |||
Passed | suites/acl/roledn_test.py::test_mod_anonseealso_positive[NOT_RULE_ACCESS] | 0.01 | |
No log output captured. | |||
Passed | suites/acl/roledn_test.py::test_mod_anonseealso_positive[ALL_ACCESS] | 0.01 | |
No log output captured. | |||
Passed | suites/acl/roledn_test.py::test_mod_anonseealso_negaive[NESTED_ROLE_TESTER] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/roledn_test.py::test_mod_anonseealso_negaive[OR_RULE_ACCESS] | 0.02 | |
No log output captured. | |||
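The roledn cases above toggle write access to the seeAlso attribute according to which nsRole the bound user holds. A sketch of the kind of roledn bind rule involved, written with lib389; the role DN and ACI name are illustrative, not the suite's exact values.
from lib389.idm.domain import Domain

def add_seealso_roledn_aci(inst):
    # inst: a connected lib389 DirSrv instance bound as Directory Manager
    suffix = Domain(inst, "dc=example,dc=com")
    suffix.add("aci",
               '(targetattr="seeAlso")(version 3.0; acl "seeAlso for role members"; '
               'allow (write) roledn="ldap:///cn=filtered_role,dc=example,dc=com";)')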
Passed | suites/acl/search_real_part2_test.py::test_deny_all_access_with__target_set_on_non_leaf | 0.50 | |
-------------------------------Captured log setup-------------------------------
INFO lib389.SetupDs:setup.py:658 Starting installation...
INFO lib389.SetupDs:setup.py:686 Completed installation for standalone1
INFO lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. | |||
Passed | suites/acl/search_real_part2_test.py::test_deny_all_access_with__target_set_on_wildcard_non_leaf | 0.57 | |
No log output captured. | |||
Passed | suites/acl/search_real_part2_test.py::test_deny_all_access_with__target_set_on_wildcard_leaf | 0.70 | |
No log output captured. | |||
Passed | suites/acl/search_real_part2_test.py::test_deny_all_access_with_targetfilter_using_equality_search | 0.27 | |
No log output captured. | |||
Passed | suites/acl/search_real_part2_test.py::test_deny_all_access_with_targetfilter_using_equality_search_two | 0.58 | |
No log output captured. | |||
Passed | suites/acl/search_real_part2_test.py::test_deny_all_access_with_targetfilter_using_substring_search | 0.29 | |
No log output captured. | |||
Passed | suites/acl/search_real_part2_test.py::test_deny_all_access_with_targetfilter_using_substring_search_two | 2.00 | |
No log output captured. | |||
Passed | suites/acl/search_real_part2_test.py::test_deny_all_access_with_targetfilter_using_boolean_or_of_two_equality_search | 0.21 | |
No log output captured. | |||
Passed | suites/acl/search_real_part2_test.py::test_deny_all_access_to__userdn_two | 0.49 | |
No log output captured. | |||
Passed | suites/acl/search_real_part2_test.py::test_deny_all_access_with_userdn | 0.56 | |
No log output captured. | |||
Passed | suites/acl/search_real_part2_test.py::test_deny_all_access_with_targetfilter_using_presence_search | 0.21 | |
No log output captured. | |||
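The search_real_part2 cases above deny all access by target, wildcard target, or a targetfilter over an attribute value. A sketch of a targetfilter-scoped deny ACI of that shape; the filter and ACI name are illustrative.
from lib389.idm.domain import Domain

def add_deny_by_filter_aci(inst):
    # inst: a connected lib389 DirSrv instance bound as Directory Manager
    suffix = Domain(inst, "dc=example,dc=com")
    suffix.add("aci",
               '(targetattr="*")(targetfilter="(ou=Accounting)")'
               '(version 3.0; acl "deny by filter"; deny (all) userdn="ldap:///anyone";)')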
Passed | suites/acl/search_real_part3_test.py::test_deny_search_access_to_userdn_with_ldap_url | 0.73 | |
-------------------------------Captured log setup-------------------------------
INFO lib389.SetupDs:setup.py:658 Starting installation...
INFO lib389.SetupDs:setup.py:686 Completed installation for standalone1
INFO lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. | |||
Passed | suites/acl/search_real_part3_test.py::test_deny_search_access_to_userdn_with_ldap_url_two | 0.42 | |
No log output captured. | |||
Passed | suites/acl/search_real_part3_test.py::test_deny_search_access_to_userdn_with_ldap_url_matching_all_users | 0.61 | |
No log output captured. | |||
Passed | suites/acl/search_real_part3_test.py::test_deny_read_access_to_a_dynamic_group | 0.50 | |
No log output captured. | |||
Passed | suites/acl/search_real_part3_test.py::test_deny_read_access_to_dynamic_group_with_host_port_set_on_ldap_url | 0.60 | |
No log output captured. | |||
Passed | suites/acl/search_real_part3_test.py::test_deny_read_access_to_dynamic_group_with_scope_set_to_one_in_ldap_url | 0.73 | |
No log output captured. | |||
Passed | suites/acl/search_real_part3_test.py::test_deny_read_access_to_dynamic_group_two | 0.72 | |
No log output captured. | |||
Passed | suites/acl/search_real_part3_test.py::test_deny_access_to_group_should_deny_access_to_all_uniquemember | 0.61 | |
No log output captured. | |||
Passed | suites/acl/search_real_part3_test.py::test_entry_with_lots_100_attributes | 11.73 | |
No log output captured. | |||
Passed | suites/acl/search_real_part3_test.py::test_groupdnattr_value_is_another_group | 0.14 | |
No log output captured. | |||
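The search_real_part3 cases above deny search or read access through an LDAP URL in the bind rule and through dynamic group membership. A sketch of the LDAP-URL variant; the suffix, filter and ACI name are illustrative.
from lib389.idm.domain import Domain

def add_deny_by_url_aci(inst):
    # inst: a connected lib389 DirSrv instance bound as Directory Manager
    suffix = Domain(inst, "dc=example,dc=com")
    suffix.add("aci",
               '(targetattr="*")(version 3.0; acl "deny binders matching URL"; '
               'deny (search, read) '
               'userdn="ldap:///dc=example,dc=com??sub?(l=sunnyvale)";)')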
Passed | suites/acl/search_real_test.py::test_deny_all_access_with_target_set | 0.43 | |
-------------------------------Captured log setup-------------------------------
INFO lib389.SetupDs:setup.py:658 Starting installation...
INFO lib389.SetupDs:setup.py:686 Completed installation for standalone1
INFO lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. | |||
Passed | suites/acl/search_real_test.py::test_deny_all_access_to_a_target_with_wild_card | 0.26 | |
No log output captured. | |||
Passed | suites/acl/search_real_test.py::test_deny_all_access_without_a_target_set | 1.84 | |
No log output captured. | |||
Passed | suites/acl/search_real_test.py::test_deny_read_search_and_compare_access_with_target_and_targetattr_set | 1.45 | |
No log output captured. | |||
Passed | suites/acl/search_real_test.py::test_deny_read_access_to_multiple_groupdns | 1.05 | |
No log output captured. | |||
Passed | suites/acl/search_real_test.py::test_deny_all_access_to_userdnattr | 0.22 | |
No log output captured. | |||
Passed | suites/acl/search_real_test.py::test_deny_all_access_with__target_set | 0.55 | |
No log output captured. | |||
Passed | suites/acl/search_real_test.py::test_deny_all_access_with__targetattr_set | 1.40 | |
No log output captured. | |||
Passed | suites/acl/search_real_test.py::test_deny_all_access_with_targetattr_set | 1.06 | |
No log output captured. | |||
Passed | suites/acl/selfdn_permissions_test.py::test_selfdn_permission_add | 0.78 | |
-------------------------------Captured log setup-------------------------------
INFO lib389.SetupDs:setup.py:658 Starting installation...
INFO lib389.SetupDs:setup.py:686 Completed installation for standalone1
INFO lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created.
INFO lib389:selfdn_permissions_test.py:58 Add OCticket47653 that allows 'member' attribute
INFO lib389:selfdn_permissions_test.py:63 Add cn=bind_entry, dc=example,dc=com
-------------------------------Captured log call--------------------------------
INFO lib389:selfdn_permissions_test.py:106 ######################### ADD ######################
INFO lib389:selfdn_permissions_test.py:109 Bind as cn=bind_entry, dc=example,dc=com
INFO lib389:selfdn_permissions_test.py:139 Try to add Add cn=test_entry, dc=example,dc=com (aci is missing):
dn: cn=test_entry, dc=example,dc=com
cn: test_entry
member: cn=bind_entry, dc=example,dc=com
objectclass: top
objectclass: person
objectclass: OCticket47653
postalAddress: here
postalCode: 1234
sn: test_entry
INFO lib389:selfdn_permissions_test.py:143 Exception (expected): INSUFFICIENT_ACCESS
INFO lib389:selfdn_permissions_test.py:147 Bind as cn=Directory Manager and add the ADD SELFDN aci
INFO lib389:selfdn_permissions_test.py:159 Bind as cn=bind_entry, dc=example,dc=com
INFO lib389:selfdn_permissions_test.py:164 Try to add Add cn=test_entry, dc=example,dc=com (member is missing)
INFO lib389:selfdn_permissions_test.py:172 Exception (expected): INSUFFICIENT_ACCESS
INFO lib389:selfdn_permissions_test.py:178 Try to add Add cn=test_entry, dc=example,dc=com (with several member values)
INFO lib389:selfdn_permissions_test.py:181 Exception (expected): INSUFFICIENT_ACCESS
INFO lib389:selfdn_permissions_test.py:184 Try to add Add cn=test_entry, dc=example,dc=com should be successful | |||
Passed | suites/acl/selfdn_permissions_test.py::test_selfdn_permission_search | 0.40 | |
-------------------------------Captured log call--------------------------------
INFO lib389:selfdn_permissions_test.py:205 ######################### SEARCH ######################
INFO lib389:selfdn_permissions_test.py:207 Bind as cn=bind_entry, dc=example,dc=com
INFO lib389:selfdn_permissions_test.py:211 Try to search cn=test_entry, dc=example,dc=com (aci is missing)
INFO lib389:selfdn_permissions_test.py:216 Bind as cn=Directory Manager and add the READ/SEARCH SELFDN aci
INFO lib389:selfdn_permissions_test.py:229 Bind as cn=bind_entry, dc=example,dc=com
INFO lib389:selfdn_permissions_test.py:233 Try to search cn=test_entry, dc=example,dc=com should be successful | |||
Passed | suites/acl/selfdn_permissions_test.py::test_selfdn_permission_modify | 0.65 | |
-------------------------------Captured log call--------------------------------
INFO lib389:selfdn_permissions_test.py:256 Bind as cn=bind_entry, dc=example,dc=com
INFO lib389:selfdn_permissions_test.py:259 ######################### MODIFY ######################
INFO lib389:selfdn_permissions_test.py:263 Try to modify cn=test_entry, dc=example,dc=com (aci is missing)
INFO lib389:selfdn_permissions_test.py:267 Exception (expected): INSUFFICIENT_ACCESS
INFO lib389:selfdn_permissions_test.py:271 Bind as cn=Directory Manager and add the WRITE SELFDN aci
INFO lib389:selfdn_permissions_test.py:284 Bind as cn=bind_entry, dc=example,dc=com
INFO lib389:selfdn_permissions_test.py:288 Try to modify cn=test_entry, dc=example,dc=com. It should succeeds | |||
Passed | suites/acl/selfdn_permissions_test.py::test_selfdn_permission_delete | 0.27 | |
-------------------------------Captured log call--------------------------------
INFO lib389:selfdn_permissions_test.py:314 ######################### DELETE ######################
INFO lib389:selfdn_permissions_test.py:317 Bind as cn=bind_entry, dc=example,dc=com
INFO lib389:selfdn_permissions_test.py:322 Try to delete cn=test_entry, dc=example,dc=com (aci is missing)
INFO lib389:selfdn_permissions_test.py:325 Exception (expected): INSUFFICIENT_ACCESS
INFO lib389:selfdn_permissions_test.py:329 Bind as cn=Directory Manager and add the READ/SEARCH SELFDN aci
INFO lib389:selfdn_permissions_test.py:341 Bind as cn=bind_entry, dc=example,dc=com
INFO lib389:selfdn_permissions_test.py:345 Try to delete cn=test_entry, dc=example,dc=com should be successful | |||
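The four SELFDN cases above gate add, search, modify and delete on the bound DN appearing in the entry's own member attribute, with the ACI added by Directory Manager as the log records. A sketch of an ADD SELFDN ACI of that form; the target pattern and ACI name are illustrative.
from lib389.idm.domain import Domain

def add_selfdn_add_aci(inst):
    # inst: a connected lib389 DirSrv instance bound as Directory Manager
    suffix = Domain(inst, "dc=example,dc=com")
    suffix.add("aci",
               '(target = "ldap:///cn=*,dc=example,dc=com")'
               '(version 3.0; acl "SELFDN add"; allow (add) userattr = "member#selfdn";)')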
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_targattrfilters_1] | 0.04 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_targattrfilters_2] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_targattrfilters_3] | 0.03 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_targattrfilters_4] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_targattrfilters_5] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_targattrfilters_6] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_targattrfilters_7] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_targattrfilters_8] | 0.03 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_targattrfilters_9] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_targattrfilters_10] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_targattrfilters_11] | 0.03 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_targattrfilters_12] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_targattrfilters_13] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_targattrfilters_14] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_targattrfilters_15] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_targattrfilters_16] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_targattrfilters_17] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_targattrfilters_19] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_targattrfilters_21] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_targattrfilters_22] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_targattrfilters_23] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_Missing_acl_mispel] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_Missing_acl_string] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_Wrong_version_string] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_Missing_version_string] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_Authenticate_statement] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_Multiple_targets] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_Target_set_to_self] | 0.03 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_target_set_with_ldap_instead_of_ldap] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_target_set_with_more_than_three] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_target_set_with_less_than_three] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_bind_rule_set_with_less_than_three] | 0.03 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_Use_semicolon_instead_of_comma_in_permission] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_Use_double_equal_instead_of_equal_in_the_target] | 0.03 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_use_double_equal_instead_of_equal_in_user_and_group_access] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_donot_cote_the_name_of_the_aci] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_extra_parentheses_case_1] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_extra_parentheses_case_2] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_extra_parentheses_case_3] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_no_semicolon_at_the_end_of_the_aci] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_a_character_different_of_a_semicolon_at_the_end_of_the_aci] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_bad_filter] | 0.03 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_Use_double_equal_instead_of_equal_in_the_targattrfilters] | 0.03 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_aci_invalid_syntax[test_Use_double_equal_instead_of_equal_inside_the_targattrfilters] | 0.03 | |
No log output captured. | |||
Passed | suites/acl/syntax_test.py::test_target_set_above_the_entry_test | 0.02 | |
No log output captured. | |||
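Each syntax case above writes a deliberately malformed ACI to the suffix entry and expects the server's ACI parser to reject it. A sketch of that pattern follows; the broken ACI here simply omits the version string, and while the suite asserts the specific LDAP error, this sketch only expects some ldap.LDAPError.
import ldap
from lib389.idm.domain import Domain

def try_invalid_aci(inst):
    # inst: a connected lib389 DirSrv instance bound as Directory Manager
    suffix = Domain(inst, "dc=example,dc=com")
    bad_aci = '(targetattr="*")(acl "missing version"; allow (read) userdn="ldap:///anyone";)'
    try:
        suffix.add("aci", bad_aci)
        raise AssertionError("server accepted an invalid ACI")
    except ldap.LDAPError:
        pass  # expected: the ACI value is refused at write time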
Passed | suites/acl/userattr_test.py::test_mod_see_also_positive[(CAN,ROLEDNACCESS)] | 0.04 | |
-------------------------------Captured log setup-------------------------------
INFO lib389.SetupDs:setup.py:658 Starting installation...
INFO lib389.SetupDs:setup.py:686 Completed installation for standalone1
INFO lib389.topologies:topologies.py:109 Instance with parameters {'ldap-port': 38901, 'ldap-secureport': 63601, 'server-id': 'standalone1', 'suffix': 'dc=example,dc=com'} was created. | |||
Passed | suites/acl/userattr_test.py::test_mod_see_also_positive[(CAN,USERDNACCESS)] | 0.01 | |
No log output captured. | |||
Passed | suites/acl/userattr_test.py::test_mod_see_also_positive[(CAN,GROUPDNACCESS)] | 0.01 | |
No log output captured. | |||
Passed | suites/acl/userattr_test.py::test_mod_see_also_positive[(CAN,LDAPURLACCESS)] | 0.01 | |
No log output captured. | |||
Passed | suites/acl/userattr_test.py::test_mod_see_also_positive[(CAN,ATTRNAMEACCESS)] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/userattr_test.py::test_mod_see_also_positive[(LEVEL_0, OU_2)] | 0.29 | |
No log output captured. | |||
Passed | suites/acl/userattr_test.py::test_mod_see_also_positive[(LEVEL_1,ANCESTORS)] | 0.04 | |
No log output captured. | |||
Passed | suites/acl/userattr_test.py::test_mod_see_also_positive[(LEVEL_2,GRANDPARENTS)] | 0.04 | |
No log output captured. | |||
Passed | suites/acl/userattr_test.py::test_mod_see_also_positive[(LEVEL_4,OU_2)] | 0.27 | |
No log output captured. | |||
Passed | suites/acl/userattr_test.py::test_mod_see_also_positive[(LEVEL_4, ANCESTORS)] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/userattr_test.py::test_mod_see_also_positive[(LEVEL_4,GRANDPARENTS)] | 0.02 | |
No log output captured. | |||
Passed | suites/acl/userattr_test.py::test_mod_see_also_positive[(LEVEL_4,PARENTS)] | 0.02 |