| Commit message | Author | Age | Files | Lines |
| |
Signed-off-by: Adrian Reber <adrian@lisas.de>
| |
Signed-off-by: Adrian Reber <adrian@lisas.de>
| |
Signed-off-by: Adrian Reber <adrian@lisas.de>
| |
Signed-off-by: Adrian Reber <adrian@lisas.de>
| |
Signed-off-by: Adrian Reber <adrian@lisas.de>
| |
With the '--debug' switch, client accesses were logged to syslog and
to a logfile. The latest MM2 no longer logs this to syslog and the
'--debug' switch has been removed.
Signed-off-by: Adrian Reber <adrian@lisas.de>
| |
Signed-off-by: Adrian Reber <adrian@lisas.de>
| |
The new MM2 release includes different usage visualizations. This
prepares the MM2 frontend system to handle those visualizations.
Signed-off-by: Adrian Reber <adrian@lisas.de>
| |
Signed-off-by: Patrick Uiterwijk <puiterwijk@redhat.com>
| |
Signed-off-by: Patrick Uiterwijk <puiterwijk@redhat.com>
| |
With the logs from the mirrorlist-server it is possible
to create country/repository/architecture statistics.
The code which creates the actual statistics is already partially
included in mirrormanager.
Signed-off-by: Adrian Reber <adrian@lisas.de>
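As an illustration of the kind of aggregation these logs enable (the log
path and line format below are assumptions for illustration, not the
actual mirrorlist-server log layout), a minimal sketch in Python could
look like:

    from collections import Counter

    # Tally requests per (country, repository, architecture).
    # Assumed line format: "<timestamp> <country> <repository> <arch> ..."
    counts = Counter()
    with open('/var/log/mirrorlist/access.log') as log:  # hypothetical path
        for line in log:
            fields = line.split()
            if len(fields) < 4:
                continue  # skip lines that do not match the assumed format
            _, country, repo, arch = fields[:4]
            counts[(country, repo, arch)] += 1

    # Print the ten most requested combinations.
    for (country, repo, arch), hits in counts.most_common(10):
        print(country, repo, arch, hits)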
| |
Signed-off-by: Patrick Uiterwijk <puiterwijk@redhat.com>
| |
It seems like an update caused the crawler to use slightly
more memory than before, meaning the previous tuning of
27 threads no longer fits in the server's memory.
This patch brings it down to 23, which is for now known-good.
We should look again at what values to use after freeze.
Signed-off-by: Patrick Uiterwijk <puiterwijk@redhat.com>
| |
are hitting this now with the last change.
| |
The mirrorlist-server is the process which has the mirrorlist data
loaded and which is accessed by the public-facing
mirrorlist_client.wsgi. The mirrorlist-server uses the
ForkingUnixStreamServer which has a default of max_children = 40.
(https://hg.python.org/cpython/file/2.7/Lib/SocketServer.py#l516)
Looking at the code of ForkingUnixStreamServer it says at
https://hg.python.org/cpython/file/2.7/Lib/SocketServer.py#l523
# If we're above the max number of children, wait and reap them until
# we go back below threshold. Note that we use waitpid(-1) below to be
# able to collect children in size(<defunct children>) syscalls instead
# of size(<children>): the downside is that this might reap children
# which we didn't spawn, which is why we only resort to this when we're
# above max_children.
As we are running the wsgi with processes=45, this sounds like it can
lead to a situation where it might just hang.
This increases max_children to 80 and processes to 60.
Signed-off-by: Adrian Reber <adrian@lisas.de>
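A minimal sketch of the tuning described above, assuming the server
class is built from the Python 2 standard library mix-ins (the actual
mirrorlist-server code may differ):

    import SocketServer  # Python 2; the module is 'socketserver' on Python 3

    # ForkingMixIn defaults to max_children = 40; once 40 children are
    # active, the parent waits and reaps before accepting new requests.
    class ForkingUnixStreamServer(SocketServer.ForkingMixIn,
                                  SocketServer.UnixStreamServer):
        max_children = 80  # raised from the default of 40

The matching mod_wsgi side (processes raised from 45 to 60) is a
separate change in the Apache/WSGI configuration.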
| |
private.
| |
There has been a report that the MM database was not updated correctly
and dmesg on mm-crawler01 shows three OOM-killed crawlers.
https://fedorahosted.org/fedora-infrastructure/ticket/4845
| |
The latest crawler needs python-GeoIP for continent-specific crawls.
This should be a dependency of the mirrormanager2-crawler package, but
as it is not, python-GeoIP has to be installed manually. For some reason
the package was installed on mm-crawler02 but not on mm-crawler01, so
none of the mirrors that should have been crawled from mm-crawler01 have
been crawled for the last few days.
| |
From now on, each umdl category is run separately at different times.
The categories 'Fedora Linux' and 'Fedora EPEL' are started every 30
minutes and, if a sync has been available, fedmsg umdl is run for just
that category. The remaining categories are run once per day at
different times (00:00, 08:00, 16:00).
| |
Only wait 2 hours before starting the crawl on the second crawler.
| |
The script mm2_get-highest-active-host-id used to return the highest
ID of the active mirrors. This number was divided by the number of
active crawlers and then each crawler got its share of mirrors to crawl.
This did not take into account that more of the active mirrors have
higher IDs, as old mirror IDs are not re-used, and thus one crawler was
getting many more mirrors to crawl than another. The new script (which
will be renamed) now divides the list correctly by returning exactly the
fraction of the mirrors which each crawler should crawl.
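A small sketch of the new behaviour with a hypothetical helper and
made-up host IDs (not the actual script code):

    # Old behaviour: split the ID range [0, highest_active_id] evenly.
    # Active mirrors cluster at the high end because old IDs are never
    # re-used, so one crawler ended up with far more real work.
    #
    # New behaviour: split the list of active host IDs itself into
    # equal slices, one per crawler.
    def slice_for_crawler(active_host_ids, crawler_index, crawler_count):
        active_host_ids = sorted(active_host_ids)
        share = len(active_host_ids) // crawler_count + 1
        start = crawler_index * share
        return active_host_ids[start:start + share]

    # Made-up IDs: large gaps in the low range, most activity up high.
    hosts = [3, 120, 121, 122, 123, 124]
    print(slice_for_crawler(hosts, 0, 2))  # [3, 120, 121, 122]
    print(slice_for_crawler(hosts, 1, 2))  # [123, 124]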
| |
This reverts commit 1b409809ebdc490a67e450a8d79ded2d00b28c3e.
Manual umdl runs with '--delete' have finished. Cron-based umdl runs
are now re-enabled.
| |
Create the user manually for the crawler log sync.
| |
Sync crawler logs every hour from crawlers to the frontend.
| |
and therefore apache and its configuration can be removed from the
crawlers.