Diffstat:
 0001-mm-vmscan-Limit-direct-reclaim-for-higher-order-allo.patch | 54 ++++++++++
 1 file changed, 54 insertions(+), 0 deletions(-)
diff --git a/0001-mm-vmscan-Limit-direct-reclaim-for-higher-order-allo.patch b/0001-mm-vmscan-Limit-direct-reclaim-for-higher-order-allo.patch
new file mode 100644
index 000000000..77777f012
--- /dev/null
+++ b/0001-mm-vmscan-Limit-direct-reclaim-for-higher-order-allo.patch
@@ -0,0 +1,54 @@
+From 6b7025ea927d290a59d2772828435c1893f0267f Mon Sep 17 00:00:00 2001
+From: Rik van Riel <riel@redhat.com>
+Date: Fri, 7 Oct 2011 16:17:22 +0100
+Subject: [PATCH 1/2] mm: vmscan: Limit direct reclaim for higher order
+ allocations
+
+When suffering from memory fragmentation due to unfreeable pages,
+THP page faults will repeatedly try to compact memory. Due to the
+unfreeable pages, compaction fails.
+
+Needless to say, at that point page reclaim also fails to create
+free contiguous 2MB areas. However, that doesn't stop the current
+code from trying, over and over again, and freeing a minimum of 4MB
+(2UL << sc->order pages) at every single invocation.
+
+This resulted in my 12GB system having 2-3GB free memory, a
+corresponding amount of used swap and very sluggish response times.
+
+This can be avoided by having the direct reclaim code not reclaim from
+zones that already have plenty of free memory available for compaction.
+
+If compaction still fails due to unmovable memory, doing additional
+reclaim will only hurt the system, not help.
+
+Signed-off-by: Rik van Riel <riel@redhat.com>
+Signed-off-by: Mel Gorman <mgorman@suse.de>
+---
+ mm/vmscan.c | 10 ++++++++++
+ 1 files changed, 10 insertions(+), 0 deletions(-)
+
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 6072d74..8c03534 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -2022,6 +2022,16 @@ static void shrink_zones(int priority, struct zonelist *zonelist,
+ continue;
+ if (zone->all_unreclaimable && priority != DEF_PRIORITY)
+ continue; /* Let kswapd poll it */
++ if (COMPACTION_BUILD) {
++ /*
++ * If we already have plenty of memory free
++ * for compaction, don't free any more.
++ */
++ if (sc->order > PAGE_ALLOC_COSTLY_ORDER &&
++ (compaction_suitable(zone, sc->order) ||
++ compaction_deferred(zone)))
++ continue;
++ }
+ /*
+ * This steals pages from memory cgroups over softlimit
+ * and returns the number of reclaimed pages and
+--
+1.7.6.4
+