path: root/include
author     Tejun Heo <tj@kernel.org>  2009-03-06 14:33:59 +0900
committer  Tejun Heo <tj@kernel.org>  2009-03-06 14:33:59 +0900
commit     6b19b0c2400437a3c10059ede0e59b517092e1bd (patch)
tree       4fc1868fc8fde37315b54c6d416b48000621af9d /include
parent     edcb463997ed7b2ffa3bac76e3e75957318f2e01 (diff)
download   kernel-crypto-6b19b0c2400437a3c10059ede0e59b517092e1bd.tar.gz
           kernel-crypto-6b19b0c2400437a3c10059ede0e59b517092e1bd.tar.xz
           kernel-crypto-6b19b0c2400437a3c10059ede0e59b517092e1bd.zip
x86, percpu: setup reserved percpu area for x86_64
Impact: fix relocation overflow during module load

x86_64 uses 32bit relocations for symbol access, and static percpu symbols, whether in core or modules, must be within 2GB of the percpu segment base, which the dynamic percpu allocator doesn't guarantee. This patch makes x86_64 reserve PERCPU_MODULE_RESERVE bytes in the first chunk so that module percpu areas are always allocated from the first chunk, which is always inside the relocatable range.

This problem exists for any percpu allocator but is easily triggered when using the embedding allocator, because on it the second chunk is located beyond 2GB.

This patch also changes the meaning of PERCPU_DYNAMIC_RESERVE so that it only indicates the size of the area to reserve for dynamic allocation, as the static and dynamic areas can now be separate. The new PERCPU_DYNAMIC_RESERVE is increased by 4k for both 32 and 64 bits, as the reserved-area separation eats away some allocatable space, and having slightly more headroom (currently between 4 and 8k after a minimal boot, sans module area) makes sense for common-case performance.

x86_32 can address anywhere from anywhere and doesn't need the reservation.

Mike Galbraith reported the problem first and bisected it to the embedding percpu allocator commit.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Mike Galbraith <efault@gmx.de>
Reported-by: Jaswinder Singh Rajput <jaswinder@kernel.org>
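For orientation, here is a minimal standalone C sketch (not kernel code) of the first-chunk layout the commit message describes: the core static area, then a reserved region of PERCPU_MODULE_RESERVE bytes for module static percpu variables, then PERCPU_DYNAMIC_RESERVE for dynamic allocations. The static size and the PERCPU_MODULE_RESERVE value are assumptions for a hypothetical x86_64 build with CONFIG_MODULES; only the PERCPU_DYNAMIC_RESERVE value comes from the diff below.

/*
 * Illustrative sketch only -- not the kernel's setup code.
 * Models the first-chunk split: [ static | reserved | dynamic ].
 */
#include <stdio.h>

#define PERCPU_MODULE_RESERVE   (8 << 10)   /* assumed header value with CONFIG_MODULES */
#define PERCPU_DYNAMIC_RESERVE  (20 << 10)  /* new 64-bit value from this patch */

int main(void)
{
	unsigned long static_size  = 44 << 10;  /* hypothetical core static percpu size */
	unsigned long reserved_off = static_size;
	unsigned long dynamic_off  = reserved_off + PERCPU_MODULE_RESERVE;
	unsigned long unit_size    = dynamic_off + PERCPU_DYNAMIC_RESERVE;

	/*
	 * Module percpu areas are carved out of [reserved_off, dynamic_off),
	 * which lies in the first chunk and therefore stays within 2GB of the
	 * percpu segment base -- reachable by x86_64's 32-bit relocations.
	 */
	printf("reserved (module) area: %lu..%lu\n", reserved_off, dynamic_off);
	printf("dynamic area:           %lu..%lu\n", dynamic_off, unit_size);
	return 0;
}

Later chunks created by the dynamic allocator carry no such guarantee, which is why module percpu data must come from this reserved region.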
Diffstat (limited to 'include')
-rw-r--r--  include/linux/percpu.h  35
1 file changed, 12 insertions, 23 deletions
diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index 8ff15153ae2..54a968b4b92 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -85,31 +85,20 @@
/*
* PERCPU_DYNAMIC_RESERVE indicates the amount of free area to piggy
- * back on the first chunk if arch is manually allocating and mapping
- * it for faster access (as a part of large page mapping for example).
- * Note that dynamic percpu allocator covers both static and dynamic
- * areas, so these values are bigger than PERCPU_MODULE_RESERVE.
+ * back on the first chunk for dynamic percpu allocation if arch is
+ * manually allocating and mapping it for faster access (as a part of
+ * large page mapping for example).
*
- * On typical configuration with modules, the following values leave
- * about 8k of free space on the first chunk after boot on both x86_32
- * and 64 when module support is enabled. When module support is
- * disabled, it's much tighter.
+ * The following values give between one and two pages of free space
+ * after typical minimal boot (2-way SMP, single disk and NIC) with
+ * both defconfig and a distro config on x86_64 and 32. More
+ * intelligent way to determine this would be nice.
*/
-#ifndef PERCPU_DYNAMIC_RESERVE
-# if BITS_PER_LONG > 32
-# ifdef CONFIG_MODULES
-# define PERCPU_DYNAMIC_RESERVE (24 << 10)
-# else
-# define PERCPU_DYNAMIC_RESERVE (16 << 10)
-# endif
-# else
-# ifdef CONFIG_MODULES
-# define PERCPU_DYNAMIC_RESERVE (16 << 10)
-# else
-# define PERCPU_DYNAMIC_RESERVE (8 << 10)
-# endif
-# endif
-#endif /* PERCPU_DYNAMIC_RESERVE */
+#if BITS_PER_LONG > 32
+#define PERCPU_DYNAMIC_RESERVE (20 << 10)
+#else
+#define PERCPU_DYNAMIC_RESERVE (12 << 10)
+#endif
extern void *pcpu_base_addr;
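The hunk above is also where the "increased by 4k" note in the commit message comes from. A small standalone C sketch (not kernel code) of that arithmetic, using the old 64-bit CONFIG_MODULES value from the removed lines and the new value from the added lines; PERCPU_MODULE_RESERVE = 8k is an assumption about the contemporaneous header:

/*
 * Sketch of the headroom arithmetic behind this patch (x86_64, modules on).
 * Old scheme: PERCPU_DYNAMIC_RESERVE was one pool covering both module
 * static areas and dynamic allocations.  New scheme: modules get their own
 * reserved region, and PERCPU_DYNAMIC_RESERVE counts dynamic space only.
 */
#include <stdio.h>

#define PERCPU_MODULE_RESERVE       (8 << 10)   /* assumed header value */
#define OLD_PERCPU_DYNAMIC_RESERVE  (24 << 10)  /* removed 64-bit value */
#define NEW_PERCPU_DYNAMIC_RESERVE  (20 << 10)  /* added 64-bit value */

int main(void)
{
	/* Old: dynamic allocations shared the pool with module areas. */
	int old_dynamic = OLD_PERCPU_DYNAMIC_RESERVE - PERCPU_MODULE_RESERVE;

	/* New: the whole value is available for dynamic allocations. */
	int new_dynamic = NEW_PERCPU_DYNAMIC_RESERVE;

	printf("dynamic headroom: old %dk, new %dk (+%dk)\n",
	       old_dynamic >> 10, new_dynamic >> 10,
	       (new_dynamic - old_dynamic) >> 10);
	return 0;
}

Under these assumed numbers the effective dynamic headroom goes from 16k to 20k, matching the 4k increase the commit message cites; the same reasoning applies to the 32-bit values (16k -> 12k plus a separate module reserve).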