Diffstat (limited to '0001-arm64-account-for-sparsemem-section-alignment-when-c.patch')
-rw-r--r--   0001-arm64-account-for-sparsemem-section-alignment-when-c.patch   54
1 file changed, 54 insertions(+), 0 deletions(-)
diff --git a/0001-arm64-account-for-sparsemem-section-alignment-when-c.patch b/0001-arm64-account-for-sparsemem-section-alignment-when-c.patch
new file mode 100644
index 000000000..78e01defa
--- /dev/null
+++ b/0001-arm64-account-for-sparsemem-section-alignment-when-c.patch
@@ -0,0 +1,54 @@
+From b3ffe8a6522dd1f07c181a5f2581142776e2162d Mon Sep 17 00:00:00 2001
+From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
+Date: Tue, 8 Mar 2016 21:09:29 +0700
+Subject: [PATCH] arm64: account for sparsemem section alignment when choosing
+ vmemmap offset
+
+Commit dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear
+region") fixed an issue where the struct page array would overflow into the
+adjacent virtual memory region if system RAM was placed so high up in
+physical memory that its addresses were not representable in the build time
+configured virtual address size.
+
+However, the fix failed to take into account that the vmemmap region needs
+to be relatively aligned with respect to the sparsemem section size, so that
+a sequence of page structs corresponding with a sparsemem section in the
+linear region appears naturally aligned in the vmemmap region.
+
+So round up vmemmap to sparsemem section size. Since this essentially moves
+the projection of the linear region up in memory, also revert the reduction
+of the size of the vmemmap region.
+
+Fixes: dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear region")
+Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
+Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
+---
+ arch/arm64/include/asm/pgtable.h | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
+index fc9f7ef..eaa9cab 100644
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -40,7 +40,7 @@
+ * VMALLOC_END: extends to the available space below vmmemmap, PCI I/O space,
+ * fixed mappings and modules
+ */
+-#define VMEMMAP_SIZE ALIGN((1UL << (VA_BITS - PAGE_SHIFT - 1)) * sizeof(struct page), PUD_SIZE)
++#define VMEMMAP_SIZE ALIGN((1UL << (VA_BITS - PAGE_SHIFT)) * sizeof(struct page), PUD_SIZE)
+
+ #ifndef CONFIG_KASAN
+ #define VMALLOC_START (VA_START)
+@@ -52,7 +52,8 @@
+ #define VMALLOC_END (PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
+
+ #define VMEMMAP_START (VMALLOC_END + SZ_64K)
+-#define vmemmap ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
++#define vmemmap ((struct page *)VMEMMAP_START - \
++ SECTION_ALIGN_DOWN(memstart_addr >> PAGE_SHIFT))
+
+ #define FIRST_USER_ADDRESS 0UL
+
+--
+2.5.0
+
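
For readers unfamiliar with the sparsemem arithmetic behind this fix, the following is a minimal standalone sketch (not kernel code) of why the pfn subtracted from VMEMMAP_START has to be rounded down to a section boundary. It assumes 4K pages (PAGE_SHIFT = 12), 1GB sparsemem sections (SECTION_SIZE_BITS = 30) and a 64-byte struct page, which match a typical arm64 configuration of that era; the RAM base address is invented for illustration.

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT              12
#define SECTION_SIZE_BITS       30                      /* 1 GB sections (assumed) */
#define PFN_SECTION_SHIFT       (SECTION_SIZE_BITS - PAGE_SHIFT)
#define PAGES_PER_SECTION       (1ULL << PFN_SECTION_SHIFT)
#define SECTION_ALIGN_DOWN(pfn) ((pfn) & ~(PAGES_PER_SECTION - 1))
#define STRUCT_PAGE_SIZE        64ULL                   /* sizeof(struct page), assumed */

int main(void)
{
    /* Hypothetical RAM base that is *not* section-aligned. */
    uint64_t memstart_addr = (0x4000000ULL + 123) << PAGE_SHIFT;
    uint64_t memstart_pfn  = memstart_addr >> PAGE_SHIFT;

    /* pfn of the first sparsemem section boundary above the RAM base. */
    uint64_t section_pfn = SECTION_ALIGN_DOWN(memstart_pfn) + PAGES_PER_SECTION;

    /* Byte offset of that section's first struct page within the vmemmap
     * region: before the fix the offset subtracts the raw pfn, after the
     * fix it subtracts the section-aligned pfn. */
    uint64_t off_old = (section_pfn - memstart_pfn) * STRUCT_PAGE_SIZE;
    uint64_t off_new = (section_pfn - SECTION_ALIGN_DOWN(memstart_pfn)) * STRUCT_PAGE_SIZE;

    /* One section's worth of struct pages must start on this boundary. */
    uint64_t stride = PAGES_PER_SECTION * STRUCT_PAGE_SIZE;

    printf("pre-fix  offset section-aligned: %s\n", off_old % stride ? "no" : "yes");
    printf("post-fix offset section-aligned: %s\n", off_new % stride ? "no" : "yes");
    return 0;
}

Built with any C compiler, the sketch reports the pre-fix offset as misaligned and the post-fix offset as section-aligned, which is the property the commit message describes: the page structs for a sparsemem section must appear naturally aligned within the vmemmap region.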