* x86: do not PSE on CONFIG_DEBUG_PAGEALLOC=y (Ingo Molnar; 2008-01-30; 5 files, -14/+29)

  Get more testing of the c_p_a() code done by not turning off PSE on
  DEBUG_PAGEALLOC. This simplifies the early pagetable setup code and
  tests the largepage-splitup code quite heavily. In the end, all the
  largepages will be split up pretty quickly, so there is no difference
  from how DEBUG_PAGEALLOC worked before.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: cpa: simplify locking (Ingo Molnar; 2008-01-30; 1 file, -7/+5)

  Further simplify cpa locking: since the largepage-split is a slowpath,
  use the pgd_lock for the whole operation instead of the mmap_sem. This
  also makes it suitable for DEBUG_PAGEALLOC purposes again.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: simplify cpa largepage split, #3 (Ingo Molnar; 2008-01-30; 1 file, -14/+5)

  Simplify the cpa largepage split: push the reference protection bits
  into the largepage-splitting function.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: cpa self-test fixes (Ingo Molnar; 2008-01-30; 2 files, -11/+7)

  cpa self-test fixes: change_page_attr_addr() was buggy, as it passed in
  a virtual address as a physical one.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: further cpa largepage-split cleanups (Ingo Molnar; 2008-01-30; 1 file, -40/+49)

  Further cpa largepage-split cleanups: make the split-up an isolated
  piece of functionality, without leaking details back into
  __change_page_attr().

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: simplify 32-bit cpa largepage splitting (Ingo Molnar; 2008-01-30; 1 file, -6/+8)

  Simplify 32-bit cpa largepage splitting: do a pure split and repeat the
  pte lookup to get the new pte modified.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: simplify the 32-bit cpa code (Ingo Molnar; 2008-01-30; 1 file, -123/+56)

  Simplify the 32-bit cpa code.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: fix some bugs in EFI runtime code mapping (Huang, Ying; 2008-01-30; 3 files, -26/+36)

  This patch fixes some bugs in making EFI runtime code executable:

  - Use change_page_attr() on i386 too, because the runtime code may be
    mapped other than through ioremap().
  - If there is no _PAGE_NX in __supported_pte_mask, do not call
    change_page_attr().
  - Make efi_ioremap() map pages as PAGE_KERNEL_EXEC_NOCACHE, because EFI
    runtime code may be mapped through efi_ioremap().

  Signed-off-by: Huang Ying <ying.huang@intel.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: fix more non-global TLB flushes (Ingo Molnar; 2008-01-30; 1 file, -1/+1)

  Fix more __flush_tlb() instances, out of caution.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: fix early_ioremap() on 64-bit (Andi Kleen; 2008-01-30; 1 file, -3/+3)

  Fix early_ioremap() on x86-64.

  I had ACPI failures on several machines for the past few days. The
  symptom was NUMA nodes not getting detected, or worse, cores not
  getting detected. They all came from ACPI not being able to read
  several of its tables. I finally bisected it down to Jeremy's "put
  _PAGE_GLOBAL into PAGE_KERNEL" change. With that, the fix was fairly
  obvious.

  The problem was that early_ioremap() didn't use an "_all" flush that
  would affect the global PTEs too. So with global bits getting used
  everywhere now, an early_ioremap() would not actually flush a mapping
  if something else was mapped previously on that slot (which can happen
  with early_iounmap() in between).

  This patch changes all flushes in init_64.c to be __flush_tlb_all()
  and fixes the problem here.

  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
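The distinction at the heart of this fix: a plain CR3 reload does not evict TLB entries created from _PAGE_GLOBAL PTEs, while a full flush does. A minimal illustrative sketch of the two flush flavours, written here from the x86 architecture rules (CR4.PGE is bit 7); the kernel's real __flush_tlb()/__flush_tlb_all() differ in detail:

```c
/* Sketch only: why a non-global flush misses global PTEs. */
static inline void flush_tlb_nonglobal_sketch(void)
{
        unsigned long cr3;

        asm volatile("mov %%cr3, %0" : "=r" (cr3));
        /* reloading CR3 flushes the TLB, but global entries survive */
        asm volatile("mov %0, %%cr3" : : "r" (cr3) : "memory");
}

static inline void flush_tlb_all_sketch(void)
{
        unsigned long cr4;

        asm volatile("mov %%cr4, %0" : "=r" (cr4));
        /* clearing CR4.PGE (bit 7) flushes all entries, global included */
        asm volatile("mov %0, %%cr4" : : "r" (cr4 & ~(1UL << 7)) : "memory");
        asm volatile("mov %0, %%cr4" : : "r" (cr4) : "memory");
}
```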
* x86: remove set_kernel_exec() (Andi Kleen; 2008-01-30; 3 files, -52/+0)

  The SMP trampoline always runs in real mode, so making it executable in
  the page tables doesn't make much sense, because it executes before
  page tables are set up. That was the only user of set_kernel_exec().
  Remove set_kernel_exec().

  Signed-off-by: Andi Kleen <ak@suse.de>
  Acked-by: Jan Beulich <jbeulich@novell.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: introduce canon_pgprot() (Andi Kleen; 2008-01-30; 1 file, -0/+2)

  Introduce canon_pgprot().

  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
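The log entry is terse; for context, the helper as it is later visible in mainline simply masks a pgprot down to the bits the CPU actually supports, so for example NX is dropped when __supported_pte_mask lacks _PAGE_NX. Shown as reference, not as this commit's literal diff:

```c
#define canon_pgprot(p) __pgprot(pgprot_val(p) & __supported_pte_mask)
```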
* x86: c_p_a() make it more robust against use of PAT bits (Andi Kleen; 2008-01-30; 2 files, -4/+4)

  Use the page table level instead of the PSE bit to check if the PTE is
  for a 4K page or not. This makes the code more robust when the PAT bit
  is changed, because the PAT bit on 4K pages is in the same position as
  the PSE bit.

  Signed-off-by: Andi Kleen <ak@suse.de>
  Acked-by: Jan Beulich <jbeulich@novell.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
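The ambiguity being avoided: bit 7 of a paging-structure entry means PSE ("this is a large page") on pmd/pud entries but PAT on 4K ptes, so a bare bit test cannot tell the two apart. A hedged sketch of the safer test; the level constants are illustrative placeholders, not this patch's exact names:

```c
/* Bit 7 is overloaded: PSE on pmd/pud entries, PAT on 4K ptes. */
enum { LVL_PMD = 2, LVL_PTE = 3 };      /* illustrative values only */

static int is_large_page(unsigned long entry, int level)
{
        /* a bare bit-7 test would also fire on a 4K pte using PAT */
        return level == LVL_PMD && (entry & (1UL << 7));
}
```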
* x86: c_p_a() fix: reorder TLB / cache flushes to follow Intel recommendation (Andi Kleen; 2008-01-30; 2 files, -7/+8)

  Intel recommends first flushing the TLBs and then the caches on caching
  attribute changes. c_p_a() previously did it the other way round.
  Reorder that.

  The procedure is still not fully compliant with the Intel
  documentation, because Intel recommends an all-CPU synchronization step
  between the TLB flushes and the cache flushes.

  However, on all new Intel CPUs this is now meaningless anyway, because
  they support Self-Snoop and can skip the cache flush step.

  [ mingo@elte.hu: decoupled from clflush and ported it to x86.git ]

  Signed-off-by: Andi Kleen <ak@suse.de>
  Acked-by: Jan Beulich <jbeulich@novell.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
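A hedged sketch of the resulting order; the helper names (set_pte_atomic(), boot_cpu_has(), wbinvd()) follow kernel conventions of the era, but this is not the patch's literal code:

```c
static void cpa_flush_order_sketch(pte_t *kpte, pte_t new_pte)
{
        set_pte_atomic(kpte, new_pte);            /* make the change visible */
        __flush_tlb_all();                        /* 1) TLBs first, per Intel */
        if (!boot_cpu_has(X86_FEATURE_SELFSNOOP)) /* Self-Snoop CPUs can skip */
                wbinvd();                         /* 2) then the caches */
}
```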
* x86: fix c_p_a() boot crash (Andi Kleen; 2008-01-30; 1 file, -2/+2)

  Fix:

  > hm, i just found a failing 64-bit .config while testing your CPA
  > patchset:
  >
  > [    1.916541] CPA mapping 4k 0 large 2048 gb 0 x 0[0-0] miss 0
  > [    1.919874] Unable to handle kernel paging request at 000000000335aea8 RIP:
  > [    1.919874]  [<ffffffff8021d2d3>] change_page_attr+0x3/0x61
  > [    1.919874] PGD 0
  > [    1.919874] Oops: 0000 [1]
  > [    1.919874] CPU 0

  This handles addresses which don't have a mem_map entry.

  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: don't drop NX bit in pte modifier functions on 32-bit (Andi Kleen; 2008-01-30; 1 file, -6/+6)

  The pte_* modifier functions that cleared bits dropped the NX bit on
  32-bit PAE, because they only worked in int but NX is bit 63. Fix that
  by adding appropriate casts so that the arithmetic happens as long long
  on PAE kernels.

  I decided to just use 64-bit arithmetic instead of open coding like
  pte_modify(), because gcc should generate good enough code for that
  now.

  Signed-off-by: Andi Kleen <ak@suse.de>
  Acked-by: Jan Beulich <jbeulich@novell.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
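A self-contained userspace demonstration of the truncation this fixes; the macro values mirror the PAE bit layout, but this is not kernel code:

```c
#include <stdio.h>

/* On 32-bit PAE a pte is 64 bits and NX is bit 63, so a bit-clearing
 * helper that computes its mask in 32-bit arithmetic drops NX. */
#define _PAGE_RW  (1 << 1)
#define _PAGE_NX  (1ULL << 63)

int main(void)
{
        unsigned long long pte = _PAGE_NX | _PAGE_RW | 0x1000;

        /* Buggy: a 32-bit mask zero-extends, wiping bits 32..63. */
        unsigned int mask32 = ~_PAGE_RW;
        unsigned long long bad = pte & mask32;

        /* Fixed: force 64-bit arithmetic, as the patch's casts do. */
        unsigned long long good = pte & ~(unsigned long long)_PAGE_RW;

        printf("buggy keeps NX? %d\n", !!(bad & _PAGE_NX));   /* 0 */
        printf("fixed keeps NX? %d\n", !!(good & _PAGE_NX));  /* 1 */
        return 0;
}
```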
* x86: add pte_pgprot to 32-bit (Andi Kleen; 2008-01-30; 2 files, -2/+2)

  64-bit already had it. Needed for later patches.

  Signed-off-by: Andi Kleen <ak@suse.de>
  Acked-by: Jan Beulich <jbeulich@novell.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: shrink __PAGE_KERNEL/__PAGE_KERNEL_EXEC on non-PAE kernels (Andi Kleen; 2008-01-30; 2 files, -3/+3)

  No need to make it 64-bit there.

  Signed-off-by: Andi Kleen <ak@suse.de>
  Acked-by: Jan Beulich <jbeulich@novell.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: cpa: remove unnecessary masking of address (Andi Kleen; 2008-01-30; 1 file, -1/+1)

  virt_to_page() does not care about the bits below page granularity, so
  don't mask them.

  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: cpa: use wbinvd() macro instead of inline assembly in 64bit c_p_a() (Andi Kleen; 2008-01-30; 1 file, -1/+1)

  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: early_ioremap_init(), enhance warnings (Ingo Molnar; 2008-01-30; 1 file, -1/+16)

  Enhance the debug warning in early_ioremap_init().

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: fix EISA ioremap (Ingo Molnar; 2008-01-30; 1 file, -2/+2)

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: fix early_ioremap()/btmap (Ingo Molnar; 2008-01-30; 1 file, -2/+10)

  Fix a long-standing weakness of the early-ioremap allocator: it uses a
  single pgd entry for the boot mappings, and was not properly protecting
  itself against crossing a 2MB (4MB) boundary.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
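A sketch of the kind of guard this implies, assuming FIX_BTMAP_BEGIN/FIX_BTMAP_END bound the boot-mapping fixmap slots; the actual check and warning text in the patch may differ:

```c
static void __init early_ioremap_boundary_check_sketch(void)
{
        /* every btmap slot must map through the single reserved pgd/pmd
         * entry, i.e. the range must not straddle a 2MB (4MB) boundary */
        if ((fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT) !=
            (fix_to_virt(FIX_BTMAP_END) >> PMD_SHIFT))
                printk(KERN_WARNING
                       "early_ioremap: btmap slots cross a pmd boundary\n");
}
```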
* x86: add early_ioremap() leak detection (Ingo Molnar; 2008-01-30; 1 file, -0/+16)

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: make early_ioremap_debug early_param (Huang, Ying; 2008-01-30; 1 file, -2/+2)

  This patch makes "early_ioremap_debug" an early parameter, because
  early_ioremap()/early_iounmap() are only used during the early boot
  stage.

  Signed-off-by: Huang Ying <ying.huang@intel.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
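The mechanics of the switch, in the pattern arch/x86 code of this era uses for early parameters; a sketch in which the handler body is assumed, the early_param() registration being the point:

```c
static int early_ioremap_debug __initdata;

static int __init early_ioremap_debug_setup(char *str)
{
        early_ioremap_debug = 1;        /* assumed handler body */
        return 0;
}
early_param("early_ioremap_debug", early_ioremap_debug_setup);
```

Unlike __setup(), early_param() handlers run during early boot parameter parsing, which is what makes the flag usable by early_ioremap().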
* x86: early_ioremap(), debugging (Ingo Molnar; 2008-01-30; 1 file, -0/+29)

  Add early_ioremap() debug printouts via the early_ioremap_debug boot
  option.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: add debug warnings to early_ioremap() (Ingo Molnar; 2008-01-30; 1 file, -6/+13)

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: increase the number of boot-mappings (Ingo Molnar; 2008-01-30; 1 file, -1/+1)

  Increase the max early_ioremap() remapping size from 64K to 256K.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: enhance early_ioremap() (Ingo Molnar; 2008-01-30; 2 files, -14/+31)

  - allow nesting of up to 4 levels

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: early_ioremap_reset fix (Huang, Ying; 2008-01-30; 1 file, -1/+1)

  This patch fixes a bug in early_ioremap_reset().

  Signed-off-by: Huang Ying <ying.huang@intel.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86 32-bit boot: rename bt_ioremap() to early_ioremap() (Huang, Ying; 2008-01-30; 9 files, -61/+50)

  This patch renames bt_ioremap to early_ioremap, the name used on
  x86_64. This makes it easier to merge the i386 and x86_64 usage.

  [ mingo@elte.hu: fix ]

  Signed-off-by: Huang Ying <ying.huang@intel.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: replace boot_ioremap() with enhanced bt_ioremap() - remove boot_ioremap() (Huang, Ying; 2008-01-30; 5 files, -118/+5)

  This patch replaces the boot_ioremap invocation with bt_ioremap and
  removes the boot_ioremap implementation.

  Signed-off-by: Huang Ying <ying.huang@intel.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* i386 boot: replace boot_ioremap with enhanced bt_ioremap - enhance bt_ioremap (Huang, Ying; 2008-01-30; 4 files, -2/+91)

  This patch makes it possible for bt_ioremap() to be used before
  paging_init(), by providing an early implementation of set_fixmap().
  This way boot_ioremap() can be replaced by bt_ioremap().

  Signed-off-by: Huang Ying <ying.huang@intel.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: set strong uncacheable where UC is really desired (Siddha, Suresh B; 2008-01-30; 2 files, -2/+2)

  Also use _PAGE_PWT for all the mappings which need an uncached mapping.
  Instead of the existing PAT2, which is UC- (and can be overridden by
  MTRRs), we now use PAT3, which is strong uncacheable. This makes it
  consistent with pgprot_noncached().

  Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
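For reference, the bits involved and the PAT entries they select under the default PAT programming; bit positions per the x86 paging format, while the macro names for the combined values are illustrative:

```c
#define _PAGE_PWT       (1UL << 3)
#define _PAGE_PCD       (1UL << 4)

/* With the default PAT, the PTE index bits select:
 *   PCD alone   -> PAT entry 2: UC-  (can be overridden by MTRRs)
 *   PCD | PWT   -> PAT entry 3: UC   (strong uncacheable)         */
#define UC_MINUS_BITS   (_PAGE_PCD)               /* old behaviour */
#define UC_BITS         (_PAGE_PCD | _PAGE_PWT)   /* after this patch */
```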
* x86: use __PAGE_KERNEL_EXEC in ioremap_64.c (Joerg Roedel; 2008-01-30; 1 file, -2/+1)

  This patch replaces the manual permission setup for pages in
  ioremap_64.c with the pre-defined __PAGE_KERNEL_EXEC value.

  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: cleanup boot_ioremap_32.c (Thomas Gleixner; 2008-01-30; 1 file, -11/+11)

  Coding style cleanup before modifying the file.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: clean up arch/x86/mm/pageattr-test.c (Ingo Molnar; 2008-01-30; 1 file, -52/+61)

  Fix 15 checkpatch warnings.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: c_p_a(), add simple self test at boot (Andi Kleen; 2008-01-30; 3 files, -0/+235)

  Since change_page_attr() is tricky code, it is good to have some
  regression test code. This patch maps and unmaps some random pages in
  the direct mapping at boot, then dumps the state and does some simple
  sanity checks. Add it behind a CONFIG option.

  Signed-off-by: Andi Kleen <ak@suse.de>
  Acked-by: Jan Beulich <jbeulich@novell.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: return the page table level in lookup_address() (Ingo Molnar; 2008-01-30; 7 files, -11/+22)

  Based on this patch from Andi Kleen:

  | Subject: CPA: Return the page table level in lookup_address()
  | From: Andi Kleen <ak@suse.de>
  |
  | Needed for the next change.
  |
  | And change all the callers.

  and ported it to x86.git.

  Signed-off-by: Andi Kleen <ak@suse.de>
  Acked-by: Jan Beulich <jbeulich@novell.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
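The resulting calling convention, sketched; the numeric meaning of the level is series-internal, so the constant below is a hypothetical placeholder:

```c
/* callers now receive the page-table level of the returned entry */
pte_t *lookup_address(unsigned long address, int *level);

static int mapping_is_4k(unsigned long address)
{
        int level;
        pte_t *kpte = lookup_address(address, &level);

        return kpte && level == 4;      /* hypothetical: 4 == 4K pte level */
}
```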
* x86: add pte accessors for the global bit (Andi Kleen; 2008-01-30; 1 file, -0/+3)

  Needed for some test code.

  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: clean up pte_exec (Andi Kleen; 2008-01-30; 5 files, -18/+3)

  - Rename it to pte_exec() from pte_exec_kernel(). There is nothing
    kernel-specific in there.
  - Move it into the common file, because _PAGE_NX is 0 on !PAE and
    pte_exec() will then always evaluate to true.

  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
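A sketch of why the move to the common file is safe: on !PAE builds _PAGE_NX is defined as 0, so the shared test constant-folds to true there.

```c
static inline int pte_exec(pte_t pte)
{
        /* on PAE, _PAGE_NX is bit 63; on !PAE it is 0 and this is
         * always true, matching the old per-variant behaviour */
        return !(pte_val(pte) & _PAGE_NX);
}
```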
* x86: add dump_pagetable helper to X86_32 (Harvey Harrison; 2008-01-30; 1 file, -33/+39)

  Similar to x86 64-bit.

  Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* c_p_a(): do a simple self test at boot (Andi Kleen; 2008-01-30; 3 files, -0/+41)

  When CONFIG_DEBUG_RODATA is enabled, undo the RO mapping and then redo
  it. This gives some simple testing for change_page_attr().

  Signed-off-by: Andi Kleen <ak@suse.de>
  Acked-by: Jan Beulich <jbeulich@novell.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: clean up arch/x86/mm/pageattr_64.c (Ingo Molnar; 2008-01-30; 1 file, -62/+81)

  Clean up arch/x86/mm/pageattr_64.c. No code changed:

     text    data     bss     dec     hex filename
     1751      16       0    1767     6e7 pageattr_64.o.before
     1751      16       0    1767     6e7 pageattr_64.o.after

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: clean up arch/x86/mm/pageattr_32.c (Ingo Molnar; 2008-01-30; 1 file, -68/+83)

  Clean up arch/x86/mm/pageattr_32.c. No code changed:

     text    data     bss     dec     hex filename
     1255      40       0    1295     50f pageattr_32.o.before
     1255      40       0    1295     50f pageattr_32.o.after

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: change ioremap() to default to uncached (Ingo Molnar; 2008-01-30; 2 files, -9/+29)

  Prepare ioremap() to default to uncached. This will be the safest
  default, but first we have to fix CPA.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
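The direction described amounts to something like the following; a sketch of where this is headed, not this commit's literal diff:

```c
/* make plain ioremap() the uncached variant, so callers that want a
 * cached mapping must ask for it explicitly */
static inline void __iomem *ioremap(unsigned long offset, unsigned long size)
{
        return ioremap_nocache(offset, size);
}
```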
* x86: fix recursion in arch/x86/kernel/cpu/mcheck/mce_amd_64.c (Yinghai Lu; 2008-01-30; 1 file, -2/+2)

  Remove the recursion from this function.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: allocate and initialize unshared pmds (Jeremy Fitzhardinge; 2008-01-30; 1 file, -3/+11)

  If SHARED_KERNEL_PMD is false, then we need to allocate and initialize
  the kernel pmd. We can easily piggy-back this onto the existing pmd
  prepopulation code.

  Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: preallocate pmds at pgd creation time (Jeremy Fitzhardinge; 2008-01-30; 1 file, -0/+70)

  In PAE mode, an update to the pgd requires a cr3 reload to make sure
  the processor notices the change. Since this also has the side effect
  of flushing the TLB, it's an expensive operation which we want to avoid
  where possible.

  This patch mitigates the cost of installing the initial set of pmds on
  process creation by preallocating them when the pgd is allocated. This
  avoids up to three TLB flushes during exec, as it creates the new
  process address space while the pagetable is in active use.

  The pmds will be freed as part of the normal pagetable teardown in
  free_pgtables(), which is called in munmap and process exit. However,
  free_pgtables() will only free parts of the pagetable which actually
  contain mappings, so stray pmds may still be attached to the pgd at
  pgd_free time. We must mop them up to prevent a memory leak.

  Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
  Cc: Andi Kleen <ak@suse.de>
  Cc: Linus Torvalds <torvalds@linux-foundation.org>
  Cc: H. Peter Anvin <hpa@zytor.com>
  Cc: William Irwin <wli@holomorphy.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
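A heavily hedged sketch of the preallocation idea; helper names follow the generic pagetable API (the pud level is folded on 32-bit PAE), and the real code of this patch differs in detail, for example in respecting SHARED_KERNEL_PMD and unwinding on failure:

```c
static int pgd_prepopulate_pmd_sketch(struct mm_struct *mm, pgd_t *pgd)
{
        int i;

        for (i = 0; i < PTRS_PER_PGD; i++) {
                pmd_t *pmd = pmd_alloc_one(mm, (unsigned long)i << PGDIR_SHIFT);
                pud_t *pud = pud_offset(pgd + i, 0);    /* pud folded on PAE */

                if (!pmd)
                        return 0;       /* caller frees what was allocated */
                /* installing the pmd now avoids a fault-time pgd update,
                 * and thus the cr3 reload such an update would require */
                pud_populate(mm, pud, pmd);
        }
        return 1;
}
```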
* xen: deal with pmd being allocated/freed (Jeremy Fitzhardinge; 2008-01-30; 1 file, -5/+25)

  Deal properly with pmd-level pages being allocated and freed
  dynamically. We can handle them more or less the same as pte pages.

  Also, deal with early_ioremap pagetable manipulations.

  Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>