path: root/mm/mlock.c
Commit message | Author | Age | Files | Lines
* x86, bts, mm: clean up buffer allocation (Markus Metzger, 2009-04-24; 1 file, -19/+17)

  The current mm interface is asymmetric. One function allocates a locked
  buffer, another function only refunds the memory. Change this to have two
  functions for accounting and refunding locked memory, respectively, and
  do the actual buffer allocation in ptrace.

  [ Impact: refactor BTS buffer allocation code ]

  Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>
  Acked-by: Andrew Morton <akpm@linux-foundation.org>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  LKML-Reference: <20090424095143.A30265@sedona.ch.intel.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
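  A minimal sketch of what such an accounting/refunding pair could look
  like (the function names follow the commit's description; the bodies are
  a simplified assumption, not the verbatim patch):

      /* charge 'size' bytes of kernel buffer against mm's locked memory */
      int account_locked_memory(struct mm_struct *mm, struct rlimit *rlim,
                                size_t size)
      {
              unsigned long pgsz = PAGE_ALIGN(size) >> PAGE_SHIFT;
              int error = -ENOMEM;

              down_write(&mm->mmap_sem);
              if (mm->locked_vm + pgsz <=
                  (rlim[RLIMIT_MEMLOCK].rlim_cur >> PAGE_SHIFT)) {
                      mm->total_vm  += pgsz;
                      mm->locked_vm += pgsz;
                      error = 0;
              }
              up_write(&mm->mmap_sem);
              return error;
      }

      /* undo the charge only; freeing the buffer is now ptrace's job */
      void refund_locked_memory(struct mm_struct *mm, size_t size)
      {
              unsigned long pgsz = PAGE_ALIGN(size) >> PAGE_SHIFT;

              down_write(&mm->mmap_sem);
              mm->total_vm  -= pgsz;
              mm->locked_vm -= pgsz;
              up_write(&mm->mmap_sem);
      }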
* mm, x86, ptrace, bts: defer branch trace stopping, remove dead code (Ingo Molnar, 2009-04-08; 1 file, -6/+0)

  Remove the unused free_locked_buffer() API.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* mm, x86, ptrace, bts: defer branch trace stopping (Markus Metzger, 2009-04-07; 1 file, -7/+6)

  When a ptraced task is unlinked, we need to stop branch tracing for that
  task. Since the unlink is called with interrupts disabled, and we need
  interrupts enabled to stop branch tracing, we defer the work.

  Collect all branch tracing related stuff in a branch tracing context.

  Reviewed-by: Oleg Nesterov <oleg@redhat.com>
  Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>
  Cc: Andrew Morton <akpm@linux-foundation.org>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Cc: roland@redhat.com
  Cc: eranian@googlemail.com
  Cc: juan.villacis@intel.com
  Cc: ak@linux.jf.intel.com
  LKML-Reference: <20090403144550.712401000@intel.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
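  A sketch of the deferral pattern the message describes (the names here
  are illustrative assumptions, not the actual arch/x86 code; the point is
  that the work item runs later in process context, with interrupts
  enabled):

      struct bts_context {
              struct work_struct work;        /* deferred "stop tracing" */
              /* ... tracer state ... */
      };

      static void bts_stop_work_fn(struct work_struct *w)
      {
              struct bts_context *ctx =
                      container_of(w, struct bts_context, work);
              /* process context, interrupts enabled: safe to stop BTS */
      }

      /* called from the unlink path, which runs with interrupts disabled */
      static void defer_bts_stop(struct bts_context *ctx)
      {
              INIT_WORK(&ctx->work, bts_stop_work_fn);
              schedule_work(&ctx->work);
      }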
* Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip (Linus Torvalds, 2009-02-17; 1 file, -1/+6)

  * 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
      x86, vm86: fix preemption bug
      x86, olpc: fix model detection without OFW
      x86, hpet: fix for LS21 + HPET = boot hang
      x86: CPA avoid repeated lazy mmu flush
      x86: warn if arch_flush_lazy_mmu_cpu is called in preemptible context
      x86/paravirt: make arch_flush_lazy_mmu/cpu disable preemption
      x86, pat: fix warn_on_once() while mapping 0-1MB range with /dev/mem
      x86/cpa: make sure cpa is safe to call in lazy mmu mode
      x86, ptrace, mm: fix double-free on race
* x86, ptrace, mm: fix double-free on race (Markus Metzger, 2009-02-11; 1 file, -1/+6)

  ptrace_detach() races with __ptrace_unlink() if the traced task is
  reaped while detaching. This might cause a double-free of the BTS
  buffer.

  Change the ptrace_detach() path to only do the memory accounting in
  ptrace_bts_detach() and leave freeing the buffer to
  ptrace_bts_untrace(), which will be called from __ptrace_unlink().

  The fix follows a proposal from Oleg Nesterov.

  Reported-by: Oleg Nesterov <oleg@redhat.com>
  Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* mm: fix error case in mlock downgrade reversion (Hugh Dickins, 2009-02-08; 1 file, -1/+4)

  Commit 27421e211a39784694b597dbf35848b88363c248, Manually revert "mlock:
  downgrade mmap sem while populating mlocked regions", has introduced its
  own regression: __mlock_vma_pages_range() may report an error (for
  example, -EFAULT from trying to lock down pages from beyond EOF), but
  mlock_vma_pages_range() must hide that from its callers as before.

  Reported-by: Sami Farin <safari-kernel@safari.iki.fi>
  Signed-off-by: Hugh Dickins <hugh@veritas.com>
  Cc: stable@kernel.org
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
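  The shape of the fix, per the message (a simplified sketch, not the
  verbatim patch):

      long mlock_vma_pages_range(struct vm_area_struct *vma,
                                 unsigned long start, unsigned long end)
      {
              long error = __mlock_vma_pages_range(vma, start, end);

              if (error < 0)          /* e.g. -EFAULT beyond EOF */
                      error = 0;      /* hide it from callers, as before */
              return error;
      }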
* Manually revert "mlock: downgrade mmap sem while populating mlocked regions" (Linus Torvalds, 2009-02-01; 1 file, -45/+2)

  This essentially reverts commit 8edb08caf68184fb170f4f69c7445929e199eaea.

  It downgraded our mmap semaphore to a read-lock while mlocking pages, in
  order to allow other threads (and external accesses like "ps" et al) to
  walk the vma lists and take page faults etc.

  Which is a nice idea, but the implementation does not work.

  Because we cannot upgrade the lock back to a write lock without releasing
  the mmap semaphore, the code had to release the lock entirely and then
  re-take it as a writelock. However, that meant that the caller possibly
  lost the vma chain that it was following, since now another thread could
  come in and mmap/munmap the range.

  The code tried to work around that by just looking up the vma again and
  erroring out if that happened, but quite frankly, that was just a buggy
  hack that doesn't actually protect against anything (the other thread
  could just have replaced the vma with another one instead of totally
  unmapping it).

  The only way to downgrade to a read map _reliably_ is to do it at the
  end, which is likely the right thing to do: do all the 'vma' operations
  with the write-lock held, then downgrade to a read after completing them
  all, and then do the "populate the newly mlocked regions" while holding
  just the read lock. And then just drop the read-lock and return to user
  space.

  The (perhaps somewhat simpler) alternative is to just make all the
  callers of mlock_vma_pages_range() know that the mmap lock got dropped,
  and just re-grab the mmap semaphore if it needs to mlock more than one
  vma region.

  So we can do this "downgrade mmap sem while populating mlocked regions"
  thing right, but the way it was done here was absolutely not correct.
  Thus the revert, in the expectation that we will do it all correctly
  some day.

  Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
  Cc: Rik van Riel <riel@redhat.com>
  Cc: Andrew Morton <akpm@linux-foundation.org>
  Cc: stable@kernel.org
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
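  A sketch of the ordering the message calls the right way to do it
  (illustrative, not kernel code; downgrade_write() is the real rwsem
  primitive, and note there is no upgrade in the other direction):

      static void mlock_range_safely(struct mm_struct *mm)
      {
              down_write(&mm->mmap_sem);
              /* all vma merge/split operations under the write lock */
              downgrade_write(&mm->mmap_sem); /* write -> read, atomic */
              /* populate the newly mlocked regions under the read lock */
              up_read(&mm->mmap_sem);
              /* regaining write mode would mean dropping and re-taking
               * mmap_sem, which is exactly the race the reverted code
               * ran into */
      }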
* [CVE-2009-0029] System call wrappers part 14 (Heiko Carstens, 2009-01-14; 1 file, -2/+2)

  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
* [CVE-2009-0029] System call wrappers part 13 (Heiko Carstens, 2009-01-14; 1 file, -2/+2)

  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
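  Both commits convert syscall definitions in this file to the
  SYSCALL_DEFINEx() wrapper macros introduced for CVE-2009-0029, which let
  64-bit architectures sign-extend 32-bit syscall arguments correctly. The
  change has this shape (an illustrative before/after for one of the mlock
  family syscalls, not the exact hunks):

      /* before */
      asmlinkage long sys_mlock(unsigned long start, size_t len)

      /* after: the macro expands to the same function plus the per-arch
       * wrapper stubs that fix up argument extension */
      SYSCALL_DEFINE2(mlock, unsigned long, start, size_t, len)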
* mm: make get_user_pages() interruptible (Ying Han, 2009-01-06; 1 file, -4/+5)

  The initial implementation of checking TIF_MEMDIE covers the cases of
  OOM killing. If the process has been OOM killed, TIF_MEMDIE is set and
  it returns immediately. This patch includes:

  1. Add the case where the SIGKILL is sent by a user process. The process
     can try to get_user_pages() unlimited memory even if a user process
     has sent it a SIGKILL (maybe a monitor found the process exceeding
     its memory limit and tried to kill it). In the old implementation,
     the SIGKILL wasn't handled until get_user_pages() returned.

  2. Change the return value to ERESTARTSYS. It makes no sense to return
     ENOMEM if get_user_pages() returned because of a SIGKILL. Considering
     that the general convention for a system call interrupted by a signal
     is ERESTARTSYS, the new return value is consistent with that.

  Lee:

  An unfortunate side effect of "make-get_user_pages-interruptible" is
  that it prevents a SIGKILL'd task from munlock-ing pages that it had
  mlocked, resulting in freeing of mlocked pages. Freeing of mlocked
  pages, in itself, is not so bad. We just count them now, although I had
  hoped to remove this stat and add PG_MLOCKED to the free pages flags
  check.

  However, consider pages in shared libraries mapped by more than one task
  that a task mlocked, e.g., via mlockall(). If the task that mlocked the
  pages exits via SIGKILL, these pages would be left mlocked and
  unevictable.

  Proposed fix: add another GUP flag to ignore SIGKILL when calling
  get_user_pages() from munlock(), similar to KOSAKI Motohiro's
  IGNORE_VMA_PERMISSIONS flag for the same purpose. We are not actually
  allocating memory in this case, which
  "make-get_user_pages-interruptible" intends to avoid. We're just
  munlocking pages that are already resident and mapped, and we're reusing
  get_user_pages() to access those pages.

  ?? Maybe we should combine IGNORE_VMA_PERMISSIONS and _IGNORE_SIGKILL
  into a single flag: GUP_FLAGS_MUNLOCK ???

  [Lee.Schermerhorn@hp.com: ignore sigkill in get_user_pages during munlock]
  Signed-off-by: Paul Menage <menage@google.com>
  Signed-off-by: Ying Han <yinghan@google.com>
  Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
  Cc: Nick Piggin <nickpiggin@yahoo.com.au>
  Cc: Hugh Dickins <hugh@veritas.com>
  Cc: Oleg Nesterov <oleg@tv-sign.ru>
  Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
  Cc: Rohit Seth <rohitseth@google.com>
  Cc: David Rientjes <rientjes@google.com>
  Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
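  A sketch of the check this adds inside the get_user_pages() fault loop
  (simplified; the ignore flag corresponds to Lee's follow-up noted
  above, and the exact variable names are assumptions):

      /* in the per-page loop of __get_user_pages(): */
      if (!ignore_sigkill && fatal_signal_pending(current))
              return i ? i : -ERESTARTSYS;  /* partial count, or restart */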
* x86, bts: memory accounting (Markus Metzger, 2008-12-20; 1 file, -0/+45)

  Impact: move the BTS buffer accounting to the mlock bucket

  Add alloc_locked_buffer() and free_locked_buffer() functions to
  mm/mlock.c to allocate a buffer and account the locked memory to
  current.

  Account the memory for the BTS buffer to the tracer.

  Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
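  A sketch of the pair being added, following the message (the bodies are
  a simplified assumption, not the verbatim patch):

      void *alloc_locked_buffer(size_t size)
      {
              unsigned long rlim, vm, pgsz;
              void *buffer = NULL;

              pgsz = PAGE_ALIGN(size) >> PAGE_SHIFT;

              down_write(&current->mm->mmap_sem);

              rlim = current->signal->rlim[RLIMIT_MEMLOCK].rlim_cur
                              >> PAGE_SHIFT;
              vm = current->mm->locked_vm + pgsz;
              if (rlim < vm)
                      goto out;

              buffer = kzalloc(size, GFP_KERNEL);
              if (!buffer)
                      goto out;

              current->mm->locked_vm = vm;    /* account to current */
      out:
              up_write(&current->mm->mmap_sem);
              return buffer;
      }

      void free_locked_buffer(void *buffer, size_t size)
      {
              unsigned long pgsz = PAGE_ALIGN(size) >> PAGE_SHIFT;

              down_write(&current->mm->mmap_sem);
              current->mm->locked_vm -= pgsz;
              up_write(&current->mm->mmap_sem);

              kfree(buffer);
      }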
* uninitialized return value in mm/mlock.c: __mlock_vma_pages_range() (Helge Deller, 2008-11-16; 1 file, -1/+1)

  Fix an uninitialized return value when compiling on parisc (with
  CONFIG_UNEVICTABLE_LRU=y):

      mm/mlock.c: In function `__mlock_vma_pages_range':
      mm/mlock.c:165: warning: `ret' might be used uninitialized in this function

  Signed-off-by: Helge Deller <deller@gmx.de>
  [ It isn't ever really used uninitialized, since no caller should ever
    call this function with an empty range. But the compiler is correct
    that from a local analysis standpoint that is impossible to see, and
    fixing the warning is appropriate. ]
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
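  Per the one-line diffstat, the fix amounts to initializing the local
  (an illustrative hunk, not the verbatim patch):

      -       int ret;
      +       int ret = 0;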
* mm: remove lru_add_drain_all() from the munlock path (KOSAKI Motohiro, 2008-11-12; 1 file, -10/+6)
  lockdep warns with the following message at boot time on one of my test
  machines: schedule_on_each_cpu() shouldn't be called while the task
  holds mmap_sem.

  Actually, lru_add_drain_all() exists to prevent unevictable pages from
  staying on a reclaimable LRU list. But the current unevictable code can
  rescue unevictable pages even though they stay on a reclaimable list, so
  removing the call is better.

  In addition, this patch adds lru_add_drain_all() to sys_mlock() and
  sys_mlockall(). It isn't a must, but it reduces the chance of a failed
  move to the unevictable list. Such a failure can be rescued in vmscan
  later, but reducing failures is better.

  Note: if the above rescuing happens, the Mlocked and Unevictable fields
  in /proc/meminfo can mismatch, but this doesn't cause any real trouble.

      =======================================================
      [ INFO: possible circular locking dependency detected ]
      2.6.28-rc2-mm1 #2
      -------------------------------------------------------
      lvm/1103 is trying to acquire lock:
       (&cpu_hotplug.lock){--..}, at: get_online_cpus+0x29/0x50
      but task is already holding lock:
       (&mm->mmap_sem){----}, at: sys_mlockall+0x4e/0xb0

      which lock already depends on the new lock.

  The dependency chain, in reverse order (backtraces condensed; "(grab
  <lock>)" marks where each lock is taken while the previous ones are
  held):

      -> #3 (&mm->mmap_sem): sys_getdents -> vfs_readdir (grab i_mutex)
         -> sysfs_readdir (grab sysfs_mutex) -> filldir -> copy_to_user
         -> might_fault (grab mmap_sem)
      -> #2 (sysfs_mutex): worker_thread -> sysfs_add_func (grab slub_lock)
         -> sysfs_slab_add -> sysfs_create_dir -> create_dir
         -> sysfs_addrm_start (grab sysfs_mutex)
      -> #1 (slub_lock): kernel_init -> cpu_up (grab cpu_hotplug.lock)
         -> notifier_call_chain -> slab_cpuup_callback (grab slub_lock)
      -> #0 (&cpu_hotplug.lock): sys_mlockall -> do_mlockall (grab mmap_sem)
         -> mlock_fixup -> __mlock_vma_pages_range -> schedule_on_each_cpu
         -> get_online_cpus (grab cpu_hotplug.lock), closing the cycle

  Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Tested-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
  Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  Cc: Christoph Lameter <cl@linux-foundation.org>
  Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
  Cc: Nick Piggin <nickpiggin@yahoo.com.au>
  Cc: Hugh Dickins <hugh@veritas.com>
  Cc: Rik van Riel <riel@redhat.com>
  Cc: Peter Zijlstra <peterz@infradead.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* mlock: make mlock error return Posixly Correct (Lee Schermerhorn, 2008-10-20; 1 file, -6/+27)

  Rework the POSIX error return for mlock(). POSIX requires an error code
  for the mlock*() system calls under some conditions that differs from
  what kernel low-level functions, such as get_user_pages(), return for
  those conditions. For more info, see:

      http://marc.info/?l=linux-kernel&m=121750892930775&w=2

  This patch provides the same translation of get_user_pages() error codes
  to POSIX-specified error codes in the context of the mlock rework for
  the unevictable lru.

  [akpm@linux-foundation.org: fix build]
  Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
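  The translation helper this introduces looks essentially like this (a
  close sketch based on the description: unmapped addresses become ENOMEM
  and allocation failures become EAGAIN, as SUSv3 wants):

      /* convert get_user_pages() return value to posix mlock() error */
      static int __mlock_posix_error_return(long retval)
      {
              if (retval == -EFAULT)
                      retval = -ENOMEM;
              else if (retval == -ENOMEM)
                      retval = -EAGAIN;
              return retval;
      }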
* vmstat: mlocked pages statistics (Nick Piggin, 2008-10-20; 1 file, -5/+36)

  Add the NR_MLOCK zone page state, which provides a (conservative) count
  of mlocked pages (actually, the number of mlocked pages moved off the
  LRU).

  Reworked by lts to fit in with the modified mlock page support in the
  Reclaim Scalability series.

  [kosaki.motohiro@jp.fujitsu.com: fix incorrect Mlocked field of /proc/meminfo]
  [lee.schermerhorn@hp.com: mlocked-pages: add event counting with statistics]
  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
  Signed-off-by: Rik van Riel <riel@redhat.com>
  Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
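  A sketch of how such a counter is maintained when a page is culled off
  the LRU (the helper name is a made-up illustration; the counter and
  event names are the ones this series introduces):

      static void mlock_count_page(struct page *page)
      {
              if (!TestSetPageMlocked(page)) {   /* count each page once */
                      inc_zone_page_state(page, NR_MLOCK);
                      count_vm_event(UNEVICTABLE_PGMLOCKED);
              }
      }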
* mmap: handle mlocked pages during map, remap, unmap (Rik van Riel, 2008-10-20; 1 file, -131/+90)

  Originally by Nick Piggin <npiggin@suse.de>

  Remove mlocked pages from the LRU using the "unevictable infrastructure"
  during mmap(), munmap(), mremap() and truncate(). Try to move back to
  the normal LRU lists on munmap() when the last mlocked mapping is
  removed. Remove PageMlocked() status when a page is truncated from a
  file.

  [akpm@linux-foundation.org: cleanup]
  [kamezawa.hiroyu@jp.fujitsu.com: fix double unlock_page()]
  [kosaki.motohiro@jp.fujitsu.com: split LRU: munlock rework]
  [lee.schermerhorn@hp.com: mlock: fix __mlock_vma_pages_range comment block]
  [akpm@linux-foundation.org: remove bogus kerneldoc token]
  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
  Signed-off-by: Rik van Riel <riel@redhat.com>
  Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* mlock: downgrade mmap sem while populating mlocked regions (Lee Schermerhorn, 2008-10-20; 1 file, -3/+43)

  We need to hold the mmap_sem for write to initiate mlock()/munlock()
  because we may need to merge/split vmas. However, this can lead to very
  long lock hold times attempting to fault in a large memory region to
  mlock it into memory. This can hold off other faults against the mm
  [multithreaded tasks] and other scans of the mm, such as via /proc.

  To alleviate this, downgrade the mmap_sem to read mode during the
  population of the region for locking. This is especially the case if we
  need to reclaim memory to lock down the region. We [probably?] don't
  need to do this for unlocking, as all of the pages should be resident;
  they're already mlocked.

  Now, the callers of the mlock functions [mlock_fixup() and
  mlock_vma_pages_range()] expect the mmap_sem to be returned in write
  mode. Changing all callers appears to be way too much effort at this
  point. So, restore write mode before returning.

  Note that this opens a window where the mmap list could change in a
  multithreaded process. So, at least for mlock_fixup(), where we could be
  called in a loop over multiple vmas, we check that a vma still exists at
  the start address and that the vma still covers the page range
  [start,end). If not, we return an error, -EAGAIN, and let the caller
  deal with it.

  Return -EAGAIN from mlock_vma_pages_range() and mlock_fixup() if the vma
  at 'start' disappears or changes so that the page range [start,end) is
  no longer contained in the vma. Again, let the caller deal with it.
  Looks like only sys_remap_file_pages() [via mmap_region()] should
  actually care.

  With this patch, I no longer see processes like ps(1) blocked for
  seconds or minutes at a time waiting for a large [multiple gigabyte]
  region to be locked down. However, I occasionally see delays while
  unlocking or unmapping a large mlocked region. Should we also downgrade
  the mmap_sem for the unlock path?

  Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
  Signed-off-by: Rik van Riel <riel@redhat.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* mlock: mlocked pages are unevictable (Nick Piggin, 2008-10-20; 1 file, -19/+375)

  Make sure that mlocked pages also live on the unevictable LRU, so kswapd
  will not scan them over and over again. This is achieved through various
  strategies:

  1) Add yet another page flag, PG_mlocked, to indicate that the page is
     locked, for efficient testing in vmscan and, optionally, the fault
     path. This allows early culling of unevictable pages, preventing them
     from getting to page_referenced()/try_to_unmap(). It also allows
     separate accounting of mlock'd pages, as Nick's original patch did.

     Note: Nick's original mlock patch used a PG_mlocked flag. I had
     removed this in favor of the PG_unevictable flag + an mlock_count
     [new page struct member]. I restored the PG_mlocked flag to eliminate
     the new count field.

  2) Add the mlock/unevictable infrastructure to mm/mlock.c, with internal
     APIs in mm/internal.h. This is a rework of Nick's original patch to
     these files, taking into account that mlocked pages are now kept on
     the unevictable LRU list.

  3) Update vmscan.c:page_evictable() to check PageMlocked() and, if a vma
     is passed in, the vm_flags. Note that the vma will only be passed in
     for new pages in the fault path, and then only if the "cull
     unevictable pages in fault path" patch is included. (See the sketch
     after this entry.)

  4) Add try_to_unlock() to rmap.c to walk a page's rmap and
     ClearPageMlocked() if no other vmas have it mlocked. This reuses as
     much of try_to_unmap() as possible and effectively replaces the use
     of one of the lru list links as an mlock count. If this mechanism
     lets pages in mlocked vmas leak through w/o PG_mlocked set [I don't
     know that it does], we should catch them later in try_to_unmap(). One
     hopes this will be rare, as it will be relatively expensive.

  Original mm/internal.h, mm/rmap.c and mm/mlock.c changes:
  Signed-off-by: Nick Piggin <npiggin@suse.de>

  splitlru: introduce __get_user_pages():

  The new munlock processing needs GUP_FLAGS_IGNORE_VMA_PERMISSIONS,
  because the current get_user_pages() can't grab PROT_NONE pages, and
  therefore PROT_NONE pages can't be munlocked.

  [akpm@linux-foundation.org: fix this for pagemap-pass-mm-into-pagewalkers.patch]
  [akpm@linux-foundation.org: untangle patch interdependencies]
  [akpm@linux-foundation.org: fix things after out-of-order merging]
  [hugh@veritas.com: fix page-flags mess]
  [lee.schermerhorn@hp.com: fix munlock page table walk - now requires 'mm']
  [kosaki.motohiro@jp.fujitsu.com: build fix]
  [kosaki.motohiro@jp.fujitsu.com: fix truncate race and several comments]
  [kosaki.motohiro@jp.fujitsu.com: splitlru: introduce __get_user_pages()]
  Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Signed-off-by: Rik van Riel <riel@redhat.com>
  Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
  Cc: Nick Piggin <npiggin@suse.de>
  Cc: Dave Hansen <dave@linux.vnet.ibm.com>
  Cc: Matt Mackall <mpm@selenic.com>
  Signed-off-by: Hugh Dickins <hugh@veritas.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
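  A sketch of strategy 3) above (simplified; close in shape to the
  vmscan.c function this series reworks, but not guaranteed verbatim):

      int page_evictable(struct page *page, struct vm_area_struct *vma)
      {
              if (mapping_unevictable(page_mapping(page)))
                      return 0;
              if (PageMlocked(page) || (vma && is_mlocked_vma(vma, page)))
                      return 0;       /* cull before page_referenced() */
              return 1;
      }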
* mlock() fix return values (KOSAKI Motohiro, 2008-08-04; 1 file, -2/+0)

  Halesh says:

  Please find below a testcase provided to test mlock.

  Test Case:
  ===========================

      #include <sys/resource.h>
      #include <stdio.h>
      #include <sys/stat.h>
      #include <sys/types.h>
      #include <unistd.h>
      #include <sys/mman.h>
      #include <fcntl.h>
      #include <errno.h>
      #include <stdlib.h>

      int main(void)
      {
              int fd, ret, i = 0;
              char *addr, *addr1 = NULL;
              unsigned int page_size;
              struct rlimit rlim;

              if (0 != geteuid()) {
                      printf("Execute this pgm as root\n");
                      exit(1);
              }

              /* create a file */
              if ((fd = open("mmap_test.c", O_RDWR|O_CREAT, 0755)) == -1) {
                      printf("cant create test file\n");
                      exit(1);
              }

              page_size = sysconf(_SC_PAGE_SIZE);

              /* set the MEMLOCK limit */
              rlim.rlim_cur = 2000;
              rlim.rlim_max = 2000;
              if ((ret = setrlimit(RLIMIT_MEMLOCK, &rlim)) != 0) {
                      printf("Cant change limit values\n");
                      exit(1);
              }

              addr = 0;
              while (1) {
                      /* map a page into memory each time */
                      if ((addr = (char *) mmap(addr, page_size,
                                      PROT_READ | PROT_WRITE, MAP_SHARED,
                                      fd, 0)) == MAP_FAILED) {
                              printf("cant do mmap on file\n");
                              exit(1);
                      }

                      if (0 == i)
                              addr1 = addr;
                      i++;
                      errno = 0;
                      /* lock the mapped memory pagewise */
                      if ((ret = mlock((char *)addr, 1500)) == -1) {
                              printf("errno value is %d\n", errno);
                              printf("cant lock maped region\n");
                              exit(1);
                      }
                      addr = addr + page_size;
              }
      }

  ======================================================

  This testcase results in an mlock() failure with errno 14, that is
  EFAULT, but it has nowhere been specified that mlock() will return
  EFAULT. When I tested the same on older kernels like 2.6.18, I got the
  correct result, i.e. errno 12 (ENOMEM).

  I think in the mlock(2) source code, setting errno to ENOMEM has been
  missed in do_mlock() on mlock_fixup() failure.

  SUSv3 requires the following behavior from mlock(2):

      [ENOMEM] Some or all of the address range specified by the addr and
               len arguments does not correspond to valid mapped pages in
               the address space of the process.

      [EAGAIN] Some or all of the memory identified by the operation could
               not be locked when the call was made.

  This rule isn't so nice and is slightly strange, but many people think
  POSIX/SUS compliance is important.

  Reported-by: Halesh Sadashiv <halesh.sadashiv@ap.sony.com>
  Tested-by: Halesh Sadashiv <halesh.sadashiv@ap.sony.com>
  Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Cc: <stable@kernel.org> [2.6.25.x, 2.6.26.x]
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* do not limit locked memory when RLIMIT_MEMLOCK is RLIM_INFINITY (Herbert van den Bergh, 2007-07-16; 1 file, -1/+4)

  Fix a bug in mm/mlock.c on 32-bit architectures that prevents a user
  from locking more than 4GB of shared memory, or allocating more than 4GB
  of shared memory in hugepages, when rlim[RLIMIT_MEMLOCK] is set to
  RLIM_INFINITY.

  Signed-off-by: Herbert van den Bergh <herbert.van.den.bergh@oracle.com>
  Acked-by: Chris Mason <chris.mason@oracle.com>
  Cc: <stable@kernel.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
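  The underlying bug: on 32-bit, RLIM_INFINITY is ~0UL, so shifting it
  right by PAGE_SHIFT yields a finite page count (about 4GB worth with 4K
  pages) instead of "no limit". A sketch of the check after the fix
  (simplified user_shm_lock()-style logic; treat the exact body as an
  assumption):

      int user_shm_lock(size_t size, struct user_struct *user)
      {
              unsigned long lock_limit, locked;
              int allowed = 0;

              locked = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
              lock_limit = current->signal->rlim[RLIMIT_MEMLOCK].rlim_cur;
              if (lock_limit == RLIM_INFINITY)
                      allowed = 1;            /* test before shifting */
              lock_limit >>= PAGE_SHIFT;

              spin_lock(&shmlock_user_lock);
              if (!allowed &&
                  locked + user->locked_shm > lock_limit &&
                  !capable(CAP_IPC_LOCK))
                      goto out;
              get_uid(user);
              user->locked_shm += locked;
              allowed = 1;
      out:
              spin_unlock(&shmlock_user_lock);
              return allowed;
      }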
* Detach sched.h from mm.h (Alexey Dobriyan, 2007-05-21; 1 file, -0/+11)

  The first thing mm.h does is include sched.h, solely for the
  can_do_mlock() inline function, which has a "current" dereference
  inside. By dealing with can_do_mlock(), mm.h can be detached from
  sched.h, which is good. See below why. This patch:

  a) removes the unconditional inclusion of sched.h from mm.h
  b) makes can_do_mlock() a normal function in mm/mlock.c
  c) exports can_do_mlock() to not break compilation
  d) adds sched.h inclusions back to files that were getting it indirectly
  e) adds less bloated headers to some files (asm/signal.h, jiffies.h)
     that were getting them indirectly

  Net result is:

  a) mm.h users get less code to open, read, preprocess, parse, ... if
     they don't need sched.h
  b) sched.h stops being a dependency for a significant number of files:
     on x86_64 allmodconfig, touching sched.h results in a recompile of
     4083 files; after the patch it's only 3744 (-8.3%).

  Cross-compile tested on all arm defconfigs, all mips defconfigs, all
  powerpc defconfigs, alpha alpha-up arm i386 i386-up i386-defconfig
  i386-allnoconfig ia64 ia64-up m68k mips parisc parisc-up powerpc
  powerpc-up s390 s390-up sparc sparc-up sparc64 sparc64-up um-x86_64
  x86_64 x86_64-up x86_64-defconfig x86_64-allnoconfig, as well as my two
  usual configs.

  Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
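  The resulting function in mm/mlock.c, per points b) and c) above (a
  reference sketch that follows the commit's description):

      int can_do_mlock(void)
      {
              if (capable(CAP_IPC_LOCK))
                      return 1;
              if (current->signal->rlim[RLIMIT_MEMLOCK].rlim_cur != 0)
                      return 1;
              return 0;
      }
      EXPORT_SYMBOL(can_do_mlock);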
* [PATCH] mlock cleanup (Rik Bobbaers, 2006-12-07; 1 file, -1/+1)

  mm is defined as vma->vm_mm, so use that.

  Acked-by: Hugh Dickins <hugh@veritas.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] move capable() to capability.h (Randy.Dunlap, 2006-01-11; 1 file, -0/+1)

  - Move capable() from sched.h to capability.h;
  - Use <linux/capability.h> where capable() is used (in include/, block/,
    ipc/, kernel/, a few drivers/, mm/, security/, & sound/; many more
    drivers/ to go)

  Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* Linux-2.6.12-rc2 [tag: v2.6.12-rc2] (Linus Torvalds, 2005-04-16; 1 file, -0/+253)

  Initial git repository build. I'm not bothering with the full history,
  even though we have it. We can create a separate "historical" git
  archive of that later if we want to, and in the meantime it's about
  3.2GB when imported into git - space that would just make the early git
  days unnecessarily complicated, when we don't have a lot of good
  infrastructure for it.

  Let it rip!