author | Hugh Dickins <hugh@veritas.com> | 2005-09-03 15:54:53 -0700 |
---|---|---|
committer | Linus Torvalds <torvalds@evo.osdl.org> | 2005-09-05 00:05:44 -0700 |
commit | 836d5ffd34550901ea024347693e689273ded8aa | |
tree | b4dbbbe436eae38aa2f0f5d333608f64c6338cd8 /mm/madvise.c | |
parent | 53e9a6159fdc6419874ce4d86d3577dbedc77b62 | |
[PATCH] mm: fix madvise vma merging
Better late than never, I've at last reviewed the madvise vma merging
going into 2.6.13. Remove a pointless check and fix two little bugs -
a simple test (with /proc/<pid>/maps hacked to show ReadHints) showed
both mismerges in practice: though being madvise, neither was disastrous.
1. Correct placement of the success label in madvise_behavior: as in
mprotect_fixup and mlock_fixup, it is necessary to update vm_flags
when vma_merge succeeds (to handle the exceptional Case 8 noted in
the comments above vma_merge itself).
2. Correct initial value of prev when starting part way into a vma: as
in sys_mprotect and do_mlock, it needs to be set to vma in this case
(vma_merge handles only that minimum of cases shown in its comments).
3. If find_vma_prev sets prev, then the vma it returns is prev->vm_next,
so it's pointless to make that same assignment again in sys_madvise.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
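
For orientation, here is an abbreviated sketch of madvise_behavior() as it reads with this patch applied. Only the lines touched by the hunks below are taken from the patch itself; the vma_merge()/split_vma() middle section is paraphrased from the 2.6.13-era mm/madvise.c and may differ in detail, so treat it as a sketch rather than the file verbatim.

```c
static long madvise_behavior(struct vm_area_struct *vma,
                struct vm_area_struct **prev,
                unsigned long start, unsigned long end, int behavior)
{
        struct mm_struct *mm = vma->vm_mm;
        pgoff_t pgoff;
        int error = 0;
        unsigned long new_flags = vma->vm_flags & ~VM_READHINTMASK;

        /* ... MADV_SEQUENTIAL / MADV_RANDOM set VM_SEQ_READ / VM_RAND_READ
         * in new_flags (elided) ... */

        if (new_flags == vma->vm_flags) {
                *prev = vma;
                goto out;               /* nothing to change, skip the update */
        }

        pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
        *prev = vma_merge(mm, *prev, start, end, new_flags, vma->anon_vma,
                                vma->vm_file, pgoff, vma_policy(vma));
        if (*prev) {
                /*
                 * Point 1: in vma_merge()'s Case 8 the vma returned is an
                 * existing one still carrying the old read hint, so control
                 * must fall through to the vm_flags update below -- which is
                 * why the success label now sits before it, not after out:.
                 */
                vma = *prev;
                goto success;
        }

        *prev = vma;

        if (start != vma->vm_start) {
                error = split_vma(mm, vma, start, 1);
                if (error)
                        goto out;
        }
        if (end != vma->vm_end) {
                error = split_vma(mm, vma, end, 0);
                if (error)
                        goto out;
        }

success:
        /* vm_flags is protected by the mmap_sem held in write mode. */
        vma->vm_flags = new_flags;
out:
        if (error == -ENOMEM)
                error = -EAGAIN;
        return error;
}
```

Points 2 and 3 are visible directly in the sys_madvise() hunk of the diff below: prev now starts out as vma whenever start falls part way into it, and the redundant vma = prev->vm_next reassignment after find_vma_prev() is dropped.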
Diffstat (limited to 'mm/madvise.c')
-rw-r--r-- | mm/madvise.c | 9 |
1 file changed, 5 insertions, 4 deletions
diff --git a/mm/madvise.c b/mm/madvise.c
index c8c01a12fea..4454936f87d 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -37,7 +37,7 @@ static long madvise_behavior(struct vm_area_struct * vma,
 
         if (new_flags == vma->vm_flags) {
                 *prev = vma;
-                goto success;
+                goto out;
         }
 
         pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
@@ -62,6 +62,7 @@ static long madvise_behavior(struct vm_area_struct * vma,
                 goto out;
         }
 
+success:
         /*
          * vm_flags is protected by the mmap_sem held in write mode.
          */
@@ -70,7 +71,6 @@ static long madvise_behavior(struct vm_area_struct * vma,
 out:
         if (error == -ENOMEM)
                 error = -EAGAIN;
-success:
         return error;
 }
 
@@ -237,8 +237,9 @@ asmlinkage long sys_madvise(unsigned long start, size_t len_in, int behavior)
          * - different from the way of handling in mlock etc.
          */
         vma = find_vma_prev(current->mm, start, &prev);
-        if (!vma && prev)
-                vma = prev->vm_next;
+        if (vma && start > vma->vm_start)
+                prev = vma;
+
         for (;;) {
                 /* Still start < end. */
                 error = -ENOMEM;
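
The "simple test" mentioned in the commit message is not included here, and the ReadHints column it relied on was a local hack to /proc/<pid>/maps rather than something a stock kernel prints. As a rough, hypothetical stand-in (not Hugh's test), the following userspace program exercises the same split-and-merge path: it hints only the middle pages of an anonymous mapping, so the madvise range starts part way into a vma (the situation of point 2), then reverts the hint and watches the vma boundaries through the ordinary /proc/self/maps.

```c
/* Hypothetical illustration, not the test referenced in the commit message.
 * Whether the region splits and later re-merges depends on the kernel's
 * vma-merging behaviour, which is exactly what this patch touches. */
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Print the /proc/self/maps lines that overlap [base, base + len). */
static void show_maps(const char *tag, char *base, size_t len)
{
        char line[256];
        FILE *f = fopen("/proc/self/maps", "r");

        printf("--- %s ---\n", tag);
        while (f && fgets(line, sizeof(line), f)) {
                unsigned long lo, hi;

                if (sscanf(line, "%lx-%lx", &lo, &hi) == 2 &&
                    lo < (unsigned long)(base + len) &&
                    hi > (unsigned long)base)
                        fputs(line, stdout);
        }
        if (f)
                fclose(f);
}

int main(void)
{
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        size_t len = 4 * page;
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        show_maps("initial mapping (usually a single vma)", p, len);

        /* Hint only the middle two pages: the range starts part way into
         * the vma, and the differing read hint forces a split. */
        if (madvise(p + page, 2 * page, MADV_SEQUENTIAL))
                perror("madvise(MADV_SEQUENTIAL)");
        show_maps("after MADV_SEQUENTIAL on the middle pages (split expected)",
                  p, len);

        /* Drop the hint over the whole range: with correct merging the
         * pieces should collapse back into one vma. */
        if (madvise(p, len, MADV_NORMAL))
                perror("madvise(MADV_NORMAL)");
        show_maps("after MADV_NORMAL on the whole range (re-merge expected)",
                  p, len);

        munmap(p, len);
        return 0;
}
```

Build it with any C compiler and compare the three snapshots; the exact before/after symptom of the two mismerges depends on the neighbouring vmas and the kernel version.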