| author | KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> | 2009-06-16 15:32:52 -0700 |
| --- | --- | --- |
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2009-06-16 19:47:42 -0700 |
| commit | cb4b86ba47bb0937b71fb825b3ed88adf7a190f0 (patch) | |
| tree | 4b8528ba914a315e5857e7fe2a6e7d415f2e6650 /mm/vmscan.c | |
| parent | 6837765963f1723e80ca97b1fae660f3a60d77df (diff) | |
mm: add swap cache interface for swap reference
In a following patch, the use of swap cache will be recorded in swap_map.
This patch makes the interface changes needed for that.
Two interfaces are added for taking and releasing a swap-cache reference
(refcnt) on existing swap entries, sketched below:
- swapcache_prepare()
- swapcache_free()
The implementation behind them is not changed by this patch. While adding
swapcache_free(), the memcg hook code is moved under swapcache_free();
this is better than using scattered hooks.
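For orientation, here is a minimal sketch of what the two wrappers amount to at this stage. The body of swapcache_free() follows directly from the call-site change in the hunk below, which folds the memcg uncharge hook and swap_free() into one call; swapcache_prepare() simply delegating to swap_duplicate() is an assumption, consistent with the statement above that the implementation itself is unchanged by this patch.

```c
#include <linux/swap.h>

/*
 * Sketch only - not the verbatim patch body. swapcache_prepare()
 * reusing swap_duplicate() is an assumption (see lead-in above).
 */

/* Take a swap-cache reference on an existing swap entry. */
int swapcache_prepare(swp_entry_t entry)
{
	return swap_duplicate(entry);
}

/* Release the swap-cache reference: run the memcg uncharge hook,
 * then drop the entry's reference - the same two calls that the
 * hunk below removes from the __remove_mapping() call site. */
void swapcache_free(swp_entry_t entry, struct page *page)
{
	mem_cgroup_uncharge_swapcache(page, entry);
	swap_free(entry);
}
```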
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Acked-by: Balbir Singh <balbir@in.ibm.com>
Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Dhaval Giani <dhaval@linux.vnet.ibm.com>
Cc: YAMAMOTO Takashi <yamamoto@valinux.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/vmscan.c')
-rw-r--r-- | mm/vmscan.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2c4b945b011..52339dd7bf8 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -470,8 +470,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page)
 		swp_entry_t swap = { .val = page_private(page) };
 		__delete_from_swap_cache(page);
 		spin_unlock_irq(&mapping->tree_lock);
-		mem_cgroup_uncharge_swapcache(page, swap);
-		swap_free(swap);
+		swapcache_free(swap, page);
 	} else {
 		__remove_from_page_cache(page);
 		spin_unlock_irq(&mapping->tree_lock);
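The hunk shows the pattern the new interface enables: the two scattered calls at the __remove_mapping() call site (the memcg uncharge hook followed by swap_free()) collapse into a single swapcache_free(swap, page), giving later patches one place to adjust swap-cache reference accounting.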