mm: memcontrol: remove ordering between pc->mem_cgroup and PageCgroupUsed
There is a write barrier between setting pc->mem_cgroup and
PageCgroupUsed, which was added to allow LRU operations to look up the
memcg LRU list of a page without acquiring the page_cgroup lock.

But ever since commit 38c5d72f3e ("memcg: simplify LRU handling by new
rule"), pages are ensured to be off-LRU while charging, so nobody else
is changing LRU state while pc->mem_cgroup is being written, and there
are no read barriers anymore.

Remove the unnecessary write barrier.
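To make the removed pairing concrete, here is a minimal userspace sketch of the ordering the old smp_wmb() provided. This is illustrative only, not kernel code: the_memcg, pc_mem_cgroup, and pc_used are made-up stand-ins, and C11 fences play the roles of smp_wmb()/smp_rmb(). The writer publishes a pointer and then sets a flag; a reader that observes the flag set must also observe the pointer:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int the_memcg = 42;	/* stand-in for a struct mem_cgroup */
static int *pc_mem_cgroup;	/* stand-in for pc->mem_cgroup      */
static atomic_int pc_used;	/* stand-in for the USED page flag  */

static void *writer(void *arg)
{
	pc_mem_cgroup = &the_memcg;			/* 1: publish data */
	atomic_thread_fence(memory_order_release);	/* ~smp_wmb()      */
	atomic_store_explicit(&pc_used, 1,
			      memory_order_relaxed);	/* 2: set flag     */
	return NULL;
}

static void *reader(void *arg)
{
	/* A reader that observes store 2 must also observe store 1. */
	if (atomic_load_explicit(&pc_used, memory_order_relaxed)) {
		atomic_thread_fence(memory_order_acquire);  /* ~smp_rmb() */
		printf("memcg = %d\n", *pc_mem_cgroup);
	}
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, writer, NULL);
	pthread_create(&r, NULL, reader, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}

Per the reasoning above, commit 38c5d72f3e eliminated the concurrent reader side (pages are off-LRU while being charged), so the acquire half of the pairing no longer exists anywhere and the writer's release barrier orders nothing; that is why it can simply be deleted.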
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: Hugh Dickins <hughd@google.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Vladimir Davydov <vdavydov@parallels.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 05b8430123
commit 9a2385eef9
mm/memcontrol.c
@@ -2795,14 +2795,6 @@ static void __mem_cgroup_commit_charge(struct mem_cgroup *memcg,
 	}
 
 	pc->mem_cgroup = memcg;
-	/*
-	 * We access a page_cgroup asynchronously without lock_page_cgroup().
-	 * Especially when a page_cgroup is taken from a page, pc->mem_cgroup
-	 * is accessed after testing USED bit. To make pc->mem_cgroup visible
-	 * before USED bit, we need memory barrier here.
-	 * See mem_cgroup_add_lru_list(), etc.
-	 */
-	smp_wmb();
 	SetPageCgroupUsed(pc);
 
 	if (lrucare) {
@@ -3483,7 +3475,6 @@ void mem_cgroup_split_huge_fixup(struct page *head)
 	for (i = 1; i < HPAGE_PMD_NR; i++) {
 		pc = head_pc + i;
 		pc->mem_cgroup = memcg;
-		smp_wmb();/* see __commit_charge() */
 		pc->flags = head_pc->flags & ~PCGF_NOCOPY_AT_SPLIT;
 	}
 	__this_cpu_sub(memcg->stat->count[MEM_CGROUP_STAT_RSS_HUGE],