Commit 19fc3f0a authored by Adam Litke, committed by Linus Torvalds

hugetlb: decrease hugetlb_lock cycling in gather_surplus_huge_pages



To reduce hugetlb_lock acquisitions and releases when freeing excess surplus
pages, scan the page list in two parts.  First, transfer the needed pages to
the hugetlb pool.  Then drop the lock and free the remaining pages back to the
buddy allocator.

In the common case there are zero excess pages and no lock operations are
required.

Thanks to Mel Gorman for this improvement.
Signed-off-by: Adam Litke <agl@us.ibm.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: William Lee Irwin III <wli@holomorphy.com>
Cc: Andy Whitcroft <apw@shadowen.org>
Cc: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 797df574
mm/hugetlb.c
@@ -372,11 +372,19 @@ retry:
 	resv_huge_pages += delta;
 	ret = 0;
 free:
+	/* Free the needed pages to the hugetlb pool */
 	list_for_each_entry_safe(page, tmp, &surplus_list, lru) {
+		if ((--needed) < 0)
+			break;
 		list_del(&page->lru);
-		if ((--needed) >= 0)
-			enqueue_huge_page(page);
-		else {
+		enqueue_huge_page(page);
+	}
+
+	/* Free unnecessary surplus pages to the buddy allocator */
+	if (!list_empty(&surplus_list)) {
+		spin_unlock(&hugetlb_lock);
+		list_for_each_entry_safe(page, tmp, &surplus_list, lru) {
+			list_del(&page->lru);
 			/*
 			 * The page has a reference count of zero already, so
 			 * call free_huge_page directly instead of using
@@ -384,10 +392,9 @@ free:
 			 * put_page.  This must be done with hugetlb_lock
 			 * unlocked which is safe because free_huge_page takes
 			 * hugetlb_lock before deciding how to free the page.
 			 */
-			spin_unlock(&hugetlb_lock);
 			free_huge_page(page);
-			spin_lock(&hugetlb_lock);
 		}
+		spin_lock(&hugetlb_lock);
 	}
 	return ret;
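For reference, here is how the tail of the patched function reads with both hunks applied. This is a sketch reassembled from the context and added lines above, not the verbatim file contents; the rest of the function and its declarations are omitted.

	resv_huge_pages += delta;
	ret = 0;
free:
	/* Pass 1: move the needed pages onto the hugetlb pool, lock held */
	list_for_each_entry_safe(page, tmp, &surplus_list, lru) {
		if ((--needed) < 0)
			break;
		list_del(&page->lru);
		enqueue_huge_page(page);
	}

	/* Pass 2: drop the lock once, then free any leftover surplus pages */
	if (!list_empty(&surplus_list)) {
		spin_unlock(&hugetlb_lock);
		list_for_each_entry_safe(page, tmp, &surplus_list, lru) {
			list_del(&page->lru);
			/*
			 * The refcount is already zero, so call
			 * free_huge_page() directly rather than put_page();
			 * it takes hugetlb_lock itself, so it must be
			 * called with the lock dropped.
			 */
			free_huge_page(page);
		}
		spin_lock(&hugetlb_lock);
	}
	return ret;

The unlock/lock pair now brackets the entire second loop, so hugetlb_lock is cycled at most once regardless of how many excess pages are returned to the buddy allocator; before this patch it was cycled once per freed page. In the common case of zero excess pages, the second loop body never runs and the lock is not touched at all.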