Commit 6649a386 authored by Ken Chen, committed by Linus Torvalds

[PATCH] hugetlb: preserve hugetlb pte dirty state

__unmap_hugepage_range() is buggy in that it does not preserve the dirty state
of the huge_pte when unmapping a hugepage range.  It causes data corruption in
the event of drop_caches being used by the sys admin.  For example, an
application creates a hugetlb file, modifies pages, then unmaps it.  While the
hugetlb file is left alive, along comes the sys admin doing an "echo 3 >
/proc/sys/vm/drop_caches".
drop_pagecache_sb() will happily free all pages that aren't marked dirty if
there is no active mapping.  Later, when the application remaps the hugetlb
file, all the data are gone, triggering a catastrophic flip-over in the
application.

Not only that, the internal resv_huge_pages count will also get all messed
up.  Fix it up by marking the page dirty appropriately.
Signed-off-by: Ken Chen <>
Cc: "Nish Aravamudan" <>
Cc: Adam Litke <>
Cc: David Gibson <>
Cc: William Lee Irwin III <>
Cc: <>
Cc: Hugh Dickins <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
parent f336953b
fs/hugetlbfs/inode.c
@@ -449,10 +449,13 @@ static int hugetlbfs_symlink(struct inode *dir,
 }
 
 /*
- * For direct-IO reads into hugetlb pages
+ * mark the head page dirty
  */
 static int hugetlbfs_set_page_dirty(struct page *page)
 {
+	struct page *head = (struct page *)page_private(page);
+
+	SetPageDirty(head);
 	return 0;
 }
mm/hugetlb.c
@@ -389,6 +389,8 @@ void __unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
 		page = pte_page(pte);
+		if (pte_dirty(pte))
+			set_page_dirty(page);
 		list_add(&page->lru, &page_list);