Commit 36be57ff authored by Paul Jackson, committed by Linus Torvalds

[PATCH] cpuset: update cpuset_zones_allowed comment



Update the kernel/cpuset.c:cpuset_zone_allowed() comment.

The rule for when mm/page_alloc.c should call cpuset_zone_allowed()
was intended to be:

  Don't call cpuset_zone_allowed() if you can't sleep, unless you
  pass in the __GFP_HARDWALL flag set in gfp_flag, which disables
  the code that might scan up ancestor cpusets and sleep.

The explanation of this rule in the comment above cpuset_zone_allowed() was
stale, as a result of a restructuring of some __alloc_pages() code in
November 2005.

Rewrite that comment ...
Signed-off-by: Paul Jackson <pj@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent bdd804f4
@@ -2231,19 +2231,25 @@ static const struct cpuset *nearest_exclusive_ancestor(const struct cpuset *cs)
  * So only GFP_KERNEL allocations, if all nodes in the cpuset are
  * short of memory, might require taking the callback_mutex mutex.
  *
- * The first loop over the zonelist in mm/page_alloc.c:__alloc_pages()
- * calls here with __GFP_HARDWALL always set in gfp_mask, enforcing
- * hardwall cpusets - no allocation on a node outside the cpuset is
- * allowed (unless in interrupt, of course).
+ * The first call here from mm/page_alloc:get_page_from_freelist()
+ * has __GFP_HARDWALL set in gfp_mask, enforcing hardwall cpusets, so
+ * no allocation on a node outside the cpuset is allowed (unless in
+ * interrupt, of course).
  *
- * The second loop doesn't even call here for GFP_ATOMIC requests
- * (if the __alloc_pages() local variable 'wait' is set). That check
- * and the checks below have the combined affect in the second loop of
- * the __alloc_pages() routine that:
+ * The second pass through get_page_from_freelist() doesn't even call
+ * here for GFP_ATOMIC calls. For those calls, the __alloc_pages()
+ * variable 'wait' is not set, and the bit ALLOC_CPUSET is not set
+ * in alloc_flags. That logic and the checks below have the combined
+ * affect that:
  * in_interrupt - any node ok (current task context irrelevant)
  * GFP_ATOMIC - any node ok
  * GFP_KERNEL - any node in enclosing mem_exclusive cpuset ok
  * GFP_USER - only nodes in current tasks mems allowed ok.
+ *
+ * Rule:
+ * Don't call cpuset_zone_allowed() if you can't sleep, unless you
+ * pass in the __GFP_HARDWALL flag set in gfp_flag, which disables
+ * the code that might scan up ancestor cpusets and sleep.
  **/
 int __cpuset_zone_allowed(struct zone *z, gfp_t gfp_mask)
...