Commit 1b038c6e authored by Yabin Cui, committed by Ingo Molnar

perf/ring_buffer: Fix exposing a temporarily decreased data_head

In perf_output_put_handle(), an IRQ/NMI can happen in below location and
write records to the same ring buffer:

	...                          <-- an IRQ/NMI can happen here
	rb->user_page->data_head = head;

In this case, a newer value A is written to data_head from the IRQ, and then
a stale value B is written to data_head after the IRQ returns, with A > B. As
a result, data_head temporarily decreases from A to B, and a reader that polls
the buffer frequently enough may observe data_head < data_tail, which causes
unexpected behavior.

This can be fixed by moving dec(&rb->nest) to after updating data_head,
which prevents the IRQ/NMI above from updating data_head.

[ Split up by peterz. ]
Signed-off-by: Yabin Cui <>
Signed-off-by: Peter Zijlstra (Intel) <>
Cc: Alexander Shishkin <>
Cc: Arnaldo Carvalho de Melo <>
Cc: Arnaldo Carvalho de Melo <>
Cc: Jiri Olsa <>
Cc: Linus Torvalds <>
Cc: Namhyung Kim <>
Cc: Peter Zijlstra <>
Cc: Stephane Eranian <>
Cc: Thomas Gleixner <>
Cc: Vince Weaver <>
Fixes: ef60777c ("perf: Optimize the perf_output() path by removing IRQ-disables")
Signed-off-by: Ingo Molnar <>
parent 23e3983a
@@ -51,11 +51,18 @@ static void perf_output_put_handle(struct perf_output_handle *handle)
 	head = local_read(&rb->head);
 
 	/*
-	 * IRQ/NMI can happen here, which means we can miss a head update.
+	 * IRQ/NMI can happen here and advance @rb->head, causing our
+	 * load above to be stale.
 	 */
 
-	if (!local_dec_and_test(&rb->nest))
+	/*
+	 * If this isn't the outermost nesting, we don't have to update
+	 * @rb->user_page->data_head.
+	 */
+	if (local_read(&rb->nest) > 1) {
+		local_dec(&rb->nest);
 		goto out;
+	}
 
 	/*
 	 * Since the mmap() consumer (userspace) can run on a different CPU:
@@ -87,9 +94,18 @@ static void perf_output_put_handle(struct perf_output_handle *handle)
 	rb->user_page->data_head = head;
 
 	/*
-	 * Now check if we missed an update -- rely on previous implied
-	 * compiler barriers to force a re-read.
+	 * We must publish the head before decrementing the nest count,
+	 * otherwise an IRQ/NMI can publish a more recent head value and our
+	 * write will (temporarily) publish a stale value.
 	 */
+	barrier();
+	local_set(&rb->nest, 0);
+
+	/*
+	 * Ensure we decrement @rb->nest before we validate the @rb->head.
+	 * Otherwise we cannot be sure we caught the 'last' nested update.
+	 */
+	barrier();
 	if (unlikely(head != local_read(&rb->head))) {
 		local_inc(&rb->nest);
 		goto again;