Commit c133c9db authored by Peter Zijlstra, committed by Greg Kroah-Hartman

perf/ring_buffer: Add ordering to rb->nest increment

[ Upstream commit 3f9fbe9b ]
Similar to how decrementing rb->nest too early can cause data_head to
(temporarily) be observed to go backward, so too can this happen when
we increment too late.

This barrier() ensures the rb->head load happens after the increment,
both for the one in the 'goto again' path and for the one from
perf_output_get_handle() -- albeit very unlikely to matter for the
latter.
Suggested-by: Yabin Cui <>
Signed-off-by: Peter Zijlstra (Intel) <>
Cc: Alexander Shishkin <>
Cc: Arnaldo Carvalho de Melo <>
Cc: Jiri Olsa <>
Cc: Linus Torvalds <>
Cc: Peter Zijlstra <>
Cc: Stephane Eranian <>
Cc: Thomas Gleixner <>
Cc: Vince Weaver <>
Fixes: ef60777c ("perf: Optimize the perf_output() path by removing IRQ-disables")

Signed-off-by: Ingo Molnar <>
Signed-off-by: Sasha Levin <>
parent cca19ab2
@@ -49,6 +49,15 @@ static void perf_output_put_handle(struct perf_output_handle *handle)
 	unsigned long head;
 
 again:
+	/*
+	 * In order to avoid publishing a head value that goes backwards,
+	 * we must ensure the load of @rb->head happens after we've
+	 * incremented @rb->nest.
+	 *
+	 * Otherwise we can observe a @rb->head value before one published
+	 * by an IRQ/NMI happening between the load and the increment.
+	 */
+	barrier();
 	head = local_read(&rb->head);