From a70fe14b7dddcb944fbd6c9f3739cd3a22089af5 Mon Sep 17 00:00:00 2001
From: Paolo Bonzini <pbonzini@redhat.com>
Date: Sun, 29 Jan 2017 12:15:15 +0100
Subject: [PATCH] cpu-exec: tighten barrier on TCG_EXIT_REQUESTED

This seems to have worked just fine so far on weakly-ordered
architectures, but I don't see anything that prevents the
reordering from:

    store 1 to exit_request
    store 1 to tcg_exit_req
                                  load tcg_exit_req
                                  store 0 to tcg_exit_req
                                  load exit_request
                                  store 0 to exit_request
    store 1 to exit_request
    store 1 to tcg_exit_req

to this:

    store 1 to exit_request
    store 1 to tcg_exit_req
                                  load tcg_exit_req
                                  load exit_request
    store 1 to exit_request
    store 1 to tcg_exit_req
                                  store 0 to tcg_exit_req
                                  store 0 to exit_request

therefore losing a request.  It's possible that other memory barriers
(e.g. in rcu_read_unlock) are hiding it, but better safe than sorry.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 cpu-exec.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/cpu-exec.c b/cpu-exec.c
index 1f7d217f30..d50625bf97 100644
--- a/cpu-exec.c
+++ b/cpu-exec.c
@@ -552,11 +552,11 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
              * have set something else (eg exit_request or
              * interrupt_request) which we will handle
              * next time around the loop.  But we need to
-             * ensure the tcg_exit_req read in generated code
+             * ensure the zeroing of tcg_exit_req (see cpu_tb_exec)
              * comes before the next read of cpu->exit_request
              * or cpu->interrupt_request.
              */
-            smp_rmb();
+            smp_mb();
             *last_tb = NULL;
             break;
         case TB_EXIT_ICOUNT_EXPIRED:
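
As a companion to the commit message, here is a minimal sketch of the
lost-request race in portable C11 atomics.  It is illustrative only, not
QEMU code: the variable names mirror the commit message, the function
names are invented for the example, and the sequentially consistent
fence stands in for QEMU's smp_mb().

    #include <stdatomic.h>
    #include <stdbool.h>

    atomic_int exit_request;  /* set by the requester, cleared by the CPU loop */
    atomic_int tcg_exit_req;  /* polled by generated code, cleared on TB exit  */

    /* Requester side: ask the CPU loop to stop executing TBs. */
    void request_exit(void)
    {
        atomic_store_explicit(&exit_request, 1, memory_order_relaxed);
        atomic_store_explicit(&tcg_exit_req, 1, memory_order_relaxed);
    }

    /* CPU-loop side, after generated code has seen tcg_exit_req == 1. */
    bool handle_tb_exit(void)
    {
        atomic_store_explicit(&tcg_exit_req, 0, memory_order_relaxed);

        /* Without a full barrier, the store of 0 above may be reordered
         * after the load below.  A request issued in that window has its
         * tcg_exit_req flag wiped out and is lost.  A read barrier
         * (smp_rmb) only orders loads against loads; the ordering needed
         * here is store-before-load, which only a full barrier provides.
         */
        atomic_thread_fence(memory_order_seq_cst);  /* analogue of smp_mb() */

        return atomic_load_explicit(&exit_request, memory_order_relaxed) != 0;
    }

This is the classic Dekker-style store-load case: on a weakly-ordered
architecture the read barrier used before the patch does nothing to keep
the clearing store ahead of the exit_request load, which is why the
patch upgrades smp_rmb() to smp_mb().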