author	Mike Galbraith <efault@gmx.de>	2010-03-11 17:17:13 +0100
committer	Ingo Molnar <mingo@elte.hu>	2010-03-11 18:32:49 +0100
commit	39c0cbe2150cbd848a25ba6cdb271d1ad46818ad (patch)
tree	7b9c356b39a2b50219398ce534d7d64e7ab4bf06 /include/linux/sched.h
parent	41acab8851a0408c1d5ad6c21a07456f88b54d40 (diff)
sched: Rate-limit nohz
Entering nohz code on every micro-idle is costing ~10% throughput for netperf
TCP_RR when scheduling cross-cpu. Rate limiting entry fixes this, but raises
ticks a bit. On my Q6600, an idle box goes from ~85 interrupts/sec to 128.
The higher the context switch rate, the more nohz entry costs. With this patch
and some cycle recovery patches in my tree, max cross-cpu context switch rate is
improved by ~16%, a large portion of which is due to this rate-limiting.
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1268301003.6785.28.camel@marge.simson.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'include/linux/sched.h')
-rw-r--r--	include/linux/sched.h	6
1 files changed, 6 insertions, 0 deletions
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 8cc863d6647..13efe7dac5f 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -271,11 +271,17 @@ extern cpumask_var_t nohz_cpu_mask;
 #if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ)
 extern int select_nohz_load_balancer(int cpu);
 extern int get_nohz_load_balancer(void);
+extern int nohz_ratelimit(int cpu);
 #else
 static inline int select_nohz_load_balancer(int cpu)
 {
 	return 0;
 }
+
+static inline int nohz_ratelimit(int cpu)
+{
+	return 0;
+}
 #endif
 
 /*