author     Peter Zijlstra <a.p.zijlstra@chello.nl>   2010-01-26 18:50:16 +0100
committer  Ingo Molnar <mingo@elte.hu>               2010-01-27 08:39:33 +0100
commit     abd50713944c8ea9e0af5b7bffa0aacae21cc91a (patch)
tree       c75a352aa13821a41791877f25d2f048568827b0 /include
parent     ef12a141306c90336a3a10d40213ecd98624d274 (diff)
perf: Reimplement frequency driven sampling
There was a bug in the old period code that caused intel_pmu_enable_all() or
native_write_msr_safe() to show up quite high in the profiles.

Staring at that code made my head hurt, so I rewrote it in a hopefully simpler
fashion. It is now fully symmetric between tick and overflow driven
adjustments and uses less data to boot.

The only complication is that it basically wants to do a u128 division. The
code approximates that in a rather simple truncate-until-it-fits fashion,
taking care to balance the terms while truncating.

This version does not generate that sampling artefact.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Cc: <stable@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
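To make the truncate-until-it-fits approximation concrete, here is a minimal
user-space C sketch of the general idea, not the patch's actual code: the
desired period is count * NSEC_PER_SEC / (nsec * sample_freq), whose
intermediate products need up to 128 bits, so bits are shaved off both the
dividend and the divisor side, keeping the two sides balanced, until a plain
u64/u64 division fits. The names approx_period(), reduce_pair() and fls64_()
are invented for the example.

#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

/* number of bits needed to represent v (0 for v == 0), like fls64() */
static int fls64_(uint64_t v)
{
	int bits = 0;

	while (v) {
		v >>= 1;
		bits++;
	}
	return bits;
}

/* drop one bit of precision from the larger term of one product */
static void reduce_pair(uint64_t *a, uint64_t *b)
{
	if (*a > *b)
		*a >>= 1;
	else
		*b >>= 1;
}

/* approximate (count * NSEC_PER_SEC) / (nsec * freq) using only u64 math */
static uint64_t approx_period(uint64_t count, uint64_t nsec, uint64_t freq)
{
	uint64_t sec = NSEC_PER_SEC;
	uint64_t dividend, divisor;

	/*
	 * Balanced phase: while both products would overflow 64 bits,
	 * truncate one bit on each side so the ratio stays roughly intact.
	 */
	while (fls64_(count) + fls64_(sec) > 64 &&
	       fls64_(nsec) + fls64_(freq) > 64) {
		reduce_pair(&count, &sec);
		reduce_pair(&nsec, &freq);
	}

	/* finish truncating whichever side still does not fit */
	while (fls64_(count) + fls64_(sec) > 64)
		reduce_pair(&count, &sec);
	while (fls64_(nsec) + fls64_(freq) > 64)
		reduce_pair(&nsec, &freq);

	dividend = count * sec;
	divisor  = nsec * freq;

	return divisor ? dividend / divisor : 0;
}

For instance, 2,000,000 events counted over 4,000,000 ns with a 1000 Hz target
gives 2e6 * 1e9 / (4e6 * 1000) = 500,000 events per sample; the truncation
only kicks in when the intermediate products would overflow 64 bits.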
Diffstat (limited to 'include')
-rw-r--r--  include/linux/perf_event.h  5
1 file changed, 2 insertions, 3 deletions
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index c6f812e4d05..72b2615600d 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -498,9 +498,8 @@ struct hw_perf_event {
 	atomic64_t		period_left;
 	u64			interrupts;
-	u64			freq_count;
-	u64			freq_interrupts;
-	u64			freq_stamp;
+	u64			freq_time_stamp;
+	u64			freq_count_stamp;
 #endif
 };
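As a rough illustration of why two stamps can replace the three old freq_*
fields, the hypothetical snippet below (simplified; not the kernel's types or
code paths) records the time and event count at the last adjustment, so that
the next adjustment, whether tick or overflow driven, can recompute the
elapsed nsec and count deltas from the stamps and feed them into a period
calculation such as the approx_period() sketch above. The struct and helper
names are made up for the example.

#include <stdint.h>

/* simplified stand-in for the relevant part of struct hw_perf_event */
struct freq_state {
	uint64_t freq_time_stamp;	/* timestamp (ns) at last adjustment */
	uint64_t freq_count_stamp;	/* event count at last adjustment */
	uint64_t sample_period;
};

/*
 * Compute the time and count elapsed since the previous adjustment, update
 * the stamps, and derive a new period (approx_period() is the earlier sketch).
 */
static void adjust_period(struct freq_state *s, uint64_t now_ns,
			  uint64_t now_count, uint64_t sample_freq)
{
	uint64_t nsec  = now_ns    - s->freq_time_stamp;
	uint64_t count = now_count - s->freq_count_stamp;

	s->freq_time_stamp  = now_ns;
	s->freq_count_stamp = now_count;

	if (nsec && count)
		s->sample_period = approx_period(count, nsec, sample_freq);
}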