author     Peter Zijlstra <a.p.zijlstra@chello.nl>    2010-01-26 18:50:16 +0100
committer  Greg Kroah-Hartman <gregkh@suse.de>        2010-03-15 09:06:17 -0700
commit     21a6adcde06e129b055caa3256e65a97a2986770 (patch)
tree       56663f2682b5114b92335c7c53ce26e1449ac8cf /include
parent     69cb5f7cdc28a5352a03c16bbaa0a92cdf31b9d4 (diff)
perf: Reimplement frequency driven sampling
commit abd50713944c8ea9e0af5b7bffa0aacae21cc91a upstream.
There was a bug in the old period code that caused intel_pmu_enable_all()
or native_write_msr_safe() to show up quite high in the profiles.
Staring at that code made my head hurt, so I rewrote it in a hopefully
simpler fashion. It's now fully symmetric between tick-driven and
overflow-driven adjustments, and it uses less data to boot.
The only complication is that it basically wants to do a u128 division.
The code approximates that in a rather simple truncate-until-it-fits
fashion, taking care to balance the terms while truncating.
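
To make the truncate-until-it-fits idea concrete, here is a minimal userspace
C sketch, not the kernel's code: the helper names (fls64, approx_period) and
the exact reduction order are invented for illustration. The target period is
count * 10^9 / (nsec * freq); whenever either 64-bit product could overflow,
the larger factor on each side of the division is halved, so dividend and
divisor shrink together and the quotient is roughly preserved.

#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC 1000000000ULL

/* Position of the highest set bit (1-based); 0 when v == 0. */
static int fls64(uint64_t v)
{
	int n = 0;

	while (v) {
		n++;
		v >>= 1;
	}
	return n;
}

/*
 * Approximate period = (count * NSEC_PER_SEC) / (nsec * freq) using only
 * 64-bit arithmetic.  A product of two values with fls a and b has at most
 * a + b bits, so keeping a + b <= 64 guarantees it fits in a u64.
 */
static uint64_t approx_period(uint64_t count, uint64_t nsec, uint64_t freq)
{
	uint64_t da = count, db = NSEC_PER_SEC;	/* dividend = da * db */
	uint64_t va = nsec, vb = freq;		/* divisor  = va * vb */
	uint64_t dividend, divisor;

	while (fls64(da) + fls64(db) > 64 || fls64(va) + fls64(vb) > 64) {
		/* Halve the larger dividend factor ... */
		if (da > db)
			da >>= 1;
		else
			db >>= 1;

		/* ... and the larger divisor factor, to stay balanced. */
		if (va > vb)
			va >>= 1;
		else
			vb >>= 1;
	}

	dividend = da * db;
	divisor = va * vb;

	return divisor ? dividend / divisor : 0;
}

int main(void)
{
	/* 2,000,000 events in 10 ms at a 1000 Hz target -> period 200,000. */
	printf("%llu\n",
	       (unsigned long long)approx_period(2000000, 10000000, 1000));
	return 0;
}

Halving one factor on each side leaves the ratio unchanged in exact
arithmetic; only the low-order bits shifted out are lost, which is the
accuracy traded away to avoid 128-bit math.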
This version does not generate that sampling artefact.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Diffstat (limited to 'include')
-rw-r--r--   include/linux/perf_event.h   5
1 file changed, 2 insertions, 3 deletions
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index a177698d95e2..c8ea0c77a625 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -496,9 +496,8 @@ struct hw_perf_event {
 	atomic64_t			period_left;
 	u64				interrupts;
 
-	u64				freq_count;
-	u64				freq_interrupts;
-	u64				freq_stamp;
+	u64				freq_time_stamp;
+	u64				freq_count_stamp;
 #endif
 };
 
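
Purely as illustration of why two stamps are enough for the symmetric
tick/overflow scheme described above, here is a hedged sketch, not the kernel
implementation: struct freq_state, adjust_period() and the parameter names
are invented for the example, and approx_period() is the helper sketched
earlier. Both the tick path and the overflow path make the same call: compute
the time and count deltas since the last stamps, restamp, and derive the new
period from the observed rate.

#include <stdint.h>

uint64_t approx_period(uint64_t count, uint64_t nsec, uint64_t freq);

struct freq_state {
	uint64_t freq_time_stamp;	/* time of the last adjustment, in ns  */
	uint64_t freq_count_stamp;	/* event count at the last adjustment  */
	uint64_t sample_period;		/* period currently in use             */
};

/* Shared by the tick-driven and overflow-driven adjustment paths. */
static void adjust_period(struct freq_state *s, uint64_t now_ns,
			  uint64_t now_count, uint64_t target_freq)
{
	uint64_t dt = now_ns - s->freq_time_stamp;
	uint64_t dc = now_count - s->freq_count_stamp;

	s->freq_time_stamp = now_ns;
	s->freq_count_stamp = now_count;

	if (dt && dc)
		s->sample_period = approx_period(dc, dt, target_freq);
}

Because both paths only ever look at deltas against the two stamps, the
per-event state shrinks from three fields to two, which is the "uses less
data" part of the changelog.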