author      Mingzhe Li <mingzhe0908@fb.com>  2019-04-16 08:47:25 -0700
committer   Facebook Github Bot <facebook-github-bot@users.noreply.github.com>  2019-04-16 08:57:17 -0700
commit      35015762307a68654b8c72d690b7e3e58c3ad137 (patch)
tree        a4ea16c7521c1d0a5aa10c0173dea3fc8281b019 /benchmarks
parent      646cb6157df948643ec98b8947026ccaee5c7415 (diff)
calculate execution time based on final iterations (#19299)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19299

I saw larger than 5% performance variation with small operators; this diff reduces the variation by avoiding Python overhead. Previously, the benchmark ran the main loop for 100 iterations and then looked at the elapsed time. If the result was not significant, it doubled the number of iterations and reran, repeating this process until the measurement became significant. The execution time was computed as total_time / number_of_iterations, so the reported figure included the Python launch overhead of every rerun. This change computes the execution time from the last run only: time_in_last_run / number_of_iterations.

Reviewed By: hl475

Differential Revision: D14925287

fbshipit-source-id: cb646298c08a651e27b99a5547350da367ffff47
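
As a rough illustration of the change described above, here is a minimal sketch of the measurement loop (hypothetical helper names and significance threshold; the real logic, including _report_iteration_result, lives in benchmarks/operator_benchmark/benchmark_core.py):

    import functools
    import timeit

    def measure(benchmark_func, start_iters=100, min_run_time=0.2):
        # `benchmark_func(iters)` runs the operator `iters` times; `min_run_time`
        # is an assumed significance threshold used only for this sketch.
        iters = start_iters
        while True:
            # Time one batch of `iters` executions (unit: seconds).
            run_time = min(timeit.repeat(functools.partial(benchmark_func, iters),
                                         repeat=1, number=1))
            if run_time >= min_run_time:
                # New behavior: report only the last batch, so the Python launch
                # overhead of the earlier, discarded batches is not averaged in.
                # The old behavior accumulated run_time across all batches first.
                return run_time / iters
            iters *= 2  # not significant yet: double the iteration count and rerun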
Diffstat (limited to 'benchmarks')
-rw-r--r--  benchmarks/operator_benchmark/benchmark_core.py  6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/benchmarks/operator_benchmark/benchmark_core.py b/benchmarks/operator_benchmark/benchmark_core.py
index 2693f8461b..f6ff591df5 100644
--- a/benchmarks/operator_benchmark/benchmark_core.py
+++ b/benchmarks/operator_benchmark/benchmark_core.py
@@ -160,7 +160,7 @@ class BenchmarkRunner(object):
run_time = 0
iters = self.iters
while True:
- # Use Python's timeit module to measure execution time.
+ # Use Python's timeit module to measure execution time (unit: second).
# Each experiment consists of repeated execution of
# the benchmark_func a number of times (self.iters)
# because otherwise the duration is too short to get
@@ -170,8 +170,8 @@ class BenchmarkRunner(object):
# (num_repeats) and we then take the minimum execution
# time as the final measurement result (this is also
# recommended by timeit's doc).
- run_time = run_time + min(timeit.repeat(functools.partial(benchmark_func, iters),
- repeat=1, number=1))
+ run_time = min(timeit.repeat(functools.partial(benchmark_func, iters),
+ repeat=1, number=1))
# Analyze time after each run to decide if the result is stable
results_are_significant = self.has_explicit_iteration_count or \
self._report_iteration_result(iters, run_time)
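
To make the effect concrete (illustrative numbers, and assuming the reported denominator is the final iteration count): if the first pass runs 100 iterations in 0.05 s and the doubled pass runs 200 iterations in 0.10 s, the old accumulation would report (0.05 + 0.10) / 200 = 0.00075 s per iteration, while the new logic reports 0.10 / 200 = 0.0005 s per iteration, based on the final pass alone.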