verilator/test_regress/t/t_gantt_io_arm.out
Commit b1b5b5dfe2 by Geza Lore: Improve run-time profiling
The --prof-threads option has been split into two independent options (an
example invocation is sketched just after this list):
1. --prof-exec, for collecting verilator_gantt and other execution-related
   profiling data, and
2. --prof-pgo, for collecting the data needed for PGO (profile-guided
   optimization).
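
As a minimal sketch of the execution-profiling flow (the design, file names,
and thread count below are placeholder assumptions; the runtime plusargs are
the ones that appear in the report further down, and profile_exec.dat is
assumed to be the default data file name):

    # Build a multi-threaded model with execution profiling instrumented in
    verilator --cc --exe --build --threads 2 --prof-exec top.v sim_main.cpp
    # Run it; start profiling at the 1st eval and record a window of 2 evals
    ./obj_dir/Vtop +verilator+prof+exec+start+1 +verilator+prof+exec+window+2
    # Turn the collected data into a Gantt report and a VCD trace
    verilator_gantt profile_exec.dat

A PGO-oriented run would pass --prof-pgo at verilation time instead of (or in
addition to) --prof-exec.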

The implementation of execution profiling has been extracted from VlThreadPool
into a separate class, VlExecutionProfiler. This means --prof-exec can now be
used with single-threaded models as well (though it does not yet measure very
much in that case). For consistency, VerilatedProfiler is renamed
VlPgoProfiler. Both VlExecutionProfiler and VlPgoProfiler live in
verilated_profiler.{h,cpp}, but can be used completely independently of each
other.
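
As a rough, hand-written illustration of that separation (the class and member
names below are hypothetical and are not the real VlExecutionProfiler or
VlThreadPool interfaces), the key point is that the profiler owns its own
event storage and is simply called from wherever work executes, so it no
longer requires a thread pool to exist:

    // Illustrative sketch only; not the actual Verilator classes or API.
    #include <cstdint>
    #include <vector>

    struct ExecEventSketch {
        uint64_t tick;      // e.g. an rdtsc timestamp
        uint32_t threadId;  // 0 for a single-threaded model
        uint32_t mtaskId;
    };

    class ExecutionProfilerSketch {
        std::vector<ExecEventSketch> m_events;  // owned here, not by a thread pool
    public:
        void record(uint64_t tick, uint32_t threadId, uint32_t mtaskId) {
            m_events.push_back({tick, threadId, mtaskId});
        }
        const std::vector<ExecEventSketch>& events() const { return m_events; }
    };

    // A thread pool worker, or a single-threaded eval loop, just calls
    // record() on a profiler instance it is handed; no profiling state is
    // embedded in the pool itself.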

Also reworked the execution profile format so that it now only emits events,
without holding onto any temporaries. This is in preparation for future
optimizations that would be hindered by the introduction of function locals
via AstText.

Also removed the Barrier event. Clearing the profile buffers is not notably
more expensive as a result, since the profiling records are trivially
destructible.
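
A small, self-contained sketch (generic C++, not Verilator's actual record
type) of why cheap clearing follows from trivial destructibility: clearing a
vector of trivially destructible records runs no per-element destructors, so
dropping a buffer of profiling data is essentially free.

    #include <cstdint>
    #include <type_traits>
    #include <vector>

    struct ProfRecordSketch {  // hypothetical record: plain data, no owning members
        uint64_t tick;
        uint32_t eventKind;
        uint32_t payload;
    };
    static_assert(std::is_trivially_destructible<ProfRecordSketch>::value,
                  "clear() should not need to run per-element destructors");

    int main() {
        std::vector<ProfRecordSketch> buffer;
        buffer.reserve(1u << 20);
        for (uint32_t i = 0; i < (1u << 20); ++i) buffer.push_back({i, 0u, i});
        // For trivially destructible elements, clear() only resets the size;
        // no destructor calls are made, so this is effectively constant work.
        buffer.clear();
        return 0;
    }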
Committed on 2022-03-27 15:57:30 +02:00

Verilator Gantt report

Argument settings:
  +verilator+prof+exec+start+1
  +verilator+prof+exec+window+2

Analysis:
  Total threads = 2
  Total mtasks = 5
  Total cpus used = 2
  Total yields = 51
  Total evals = 1
  Total eval loops = 1
  Total eval time = 294309 rdtsc ticks
  Longest mtask time = 137754 rdtsc ticks
  All-thread mtask time = 205237 rdtsc ticks
  Longest-thread efficiency = 46.8%
  All-thread efficiency = 34.9%
  All-thread speedup = 0.7

Prediction (what Verilator used for scheduling):
  All-thread efficiency = 82.4%
  All-thread speedup = 1.6

MTask statistics:
  min log(p2e) = -1.054 from mtask 79 (predict 48001, elapsed 137754)
  max log(p2e) = 3.641 from mtask 87 (predict 33809, elapsed 887)
  mean = 1.656
  stddev = 2.104
  e ^ stddev = 8.200

CPUs:
  cpu 2: cpu_time=202323 Phytium,FT-2500/128
  cpu 3: cpu_time=2914 Phytium,FT-2500/128

Writing profile_exec.vcd
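
As a hedged cross-check of the summary figures above (these relations are
inferred from the numbers in this particular report, not taken from the
tool's documentation):

    137754 / 294309       = 0.468  -> Longest-thread efficiency 46.8%
    205237 / (2 * 294309) = 0.349  -> All-thread efficiency     34.9%
    205237 / 294309       = 0.697  -> All-thread speedup        0.7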