Add RTLMeter badge and note in internals docs (#6095)

Geza Lore 2025-06-16 16:35:31 +01:00 committed by GitHub
parent 832629c602
commit de2818c733
2 changed files with 38 additions and 1 deletion


@@ -1,7 +1,7 @@
.. Github doesn't render images unless absolute URL
.. Do not know of a conditional tag, "only: github" nor "github display" works
|badge1| |badge2| |badge3| |badge4| |badge5| |badge7|
|badge1| |badge2| |badge3| |badge4| |badge5| |badge7| |badge8|
.. |badge1| image:: https://img.shields.io/badge/Website-Verilator.org-181717.svg
   :target: https://verilator.org
@@ -15,6 +15,8 @@
   :target: https://hub.docker.com/r/verilator/verilator
.. |badge7| image:: https://github.com/verilator/verilator/workflows/build/badge.svg
   :target: https://github.com/verilator/verilator/actions?query=workflow%3Abuild
.. |badge8| image:: https://img.shields.io/github/actions/workflow/status/verilator/verilator/rtlmeter.yml?branch=master&event=schedule&label=benchmarks
   :target: https://verilator.github.io/verilator-rtlmeter-results
Welcome to Verilator


@@ -1569,6 +1569,41 @@ environment can check their branches too by enabling the build workflow:
- Click Enable workflow.
Benchmarking
------------
For benchmarking the effects of changes (simulation speed, memory consumption,
verilation time, etc.), you can use `RTLMeter
<https://github.com/verilator/rtlmeter>`__, a benchmark suite designed for this
purpose. The scripts provided with RTLMeter have many capabilities. For full
details, see the `documentation of RTLMeter
<https://verilator.github.io/rtlmeter>`__ itself.
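
To get started, RTLMeter can be obtained by cloning its repository; see the
RTLMeter documentation for the exact prerequisites. As a minimal sketch:

.. code:: shell

   # Clone the RTLMeter repository and enter it; see the RTLMeter
   # documentation for required dependencies
   git clone https://github.com/verilator/rtlmeter.git
   cd rtlmeter
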
For a quick check, you can run the following after putting ``verilator`` on
your ``PATH``:

.. code:: shell

   ./rtlmeter run --cases "+standard" --workRoot work-a
   ./rtlmeter report work-a
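
Here, ``verilator`` must already be resolvable on your ``PATH``. For example,
if your locally built Verilator lives in a hypothetical ``/path/to/verilator-a``
tree, putting it on the ``PATH`` and checking that it is picked up might look
like:

.. code:: shell

   # Hypothetical location of your Verilator build; adjust to your tree
   export PATH=/path/to/verilator-a/bin:$PATH
   # Verify that RTLMeter will pick up the intended binary
   which verilator
   verilator --version
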
To compare against an alternate version, again put that alternate ``verilator``
on your ``PATH`` then run:

.. code:: shell

   ./rtlmeter run --cases "+standard" --workRoot work-b
   ./rtlmeter compare work-a work-b
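
Putting it all together, a complete A/B comparison session might look like the
following sketch, where ``/path/to/verilator-a`` and ``/path/to/verilator-b``
are hypothetical locations of the two builds being compared:

.. code:: shell

   # Benchmark build A (hypothetical location)
   export PATH=/path/to/verilator-a/bin:$PATH
   ./rtlmeter run --cases "+standard" --workRoot work-a

   # Benchmark build B (prepending puts it ahead of build A on the PATH)
   export PATH=/path/to/verilator-b/bin:$PATH
   ./rtlmeter run --cases "+standard" --workRoot work-b

   # Compare the two runs
   ./rtlmeter compare work-a work-b
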
The continuous integration system in GitHub Actions runs this benchmark suite
nightly on the master branch. The performance numbers from these nightly runs
can be viewed via the `RTLMeter results dashboard
<https://verilator.github.io/verilator-rtlmeter-results>`__. Note that these
results are collected on GitHub-hosted runners. These are virtual machines
operating in a potentially noisy environment, so time measurements can have
significant variance. Experience shows that a ~20% time difference can be
measured reliably on GitHub-hosted runners, while smaller differences become
noticeable over a few days of reruns as trends emerge from the noise.
Fuzzing
-------