Krzysztof Bieganski 39af5d020e
Timing support (#3363)
Adds timing support to Verilator. It makes it possible to use delays,
event controls within processes (not just at the start), wait
statements, and forks.

Building a design with those constructs requires a compiler that
supports C++20 coroutines (GCC 10, Clang 5).

The basic idea is to have processes and tasks with delays/event controls
implemented as C++20 coroutines. This allows us to suspend and resume
them at any time.
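
As a rough, self-contained sketch of that mechanism (plain C++20;
illustrative only, not Verilator's actual runtime classes or generated
code), a process with an event control in the middle can be written as
a coroutine that suspends on an awaitable and is later resumed by a
scheduler:

    // Illustrative only: a coroutine-based "process" that suspends at
    // an event control and is resumed externally.
    #include <coroutine>
    #include <cstdio>
    #include <vector>

    struct Proc {  // trivial coroutine return type
        struct promise_type {
            Proc get_return_object() { return {}; }
            std::suspend_never initial_suspend() { return {}; }
            std::suspend_never final_suspend() noexcept { return {}; }
            void return_void() {}
            void unhandled_exception() {}
        };
    };

    struct Trigger {  // coroutines awaiting it are resumed by set()
        std::vector<std::coroutine_handle<>> waiters;
        bool await_ready() const { return false; }
        void await_suspend(std::coroutine_handle<> h) { waiters.push_back(h); }
        void await_resume() const {}
        void set() {
            auto ready = std::move(waiters);
            waiters.clear();
            for (auto h : ready) h.resume();
        }
    };

    Proc process(Trigger& clk) {
        std::printf("before the event control\n");
        co_await clk;  // like '@(posedge clk)' in the middle of a process
        std::printf("after the event control\n");
    }

    int main() {
        Trigger clk;
        process(clk);  // runs up to the event control, then suspends
        std::printf("trigger is set\n");
        clk.set();     // resumes the suspended process
    }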

There are five main runtime classes responsible for managing suspended
coroutines:
* `VlCoroutineHandle`, a wrapper over C++20's `std::coroutine_handle`
  with move semantics and automatic cleanup.
* `VlDelayScheduler`, for coroutines suspended by delays. It resumes
  them at a proper simulation time.
* `VlTriggerScheduler`, for coroutines suspended by event controls. It
  resumes them if its corresponding trigger was set.
* `VlForkSync`, used for syncing `fork..join` and `fork..join_any`
  blocks.
* `VlCoroutine`, the return type of all verilated coroutines. It allows
  for suspending a stack of coroutines (normally, C++ coroutines are
  stackless).
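
To illustrate `VlDelayScheduler`'s role, here is a simplified,
self-contained analogue (plain C++20; illustrative only, the real class
differs): coroutines suspended by delays are queued by wake-up time and
resumed as simulation time advances:

    // Illustrative only: a toy delay scheduler for coroutines.
    #include <coroutine>
    #include <cstdint>
    #include <cstdio>
    #include <queue>
    #include <utility>
    #include <vector>

    struct Proc {  // trivial coroutine return type
        struct promise_type {
            Proc get_return_object() { return {}; }
            std::suspend_never initial_suspend() { return {}; }
            std::suspend_never final_suspend() noexcept { return {}; }
            void return_void() {}
            void unhandled_exception() {}
        };
    };

    class DelayScheduler {
        using Entry = std::pair<uint64_t, std::coroutine_handle<>>;
        struct Later {  // orders the queue so the earliest wake-up is on top
            bool operator()(const Entry& a, const Entry& b) const {
                return a.first > b.first;
            }
        };
        std::priority_queue<Entry, std::vector<Entry>, Later> m_queue;
        uint64_t m_time = 0;
    public:
        uint64_t time() const { return m_time; }
        auto delay(uint64_t ticks) {  // awaitable returned for a '#' delay
            struct Awaitable {
                DelayScheduler& sched;
                uint64_t wake;
                bool await_ready() const { return false; }
                void await_suspend(std::coroutine_handle<> h) {
                    sched.m_queue.push({wake, h});
                }
                void await_resume() const {}
            };
            return Awaitable{*this, m_time + ticks};
        }
        bool advance() {  // jump to the next wake-up time, resume everything due
            if (m_queue.empty()) return false;
            m_time = m_queue.top().first;
            while (!m_queue.empty() && m_queue.top().first == m_time) {
                auto h = m_queue.top().second;
                m_queue.pop();
                h.resume();
            }
            return true;
        }
    };

    Proc producer(DelayScheduler& sched) {
        co_await sched.delay(5);   // '#5'
        std::printf("t=%llu\n", (unsigned long long)sched.time());
        co_await sched.delay(10);  // '#10'
        std::printf("t=%llu\n", (unsigned long long)sched.time());
    }

    int main() {
        DelayScheduler sched;
        producer(sched);            // runs until the first delay
        while (sched.advance()) {}  // resume at each wake-up time
    }

Roughly speaking, the `while (sched.advance())` loop in `main` plays the
role of the resumption statements that, as described below, get injected
into the design eval function.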

There is a new visitor in `V3Timing.cpp` which:
  * scales delays according to the timescale (a small worked example
    follows this list),
  * simplifies intra-assignment timing controls and net delays into
    regular timing controls and assignments,
  * simplifies wait statements into loops with event controls,
  * marks processes and tasks with timing controls in them as
    suspendable,
  * creates delay, trigger scheduler, and fork sync variables,
  * transforms timing controls and fork joins into C++ awaits.
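
To make the timescale scaling concrete, here is a tiny worked example
(illustrative only; it assumes a timescale of 1ns/1ps): a delay given in
time units is converted to time-precision ticks by multiplying by
10^(unit - precision):

    #include <cstdint>
    #include <cstdio>

    int main() {
        const int unitPow10 = -9;        // time unit: 1 ns
        const int precisionPow10 = -12;  // time precision: 1 ps
        uint64_t ticksPerUnit = 1;
        for (int i = 0; i < unitPow10 - precisionPow10; ++i) ticksPerUnit *= 10;
        const uint64_t sourceDelay = 10;  // a '#10' in the source
        std::printf("#%llu -> %llu precision ticks\n",
                    (unsigned long long)sourceDelay,
                    (unsigned long long)(sourceDelay * ticksPerUnit));  // 10000
    }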

There are new functions in `V3SchedTiming.cpp` (used by `V3Sched.cpp`)
that integrate static scheduling with timing. This involves providing
external domains for variables, so that the necessary combinational
logic gets triggered after coroutine resumption, as well as statements
that need to be injected into the design eval function to perform this
resumption at the correct time.

There is also a function that transforms forked processes into separate
functions.
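
As a self-contained sketch of that transformation (plain C++20;
illustrative only, not the real `VlForkSync` or generated code), each
forked branch becomes its own coroutine function, and a shared counter
resumes the parent once the last branch finishes:

    #include <coroutine>
    #include <cstdio>
    #include <vector>

    struct Proc {  // trivial coroutine return type
        struct promise_type {
            Proc get_return_object() { return {}; }
            std::suspend_never initial_suspend() { return {}; }
            std::suspend_never final_suspend() noexcept { return {}; }
            void return_void() {}
            void unhandled_exception() {}
        };
    };

    struct Event {  // resumes all waiters when fired
        std::vector<std::coroutine_handle<>> waiters;
        bool await_ready() const { return false; }
        void await_suspend(std::coroutine_handle<> h) { waiters.push_back(h); }
        void await_resume() const {}
        void fire() {
            auto ready = std::move(waiters);
            waiters.clear();
            for (auto h : ready) h.resume();
        }
    };

    struct ForkSync {  // counts outstanding branches; resumes parent at zero
        int pending = 0;
        std::coroutine_handle<> parent{};
        bool await_ready() const { return pending == 0; }
        void await_suspend(std::coroutine_handle<> h) { parent = h; }
        void await_resume() const {}
        void done() {
            if (--pending == 0 && parent) {
                auto p = parent;
                parent = {};
                p.resume();
            }
        }
    };

    Proc branch(ForkSync& sync, Event& ev, int id) {
        co_await ev;  // the branch body suspends on some event
        std::printf("branch %d finished\n", id);
        sync.done();
    }

    Proc parentProcess(ForkSync& sync, Event& ev) {
        sync.pending = 2;
        branch(sync, ev, 1);  // 'fork': branches are separate functions
        branch(sync, ev, 2);
        co_await sync;        // 'join': wait until both branches are done
        std::printf("after join\n");
    }

    int main() {
        ForkSync sync;
        Event ev;
        parentProcess(sync, ev);  // runs until the join, then suspends
        ev.fire();                // branches finish; last one resumes the parent
    }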

See the comments in `verilated_timing.h`, `verilated_timing.cpp`,
`V3Timing.cpp`, and `V3SchedTiming.cpp`, as well as the internals
documentation for more details.

Signed-off-by: Krzysztof Bieganski <kbieganski@antmicro.com>
2022-08-22 13:26:32 +01:00

Verilator Executable Docker Container
=====================================

The Verilator Executable Docker Container allows you to run Verilator
easily from a Docker image, e.g.:

::

   docker run -ti verilator/verilator:latest --version

This will download the image if necessary, run the latest Verilator, and
print Verilator's version.

Containers are automatically built for all released versions, so you may
easily compare results across versions, e.g.:

::

   docker run -ti verilator/verilator:4.030 --version

Verilator needs to read and write files on the local system. To simplify
this process, use the ``verilator-docker`` convenience script. This script
takes the version number as its first argument; all remaining arguments are
passed through to Verilator, e.g.:

::

   ./verilator-docker 4.030 --version

or

::

   ./verilator-docker 4.030 --cc test.v

If you prefer not to use ``verilator-docker``, you must give the container
access to your files by mounting a volume with appropriate user rights.
For example, to Verilate ``test.v``:

::

   docker run -ti -v ${PWD}:/work --user $(id -u):$(id -g) verilator/verilator:latest --cc test.v

This method can only access files below the current directory. An
alternative is to adjust the volume mount (``-v``) and the ``--workdir``
option accordingly.

You can also work inside the container by overriding the entrypoint (don't
forget to mount a volume if you want your work to persist):

::

   docker run -ti --entrypoint /bin/bash verilator/verilator:latest

You can also build the image yourself, with Verilator at a specific commit:

::

   docker build --build-arg SOURCE_COMMIT=<commit> .


Internals
---------

The Dockerfile builds Verilator and then removes the source tree to reduce
the image size. The entrypoint is set to a wrapper script
(``verilator-wrap.sh``). That script first calls Verilator, then copies the
Verilated runtime files into the ``obj_dir`` (or the directory given with
``-Mdir``), so the user can later build the C++ output with the matching
runtime files. The wrapper also patches the Verilated Makefile accordingly.

There is also a hook defined that Docker Hub runs via automated builds.