* Fix memory leak of t_trace_cat and t_trace_cat_renew
* Fix memory leak of t_trace_c_api
* Fix memory leak in t_trace_public_func and t_trace_public_func_vlt
* Fix memory leaks in t_flat_build (and probably more).
* Use unique_ptr in testcases
* The mechanism that guarantees initialization happens just once now lives in VerilatedInitializer. No functional change is intended.
* Make sure to initialize Verilated::NonInitialized just once. Fixes
memory leak in t_prot_lib_shared and t_hier_block_prot_lib_shared.
* Call setup() and teardown() of Verilated::NonSerialized.
* Test that indexing into memory word fails
* VPI: don't index into memory word
Memory and memory words share a VerilatedVar, so check for memory word before attempting to index.
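From the user-visible side, the distinction looks like the sketch below (illustrative code, not Verilator's internal implementation): indexing is only attempted on a handle whose VPI type is a memory, never on one of its words.

    #include "vpi_user.h"

    // Only index into a handle that is actually the memory (array) itself;
    // a vpiMemoryWord may share the same underlying variable description,
    // but it is not indexable.
    static vpiHandle word_of(vpiHandle h, PLI_INT32 index) {
        if (vpi_get(vpiType, h) != vpiMemory) return NULL;  // e.g. a vpiMemoryWord
        return vpi_handle_by_index(h, index);
    }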
* Test that range iterator exhausts after 1 dimension
* Separate range iterator from range object
Range iterators should return NULL after all ranges have been returned
Previously, in any given VPI callback, if the callback body re-registered
the same callback, that registration would be processed in the currently
executing call to the call*Cbs function: the end iterator always points
to an element past the end of the list, so elements added to the back of
the list are inserted before it. In the worst case, this could lead to
an infinite loop.
Instead, track the last element in the list at the start of processing
and stop after it has been processed.
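A simplified sketch of the approach (illustrative, not the actual VerilatedVpi code):

    #include <functional>
    #include <iterator>
    #include <list>

    using VoidCb = std::function<void()>;

    // Remember the last element present when processing starts, so a callback
    // that re-registers itself is appended after that point and only runs on
    // the next invocation, instead of extending the current loop forever.
    void callCbs(std::list<VoidCb>& cbs) {
        if (cbs.empty()) return;
        const auto last = std::prev(cbs.end());  // last element at start of processing
        for (auto it = cbs.begin();; ++it) {
            const bool wasLast = (it == last);
            (*it)();             // body may push_back() a re-registration of itself
            if (wasLast) break;  // stop after the original last element
        }
    }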
* Add a test to use string for $fgets
* Use a dedicated function for $fgets to std::string (sketched below)
* Share the implementation of $fgets
* Pass -1 for bitwidth of std::string to distinguish from POD
* Add checks for scanf with string
* Apply clang-format
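A minimal sketch of the dedicated $fgets-to-std::string helper mentioned above; the name and signature here are illustrative, not Verilator's actual runtime entry point:

    #include <cstdio>
    #include <string>

    // Emulate $fgets into a std::string target: read up to and including the
    // next newline, and return the number of characters read (0 on EOF/error),
    // matching $fgets semantics.
    static size_t fgetsToString(std::string& dest, FILE* fp) {
        dest.clear();
        int c;
        while ((c = std::fgetc(fp)) != EOF) {
            dest.push_back(static_cast<char>(c));
            if (c == '\n') break;  // stop after a newline, like fgets()
        }
        return dest.size();
    }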
This is to allow a C++ Verilator toplevel to support multiple modes of
waveform tracing. VM_TRACE_FST can be used inside a #if VM_TRACE
section to switch between classic .vcd tracing and the more compact
.fst format supported by GTKWave.
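A minimal sketch of such a toplevel, assuming a generated module named Vtop (the module name and dump file names are illustrative):

    #include "Vtop.h"
    #include "verilated.h"
    #if VM_TRACE
    # if VM_TRACE_FST
    #  include "verilated_fst_c.h"
    # else
    #  include "verilated_vcd_c.h"
    # endif
    #endif

    int main(int argc, char** argv) {
        Verilated::commandArgs(argc, argv);
        Vtop* top = new Vtop;
    #if VM_TRACE
        Verilated::traceEverOn(true);
    # if VM_TRACE_FST
        VerilatedFstC* tfp = new VerilatedFstC;
        const char* dumpname = "dump.fst";
    # else
        VerilatedVcdC* tfp = new VerilatedVcdC;
        const char* dumpname = "dump.vcd";
    # endif
        top->trace(tfp, 99);  // trace 99 levels of hierarchy
        tfp->open(dumpname);
    #endif
        // ... run the simulation, calling top->eval() and tfp->dump(time) ...
    #if VM_TRACE
        tfp->close();
        delete tfp;
    #endif
        delete top;
        return 0;
    }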
* Add a test to use shared object of protect-lib
* Add a guard to call the ctor/dtor just once even when a protect-lib is a shared object.
* Pass .a to linker in leaf-last order for older ld.
* Add -flat_namespace for mac
* Don't use a thin archive to merge multiple archives, because it is GNU ar specific.
* Remove the -time option that may cause a -Wunused-command-line-argument warning
This allows compiling the run-time library with optimization even when OPT_FAST is not used, in order to improve model build speed, possibly during debug cycles.
Instead of __ALLfast.cpp and __ALLslow.cpp, we now create only a single
__ALL.cpp and compile it with OPT_FAST. This speeds up small builds
where the C compiler does not dominate. A separate patch will follow
turning VM_PARALLEL_BUILDS on by default at a certain size.
Given this change to the build, there is now no point in emitting both
fast and slow routines into the same .cpp file when --output-split is
not set, as they will just be included in the same __ALL.cpp file. To
keep things simpler and the output easier to comprehend, V3EmitC has
also been changed to always emit the fast and slow files separately.
Also change verilated.mk to apply OPT_SLOW to all slow files, not just
ones called *__Slow.cpp. This change in particular ensures __Syms.cpp
is built as slow.
Part of #2360.
The main goal of this patch is to enable splitting the full and
incremental tracing functions into multiple functions, which can then be
run in parallel at a later stage. It also simplifies further
experimentation as all of the interesting trace code construction now
happens in V3Trace. No functional change is intended by this patch, but
there are some implementation changes in the generated code.
Highlights:
- Pass symbol table directly to trace callbacks for simplicity.
- A new traceRegister function is generated which adds each trace
function as an individual callback, which means we can have multiple
callbacks for each trace function type.
- A new traceCleanup function is generated which clears the activity
flags, as the trace callbacks might be implemented as multiple functions.
- Re-worked sub-function handling so there is no separate sub-function
for each trace activity class. Sub-functions are now generated only when
required by splitting.
- traceFull/traceChg are now created in V3Trace rather than V3TraceDecl.
This requires carrying the trace value tree in TraceDecl until it
reaches V3Trace, where the TraceInc nodes are created (previously a
TraceInc, which carries the value, was also created in V3TraceDecl).
- Allow an arbitrary number of open array dimensions, not just 3. Note
that right now this only works with the array querying functions
specified in IEEE 1800-2017 H.12.2 (see the sketch after this list).
- Issue error when passing dynamic array or queue as DPI open array
(currently unsupported)
- Also tweaked AstVar::vlArgTypeRecurse, which should now error or fail
for unsupported types.
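A small sketch of a DPI-C function that works with an open array of any number of unpacked dimensions using only the IEEE 1800-2017 H.12.2 querying functions (the function name and the assumption of int elements are illustrative):

    #include <cstdio>
    #include "svdpi.h"

    // SV side (illustrative):
    //   import "DPI-C" function void print_open_array_shape(input int arr[]);
    extern "C" void print_open_array_shape(const svOpenArrayHandle h) {
        const int dims = svDimensions(h);  // number of unpacked dimensions
        std::printf("open array with %d unpacked dimension(s):", dims);
        for (int d = 1; d <= dims; ++d)  // dimension 0 is the packed part
            std::printf(" [%d:%d]", svLeft(h, d), svRight(h, d));
        std::printf(", %d bytes total\n", svSizeOfArray(h));
        // Element access goes through svGetArrElemPtr*, e.g. for 1 dimension:
        if (dims == 1) {
            const int* firstp = static_cast<const int*>(svGetArrElemPtr1(h, svLow(h, 1)));
            if (firstp) std::printf("first element = %d\n", *firstp);
        }
    }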
Use SIMD intrinsics to render VCD traces.
I have measured 10-40% single threaded performance increase with VCD
tracing on SweRV EH1 and lowRISC Ibex using SSE2 intrinsics to render
the trace. Also helps a tiny bit with FST, but now almost all of the FST
overhead is in the FST library.
I have reworked the tracing routines to use more precisely sized
arguments. The nice thing about this is that the performance without the
intrinsics is pretty much the same as it was before, as we do at most 2x
as much work as necessary, but in exchange there are no data dependent
branches at all.
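As an illustration of the general technique (a sketch under SSE2, not the actual Verilator routine), value bits can be expanded to ASCII '0'/'1' characters without any data-dependent branches:

    #include <cstdint>
    #include <emmintrin.h>  // SSE2

    // Render the low 16 bits of 'val' as ASCII '0'/'1' characters, MSB first,
    // into out[0..15] using one compare, one subtract, and one store.
    static inline void renderBits16Sse2(char* out, uint32_t val) {
        const char lo = static_cast<char>(val & 0xff);
        const char hi = static_cast<char>((val >> 8) & 0xff);
        // Lanes 0..7 test the high byte, lanes 8..15 the low byte, so the
        // stored string reads MSB first as VCD expects.
        const __m128i bytes = _mm_set_epi8(lo, lo, lo, lo, lo, lo, lo, lo,
                                           hi, hi, hi, hi, hi, hi, hi, hi);
        const char m80 = static_cast<char>(0x80);
        const __m128i mask = _mm_set_epi8(0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, m80,
                                          0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, m80);
        // 0xFF where the bit is set, 0x00 where it is clear; no branches.
        const __m128i isSet = _mm_cmpeq_epi8(_mm_and_si128(bytes, mask), mask);
        // '0' - (-1) == '1' and '0' - 0 == '0'.
        const __m128i ascii = _mm_sub_epi8(_mm_set1_epi8('0'), isSet);
        _mm_storeu_si128(reinterpret_cast<__m128i*>(out), ascii);
    }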
Convert trace buffer to 32-bit entries, rather than a union containing a
pointer type. Also tweaked trace entry layouts for a bit more
performance. This gains another 10% on SweRV EH1 CoreMark.
- Change templated trace routines to a branch table.
Removed templating from trace chgBus and fullBus and replaced them with
a branch table like the others. There is a very small (< 1%) penalty for
this on SweRV EH1 CoreMark, but it is less than the variability of disk
IO, so it's worth it to keep the code simpler and smaller.
- Prefetch VCD suffix buffer at the top of emit*
- Increase ILP in VCD emit* routines
- Use a 64-bit unaligned store to emit the VCD suffix (on x86 only)
The performance difference with these is very small, but the changes
hopefully make this code more performance-portable across various
micro-architectures.
VerilatedVcdC::openNext() failed to flush the tracing thread before
opening the next output file, which caused t_trace_cat.pl to fail
with --vltmt on occasion.
The --trace-threads option can now be used to perform tracing on a
thread separate from the main thread when using VCD tracing (with
--trace-threads 1). For FST tracing, --trace-threads can be 1 or 2, and
--trace-fst --trace-threads 1 is the same as what --trace-fst-threads
used to be (which is now deprecated).
Performance numbers on SweRV EH1 CoreMark, clang 6.0.0, Intel i7-3770 @
3.40GHz, IO to ramdisk, with numactl set to schedule threads on different
physical cores. Relative speedup:
--trace -> --trace --trace-threads 1 +22%
--trace-fst -> --trace-fst --trace-threads 1 +38% (as --trace-fst-thread)
--trace-fst -> --trace-fst --trace-threads 2 +93%
Speed relative to --trace with no threaded tracing:
--trace 1.00 x
--trace --trace-threads 1 0.82 x
--trace-fst 1.79 x
--trace-fst --trace-threads 1 1.23 x
--trace-fst --trace-threads 2 0.87 x
This means FST tracing with 2 extra threads is now faster than single
threaded VCD tracing, and is on par with threaded VCD tracing. You do
pay for it in total compute though, as --trace-fst --trace-threads 2 uses
about 240% CPU vs 150% for --trace-fst --trace-threads 1, and 155% for
--trace --trace-threads 1. Still, for interactive use it should be
helpful with large designs.
This patch de-duplicates common functionality between the VCD and FST
trace implementation. It also enables adding new trace formats more
easily and consistently.
No functional or performance change is intended.
The FST trace timescale used to be set in the constructor via
set_time_unit, but at that point the file has normally not been opened
yet, so the setting was simply dropped. On top of that, we actually want
to use set_time_resolution. FST trace timescales now match the VCD trace.
If the first dump was not at time zero, then the FST trace used
to contain the initial values as if they were set at time zero. Now
they only appear at the time the first dump call is actually made,
and hence match the VCD trace exactly.
Includes `timescale, $printtimescale, $timeformat.
VL_TIME_MULTIPLIER, VL_TIME_PRECISION, VL_TIME_UNIT have been removed
and the time precision must now match the SystemC time precision.
To get behavior closer to that of older versions, use e.g.
--timescale-override "1ps/1ps".
* Improve tracing performance.
Various tactics used to improve performance of both VCD and FST tracing:
- Both: Change tracing functions to templates to take variable widths as
template parameters. For VCD, subsequently specialize these to the
values used by Verilator. This avoids redundant instructions and hard
to predict branches.
- Both: Check for value changes via direct pointer access into the
previous signal value buffer. This eliminates a lot of simple pointer
arithmetic instructions from the tracing code.
- Both: Verilator provides clean input, no need to mask out used bits.
- VCD: Pre-compute identifier codes and use a memory copy instead of
re-computing them every time a code is emitted (see the sketch after
this list). This saves a lot of instructions and hard to predict
branches. The added D-cache misses are cheaper than the removed
branches/instructions.
- VCD: Re-write the routines emitting the changes to be more efficient.
- FST: Use previous signal value buffer the same way as the VCD tracing
code, and only call the FST API when a change is detected.
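The pre-computed identifier idea from the VCD item above can be sketched as follows (names and layout are illustrative, not the actual Verilator classes):

    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <string>
    #include <vector>

    // Pre-render each trace code's printable VCD identifier plus the trailing
    // newline into a fixed-width slot, so emitting a scalar change becomes one
    // byte store plus a fixed-size memcpy with no per-character conversion.
    class VcdIdTable {
    public:
        static constexpr std::size_t SLOT = 8;  // bytes reserved per code

        explicit VcdIdTable(uint32_t numCodes)
            : m_slots(numCodes * SLOT, '\0'), m_lens(numCodes, 0) {
            for (uint32_t code = 0; code < numCodes; ++code) {
                std::string id;  // VCD identifiers: base-94 over printable ASCII
                uint32_t v = code;
                do {
                    id.push_back(static_cast<char>('!' + v % 94));
                    v /= 94;
                } while (v);
                id.push_back('\n');
                std::memcpy(&m_slots[code * SLOT], id.data(), id.size());
                m_lens[code] = static_cast<uint8_t>(id.size());
            }
        }

        // Append e.g. "1<id>\n" for a scalar change of 'code'; returns new end.
        // The caller's buffer must have at least SLOT bytes of slack, so the
        // fixed-size copy of padding bytes is harmless and branch-free.
        char* emitScalar(char* buf, uint32_t code, bool value) const {
            *buf++ = value ? '1' : '0';
            std::memcpy(buf, &m_slots[code * SLOT], SLOT);
            return buf + m_lens[code];
        }

    private:
        std::vector<char> m_slots;
        std::vector<uint8_t> m_lens;
    };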
Performance as measured on SweRV EH1, with the pre-canned CoreMark
benchmark running from DCCM/ICCM, clang 6.0.0, Intel i7-3770 @ 3.40GHz,
and IO to ramdisk:
            +--------------+---------------+----------------------+
            |     VCD      |      FST      | FST separate thread  |
            |  (--trace)   | (--trace-fst) | (--trace-fst-thread) |
------------+--------------+---------------+----------------------+
Before      |    30.2 s    |    121.1 s    |        69.8 s        |
------------+--------------+---------------+----------------------+
After       |    24.7 s    |     45.7 s    |        32.4 s        |
------------+--------------+---------------+----------------------+
Speedup     |     22 %     |     256 %     |        215 %         |
------------+--------------+---------------+----------------------+
Rel. to VCD |     1 x      |    1.85 x     |        1.31 x        |
------------+--------------+---------------+----------------------+
In addition, the FST trace size for the above was reduced by 48%.
When using the __ALL*.cpp based single compile mode (i.e.: without
VM_PARALLEL_BUILDS), the fast path tracing code used to be included in
__ALLsup.cpp, which was compiled with OPT_SLOW, severely harming tracing
performance. We now have __ALLfast.cpp and __ALLslow.cpp instead of
__ALLcls.cpp and __ALLsup.cpp, so we can compile the fast support code
with OPT_FAST as well.
This patch eliminates a major piece of inefficiency in the FST tracing
support by using an array to look up the fstHandle values corresponding
to trace codes, instead of a tree-based std::map. With this change, FST
tracing is now only about 3x slower than VCD tracing. We do require
more memory to store the symbol lookup table, but it remains small
relative to the speed benefit.
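The shape of the change can be sketched as below (simplified and illustrative; the member names are not Verilator's, and fstHandle is defined locally only to keep the sketch self-contained):

    #include <cstdint>
    #include <vector>

    using fstHandle = uint32_t;  // matches the handle type used by the FST API

    // Trace codes are small, dense integers, so a flat vector indexed by code
    // replaces the tree-based std::map and turns every lookup into one load.
    struct FstHandleLookup {
        // Before: std::map<uint32_t, fstHandle>, O(log n) pointer chasing per
        // emitted change. After: one indexed load per change, O(1).
        std::vector<fstHandle> m_byCode;

        void declare(uint32_t code, fstHandle h) {
            if (code >= m_byCode.size()) m_byCode.resize(code + 1, 0);
            m_byCode[code] = h;
        }
        fstHandle lookup(uint32_t code) const { return m_byCode[code]; }
    };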