Testing
How Good is my Counter?
Testing Frameworks
While we'll occasionally provide embedded checkers for chunks of Verilog you create on this site, this will not always be the case, and it definitely will not be the case when doing the final project or when doing life. A huge portion of a digital design engineer's time is spent writing software to simulate and verify designs. In fact, modern designs are so complex and so unbelievably costly that there are entire branches of companies that exist solely to "verify" designs, and chances are, if you go into digital design as a field, a company will likely start you out in one of these roles, since it is a good way to bring people into a company's way of doing things.
Writing testbenches for designs is a perpetual challenge. Nobody wants to do it when starting out, since we always feel like our design is almost working, to the point where we can just stare at it for a bit longer, and any departure from staring is a distraction. But in truth, most of the time debugging would be much quicker if people just "bit the bullet" and wrote a testbench for ten minutes to find the bug. Believe me. Every year in 6.205, students will spend six hours debugging their designs in hardware or staring at the Verilog to "save" themselves fifteen minutes of writing and running a simulation. It is wild. It is like a death wish or something. I get it, but I don't.
When we start writing more complicated designs, especially those that rely on state and sequential logic, understanding what’ll happen on hardware before we spend time running our synthesis scripts will be vital! For that reason, we use testing frameworks. There are many testing frameworks out in the world, but this semester we’ll be using cocotb, a Python library that runs hardware simulations driven by tests you write in Python. Let’s write our first one!
Cocotb
Cocotb stands for COroutine-based COsimulation TestBench environment. Historically we've done all our testbenches and simulations in SystemVerilog in 6.205, but we're going to try an experiment this year (2024) by using cocotb instead. The reasons are several:
- It is written in and used with Python, a language you've all gotten experience with.
- It is a more modern testing framework, and since it is in Python, it allows us to compare our designs to models created using the vast array of Python libraries out there.
- We're hoping that by simulating with Python and by designing with SystemVerilog we'll avoid some of the historical confusion that arises between synthesizable and non-synthesizable SystemVerilog.
- It is honestly just more enjoyable to use once you get the hang of it and takes care of a lot of boilerplate stuff for us.
Getting Started
As a first step, make sure to install cocotb on your system. We can use the most up-to-date stable version, which as of late August 2024 is 1.9.0. We should really only be using pretty widely-supported libraries in 6.205, so install cocotb with either `pip install cocotb` or `conda install cocotb`.
In your project folder for this week, make a new folder called `sim` if you haven't already. Inside of there, make a file called `test_counter.py`. In that file we're going to put the following:
```python
import cocotb
import os
import random
import sys
import logging
from pathlib import Path
from cocotb.triggers import Timer
from cocotb.utils import get_sim_time as gst
from cocotb.runner import get_runner


@cocotb.test()
async def first_test(dut):
    """First cocotb test?"""
    # write your test here!
    # throughout your test, use "assert" statements to test for correct behavior
    # replace the assertion below with useful statements
    assert False


"""the code below should largely remain unchanged in structure, though the
specific files and things specified should get updated for different simulations.
"""


def counter_runner():
    """Simulate the counter using the Python runner."""
    hdl_toplevel_lang = os.getenv("HDL_TOPLEVEL_LANG", "verilog")
    sim = os.getenv("SIM", "icarus")
    proj_path = Path(__file__).resolve().parent.parent
    sys.path.append(str(proj_path / "sim" / "model"))
    sources = [proj_path / "hdl" / "counter.sv"]  # grow/modify this as needed.
    build_test_args = ["-Wall"]  # ,"COCOTB_RESOLVE_X=ZEROS"]
    parameters = {}
    sys.path.append(str(proj_path / "sim"))
    runner = get_runner(sim)
    runner.build(
        sources=sources,
        hdl_toplevel="counter",
        always=True,
        build_args=build_test_args,
        parameters=parameters,
        timescale=('1ns', '1ps'),
        waves=True,
    )
    run_test_args = []
    runner.test(
        hdl_toplevel="counter",
        test_module="test_counter",
        test_args=run_test_args,
        waves=True,
    )


if __name__ == "__main__":
    counter_runner()
```
In your `hdl` folder, make sure to make a new file called `counter.sv` and put in your successfully functioning counter from the previous page.
If you then run this file from within `sim` (`python test_counter.py`), a bunch of text will fly by, like this. It might be overwhelming, but you should see something like "FAIL" in it.
```
0.00ns INFO cocotb.regression Found test test_counter.first_test
0.00ns INFO cocotb.regression running first_test (1/1)
                                First cocotb test?
0.00ns INFO cocotb.regression first_test failed
                              Traceback (most recent call last):
                                File "/Users/jodalyst/cocotb_development/pwm_1/sim/test_counter.py", line 19, in first_test
                                  assert False
                              AssertionError
0.00ns INFO cocotb.regression **************************************************************************************
                              ** TEST                      STATUS  SIM TIME (ns)  REAL TIME (s)  RATIO (ns/s) **
                              **************************************************************************************
                              ** test_counter.first_test     FAIL           0.00           0.00          5.84 **
                              **************************************************************************************
                              ** TESTS=1 PASS=0 FAIL=1 SKIP=0               0.00           0.01          0.09 **
                              **************************************************************************************
```
As it stands, this testbench isn't doing anything. The code that actually tests our "Device Under Test" (the `dut`), the `first_test` coroutine, doesn't do anything yet. Instead, at the end it just asserts `False` (which is equivalent to saying things have failed).
What we need to do is set inputs and evaluate the outputs of our module (the counter). And the first step in doing that is to put values on the connections.
Setting wire values
To set a value, we can access it as shown below. Cocotb builds up an access object for the DUT which allows easy specification of its signals. The `counter` that you wrote has three inputs:

- `clk_in`
- `rst_in`
- `period_in`

```python
dut.rst_in.value = 1  # set the value of input wire rst_in to be high (1)
```
Any named port of our module can be accessed using the syntax above, and its value can be set to be high or low. For multi-bit ports, the value can instead be set to any integer value. For example, the following would be a reasonable set of inputs to "start up" the counter for use in a 4-bit PWM module.
```python
dut.clk_in.value = 0
dut.rst_in.value = 1
dut.period_in.value = 15
```
Reading values
Reading values can be done in much the same way, just in reverse:
```python
read_value = dut.count_out.value
print(read_value)
```
Any output logic, input wire, or internal register can be read during the simulation—you can use assertions on these values to prove that your design is functioning as intended.
You can also check the outputs of your module using `assert` statements. Perhaps at some point in your test you want to make sure the output of the counter is 13. Well, you can do the following:
```python
assert dut.count_out.value == 13  # only allow the test to pass if the read value is what you want it to be (13 in this example)
```
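Because the testbench is just Python, you don't have to hard-code expected values like 13; you can compute them from a small "golden model" of the design and assert against that. Here is a minimal sketch of such a model for a wrapping counter (the function and its wrap-at-`period_in` behavior are illustrative assumptions on our part; match them to your actual counter's spec):

```python
def counter_model(period, n_cycles, start=0):
    """Pure-Python reference model of a counter that counts 0..period-1 and wraps.

    Returns the list of expected count values after each of n_cycles clock edges.
    NOTE: this is a sketch under assumed behavior; adjust the wrap condition to
    match your counter's actual specification.
    """
    expected = []
    count = start
    for _ in range(n_cycles):
        count = 0 if count == period - 1 else count + 1
        expected.append(count)
    return expected


# In a cocotb test, you might compare dut.count_out.value against these
# expected values one clock cycle at a time.
print(counter_model(4, 10))  # [1, 2, 3, 0, 1, 2, 3, 0, 1, 2]
```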
Passage of Time
All simulations, even non-sequential ones, need some simulation time to pass in order to actually simulate. To do this, you can use a `Timer` construct to allow some time to pass. For example, if you wanted to turn on the reset signal, wait 10 nanoseconds, and then turn it off, you could do something like:
```python
dut.rst_in.value = 1
await Timer(10, units="ns")
dut.rst_in.value = 0
```
It’s important to remember that the `Timer` has nothing to do with how long our Python script runs for! Our simulation will run slower than real-time; the 10 nanosecond value represents the passage of simulation time!
Viewing Signals Over Time
Once your simulation runs over time, simply viewing printouts of values is not super productive. Instead, what we usually like to do is generate waveforms. Cocotb has already been configured to do this for us in the background. When it runs, a folder called `sim_build` gets generated inside your `sim` folder. In there will be a file with a `.fst` extension. That is a form of wave file that can be viewed with either GTKWave or Surfer (and possibly an online one, though I'm not sure if we'll get that working or not yet).
Open up that waveform viewer, load the generated FST, and you can see the signals.
If you then have your test coroutine specified as such:
```python
@cocotb.test()
async def first_test(dut):
    """First cocotb test?"""
    dut.rst_in.value = 1
    dut.clk_in.value = 0
    dut.period_in.value = 15
    await Timer(10, units="ns")
    dut.rst_in.value = 0
    assert False
```
And then run it, you'll first see that your simulation now ends not at 0 ns but rather at 10 ns:
```
10.00ns INFO cocotb.regression first_test failed
                               Traceback (most recent call last):
                                 File "/Users/jodalyst/cocotb_development/pwm_1/sim/test_counter.py", line 19, in first_test
                                   assert False
                               AssertionError
10.00ns INFO cocotb.regression **************************************************************************************
                               ** TEST                      STATUS  SIM TIME (ns)  REAL TIME (s)  RATIO (ns/s) **
                               **************************************************************************************
                               ** test_counter.first_test     FAIL          10.00           0.00      26667.03 **
                               **************************************************************************************
                               ** TESTS=1 PASS=0 FAIL=1 SKIP=0              10.00           0.01        826.04 **
                               **************************************************************************************
```
A waveform file will have been generated. If you view it with the waveform viewer you'll see something like the following:
It is pretty useless. What we're missing is our clock signal that actually runs our system.
Driving a Clock + Helper Functions
Nearly every design we test will require a clock input, and to simulate our design with a clock, we’ll need a little helper function, a lot like the one below:
```python
async def generate_clock(clock_wire):
    while True:  # repeat forever
        clock_wire.value = 0
        await Timer(5, units="ns")
        clock_wire.value = 1
        await Timer(5, units="ns")
```
Notice the term `async` in the function’s definition; this is short for asynchronous. Don’t worry too much about what it’s doing within Python, but know that, unlike normal lines of code in Python, it can run “at the same time” as other code, rather than entirely in order. This is very helpful for this function, where we want the clock value constantly toggling back and forth while other values are changing! To start this function running in the background of our normal test, call it within your function with `await cocotb.start()`.
```python
# inside your test function: starts generate_clock running in the background
await cocotb.start(generate_clock(dut.clk_in))
```
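If you want intuition for what `async`/`await` buys us, cocotb's coroutines behave much like Python's standard `asyncio` coroutines. Here is a tiny, cocotb-free sketch (the `ticker` task and its names are our own invention, purely for illustration) showing a background task interleaving with a main task, loosely analogous to how `generate_clock` runs alongside your test:

```python
import asyncio


async def ticker(name, delay, n):
    # Background task: "ticks" n times, yielding control at each await.
    events = []
    for i in range(n):
        await asyncio.sleep(delay)
        events.append(f"{name} tick {i}")
    return events


async def main():
    # Start the ticker "in the background" (loosely analogous to cocotb.start),
    # then do other work while it runs.
    background = asyncio.create_task(ticker("clock", 0.001, 3))
    await asyncio.sleep(0.0015)  # the main task does its own waiting meanwhile
    ticks = await background  # wait for the background task to finish
    return ticks


print(asyncio.run(main()))  # ['clock tick 0', 'clock tick 1', 'clock tick 2']
```

The key idea carries over directly: an `await` is a point where one coroutine pauses and lets others (like the clock generator) make progress.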
Putting it all together, our test file's coroutines now look like this:

```python
async def generate_clock(clock_wire):
    while True:  # repeat forever
        clock_wire.value = 0
        await Timer(5, units="ns")
        clock_wire.value = 1
        await Timer(5, units="ns")


@cocotb.test()
async def first_test(dut):
    """First cocotb test?"""
    await cocotb.start(generate_clock(dut.clk_in))  # launches clock
    dut.rst_in.value = 1
    dut.period_in.value = 3
    await Timer(5, "ns")
    await Timer(5, "ns")
    dut.rst_in.value = 0  # rst is off...let it run
    count = dut.count_out.value
    dut._log.info(f"Checking count_out @ {gst('ns')} ns: count_out: {count}")
    await Timer(5, "ns")
    await Timer(5, "ns")
    count = dut.count_out.value
    dut._log.info(f"Checking count_out @ {gst('ns')} ns: count_out: {count}")
    await Timer(5, "ns")
    await Timer(5, "ns")
    count = dut.count_out.value
    dut._log.info(f"Checking count_out @ {gst('ns')} ns: count_out: {count}")
```
If you now run this, you'll see something like the following.
Let's let the simulation run for a bit longer. Add the following:
```python
# add to the end of first_test
await Timer(100, "ns")
dut.period_in.value = 15
await Timer(1000, "ns")
```
Baby’s First Hardware Testbench
Using all these components, let’s write a cocotb test for the module we saw above! Ensure that your test:

- First starts a background clock generator to drive the `clk_in` signal
- Begins by setting the `rst_in` signal high for at least one clock cycle
- Ensures that after a reset signal, the `count_out` value is set to zero
- Sets the `period_in` value to some "low" value that you can let run and overflow several times
- Sets the `period_in` to a higher value
- Makes sure in the wave file that when `rst_in` is set to 1 mid-count and held for at least one clock cycle, the system goes to 0 on its `count_out` and stays at 0 until `rst_in` is deasserted back to 0, at which point `count_out` starts to grow as expected again.
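For the reset-behavior check in particular, since the expected behavior is easy to state in Python, one approach is to encode it as a golden model and check `dut.count_out` against it edge by edge in your cocotb test. Here is a sketch of such a model (the function, and its assumption that reset takes priority and the count wraps at `period_in - 1`, are ours, not cocotb's; match them to your counter's spec):

```python
def counter_with_reset(inputs):
    """Illustrative golden model: step the counter once per rising clock edge.

    `inputs` is a list of (rst_in, period_in) pairs, one per rising edge;
    returns the expected count_out after each edge. Assumes the counter
    counts 0..period_in-1 with reset taking priority -- adjust to match
    your actual counter specification.
    """
    count = 0
    expected = []
    for rst, period in inputs:
        if rst:
            count = 0  # assumed: synchronous reset wins over counting
        elif count >= period - 1:
            count = 0  # assumed: wrap back to zero at period - 1
        else:
            count += 1
        expected.append(count)
    return expected
```

Your test can then drive `rst_in`/`period_in` cycle by cycle and assert `dut.count_out.value == expected[i]` after each edge, rather than eyeballing the waveform alone.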
Note: for this first testbench, we have a correct SystemVerilog design in front of us, so it’s easy to reason through exactly what our testbench should check for. But it's important to build testbenches that match our specified behavior rather than our Verilog code, so that we can catch when we've misunderstood what exactly the hardware should output.
Once you're done, move on to the next part of this week's assignments :D