I'm looking for a clean way to declare Verilog/SystemVerilog types with a parameterised bit width. This is what I've got so far, and I was wondering whether there is a better way to do it. I've looked through the system functions in the IEEE 1800-2009 and 1800-2017 LRMs; the closest I could find is $bits, but I would like something like $minbits. Have I overlooked something?
In VHDL, it's done by simply specifying the range:
signal counter: integer range 0 to MAX_COUNT;
...and the compiler will calculate the minimum bit width to hold that range.
For the parameter values of 20 ns and 125 ms, MAX_COUNT works out to 125 ms / 20 ns = 6,250,000, so the counter should be 23 bits wide (2^22 = 4,194,304 <= 6,250,000 < 2^23 = 8,388,608).
module Debounce
#(
parameter CLOCK_PERIOD_ns = 20, // nanoseconds.
parameter DEBOUNCE_PERIOD_ms = 125 // milliseconds.
)
. . .
// Count how many right shifts it takes to reduce value to zero,
// i.e. the number of significant bits.
function int MinBitWidth([1023:0] value);
begin
    for (MinBitWidth = 0; value > 0; MinBitWidth = MinBitWidth + 1)
    begin
        value = value >> 1;
    end
end
endfunction
localparam MAX_COUNT_32BITS = DEBOUNCE_PERIOD_ms * 1_000_000 / CLOCK_PERIOD_ns; // Default type of 32-bits.
localparam COUNTER_BITS = MinBitWidth(MAX_COUNT_32BITS); // Calculate actual bit width needed.
typedef logic [COUNTER_BITS - 1 : 0] TCounter;
localparam TCounter MAX_COUNT = MAX_COUNT_32BITS; // Assign to a type of the actual bit width (truncation warning from Quartus).
localparam TCounter ONE = 1;
TCounter counter;
. . .
always @(posedge clock)
begin
. . .
if (counter == MAX_COUNT_32BITS - 1) // Synthesises a 32-bit comparator regardless of how many bits are actually needed, with the unused bits tied to ground.
. . .
if (counter == MAX_COUNT - ONE) // Synthesises a 23-bit comparator as expected.
. . .
counter <= counter + 1; // Synthesises a 23-bit counter as expected.
. . .
counter <= counter + ONE; // Synthesises a 23-bit counter as expected.
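For context, here is a minimal, self-contained sketch of how these pieces could fit together as a complete module. The port names (button_in, button_out), the count-while-different debounce scheme, and the lack of an explicit reset are assumptions for illustration only; they are not taken from the excerpts above.

module Debounce_sketch
#(
    parameter CLOCK_PERIOD_ns    = 20,  // nanoseconds.
    parameter DEBOUNCE_PERIOD_ms = 125  // milliseconds.
)
(
    input  logic clock,
    input  logic button_in,  // Hypothetical raw, bouncy input.
    output logic button_out  // Hypothetical debounced output.
);
    // Number of right shifts needed to reduce value to zero = number of significant bits.
    function int MinBitWidth([1023:0] value);
        for (MinBitWidth = 0; value > 0; MinBitWidth = MinBitWidth + 1)
            value = value >> 1;
    endfunction

    localparam MAX_COUNT_32BITS = DEBOUNCE_PERIOD_ms * 1_000_000 / CLOCK_PERIOD_ns; // 6,250,000.
    localparam COUNTER_BITS     = MinBitWidth(MAX_COUNT_32BITS);                    // 23.

    typedef logic [COUNTER_BITS - 1 : 0] TCounter;

    localparam TCounter MAX_COUNT = MAX_COUNT_32BITS;
    localparam TCounter ONE       = 1;

    TCounter counter      = '0;
    logic    button_sync  = 1'b0;
    logic    button_state = 1'b0;

    assign button_out = button_state;

    always_ff @(posedge clock)
    begin
        button_sync <= button_in; // Simplified: a real design would use a two-stage synchroniser.

        if (button_sync != button_state)
        begin
            if (counter == MAX_COUNT - ONE)
            begin
                button_state <= button_sync;
                counter      <= '0;
            end
            else
                counter <= counter + ONE;
        end
        else
            counter <= '0;
    end
endmodule

The comparison and increment use MAX_COUNT and ONE (both of type TCounter), so the synthesised logic stays at COUNTER_BITS bits, as noted in the comments above.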
Incorrect Algorithm
I considered $clog2, which is the right way to obtain an address-bus width from a RAM depth parameter. However, that is not the same as the minimum bit width of a value. Let me explain...
Consider the value 4, which is 100 in base 2 (3 bits wide).
$clog2(4) returns 2, which is incorrect for this purpose; it should be 3. The reason is that $clog2 subtracts 1 from the value before it starts counting bits, i.e. 4 becomes 3, and the minimum bit width of 3 is 2 bits. While this is mathematically correct for the ceiling of log base 2, it is not the bit width of the original value.
Here is the clogb2 algorithm from the LRM:
function integer clogb2;
    input [31:0] value;
    begin
        value = value - 1; // GOTCHA!
        for (clogb2 = 0; value > 0; clogb2 = clogb2 + 1) begin
            value = value >> 1;
        end
    end
endfunction
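A quick behavioural check makes this concrete (the testbench wrapper and display formatting are just for illustration):

module tb_clog2_check;
    initial
    begin
        // $clog2 gives the ceiling of log2: right for an address width, not for holding a value.
        $display("$clog2(4) = %0d", $clog2(4)); // Prints 2, yet 4 = 3'b100 needs 3 bits.
        $display("$clog2(5) = %0d", $clog2(5)); // Prints 3.
    end
endmodule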
Correct Algorithm
The correct algorithm is to calculate the minimum bit width of the original value, which is the algorithm given by @jonathan-mayer in his first answer before he edited it.
Here is the correct algorithm as a function:
// Returns the number of significant bits in value:
// shift right until zero, counting the iterations.
function integer MinBitWidth;
    input [1023:0] value;
    begin
        for (MinBitWidth = 0; value > 0; MinBitWidth = MinBitWidth + 1)
        begin
            value = value >> 1;
        end
    end
endfunction
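And a matching check for this function, using the values from this question (again, the testbench wrapper is only for illustration):

module tb_minbitwidth_check;
    function integer MinBitWidth;
        input [1023:0] value;
        begin
            for (MinBitWidth = 0; value > 0; MinBitWidth = MinBitWidth + 1)
                value = value >> 1;
        end
    endfunction

    initial
    begin
        $display("MinBitWidth(4)       = %0d", MinBitWidth(4));       // Prints 3.
        $display("MinBitWidth(6250000) = %0d", MinBitWidth(6250000)); // Prints 23, matching COUNTER_BITS.
    end
endmodule

For what it's worth, since clogb2 only differs from this function by the initial subtraction, MinBitWidth(value) gives the same result as $clog2(value + 1) for any value greater than zero.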