VHDL Operands Have Different Lengths Error During Synthesis - concatenation

I have a piece of code that concatenates two variable-length vectors and XORs the result with another fixed-length vector. The variable lengths of the related vectors do not affect the total length of the concatenation result. Here is the relevant code:
-- Find the number of bits to be skipped.
-- This is done for better optimization of hardware.
bits2MSB := 15 - findHighestIndex(m_xorResult);
-- If there is a sufficient number of remaining bits in the extended data,
-- then we can continue the XOR operation.
if (bits2MSB < remainingXorCount) then
    m_xorResult <= (m_xorResult((15 - bits2MSB - 1) downto 0) & m_dataExtended(remainingXorCount downto (remainingXorCount - bits2MSB))) xor STD_LOGIC_VECTOR(to_unsigned(polynom, 16));
    remainingXorCount := remainingXorCount - bits2MSB - 1; -- Decrease remainingXorCount
-- If the remaining bit count of the extended data is equal to the number of bits to be skipped until the first HIGH bit,
-- then the last XOR operation for the given data can be made.
elsif (bits2MSB = remainingXorCount) then
    m_xorResult <= (m_xorResult((14 - remainingXorCount) downto 0) & m_dataExtended(remainingXorCount downto 0)) xor STD_LOGIC_VECTOR(to_unsigned(polynom, 16));
    remainingXorCount := remainingXorCount - bits2MSB;
    state <= FINISH;
-- If the remaining bits are not sufficient for a new XOR operation,
-- then the result is equal to the extended version of the last XOR result.
else
    m_xorResult <= (m_xorResult((14 - remainingXorCount) downto 0) & m_dataExtended(remainingXorCount downto 0));
    remainingXorCount := 0; -- Clear remainingXorCount
    state <= FINISH;
end if;
The error message points to the line below the if statement. It says that
[Synth 8-509] operands of logical operator '^' have different lengths (40 vs. 16)
The declarations of the related signals and variables are as follows:
variable bits2MSB : integer range 0 to 8 := 0;
variable remainingXorCount : integer range 0 to 7 := 7;
signal m_xorResult : STD_LOGIC_VECTOR(15 downto 0);
signal m_dataExtended : STD_LOGIC_VECTOR(23 downto 0);
variable polynom : natural := 16#1021#;
In addition, the function findHighestIndex(...) can return an integer value in the range 7 to 15.
The testbench for the given module works without any problem; I tested it for every input I could give to the module. Somehow, Vivado says that under some condition I can produce a 40-bit vector and try to XOR it with a 16-bit vector. What do you think the problem is?

Synthesis has to assign a static width to every operand. Because your slice bounds are variables, Vivado apparently sizes the concatenation for the worst case, all 16 bits of m_xorResult plus all 24 bits of m_dataExtended, which is where the 40 comes from, and that cannot match the 16-bit polynomial. Instead of concatenating variable-width words to make a fixed-width word, you can OR two fixed-width words together, each with a variable number of bits masked out.
In outline, instead of
X"AAAA"(15 downto var) & X"5555"(var-1 downto 0) XOR X"1234";
compute
((X"AAAA" AND upper_mask(var)) OR (X"5555" AND not upper_mask(var))) XOR X"1234";
The masks can be generated by functions like this:
function upper_mask(var : natural) return std_logic_vector is
    variable mask : std_logic_vector(15 downto 0) := (others => '1');
begin
    mask(var - 1 downto 0) := (others => '0');
    return mask;
end function;
If Vivado still can't synthesise upper_mask, a loop over all bits in upper_mask should work:
for i in mask'range loop
    if i < var then
        mask(i) := '0';
    end if;
end loop;


Understanding how to count trailing zeros for a number using bitwise operators in C

Note: this is NOT a duplicate of Count the consecutive zero bits (trailing) on the right in parallel: an explanation?. The linked question has a different context; it only asks about the purpose of signed() being used. DO NOT mark this question as a duplicate.
I've been trying to find a way to get the number of trailing zeros in a number. I found a Stanford University bit-twiddling write-up here that gives the following explanation.
unsigned int v; // 32-bit word input to count zero bits on right
unsigned int c = 32; // c will be the number of zero bits on the right
v &= -signed(v);
if (v) c--;
if (v & 0x0000FFFF) c -= 16;
if (v & 0x00FF00FF) c -= 8;
if (v & 0x0F0F0F0F) c -= 4;
if (v & 0x33333333) c -= 2;
if (v & 0x55555555) c -= 1;
Why does this end up working? I understand how hex numbers are represented in binary and how bitwise operators work, but I can't figure out the intuition behind this. What is the working mechanism?
The code is broken (undefined behavior is present). Here is a fixed version which is also slightly easier to understand (and probably faster):
uint32_t v;  // 32-bit word input to count zero bits on right
unsigned c;  // c will be the number of zero bits on the right
if (v) {
    v &= -v;  // keep the rightmost set bit (the one that determines the answer), clear all others
    c = 0;
    if (v & 0xAAAAAAAAu) c |= 1;   // binary ...10101010
    if (v & 0xCCCCCCCCu) c |= 2;   // binary ...11001100
    if (v & 0xF0F0F0F0u) c |= 4;
    if (v & 0xFF00FF00u) c |= 8;
    if (v & 0xFFFF0000u) c |= 16;
} else {
    c = 32;
}
Once we know only one bit is set, we determine one bit of the result at a time, by simultaneously testing all bit positions where the result would be odd, then all positions where the result would have the 2's place set, etc.
The original code worked in reverse, starting with all bits of the result set (after the if (v) c--;) and then determining which needed to be zero and clearing them.
Since we are learning one bit of the output at a time, I think it's clearer to build the output using bit operations rather than arithmetic.
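To make the mechanism concrete, here is a worked trace (my own example, not from the original post) for an input whose lowest set bit is bit 4:

#include <assert.h>
#include <stdint.h>

int main(void) {
    uint32_t v = 0x10;              /* binary 1 0000: the answer should be 4    */
    unsigned c = 0;
    v &= -v;                        /* still 0x10, only one bit was set         */
    if (v & 0xAAAAAAAAu) c |= 1;    /* 0x10 & ...10101010 == 0: bit 0 stays 0   */
    if (v & 0xCCCCCCCCu) c |= 2;    /* 0x10 & ...11001100 == 0: bit 1 stays 0   */
    if (v & 0xF0F0F0F0u) c |= 4;    /* 0x10 lies in bits 4..7: set bit 2 of c   */
    if (v & 0xFF00FF00u) c |= 8;    /* 0: bit 3 of c stays 0                    */
    if (v & 0xFFFF0000u) c |= 16;   /* 0: bit 4 of c stays 0                    */
    assert(c == 4);                 /* 4 = binary 00100, built one bit at a time */
    return 0;
}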
This code (from the net) is mostly C, although v &= -signed(v); isn't correct C. The intent is for it to behave as v &= ~v + 1;
First, if v is zero, then it remains zero after the & operation, and all of the if statements are skipped, so you get 32.
Otherwise, the & operation (when corrected) clears all bits to the left of the rightmost 1, so at that point v contains a single 1 bit. Then c is decremented to 31, i.e. all 1 bits within the possible result range.
The if statements then determine its numeric position one bit at a time (one bit of the position number, not of v), clearing the bits that should be 0.
The code first transforms v in such a way that all of its bits are cleared, except the rightmost (least significant) one, which remains. Then it determines the position of that one.
First let's see how we clear every one except the rightmost one.
Assume that k is the position of the rightmost one in v, so v = (v[n-1], v[n-2], ..., v[k+1], 1, 0, ..., 0).
-v is the number that, added to v, gives 0 (actually it gives 2^n, but the 2^n bit is discarded if we only keep the n least significant bits).
What must the bits of -v be so that v + (-v) = 0?
Obviously bits k-1..0 of -v must be 0, so that added to the trailing zeros of v they give zero.
Bit k of -v must be 1. Added to the one at v[k], it gives a zero and a carry of one into position k+1.
Bit k+1 of -v is added to v[k+1] and to the carry generated at position k, so it must be the logical complement of v[k+1]. Whatever the value of v[k+1], we get 1+0+1 (if v[k+1] = 0) or 1+1+0 (if v[k+1] = 1), and the result is 0 at position k+1, with a carry into position k+2.
The same holds for bits n-1..k+2: they must all be the logical complement of the corresponding bit of v.
Hence we get the well-known result that to compute -v, one must
leave unchanged all trailing zeros of v,
leave unchanged the rightmost one of v,
and complement all the other bits.
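A quick check of this rule in C (my own illustration, not from the original answer): negation leaves the trailing zeros and the lowest one unchanged and complements everything above them, so v & -v isolates the lowest set bit.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t v = 0xB0;       /* binary 1011 0000, lowest set bit is bit 4 */
    uint32_t neg = ~v + 1;   /* two's-complement negation, the same as -v */
    printf("%08" PRIX32 "\n", neg);     /* FFFFFF50: bits above bit 4 are */
                                        /* complemented, bits 4..0 unchanged */
    printf("%08" PRIX32 "\n", v & neg); /* 00000010: only bit 4 survives */
    return 0;
}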
If we compute v & -v, we have:
v      :  v[n-1]  v[n-2] ...  v[k+1]  1  0  0 ... 0
-v     : ~v[n-1] ~v[n-2] ... ~v[k+1]  1  0  0 ... 0
v & -v :       0       0 ...       0  1  0  0 ... 0
So v & -v keeps only the rightmost one of v.
To find the location of this one, look at the code:
if (v) c--;                  // no 1 in v? -> 32 trailing zeros.
                             // Otherwise the position is in the range c..0 = 31..0
if (v & 0x0000FFFF) c -= 16; // If the one is in the lower half of v, the range
                             // of possible positions for it is 15..0.
                             // Otherwise the range must be 31..16.
                             // Either way the remaining range is c..c-15
if (v & 0x00FF00FF) c -= 8;  // If the one is in byte 0 (c=15) or byte 2 (c=31),
                             // it is in the lower part of the current range,
                             // so we subtract 8 from the range boundaries.
                             // Otherwise it is in the upper part.
                             // The possible range of positions is now c..c-7
if (v & 0x0F0F0F0F) c -= 4;  // do the same for the remaining bits.
if (v & 0x33333333) c -= 2;
if (v & 0x55555555) c -= 1;
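As a final sanity check (again my own example, not from the original answer), here is the original sequence traced for v = 0xB0, whose lowest set bit is bit 4:

#include <assert.h>
#include <stdint.h>

int main(void) {
    uint32_t v = 0xB0;           /* binary 1011 0000, 4 trailing zeros      */
    unsigned c = 32;
    v &= ~v + 1;                 /* the intent of -signed(v): v becomes 0x10 */
    if (v) c--;                  /* v nonzero            -> c = 31          */
    if (v & 0x0000FFFF) c -= 16; /* one is in lower half -> c = 15          */
    if (v & 0x00FF00FF) c -= 8;  /* one is in byte 0     -> c = 7           */
    if (v & 0x0F0F0F0F) c -= 4;  /* bit 4 not in mask    -> c stays 7       */
    if (v & 0x33333333) c -= 2;  /* bit 4 in mask        -> c = 5           */
    if (v & 0x55555555) c -= 1;  /* bit 4 in mask        -> c = 4           */
    assert(c == 4);
    return 0;
}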

Extract bits into an int slice from a byte slice

I have the following byte slice from which I need to extract bits and place them in a []int, as I intend to fetch individual bit values later. I am having a hard time figuring out how to do that.
Below is my code:
data := []byte{3, 255} // binary representation of 3 and 255: 00000011 11111111
What I need is a slice of bits -> [0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1]
What I tried:
I tried converting the byte slice to a uint16 with BigEndian and then using strconv.FormatUint, but that fails with panic: runtime error: index out of range.
I saw many examples that simply print the bit representation of a number using fmt.Printf, but that is not useful for me, as I need an int slice for further bit-value access.
Do I need to use bit-shift operators here? Any help will be greatly appreciated.
One way is to loop over the bytes and use a second, inner loop to shift each byte value bit by bit, testing the bits with a bitmask and adding the results to the output slice.
Here's an implementation of it:
func bits(bs []byte) []int {
    r := make([]int, len(bs)*8)
    for i, b := range bs {
        for j := 0; j < 8; j++ {
            r[i*8+j] = int(b >> uint(7-j) & 0x01)
        }
    }
    return r
}
Testing it:
fmt.Println(bits([]byte{3, 255}))
Output (try it on the Go Playground):
[0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1]
Using the math/bits package provides a fairly straightforward solution.
func bitsToBits(data []byte) (st []int) {
    st = make([]int, len(data)*8) // Preallocating doubles performance, as no append occurs.
    for i, d := range data {
        for j := 0; j < 8; j++ {
            if bits.LeadingZeros8(d) == 0 {
                // No leading zero means the top bit is a 1.
                st[i*8+j] = 1
            } else {
                st[i*8+j] = 0
            }
            d = d << 1
        }
    }
    return
}
Performance is comparable to similar solutions.

Dynamically sized array elements in Ada

So I have the following Ada array declaration buried inside a package body, which eventually gets passed to a C function:
declare
   type t_buffer is array (0 .. ARR_SIZE) of Unsigned_32;
   buffer : constant access t_buffer := new t_buffer;
begin
   c_obj.buffer_p := buffer (1)'Address;
   c_obj.buffer_length := Unsigned_64 (buffer'Last);
   for idx in Integer range buffer'Range loop
      buffer (idx) := Unsigned_32 (idx * 4);
   end loop;
end;
However, the elements of the array aren't actually always Unsigned_32/uint32_t; the element type varies between uint8_t, uint16_t, uint32_t and uint64_t, depending on a (runtime) parameter. This means that when the buffer gets read as, for example, a uint16_t array in the C code, the numbers come out as the sequence 0,0,4,0,8,0,... instead of the intended 0,4,8,..., because each uint32_t is "split" into two different numbers.
Ada doesn't have anything approximating dependent types, so I can't create the array type dynamically. I'm not sure how I can solve this at all nicely; possibly something to do with making an array of Unsigned_8 and bit-shifting as appropriate?
The way Ada works, you have to have four different array types.
But you can encapsulate the selection of the array types in a variant record:
package Variant_Records is

   type Word_Sizes is range 8 .. 64
     with Static_Predicate => Word_Sizes in 8 | 16 | 32 | 64;

   type Data_8_Bit  is mod 2 ** 8  with Size => 8;
   type Data_16_Bit is mod 2 ** 16 with Size => 16;
   type Data_32_Bit is mod 2 ** 32 with Size => 32;
   type Data_64_Bit is mod 2 ** 64 with Size => 64;

   type Array_8_Bit  is array (Positive range <>) of Data_8_Bit;
   type Array_16_Bit is array (Positive range <>) of Data_16_Bit;
   type Array_32_Bit is array (Positive range <>) of Data_32_Bit;
   type Array_64_Bit is array (Positive range <>) of Data_64_Bit;

   type Data_Array (Word_Size : Word_Sizes;
                    Length    : Natural) is
      record
         case Word_Size is
            when 8  => Data_8  : Array_8_Bit  (1 .. Length);
            when 16 => Data_16 : Array_16_Bit (1 .. Length);
            when 32 => Data_32 : Array_32_Bit (1 .. Length);
            when 64 => Data_64 : Array_64_Bit (1 .. Length);
         end case;
      end record;

end Variant_Records;
And then an example of some usage:
with Variant_Records;
procedure Using_Variant_Records is
   use Variant_Records;
   A : Data_Array (Word_Size => 8, Length => 16);
   B : Data_Array (Word_Size => 64, Length => 2);
begin
   for I in A.Data_8'Range loop
      A.Data_8 (I) := 2 * Data_8_Bit (I) + 4;
   end loop;
   for I in B.Data_64'Range loop
      B.Data_64 (I) := Data_64_Bit (8 ** I) + 4;
   end loop;
   declare
      D : Data_Array := B;
   begin
      for E of D.Data_64 loop
         E := E * 8;
      end loop;
   end;
end Using_Variant_Records;

Converting C Macro to VHDL

I'm fairly new to VHDL and trying to convert two given C macros to be executed as a single instruction on my FPGA. The macros are:
#define m_in_bits(buf, num) (buf) >> (24 - (num)) // buf is uint32_t
#define m_ext_bits(buf, i) ((buf) < (1<<((i)-1)) ? (buf) + (((-1)<<(i)) + 1) : (buf))
And the C code that uses the macros is:
m_ext_bits(m_in_bits(buffer, size), size);
I'm having issues with getting m_ext_bits to properly compile. Here's my VHDL:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity myEntity is
    port(
        signal buffer : in  std_logic_vector(31 downto 0);
        signal result : out std_logic_vector(31 downto 0)
    );
end entity myEntity;

architecture myArch of myEntity is
    signal size : signed(3 downto 0);
begin
    size <= signed(buffer(27 downto 24));
    result(15 downto 0) <= std_logic_vector(signed(buffer(23 downto 0)) srl (24 - to_integer(size))
                           + signed((-1 sll to_integer(size)) + 1)); -- the offending line
end architecture myArch;
The long line beginning with result(15 downto 0) <= actually compiles without error on its own (it implements the m_in_bits macro). However, when I add the continuation line beginning with the +, errors occur. I tried playing around with casting between the std_logic_vector and signed types, and the errors change:
type of expression is ambiguous - "SIGNED" or "UNSIGNED" are two possible matches...
can't determine definition of operator ""sll"" -- found 0 possible definitions...
illegal SIGNED in expression...
I think it's a matter of proper casting and using the correct types to fulfill the required operations.
First, buffer is a reserved VHDL word, so change that; using argbuf below.
The expression -1 sll to_integer(size) is not defined in VHDL, since the integer value -1 is a purely numerical expression with no bit representation specified by VHDL, so shifting is not possible. Neither are operations like bitwise and, or, etc. defined on integers. A -1 representation in a 24-bit signed type can be created as:
to_signed(-1, 24)
There is also a length issue with the assignment, since a 16-bit target (result(15 downto 0)) is assigned a 24-bit value (based on the right-side argbuf(23 downto 0)).
The srl should then compile when the above is addressed.
Code as:
result(15 downto 0) <= std_logic_vector(resize((signed(argbuf(23 downto 0)) srl (24 - to_integer(size)))
                       + signed((to_signed(-1, 24) sll to_integer(size)) + 1), 16));
However, the VHDL shift operators, e.g. srl, may give unexpected results, as described in the page "Arithmetic and logical shifts and rotates are done with functions in VHDL, not operators", so as a general coding style you may consider using the shift functions defined in numeric_std instead, e.g. shift_right. Code with functions:
result(15 downto 0) <= std_logic_vector(resize(shift_right(signed(argbuf(23 downto 0)), 24 - to_integer(size))
                       + signed(shift_left(to_signed(-1, 24), to_integer(size)) + 1), 16));
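For reference, here is a quick C sanity check (my own example, not part of the original answer) of what the two macros compute, which is handy when verifying the VHDL against them. Note that (-1)<<(i) is formally undefined behaviour in standard C; the macros are reproduced as given and rely on the usual two's-complement result.

#include <assert.h>

/* The original macros, reproduced for testing. */
#define m_in_bits(buf, num) ((buf) >> (24 - (num)))
#define m_ext_bits(buf, i)  ((buf) < (1 << ((i) - 1)) ? (buf) + (((-1) << (i)) + 1) : (buf))

int main(void) {
    /* (-1 << i) + 1 equals -(2^i - 1), so an i-bit code c with its top
       bit clear maps to c - (2^i - 1); otherwise it is left unchanged. */
    assert(m_ext_bits(5, 3) == 5);          /* 5 >= 4: unchanged      */
    assert(m_ext_bits(2, 3) == -5);         /* 2 <  4: 2 - 7 = -5     */
    assert(m_in_bits(0xABCDEF, 8) == 0xAB); /* top 8 of the 24 bits   */
    return 0;
}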

Delphi XE3 -> Integer to array of Bytes

I have a data structure:
data = array of integer;
I have filled it from a
source = array of byte;
with
data[x] := Source[offset] or (Source[offset + 1] shl 8) or
           (Source[offset + 2] shl 16) or (Source[offset + 3] shl 24);
After processing these blocks I have to bring them back to "bytes"...
Any idea?
You can do this in a one-liner using Move.
Move(source[0], dest[0], Length(source)*SizeOf(source[0]));
If you need to perform a network/host byte order transformation, then you can run across the integer array after the Move.
In the opposite direction you do it all in reverse (swap the source and destination arguments of the Move call).
If you haven't got byte order issues then you might not actually need to convert to a byte array at all. It's possible that you can use the integer array as is. Remember that, without byte order issues, the memory layout of the byte and integer arrays is the same (which is why you are able to blit with Move).
You mean like this?
var
  i: integer;
  b1, b2, b3, b4: byte;
begin
  b1 := byte(i);
  b2 := byte(i shr 8);
  b3 := byte(i shr 16);
  b4 := byte(i shr 24);
Try, for instance,
procedure TForm1.FormCreate(Sender: TObject);
var
  i: integer;
  b1, b2, b3, b4: byte;
begin
  i := $AABBCCDD;
  b1 := byte(i);
  b2 := byte(i shr 8);
  b3 := byte(i shr 16);
  b4 := byte(i shr 24);
  ShowMessage(IntToHex(b1, 2));
  ShowMessage(IntToHex(b2, 2));
  ShowMessage(IntToHex(b3, 2));
  ShowMessage(IntToHex(b4, 2));
end;
Hmmm... I see an answer using Move and one using shifts, but what about a simple cast?:
var
  I: Integer;
  B: array[0..3] of Byte;
begin
  // from bytes to integer:
  I := PInteger(@B)^;
  // from integer to bytes:
  PInteger(@B)^ := I;
Or with your arrays:
data[i] := PInteger(@source[offset])^;
and vice versa:
// get low byte
source[offset] := PByte(@data[i])^; // or := PByte(@data[i])[0];
// get second byte
secondByte := PByte(@data[i])[1]; // or := (PByte(@data[i]) + 1)^;
or
PInteger(@source[offset])^ := data[i];
As you see, you can get a long way by casting to pointers. This does not actually materialize a pointer; the compiler is clever enough to access the items directly.
As commented, you do not need to move the data to access it as both byte and integer.
Your original array of byte could be accessed as an array of integer by type casting.
type
  TArrayInteger = array of Integer;
...
for i := 0 to Pred(Length(source)) div SizeOf(Integer) do
  WriteLn(TArrayInteger(source)[i]);
Often I hide these type casts in a class. In XE3 there is the possibility to declare helpers for simple types, like string, byte, integer, etc. See TStringHelper, for example.
The same goes for array of simple types.
Here is an example using a record helper:
type
  TArrayByte = array of Byte;
  TArrayInteger = array of Integer;

  TArrayByteHelper = record helper for TArrayByte
  private
    function GetInteger(index : Integer) : Integer;
    procedure SetInteger(index : Integer; value : Integer);
  public
    property AsInteger[index : Integer] : Integer read GetInteger write SetInteger;
  end;

function TArrayByteHelper.GetInteger(index: Integer): Integer;
begin
  Result := TArrayInteger(Self)[index];
end;

procedure TArrayByteHelper.SetInteger(index: Integer; value: Integer);
begin
  TArrayInteger(Self)[index] := value;
end;
Use it like this:
var
  source : TArrayByte;
  i : Integer;
begin
  SetLength(source, 8);
  for i := 0 to 7 do
    source[i] := i;
  for i := 0 to 1 do
    WriteLn(Format('%8.8X', [source.AsInteger[i]]));
  ReadLn;
end.
To build this as a function:
type
  TBytes = array of byte;

function InttoBytes(const int: Integer): TBytes;
begin
  SetLength(result, 4); // the dynamic array must be allocated before indexing
  result[0] := int and $FF;
  result[1] := (int shr 8) and $FF;
  result[2] := (int shr 16) and $FF;
  result[3] := (int shr 24) and $FF;
end;
