Understanding a bitwise calculation - C

I'm reading some code that performs bitwise operations on an int and stores the results in an array. I've worked out the binary representation of each step and included this beside the code.
On another computer, the array buff is received as a message and displayed in hex as [42,56,da,1,0,0]
My question is: how could you figure out what the original number was from the hex values? I get that 42 and 56 are the ASCII equivalents of 'B' and 'V'. But how do you get the number 423 from da 01 00 00?
Thanks

DA 01 00 00 is the little-endian representation of 0x000001DA, or just 0x1DA. This, in turn, is 1 * 256 + 13 * 16 + 10 = 474. Maybe you had this number, changed the program later, and forgot to recompile?
Seen from the other side, 423 is 0x1A7…

As glgl says, it should be a7. You can see where that comes from:
buff[2] = reading & 0xff;         // 10100111 = 0xa7
buff[3] = (reading >> 8) & 0xff;  // 00000001 = 1
buff[4] = (reading >> 16) & 0xff; // 00000000 = 0
buff[5] = (reading >> 24) & 0xff; // 00000000 = 0

A friend sent me a snippet I don't understand. How does this work?

Thanks for the replies; they were all useful in helping me understand how this works.
A friend sent me this piece of C code asking how it worked (he doesn't know either). I don't usually work with C, but this piqued my interest. I spent some time trying to understand what was going on but in the end I couldn't fully figure it out. Here's the code:
void knock_knock(char *s) {
    while (*s++ != '\0')
        printf("Bazinga\n");
}

int main() {
    int data[5] = { -1, -3, 256, -4, 0 };
    knock_knock((char *) data);
    return 0;
}
Initially I thought it was just a fancy way to print the data in the array (yeah, I know :\), but then I was surprised when I saw it didn't print 'Bazinga' 5 times, but 8. I searched around and figured out it was working with pointers (I'm a total amateur when it comes to C), but I still couldn't figure out why 8.
I searched a bit more and found out that pointers are usually 8 bytes long in C, and I verified that by printing sizeof(s) before the loop; sure enough it was 8. I thought that was it: it was just iterating over the length of the pointer, so it would make sense that it printed Bazinga 8 times. It was also clear to me now why they'd use Bazinga as the string to print - the data in the array was meant to be just a distraction.
So I tried adding more data to the array, and sure enough it kept printing 8 times. Then I changed the first number of the array, -1, to check whether the data truly was meaningless, and this is where I got confused. It didn't print 8 times anymore, but just once. So the data in the array wasn't just a decoy, but for the life of me I couldn't figure out what was going on.
Using the following code
#include <stdio.h>

void knock_knock(char *s)
{
    while (*s++ != '\0')
        printf("Bazinga\n");
}

int main()
{
    int data[5] = { -1, -3, 256, -4, 0 };
    printf("%08X - %08X - %08X\n", data[0], data[1], data[2]);
    knock_knock((char *) data);
    return 0;
}
You can see that HEX values of data array are
FFFFFFFF - FFFFFFFD - 00000100
The function knock_knock prints Bazinga until the pointed-to value is 0x00, due to
while (*s++ != '\0')
But the pointer here points to chars, so it advances a single byte each iteration, and therefore the first 0x00 is reached when accessing the "first" (lowest) byte of the third value of the array.
You need to look at the bytewise representation of the data in the integer array data. Assuming an integer is 4 bytes, the representation below gives the numbers in hex:
-1 --> FF FF FF FF
-3 --> FF FF FF FD
256 --> 00 00 01 00
-4 --> FF FF FF FC
0 --> 00 00 00 00
The array data is these numbers stored in little-endian format, i.e. the least-significant byte comes first. So,
data ={FF FF FF FF FD FF FF FF 00 01 00 00 FC FF FF FF 00 00 00 00};
The function knock_knock goes through this data bytewise and prints Bazinga for every non-zero byte. It stops at the first zero found, which will be after 8 bytes.
(Note: the size of an integer can be 2 or 8 bytes, but given that your pointer size is 8 bytes, I am guessing that the size of an integer is 4 bytes.)
It is easy to understand what occurs here if you output the array in hex as a character array. Here is how to do this:
#include <stdio.h>

int main(void)
{
    int data[] = { -1, -3, 256, -4, 0 };
    const size_t N = sizeof( data ) / sizeof( *data );

    char *p = ( char * )data;

    for ( size_t i = 0; i < N * sizeof( int ); i++ )
    {
        printf( "%0X ", p[i] );
        if ( ( i + 1 ) % sizeof( int ) == 0 ) printf( "\n" );
    }

    return 0;
}
The program output is
FFFFFFFF FFFFFFFF FFFFFFFF FFFFFFFF
FFFFFFFD FFFFFFFF FFFFFFFF FFFFFFFF
0 1 0 0
FFFFFFFC FFFFFFFF FFFFFFFF FFFFFFFF
0 0 0 0
So the string "Bazinga" will be outputted as many times as there are non-zero bytes in the representations of integer numbers in the array. As it is seen the first two negative numbers do not have zero bytes in their representations.
However the number 256 in any case has such a byte at the very beginning of its internal representation. So the string will be outputted exactly eight times provided that sizeof( int ) is equal to 4.

C array assign & access

I'm writing C code for a PIC16F1824 and am using the UART.
I had this for receiving:
for(int i = 0; i < 9; i++){
    while(!PIR1bits.RCIF);
    RX_arr[i] = RCREG;
    while(PIR1bits.RCIF);
}
RX_arr is an array declared as int RX_arr[9];
RCREG is the UART receive register and is supposed to receive
0xFF 86 00 00 00 00 00 00 47
Question1:
When i = 1, RX_arr[0]'s value (0xFF in this case) also changes when RX_arr[1] = RCREG is executed (both positions 0 and 1 then hold 0x86).
Later I found using this segment of code instead of the for loop above will successfully assign one byte into each position:
if(!RCSTAbits.FERR && !RCSTAbits.OERR){
    RX_arr[0] = RCREG;
}
However, the bytes saved are wrong; only the 0xFF is always correct, and the other bytes hold some other values. I found that the OERR bit is set, which indicates an overrun error (the 2-byte FIFO buffer filled up before being read). How can I receive all the bytes from RCREG?
Question2:
After I get the response in Question 1, I read the useful information in it, which is in bytes 2 and 3, into a variable called data. In my case the data is a sampled temperature, and I want to collect 20 samples and save them in an array called ppm_array. sum_ppm is supposed to be the sum of all 20 samples; the newly sampled temperature is added to it every time a sample is collected. When I run sum_ppm += ppm_array[i] (with sum_ppm = 0, ppm_array[0] = 1616, and the rest of the positions 0), I get sum_ppm = 4508160. Why isn't it 1616??? Am I adding their addresses?
data = RX_arr[2] * 256 + RX_arr[3];
ppm_array[i] = data;
sum_ppm += ppm_array[i];
Many thanks.

fwrite file output is wrong

I am trying to write the binary representation of an integer to a file. I accepted that I would get hexadecimal format in the file; however, I don't get the expected result.
uint32_t a = 1;
FILE * file = fopen("out.txt", "ab+");
fwrite(&a, sizeof(uint32_t), 1, file );
I expect to get (little endian)
1000 0000
but instead I get in the file
0100 0000
The machine running this snippet of code is 32-bit Ubuntu Linux (little endian).
Could someone explain why it is like this? Is the file's content consistent with the integer representation on my machine?
Cheers.
Assuming each of those groups of two digits is a byte, what you're seeing is correct:
01 00 00 00
Little endian orders bytes, not nybbles within bytes. So what you have is:
01 00 00 00
|| || || ||
|| || || == -> 0 * 256 * 256 * 256
|| || == ----> 0 * 256 * 256
|| == -------> 0 * 256
== ----------> 1

Passing a 256-bit wire to a C function through the Verilog VPI

I have a 256-bit value in Verilog:
reg [255:0] val;
I want to define a system task $foo that calls out to external C using the VPI, so I can call $foo like this:
$foo(val);
Now, in the C definition of foo, I cannot simply read the argument as an integer (PLI_INT32), because I have too many bits to fit in one of those. But I can read the argument as a string, which is the same thing as an array of bytes. Here is what I wrote:
static int foo(char *userdata) {
    vpiHandle systfref, args_iter, argh;
    struct t_vpi_value argval;
    PLI_BYTE8 *value;

    systfref = vpi_handle(vpiSysTfCall, NULL);
    args_iter = vpi_iterate(vpiArgument, systfref);

    argval.format = vpiStringVal;
    argh = vpi_scan(args_iter);
    vpi_get_value(argh, &argval);
    value = argval.value.str;

    int i;
    for (i = 0; i < 32; i++) {
        vpi_printf("%.2x ", value[i]);
    }
    vpi_printf("\n");

    vpi_free_object(args_iter);
    return 0;
}
As you can see, this code reads the argument as a string and then prints out each character (aka byte) in the string. This works almost perfectly. However, the byte 00 always gets read as 20. For example, if I assign the Verilog reg as follows:
val = 256'h000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f;
And call it using $foo(val), then the C function prints this at simulation time:
VPI: 20 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f 10 11 12 13 14 15 16 17 18 19 1a 1b 1c 1d 1e 1f
I have tested this with many different values and have found that the byte 00 always gets mapped to 20, no matter where or how many times it appears in val.
Also, note that if I read the value in as a vpiHexStrVal, and print the string, it looks fine.
So, two questions:
Is there a better way to read in my 256-bit value from the Verilog?
What's going on with the 20? Is this a bug? Am I missing something?
Note: I am using Aldec for simulation.
vpiStringVal is used when the value is expected to be ASCII text, in order to get the value as a pointer to a C string. This is useful if you want to use it with C functions that expect a C string, such as printf() with the %s format, fopen(), etc.
However, C strings cannot contain the null character (since null is used to terminate C strings), and also cannot represent x or z bits, so this is not a format that should be used if you need to distinguish every possible vector value. It looks like the simulator you are using formats the null character as a space (0x20); other simulators just skip them, but that doesn't help you either.
To distinguish any possible vector value, use either vpiVectorVal (the most compact representation) or vpiBinStrVal (a binary string with one 0/1/x/z character for each bit).

C Convert String to Ints Issue

I'm trying to parse some input on an embedded system.
I'm expecting something like this:
SET VARNAME=1,2,3,4,5,6,7,8,9,10\0
When I'm converting the separate strings to ints, both atoi() and strtol() seem to be returning 0 if the string begins with 8.
Here is my code:
char *pch, *name, *vars;
signed long value[256];
int i;
#ifdef UARTDEBUG
char convert[100];
#endif

if(strncmp(inBuffer, "SET", 3) == 0)
{
    pch = strtok(inBuffer, " ");
    pch = strtok(NULL, " ");
    name = strtok(pch, "=");
    vars = strtok(NULL, "=");
    pch = strtok(vars, ",");
    i = 0;
    while(pch != NULL)
    {
        value[i] = atoi(pch);
#ifdef UARTDEBUG
        snprintf(convert, sizeof(convert), "Long:%d=String:\0", value[i]);
        strncat(convert, pch, 10);
        SendLine(convert);
#endif
        i++;
        pch = strtok(NULL, ",");

        // Check for overflow
        if(i > sizeof(value)-1)
        {
            return;
        }
    }
    SetVariable(name, value, i);
}
}
Passing it:
SET VAR=1,2,3,4,5,6,7,8,9,10\0
gives the following in my uart debug:
Long:1=String:1
Long:2=String:2
Long:3=String:3
Long:4=String:4
Long:5=String:5
Long:6=String:6
Long:7=String:7
Long:0=String:8
Long:9=String:9
Long:10=String:10
UPDATE:
I've checked inBuffer both before and after 'value[i] = atoi(pch);' and it's identical, and it appears to have been split up at the right points.
S E T V A R 1 2 3 4 5 6 7 8 9 , 1 0
53 45 54 00 56 41 52 00 31 00 32 00 33 00 34 00 35 00 36 00 37 00 38 00 39 2c 31 30 00 00 00 00
UPDATE 2:
My UARTDEBUG section currently reads:
#ifdef UARTDEBUG
snprintf(convert, 20, "Long:%ld=String:%s", value[i], pch);
SendLine(convert);
#endif
If I comment out the snprintf() line, everything works perfectly. So what's going on with that?
Can't you try writing your own atoi? It's about ten lines long, and then you can debug it easily (and check where the problem really is).
'0' = 0x30
'1' = 0x31
and so on; you just need to do something like
(string[x] - 0x30) * pow(10, n)
for each digit you have.
Not related, but
if(i > sizeof(value)-1)
{
    return;
}
should be
if(i == sizeof(value)/sizeof(value[0]))
{
    return;
}
It may be the cause of the problem if other pieces of code do their overflow checking in the wrong way and, because of that, overwrite part of your string.
I've just tried compiling and running your sample code on my own system. The output is correct (i.e. '8' appears where it should be in the output string) which indicates to me that something else is going on outside of the scope of the code you've provided to us.
I'm going to go out on a limb and say that one of your variables or functions is trampling your input string or some other variable or array. SendLine and SetVariable are places to look.
But more importantly, you haven't given us the tools to help you solve your problem. When asking people to help you debug your program, provide a simple test case, with full source, that exemplifies the problem. Otherwise, we're left to guess what the problem is, which is frustrating for us and unproductive for you.
atoi returns 0 for anything it can't render as numeric -- this is just a hunch, but have you tried dumping the binary representation of the string (or even checking that the string lengths match up)?