I have two variables: a float named diff with a value like 894077435904.000000 (not always with only zeros in the decimal part) and a char[32] that holds the result of a double-SHA256 calculation. I need to compare them (if (hash < diff) { /* do something */ }), but for that I need to convert one to the type of the other.
Is there a way to accomplish this? For example, converting the float to a char* (and using strcmp for the comparison), or the char* to a float (and comparing as above, if that's even possible, given that the char* is 256 bits, or 32 bytes, long)?
I have tried converting float to char* like this:
char hex_str[2*sizeof(diff)+1];
snprintf(hex_str, sizeof(hex_str), "%0*lx", (int)(2*sizeof diff), (long unsigned int)diff);
printf("%s\n", hex_str);
When I have diff=894077435904.000000 I get hex_str=d02b2b00. How can I verify if this value is correct? Using this converter I obtain different results.
It is explained in great detail here.
1. Create an array of 32 unsigned bytes and set all its values to zero.
2. Extract the top byte from the difficulty and subtract it from 32.
3. Copy the bottom three bytes of the difficulty into the array, starting the number of bytes into the array that you computed in step 2.
4. The array now contains the difficulty in raw binary. Use memcmp to compare it to the hash in raw binary.
Example code:
#include <stdio.h>
#include <string.h>

const char *tohex = "0123456789ABCDEF";

void computeDifficulty(unsigned char *buf, unsigned j)
{
    memset(buf, 0, 32);
    int offset = 32 - (j >> 24);
    buf[offset]     = (j >> 16) & 0xffu;
    buf[offset + 1] = (j >> 8) & 0xffu;
    buf[offset + 2] = j & 0xffu;
}

void showDifficulty(unsigned j)
{
    unsigned char buf[32];
    computeDifficulty(buf, j);
    printf("%x -> ", j);
    for (int i = 0; i < 32; ++i)
        printf("%c%c ", tohex[buf[i] >> 4], tohex[buf[i] & 0xf]);
    printf("\n");
}

int main()
{
    showDifficulty(0x1b0404cbu);
}
Output:
1b0404cb -> 00 00 00 00 00 04 04 CB 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
typedef unsigned char *byte_pointer;

void show_bytes(byte_pointer start, size_t len) {
    size_t i;
    for (i = 0; i < len; i++)
        printf(" %.2x", start[i]); //line:data:show_bytes_printf
    printf("\n");
}

void show_integer(int *p, size_t len) {
    size_t i;
    for (i = 0; i < len; i++) {
        printf(" %d", p[i]);
    }
    printf("\n");
}
Suppose I have the two functions above, and I use a main function to test them:
int main(int argc, char *argv[])
{
    int a[5] = {12345, 123, 23, 45, 1};
    show_bytes((byte_pointer)a, sizeof(a));
    show_integer(a, 5);
}
I got the following results in my terminal:
ubuntu@ubuntu:~/OS_project$ ./show_bytes
39 30 00 00 7b 00 00 00 17 00 00 00 2d 00 00 00 01 00 00 00
12345 123 23 45 1
Can someone tell me why I got this result? I understand the second function, but I have no idea why the first one prints 39 30 00 00 7b 00 00 00 17 00 00 00 2d 00 00 00 01 00 00 00. I do know those bytes are the hexadecimal representations of 12345, 123, 23, 45, 1. What I don't understand is why start[i] doesn't point to a whole number such as 12345 or 123; instead, start[0] seems to point to the least significant byte of the first number, 12345. Can someone explain why the two functions behave so differently?
12345 is 0x3039 in hex. Because int is 32 bits on your machine, it is represented as 0x00003039. Then, because your machine is little endian, it is stored in memory as the byte sequence 39 30 00 00. You can read more about big and little endian here: https://www.cs.umd.edu/class/sum2003/cmsc311/Notes/Data/endian.html
The same applies to the other results.
On your platform, sizeof(int) is 4 and the byte order is little endian. The binary representation of 12345 as a 32-bit value is:
00000000 00000000 00110000 00111001
In a little-endian system, that value is stored in memory with the least significant byte first:
00111001 00110000 00000000 00000000
In hex, those bytes are:
39 30 00 00
That's what you are seeing as the output corresponding to the first number.
You can do similar processing of the other numbers in the array to understand the output corresponding to them.
In the example below, I would like to pass the contents of an array to a function that takes a variable number of arguments.
In other words, I would like to pass the contents of foo to printf by value, and thus pass these arguments on the stack.
#include <stdarg.h>
#include <stdio.h>
void main()
{
    int foo[] = {1,2,3,4};
    printf("%d, %d, %d, %d\n", foo);
}
I know this example looks stupid because I can use printf("%d, %d, %d, %d\n", 1,2,3,4);. Just imagine I'm calling void bar(char** a, ...) instead and the array is something I receive from RS232...
EDIT
In other words, I would like to avoid this:
#include <stdarg.h>
#include <stdio.h>
void main()
{
    int foo[] = {1,2,3,4};
    switch(sizeof(foo))
    {
        case 1: printf("%d, %d, %d, %d\n", foo[0]); break;
        case 2: printf("%d, %d, %d, %d\n", foo[0], foo[1]); break;
        case 3: printf("%d, %d, %d, %d\n", foo[0], foo[1], foo[2]); break;
        case 4: printf("%d, %d, %d, %d\n", foo[0], foo[1], foo[2], foo[3]); break;
        ...
    }
}
I would like to pass to printf the content of foo by value and thus, pass these arguments on the stack.
You cannot pass an array by value. Not by "normal" function call, and not by varargs either (which is, basically, just a different way of reading the stack).
Whenever you use an array as argument to a function, what the called function receives is a pointer.
The easiest example for this is the char array, a.k.a. "string".
#include <string.h>

int main()
{
    char buffer1[100];
    char buffer2[] = "Hello";
    strcpy( buffer1, buffer2 );
}
What strcpy() "sees" is not two arrays, but two pointers:
char * strcpy( char * restrict s1, const char * restrict s2 )
{
    // Yes I know this is a naive implementation in more than one way.
    char * rc = s1;
    while ( ( *s1++ = *s2++ ) );
    return rc;
}
(This is why the size of the array is only known in the scope the array was declared in. Once you pass it around, it's just a pointer, with no place to put the size information.)
The same holds true for passing an array to a varargs function: What ends up on the stack is a pointer to the (first element of) the array, not the whole array.
You can pass an array by reference and do useful things with it in the called function if:
you pass the (pointer to the) array and a count of elements (think argc / argv), or
caller and callee agree on a fixed size, or
caller and callee agree on the array being "terminated" in some way.
Standard printf() does the last one for "%s" and strings (which are terminated by '\0'), but is not equipped to do so with, as in your example, an int[] array. So you would have to write your own custom printme().
In no case are you passing the array "by value". If you think about it, it wouldn't make much sense to copy all elements to the stack for larger arrays anyway.
As already said, you cannot pass an array by value through va_arg directly. It is possible, though, if the array is packed inside a struct. This is not portable, but one can do such things when the implementation is known.
Here is an example that might help.
void call(size_t siz, ...);

struct xx1 { int arr[1]; };
struct xx10 { int arr[10]; };
struct xx20 { int arr[20]; };

void call(size_t siz, ...)
{
    va_list va;
    va_start(va, siz);
    struct xx20 x = va_arg(va, struct xx20);
    printf("HEXDUMP:%s\n", HEXDUMP(&x, siz));
    va_end(va);
}

int main(void)
{
    struct xx10 aa = { {1,2,3,4,5,[9]=-1} };
    struct xx20 bb = { {[10]=1,2,3,4,5,[19]=-1} };
    struct xx1 cc = { {-1} };
    call(sizeof aa, aa);
    call(sizeof bb, bb);
    call(sizeof cc, cc);
}
This prints the following (HEXDUMP() is one of my debug functions; what it does should be obvious):
HEXDUMP:
0x7fff1f154160:01 00 00 00 02 00 00 00 03 00 00 00 04 00 00 00 ................
0x7fff1f154170:05 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0x7fff1f154180:00 00 00 00 ff ff ff ff ........
HEXDUMP:
0x7fff1f154160:00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0x7fff1f154170:00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0x7fff1f154180:00 00 00 00 00 00 00 00 01 00 00 00 02 00 00 00 ................
0x7fff1f154190:03 00 00 00 04 00 00 00 05 00 00 00 00 00 00 00 ................
0x7fff1f1541a0:00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff ff ................
Tested on Linux x86_64 compiled with gcc 5.1, and on Solaris SPARC9 compiled with gcc 3.4.
I don't know if it is helpful, but it may be a start. As can be seen, using the biggest struct array in the function's va_arg allows it to handle smaller arrays when the size is known.
But be careful: this is probably full of undefined behaviour. For example, if you call the function with a struct whose array is smaller than 4 ints, it doesn't work on Linux x86_64, because the struct is passed in registers rather than as an array on the stack; on your embedded processor it might work.
Short answer: No, you can't do it, it's impossible.
Slightly longer answer: Well, maybe you can do it, but it's super tricky. You are basically trying to call a function with an argument list that is not known until run time. There are libraries that can help you dynamically construct argument lists and call functions with them; one library is libffi: https://sourceware.org/libffi/.
See also question 15.13 in the C FAQ list: How can I call a function with an argument list built up at run time?
See also these previous Stackoverflow questions:
C late binding with unknown arguments
How to call functions by their pointers passing multiple arguments in C?
Calling a variadic function with an unknown number of parameters
OK, look at this example from my code. This is one simple way.
void my_printf(char const * frmt, ...)
{
    va_list argl;
    unsigned char const * tmp;
    unsigned char chr;

    va_start(argl, frmt);
    while ((chr = (unsigned char)*frmt) != (unsigned char)0x0) {
        frmt += 1;
        if (chr != '%') {
            dbg_chr(chr);
            continue;
        }
        chr = (unsigned char)*frmt;
        frmt += 1;
        switch (chr) {
        ...
        case 'S':
            tmp = va_arg(argl, unsigned char const *);
            dbg_buf_str(tmp, (uint16_t)va_arg(argl, int));
            break;
        case 'H':
            tmp = va_arg(argl, unsigned char const *);
            dbg_buf_hex(tmp, (uint16_t)va_arg(argl, int));
            break;
        case '%': dbg_chr('%'); break;
        }
    }
    va_end(argl);
}
Here, dbg_chr(uint8_t byte) drops the byte into the USART and enables the transmitter.
Use example:
#define TEST_LEN 0x4
uint8_t test_buf[TEST_LEN] = {'A','B','C','D'};
my_printf("This is hex buf: \"%H\"",test_buf,TEST_LEN);
As mentioned above, a variadic argument can be passed as a struct-packed array:
void logger(char *bufr, uint32_t *args, uint32_t argNum) {
    static char buf[256];               /* output buffer */
    struct {
        uint32_t ar[16];
    } argStr;

    memset(&argStr, 0, sizeof argStr);
    for (uint8_t a = 0; a < argNum; a += 1)
        argStr.ar[a] = args[a];
    /* Relies on the struct being passed like a flat run of arguments;
       implementation-defined, not portable. snprintf already
       NUL-terminates, so no extra termination is needed. */
    snprintf(buf, sizeof buf, bufr, argStr);
    pushStr(buf, strlen(buf));
}
Tested and works with the GNU C compiler.
When a variable is associated with a union, the compiler allocates memory based on the largest member, so the size of a union equals the size of its largest member. That means altering the value of one member should alter the values of the other members.
But when I execute the following code,
output: 4 5 7.000000
union job
{
    int a;
    struct data
    {
        double b;
        int x;
    } q;
} w;

int main()
{
    w.q.b = 7;
    w.a = 4;
    w.q.x = 5;
    printf("%d %d %f", w.a, w.q.x, w.q.b);
    return 0;
}
The issue is this: first I assign a value to a and later modify the value of q.x, so the value of a should be overridden by q.x. But the output still shows the original value of a as well as that of q.x. I can't understand why this is happening.
Your understanding is correct - the numbers should change. I took your code, and added a little bit more, to show you exactly what is going on.
The real issue is quite interesting, and has to do with the way floating point numbers are represented in memory.
First, let's create a map of the bytes used in your struct:
aaaa
bbbbbbbbxxxx
As you can see, the first four bytes of b overlap with a. This will turn out to be important.
Now we have to take a look at the way double is typically stored (I am writing this from the perspective of a Mac, with 64 bit Intel architecture. It so happens that the format in memory is indeed the IEEE754 format):
The important thing to note here is that Intel machines are "little endian" - that is, the number that will be stored first is the "thing on the right", i.e. the least significant bits of the "fraction".
Now let's look at a program that does the same thing that your code did - but prints out the contents of the structure so we see what is happening:
#include <stdio.h>
#include <string.h>

void dumpBytes(void *p, int n) {
    int ii;
    char hex[9];
    for (ii = 0; ii < n; ii++) {
        sprintf(hex, "%02x", (char)*((char*)p + ii));
        printf("%s ", hex + strlen(hex) - 2);
    }
    printf("\n");
}
int main(void) {
    static union job
    {
        int a;
        struct data
        {
            double b;
            int x;
        } q;
    } w;

    printf("intial value:\n");
    dumpBytes(&w, sizeof(w));
    w.q.b = 7;
    printf("setting w.q.b = 7:\n");
    dumpBytes(&w, sizeof(w));
    w.a = 4;
    printf("setting w.a = 4:\n");
    dumpBytes(&w, sizeof(w));
    w.q.x = 5;
    printf("setting w.q.x = 5:\n");
    dumpBytes(&w, sizeof(w));
    printf("values are now %d %d %.15lf\n", w.a, w.q.x, w.q.b);
    w.q.b = 7;
    printf("setting w.q.b = 7:\n");
    dumpBytes(&w, sizeof(w));
    printf("values are now %d %d %.15lf\n", w.a, w.q.x, w.q.b);
    return 0;
}
And the output:
intial value:
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
All zeros (I declared the variable static, which guarantees everything is initialized). Note that the function prints 16 bytes, even though you might have expected a struct whose members are a double plus an int to be only 12 bytes long. This is due to alignment: when the largest member is 8 bytes, the structure is padded out to an 8-byte boundary.
setting w.q.b = 7:
00 00 00 00 00 00 1c 40 00 00 00 00 00 00 00 00
Let's look at the bytes representing the double in their correct (most significant byte first) order:
40 1c 00 00 00 00 00 00
Sign bit = 0
exponent = 100 0000 0001b (1025; subtracting the bias of 1023 gives 2^2)
mantissa = 1100b followed by zeros (with the implicit leading 1, this is 1.75, and 1.75 x 2^2 = 7)
setting w.a = 4:
04 00 00 00 00 00 1c 40 00 00 00 00 00 00 00 00
When we now write a, we modify the first byte in memory. On this little-endian machine, that is the least significant byte of the mantissa, whose low bytes are now (in hex):
00 00 00 00 00 04
The mantissa still has the implicit 1 (and the 1100 pattern) at its top, so changing the last bits from 0 to 4 changed the magnitude of the number by only a tiny fraction; you need to look at the 15th decimal place to see it.
setting w.q.x = 5:
04 00 00 00 00 00 1c 40 05 00 00 00 00 00 00 00
The value 5 is written into its own separate space.
values are now 4 5 7.000000000000004
Note: when a large number of digits is printed, you can see that the least significant part of b is no longer exactly 7, even though a double is perfectly capable of representing this integer exactly.
setting w.q.b = 7:
00 00 00 00 00 00 1c 40 05 00 00 00 00 00 00 00
values are now 0 5 7.000000000000000
After writing 7 into the double again, you can see that the first byte is once again 00, and now the result of the printf statement is indeed 7.0 exactly.
So your understanding was correct. The problem was in your diagnosis: the number was different, but you couldn't see it.
Usually a good way to look for these things is to store the number in a temporary variable and look at the difference. You would have found it easily enough then.
You can see the altered values if you run the code below:
#include <stdio.h>

union job
{
    struct data
    {
        int x;
        double b;
    } q;
    int a;
} w;

int main() {
    w.q.b = 7;
    w.a = 4;
    w.q.x = 5;
    printf("%d %d %f", w.a, w.q.x, w.q.b);
    return 0;
}
OUTPUT: 5 5 7.000000
I have slightly modified the struct inside the union so that x now overlaps a; that demonstrates the overwriting you expected.
Actually, the instruction w.a = 4 does overwrite part of w.q.b. Here is what your memory looks like (each row is one byte, shown in memory order on a little-endian machine, most significant bit on the left):
After w.q.b=7;      After w.a=4;        After w.q.x=5;
|0|0|0|0|0|0|0|0|  |0|0|0|0|0|1|0|0|  |0|0|0|0|0|1|0|0|  \         \
|0|0|0|0|0|0|0|0|  |0|0|0|0|0|0|0|0|  |0|0|0|0|0|0|0|0|  | w.a     |
|0|0|0|0|0|0|0|0|  |0|0|0|0|0|0|0|0|  |0|0|0|0|0|0|0|0|  |         |
|0|0|0|0|0|0|0|0|  |0|0|0|0|0|0|0|0|  |0|0|0|0|0|0|0|0|  /         | w.q.b
|0|0|0|0|0|0|0|0|  |0|0|0|0|0|0|0|0|  |0|0|0|0|0|0|0|0|            |
|0|0|0|0|0|0|0|0|  |0|0|0|0|0|0|0|0|  |0|0|0|0|0|0|0|0|            |
|0|0|0|1|1|1|0|0|  |0|0|0|1|1|1|0|0|  |0|0|0|1|1|1|0|0|            |
|0|1|0|0|0|0|0|0|  |0|1|0|0|0|0|0|0|  |0|1|0|0|0|0|0|0|            /
-----------------  -----------------  -----------------
|0|0|0|0|0|0|0|0|  |0|0|0|0|0|0|0|0|  |0|0|0|0|0|1|0|1|  \
|0|0|0|0|0|0|0|0|  |0|0|0|0|0|0|0|0|  |0|0|0|0|0|0|0|0|  | w.q.x
|0|0|0|0|0|0|0|0|  |0|0|0|0|0|0|0|0|  |0|0|0|0|0|0|0|0|  |
|0|0|0|0|0|0|0|0|  |0|0|0|0|0|0|0|0|  |0|0|0|0|0|0|0|0|  /
As you can see, the first byte, which holds the least significant bits of w.q.b's mantissa, is changed from 00000000 to 00000100 by the assignment to w.a, but this change is so small that the precision used when printing w.q.b doesn't show it.
I thought the shift operator shifts the memory of the integer or char it is applied to, but the output of the following code came as a surprise to me.
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void) {
    uint64_t number = 33550336;
    unsigned char *p = (unsigned char *)&number;
    size_t i;

    for (i = 0; i < sizeof number; ++i)
        printf("%02x ", p[i]);
    printf("\n");

    //shift operation
    number = number << 4;
    p = (unsigned char *)&number;
    for (i = 0; i < sizeof number; ++i)
        printf("%02x ", p[i]);
    printf("\n");
    return 0;
}
The system on which it ran is little endian and produced the following output:
00 f0 ff 01 00 00 00 00
00 00 ff 1f 00 00 00 00
Can somebody provide some reference to the detailed working of the shift operators?
I think you've answered your own question. The machine is little endian, which means the bytes are stored in memory with the least significant byte to the left. So your memory represents:
00 f0 ff 01 00 00 00 00 => 0x0000000001fff000
00 00 ff 1f 00 00 00 00 => 0x000000001fff0000
As you can see, the second is the same as the first value, shifted left by 4 bits.
Everything is right:
(1 * (256^3)) + (0xff * (256^2)) + (0xf0 * 256) = 33 550 336
(0x1f * (256^3)) + (0xff * (256^2)) = 536 805 376
33 550 336 * (2^4) = 536 805 376
Shifting left by 4 bits is the same as multiplying by 2^4.
I think your printf output confuses you. Here are the values:
33550336 = 0x01FFF000
33550336 << 4 = 0x1FFF0000
Can you read your output now?
It doesn't shift the memory, but the bits. So you have the number:
00 00 00 00 01 FF F0 00
After shifting this number 4 bits (one hexadecimal digit) to the left you have:
00 00 00 00 1F FF 00 00
Which is exactly the output you get, when transformed to little endian.
Your loop prints the bytes in the order they are stored in memory, so the output would be different on a big-endian machine. If you want to print the value in hex, just use the right format specifier: "%016" PRIx64 from <inttypes.h> (or "%016llx" where unsigned long long is 64 bits). Then you'll see what you expect:
0000000001fff000
000000001fff0000
The second value is left-shifted by 4.
I want these two print functions to do the same thing:
unsigned int Arraye[] = {0xffff,0xefef,65,66,67,68,69,0};
char Arrage[] = {0xffff,0xefef,65,66,67,68,69,0};
printf("%s", (char*)(2+ Arraye));
printf("%s", (char*)(2+ Arrage));
where Arraye is an array of unsigned int. Normally I would change the type, but the problem is that most of the array holds numbers; only this particular section should be printed as ASCII. Currently the unsigned array prints as "A" and the char array prints the desired "ABCDE".
This is how the unsigned int version will be arranged in memory, assuming 32-bit big endian integers.
00 00 ff ff 00 00 ef ef 00 00 00 41 00 00 00 42
00 00 00 43 00 00 00 44 00 00 00 45 00 00 00 00
This is how the char version will be arranged in memory, assuming 8-bit characters. Note that 0xffff does not fit in a char.
ff ef 41 42 43 44 45 00
So you can see, casting is not enough. You'll need to actually convert the data.
If you know that your system uses 32-bit wchar_t, you can use the l length modifier for printf.
printf("%ls", 2 + Arraye);
This is NOT portable. The alternative is to copy the unsigned int array into a char array by hand, something like this:
void print_istr(unsigned int const *s)
{
    unsigned int const *p;
    char *s2, *p2;

    for (p = s; *p; p++);                 /* find the terminating 0 */
    s2 = malloc(p - s + 1);               /* one char per element, plus the terminator */
    if (s2 == NULL)
        return;
    for (p = s, p2 = s2; (*p2 = *p); p2++, p++);  /* narrow each element to a char */
    fputs(s2, stdout);
    free(s2);
}
As Dietrich said, a simple cast will not do, but you don't need a complicated conversion either. Simply loop over your array:
unsigned int Arraye[] = {0xffff,0xefef,65,66,67,68,69,0};
char Arrage[] = {0xffff,0xefef,65,66,67,68,69,0};
unsigned int *p;

for (p = Arraye + 2; *p; p++)
    printf("%c", (char)*p);
printf("%s", (char*)(2 + Arrage));