Combine two 32-bit integers efficiently? - C

I have a function which expects a pointer to 8 bytes of unsigned char.
void f1(unsigned char *octets)
{
    unsigned char i;
    for (i = 0; i < 8; i++)
        printf("(%d)", octets[i]);
}
This is how I use it when I have one 64-bit integer:
unsigned long long ull = 1;
f1((unsigned char *) &ull);
(it uses the machine's native endianness.)
My question is: if instead of one 64-bit integer I have two 32-bit integers, is there a way to combine them efficiently as an input for this specific function?
unsigned long int low = 1;
unsigned long int high = 0;

Does a union work portably? If so, it's a good approach...
union {
    struct {
        unsigned char CharArray[8];
    } ub;
    struct {
        unsigned long int IntArray[2];
    } ul;
    unsigned long long ull;
} Foo;

You could just put them in an array:
unsigned long int lohi[2] = {1, 0};
f1((unsigned char *) lohi);
edit: Using your existing variables:
unsigned long int lohi[2] = {low, high};

Typecast, bit-shift, and do a bitwise OR:
unsigned long int low = 1;
unsigned long int high = 0;
unsigned long long ull = (unsigned long long) high << 32 | low;

You can use a combination of a union and a struct to keep your names and avoid arrays.
union {
    struct {
        unsigned long int low;
        unsigned long int high;
    };
    unsigned long long int ull;
} value;

value.low = 1;
value.high = 0;
Use low and high as you normally would, and use ull when calling f1. But note that, written this way, you assume little-endian byte ordering. (The anonymous struct member also requires C11 or a compiler extension.)
Also note that on Linux and other UNIXes in 64-bit mode (LP64), both long int and long long int are 64 bits (only int is 32 bits). As far as I know, only 64-bit Windows (LLP64) keeps long int at 32 bits.

Different way of looking at it:
void f1(unsigned char *octet)
{
    f2(octet, octet + 4);
}
void f2(unsigned char *quad1, unsigned char *quad2)
{
    unsigned char i;
    for (i = 0; i < 4; i++)
        printf("(%d)", quad1[i]);
    for (i = 0; i < 4; i++)
        printf("(%d)", quad2[i]);
}
Works better in C++ when both functions can have the same name.

Related

What exactly happens when we cast a number to a smaller size number in C?

Suppose I have something like this:
unsigned int x = (unsigned int)SomeLong;
What exactly happens if SomeLong doesn't fit in 4 bytes? What does the resulting value look like? How exactly does casting to a smaller-size number work in C?
It truncates the value: the high-order bits that don't fit in the smaller type are discarded. This is shown by this program, which prints the binary representation of the long, and then the binary representation of the long cast to a smaller int:
#include <stdio.h>
void Print8Byte(unsigned long Value) {
    for (unsigned char i = 0; i < 64; i++) {
        union {
            unsigned long Value;
            unsigned First:1;
        } Cast = {.Value = Value >> i};
        putchar('0' + Cast.First);
    }
    putchar('\n');
}
int main(int argc, char *argv[]) {
    unsigned long Num = 0x284884848; // Arbitrary value
    Print8Byte(Num);
    Print8Byte((unsigned int)Num);
}
Result:
0001001000010010000100010010000101000000000000000000000000000000
0001001000010010000100010010000100000000000000000000000000000000

How to store a string of numbers into an unsigned integer in C

Actually, I have to convert the three string command-line arguments into a bit field (three unsigned integers inside). This program is going to convert bits into a float. I first thought about using an array to store the three arguments, but I don't really know how to convert from an array to an unsigned int. Should I just use atoi to change each argument into an int and then directly into an unsigned int? It doesn't work on my computer. Got no idea.
Union32 getBits(char *sign, char *exp, char *frac)
{
    Union32 new;
    // this line is just to keep gcc happy
    // delete it when you have implemented the function
    //new.bits.sign = new.bits.exp = new.bits.frac = 0;
    new.bits.sign = *(unsigned int *)atoi(sign);
    new.bits.exp = *(unsigned int *)atoi(exp);
    new.bits.frac = *(unsigned int *)atoi(frac);
    //int i;
    //int balah[8] = {};
    //for (i = 0; i < 8; i++) {
    //    balah[i] = sign[i];
    //}
    //int j;
    //int bili[23] = {};
    //for (j = 0; j < 23; j++) {
    //    bili[j] = sign[j];
    //}
    //convert array into unsigned integer?
    printf("%u %u %u\n", new.bits.sign, new.bits.exp, new.bits.frac);
    // convert char *sign into a single bit in new.bits
    // convert char *exp into an 8-bit value in new.bits
    // convert char *frac into a 23-bit value in new.bits
    return new;
}
The following are the details of the typedefs and unions needed in this program, along with its four function prototypes.
typedef uint32_t Word;
struct _float {
    // define bit-fields for sign, exp and frac
    // obviously they need to be larger than 1 bit each
    // and may need to be defined in a different order
    unsigned int sign:1, exp:8, frac:23;
};
typedef struct _float Float32;
union _bits32 {
    float fval;   // interpret the bits as a float
    Word xval;    // interpret as a single 32-bit word
    Float32 bits; // manipulate individual bits
};
typedef union _bits32 Union32;
void checkArgs(int, char **);
Union32 getBits(char *, char *, char *);
char *showBits(Word, char *);
int justBits(char *, int);
getBits asks us to convert bits into float,
and showBits asks us to convert float into bits.
Assuming the correct typedefs in your code, the pointer casts are the bug: atoi already returns a plain int value, so just convert it directly instead of dereferencing it as an address:
    new.bits.sign = (unsigned int)atoi(sign);
    new.bits.exp = (unsigned int)atoi(exp);
    new.bits.frac = (unsigned int)atoi(frac);

C bit manipulation DES permute

I was having trouble implementing the DES algorithm in Python, so I thought I'd switch to C. But I've run into an issue that I haven't been able to fix in hours; hopefully you can help me. Here's the source:
int PI[64] = {58,50,42,34,26,18,10,2,
              60,52,44,36,28,20,12,4,
              62,54,46,38,30,22,14,6,
              64,56,48,40,32,24,16,8,
              57,49,41,33,25,17,9,1,
              59,51,43,35,27,19,11,3,
              61,53,45,37,29,21,13,5,
              63,55,47,39,31,23,15,7};
unsigned long getBit(unsigned long mot, unsigned long position)
{
    unsigned long temp = mot >> position;
    return temp & 0x1;
}
void setBit(unsigned long *mot, int position, unsigned long value)
{
    unsigned long code = *mot;
    code ^= (-value ^ code) & (1 << position);
    *mot = code;
}
void permute(unsigned long *mot, int *ordre, int taille)
{
    unsigned long res;
    int i = 0;
    unsigned long bit;
    for (i = 0; i < taille; i++) {
        setBit(&res, i, getBit(*mot, ordre[i] - 1));
    }
    *mot = res;
}
int main(int argc, char *argv[])
{
    unsigned long bloc = 0x0123456789ABCDEF;
    permute(&bloc, PI, 64);
    printf(" end %lx\n", bloc);
    return 1;
}
I did this permutation by hand and with my Python program, and the result should be 0xcc00ccfff0aaf0aa, but I get 0xffffffffcc00ccff (which is, somehow, half correct and half broken). What is going on? How do I fix this?
I added UL at the end of my hex constant, and I used uint64_t instead of unsigned long int. Before those changes, -value came out as either all ones or 0, but with UL and uint64_t I'm getting the correct result, which probably means, as you suggested, that my unsigned longs were not 64 bits wide. Thanks!

How do I convert an int into 5-bit 2s complement in C?

I have -9 as an integer, how would I convert this into a 5 bit 2s complement integer in C? Essentially getting 10111?
What my current code looks like:
char src2[3] = "-9";
int int_src2 = atoi(src2);
int_src2 = int_src2 & 31;
My problem: int_src2 is 8388112 when set, and 16 after the AND operation, whereas I wanted 10111.
You could use a combination of bitfields and unions:
#include <stdio.h>
typedef struct
{
    unsigned int bit1:1;
    unsigned int bit2:1;
    unsigned int bit3:1;
    unsigned int bit4:1;
    unsigned int bit5:1;
} five;
typedef union {five f; int i;} u;
int main() {
    u test;
    test.i = -9;
    printf("%u%u%u%u%u\n", test.f.bit5, test.f.bit4, test.f.bit3, test.f.bit2, test.f.bit1);
    return 0;
}
live example: https://ideone.com/ow0Bl6

Problem with big numbers in C

Why does code like this produce such a huge result when I give it the number 4293974227 (or higher)?
int main(int argc, char *argv[])
{
    unsigned long long int i;
    unsigned long long int z = atoi(argv[1]);
    unsigned long long int tmp1 = z;
    unsigned long long int *numbers = malloc(sizeof(unsigned long long int) * 1000);
    for (i = 0; tmp1 <= tmp1 + 1000; i++, tmp1++) {
        numbers[i] = tmp1;
        printf("\n%llu - %llu", numbers[i], tmp1);
    }
}
The result should start with the provided number, but it starts like this:
18446744073708558547 - 18446744073708558547
18446744073708558548 - 18446744073708558548
18446744073708558549 - 18446744073708558549
18446744073708558550 - 18446744073708558550
18446744073708558551 - 18446744073708558551
ecc...
What's this crap??
Thanks!
atoi() returns int. If you need larger numbers, try strtol(), strtoll(), or their relatives.
atoi() returns (int), and can't deal with (long long). Try atoll(), or failing that atol() (the former is preferred).
You are printing signed integers as unsigned.