I want to know if this is possible. For example, we have

long Lvalue = 0xFF00f41a;

and also int *p;. Can we point at the last 2 bytes of Lvalue, something like

p = &Lvalue << 16;

so that p points at the first 16 bits? That is:

*p --> 0xf41a;
*(p+1) --> 0xFF00;

and then, after

*p = 0xa011;

we would have

long Lvalue --> 0xFF00a011

What I actually need is bit operations: I have a 32-bit value, but I can only send 16 bits at a time, and when 16 bits change I have to update either the first or the last 16 bits of the 32-bit value.
If you just want 16 bits of the larger 32-bit type, use a bit mask for the task. For example, to get the lower 16 bits of value:
long value = 0xFF00f41a;
long lower = value & 0xFFFF;
To change the lower 16 bits of value, use bit operations:
lower = <something new>;
value = (value & 0xFFFF0000) | lower;
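Putting the two snippets together, a minimal sketch (set_low16 is a hypothetical helper name; unsigned long is used here so the masking is well defined, and long is assumed to be at least 32 bits, as in the question):

#include <stdio.h>

/* hypothetical helper: replace the low 16 bits of a 32-bit value */
unsigned long set_low16(unsigned long value, unsigned long low)
{
    return (value & 0xFFFF0000UL) | (low & 0xFFFFUL);
}

int main(void)
{
    unsigned long v = 0xFF00f41aUL;
    printf("%08lX\n", set_low16(v, 0xa011)); /* prints FF00A011 */
    return 0;
}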
Don't use a pointer to access part of the 32-bit value; it creates undefined behavior when you dereference it.
The following short example will work if int is aligned on 4 bytes (which seems to be guaranteed by gcc, see Casting an int pointer to a char ptr and vice versa):
#include <stdio.h>

int main() {
    int v = 0xcafe;
    int *ip = &v;
    char *cp = (char *) ip;

    printf("%hhX\n", *cp);       // FE on a little-endian box
    printf("%hhX\n", *(cp + 1)); // CA
    *cp = 0xbe;                  // overwrite the low byte
    *(cp + 1) = 0xba;            // overwrite the next byte
    printf("%X\n", *ip);         // BABE on a little-endian box
    return 0;
}
You can guarantee the alignment of int thus:

int main() {
    __attribute__ ((aligned (4))) int v = 0xcafe;
    /* ... as above ... */
}
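If you really need to address the two halves through an lvalue, a union is another common approach; a minimal sketch (implementation-specific: which half is "low" depends on endianness, and fixed-width 32/16-bit types are assumed):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

union split32 {
    uint32_t whole;
    uint16_t half[2]; /* half[0] is the low half on a little-endian machine */
};

int main(void) {
    union split32 s = { .whole = 0xFF00f41a };

    printf("%04X %04X\n", (unsigned)s.half[0], (unsigned)s.half[1]); /* F41A FF00 on little-endian */
    s.half[0] = 0xa011;                 /* replace the low 16 bits */
    printf("%08" PRIX32 "\n", s.whole); /* FF00A011 on little-endian */
    return 0;
}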
I want a function that extracts a few bits or bytes starting at a given bit position in a byte array. The byte order of the array is LSB first. The skeleton of the code is as follows:
typedef unsigned char uint8;
typedef unsigned short uint16;

uint16 ExtractBitsOrBytes(uint16 StartBit, uint8 *ByteArray, uint16 BitsWanted)
{
    uint16 Result;
    ...
}
How can I implement this logic in C? Any example or starting point is much appreciated.
Your problem is not fully specified: LSB refers to the order of bytes in memory for integral types spanning more than one byte. In your case, you must also specify how the bits are numbered within the array and how they are composed to form the extracted value.
It would make sense to number the bits from 0, such that bit n is the bit with value 1 << (n % 8) in the byte at offset n / 8. For consistency, the bit with the lowest number should become the least significant bit of the extracted value. This convention is consistent with LSB, as extracting 16 bits at offset 0 yields the value of the uint16 stored in the first 2 bytes of the array.
Here is a naive implementation with this convention:
typedef unsigned char uint8;
typedef unsigned short uint16;

uint16 ExtractBitsOrBytes(uint16 StartBit, const uint8 *ByteArray, uint16 BitsWanted) {
    // assuming BitsWanted <= 16
    uint16 result = 0;
    uint16 i;

    for (i = 0; i < BitsWanted; i++) {
        // isolate bit number StartBit (LSB-first within each byte)
        // and place it at position i of the result
        result |= (uint16)((ByteArray[StartBit >> 3] >> (StartBit & 7)) & 1) << i;
        StartBit++;
    }
    return result;
}
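A quick usage sketch, reusing the typedefs and function above (the array contents are made up for illustration):

#include <stdio.h>

int main(void) {
    const uint8 data[] = { 0x1a, 0xf4, 0x00, 0xFF };

    /* 16 bits from bit 0: low byte first per the convention -> F41A */
    printf("%04X\n", (unsigned)ExtractBitsOrBytes(0, data, 16));
    /* 4 bits from bit 12: the high nibble of the second byte -> F */
    printf("%X\n", (unsigned)ExtractBitsOrBytes(12, data, 4));
    return 0;
}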
Note, however, that the convention used for monochrome bitmaps on many systems is different: the leftmost pixel corresponds to the most significant bit of the first byte. This convention, inherited from choices made in the late 70s, mixes MSB and LSB orderings and made graphics software more complicated than it should have been.
I am having trouble writing an algorithm for a 1-byte / 8-bit checksum.
Obviously, with 8 bits, any sum over a decimal value of 255 means the most significant bits have to wrap around. I think I am doing it correctly.
Here is the code (buf and length were not declared in the original post, so example values are assumed below to make it compile)...
#include <stdio.h>

int main(void)
{
    unsigned char buf[] = { 0xFF, 0xFF, 0x02 }; // assumed example data (not in the original post)
    int length = sizeof buf;

    int check_sum = 0;       // checksum
    int lcheck_sum = 0;      // left (upper) checksum bits
    int rcheck_sum = 0;      // right (lower) checksum bits
    short int mask = 0x00FF; // 16 bit mask

    // Create the frame - sequence number (S) and checksum 1 byte
    int c;

    // calculate the checksum
    for (c = 0; c < length; c++)
    {
        check_sum = (int)buf[c] + check_sum;
        printf("\n Check Sum %d ", check_sum); // debug
    }
    printf("\nfinal Check Sum %d", check_sum); // debug

    // Take checksum and make it an 8 bit checksum
    if (check_sum > 255) // if greater than 8 bits then encode bits
    {
        lcheck_sum = check_sum;
        lcheck_sum >>= 8; // shift 8 bits to the right
        rcheck_sum = check_sum & mask;
        check_sum = lcheck_sum + rcheck_sum;
    }

    // Take the complement
    check_sum = ~check_sum;

    // Truncate - get rid of the upper bits and keep the 8 LSBs
    check_sum = check_sum & mask;
    printf("\nTruncated and complemented final Check Sum %d\n", check_sum);
    return 0;
}
Short answer: you are not doing it correctly, even if the algorithm would be as your code implies (which is unlikely).
Standard warning: do not use int if your variable might wrap (undefined behaviour) or if you want to right-shift potentially negative values (implementation-defined). OTOH, for unsigned types, wrapping and shifting behaviour is well defined by the standard.
Further note: use the stdint.h types if you need a specific bit size! The built-in standard types (including char) are not guaranteed to provide such.
Normally an 8 bit checksum of a byte buffer is calculated as follows:
#include <stdint.h>
#include <stddef.h> // for size_t

uint8_t chksum8(const unsigned char *buff, size_t len)
{
    unsigned int sum; // nothing gained in using smaller types!

    for (sum = 0; len != 0; len--)
        sum += *(buff++); // parentheses not required!
    return (uint8_t)sum;
}
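For example (a usage sketch with made-up bytes): 0x01 + 0xFF + 0x02 sums to 0x102, and the conversion to uint8_t keeps only the low byte, 0x02:

#include <stdio.h>

int main(void)
{
    const unsigned char buf[] = { 0x01, 0xFF, 0x02 };

    printf("%02X\n", (unsigned)chksum8(buf, sizeof buf)); /* prints 02 */
    return 0;
}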
It is not clear what you are doing with all the typecasts and shifts; since uint8_t is guaranteed to be exactly 8 bits wide, the upper bits of the sum are guaranteed to be "cut off" by the conversion.
Just compare this with your code and you should be able to see whether your code will work.
Also note that there is no single "the" checksum algorithm. I did not invert the result in my code, nor did I fold the upper and lower bytes as you did (the latter is pretty uncommon, as it does not add much more protection).
So you have to verify which algorithm to use. If it really requires folding the two bytes of a 16-bit result, change sum to uint16_t and fold the bytes as follows:
uint16_t sum;
...
// replace the return with:
while (sum > 0xFFU)
    sum = (sum & 0xFFU) + ((sum >> 8) & 0xFFU);
return sum;
This takes care of any overflow from adding the two bytes of sum (the loop could also be unrolled, as the overflow can only occur once).
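Putting it together, a sketch of the folded variant (chksum8_folded is a name chosen here; it assumes len is small enough that the 16-bit sum itself cannot wrap):

#include <stdint.h>
#include <stddef.h>

uint8_t chksum8_folded(const unsigned char *buff, size_t len)
{
    uint16_t sum = 0; // assumes len is small enough that this cannot wrap

    while (len != 0) {
        sum += *(buff++);
        len--;
    }
    // fold any carry out of the low byte back in
    while (sum > 0xFFU)
        sum = (sum & 0xFFU) + ((sum >> 8) & 0xFFU);
    return (uint8_t)sum;
}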
Sometimes CRC algorithms are called "checksums", but these are actually a very different beast (mathematically, they are the remainder of a binary polynomial division) and require much more processing (either at run-time, or to generate a lookup table). OTOH, CRCs provide much better detection of data corruption - but no protection against deliberate manipulation.
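For contrast, a minimal bitwise CRC-8 sketch (the polynomial 0x07 and zero initial value are picked here purely for illustration; real protocols fix their own parameters):

#include <stdint.h>
#include <stddef.h>

uint8_t crc8(const unsigned char *buff, size_t len)
{
    uint8_t crc = 0; // initial value assumed to be 0 for this sketch

    while (len--) {
        crc ^= *buff++;
        for (int i = 0; i < 8; i++) {
            // shift left; on carry-out, subtract (xor) the polynomial
            if (crc & 0x80)
                crc = (uint8_t)((crc << 1) ^ 0x07);
            else
                crc = (uint8_t)(crc << 1);
        }
    }
    return crc;
}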
Revised Question -
How do I calculate the mask bits irrespective of the size of the integer type?
I want to compute a mask for the first 4 bits when I don't know the size of int.
I have two options to set the 4 MSBs in the code -
if little endian -
then -

int t = 54342;
int k = t << 4;
int t = (k >> 4) | 0xF000;

else, big endian -
then -

int t = 54342;
int k = t >> 4;
int t = (k << 4) | 0x000F;

My question is: is there any better way to do this? How can I make the code independent of endianness? I could use a union to determine the endianness, but I want to keep my code simple. How can I do so?
Endianness determines how the bytes of a value are stored in memory. It does not affect the result of operating on a variable directly, without any pointer operations.
This means the program below will produce the same result irrespective of the platform's endianness.
#include <stdio.h>

int main(void)
{
    unsigned int num = 0xDEADBEEF;  // unsigned, so the constant fits without conversion
    unsigned int mask = 0xF0000000;

    printf("SET = %X\n", num | mask);
    return 0;
}
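As for the revised question itself: compute the shift amount from the type's actual width instead of hard-coding it; a minimal sketch using CHAR_BIT (unsigned arithmetic is assumed so the shift is well defined):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* mask covering the top 4 bits, whatever the width of unsigned int */
    unsigned int mask = 0xFu << (sizeof(unsigned int) * CHAR_BIT - 4);
    unsigned int num = 0xDEADBEEFu;

    printf("mask = %X, SET = %X\n", mask, num | mask);
    return 0;
}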
main() {
    char i[2];

    *i = 0;
    *(i + 1) = 1;
    printf("len = %d \n", sizeof(int *));
    printf("i[0] = %d \n", *(int *)i);
}

The answer is not 16; it is 256. I use Turbo C 2.0; in hex the value is 100.
This code depends on your system, specifically on the size of an int.
After initializing, your i array looks like this:

--------------
|0x00 | 0x01 |
--------------
Assuming an int is 32 bits on your system: when casting i to an int * and dereferencing it, four bytes are accessed (since an int is 32 bits, i.e. four bytes):

----------------------------
|0x00 | 0x01 | 0x?? | 0x?? |
----------------------------
So the last two bytes are out of bounds of your array, may hold any value, and you will observe undefined behavior (on my system it actually prints different values on each execution, like 1762656512, -375848704, ...).
Assuming an int is 16 bits on your system, it gets a little bit "better":
in this case, when casting i to an int * and dereferencing it, the two bytes are accessed as a 16-bit value. But the value you get then still depends on the endianness:

Little endian: *(int *)i = 0x0100 = 256
Big endian: *(int *)i = 0x0001 = 1

So, if you expect 256, you need to make sure you are on a little-endian 16-bit system ...
BTW: when printing the result of sizeof() with printf(), make sure to use the %zu format specifier.
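If the goal is to always get 256 from those two bytes, the portable route is to combine them arithmetically instead of through a pointer cast; a minimal sketch:

#include <stdio.h>

int main(void)
{
    unsigned char i[2] = { 0, 1 };

    /* low byte first: 0 + 1 * 256 = 256, independent of endianness */
    unsigned int value = (unsigned int)i[0] | ((unsigned int)i[1] << 8);
    printf("value = %u\n", value);
    return 0;
}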
I was asked an interview question: given a 6-byte input obtained from a big-endian machine, implement a function to convert/typecast it to 8 bytes, assuming we do not know the endianness of the machine running the function.
The point of the question seems to be testing my understanding of endianness, because I was asked whether I knew endianness just before this question.
I do not know how to answer it. E.g., do I need to pad the 6 bytes to 8 bytes first? And how? Here is my code. Is it correct?
bool isBigEndian() {
    int num = 1;
    char *b = (char *)(&num);
    return b ? false : true;
}

long long *convert(char *arr[]) { // size is 6
    long long *res = (long long *)malloc(long long); // ...check res is NULL...
    if (isBigEnian()) {
        for (int i = 0; i < 6; i++)
            memset(res, i + 2, arr[i]);
    } else {
        for (int i = 0; i < 6; i++)
            memset(res, i + 2, arr[6 - 1 - i]);
    }
    return res; // assume caller will free res.
}
Update: to answer those saying my question is not clear, I just found a link, Convert Bytes to Int / uint in C, with a similar question. Based on my understanding of that, the endianness of the host does matter. Suppose the input is char array[] = {01,02,03,04,05,06}. Then if the host is little endian, the output is stored as 00,00,06,05,04,03,02,01; if big endian, the output is stored as 00,00,01,02,03,04,05,06. In both cases the 0000 is padded at the beginning.
I think I kind of understand now: on the other machine, suppose there is a number xyz = 010203040506; because it is big endian and 01 is the MSB, it is stored as char array = {01,02,03,04,05,06}, where 01 has the lowest address. Then on this machine, if the machine is also big endian, it should be stored as {00,00,01,02,03,04,05,06}, where 01 is still the MSB, so that it is cast to the same number int64 xyz2 = 0000010203040506. But if the machine is little endian, it should be stored as {00,00,06,05,04,03,02,01}, where 01, the MSB, has the highest address, in order for int64 xyz2 = 0000010203040506.
Please let me know if my understanding is incorrect. And can anybody tell me why 0000 is always padded at the beginning, no matter the endianness? Shouldn't it be padded at the end if the machine is little endian, since 00 is the most significant byte?
Before moving on, you should have asked for clarification.
What exactly does "converting" mean here? Padding each char with 0's? Prefixing each char with 0's?
I will assume that each char should be prefixed with 0's. This is a possible solution:
#include <stdint.h>
#include <limits.h>

#define DATA_WIDTH 6

uint64_t convert(unsigned char data[]) {
    uint64_t res;
    int i;

    res = 0;
    for (i = 0; i < DATA_WIDTH; i++) {
        res = (res << CHAR_BIT) | data[i];
    }
    return res;
}
To append 0's to each char, we could, instead, use this inside the for:
res = (res << CHAR_BIT) | (data[i] << 2);
In an interview, you should always note the limitations of your solution. This solution assumes that the implementation provides the uint64_t type (it is not required by the C standard).
The fact that the input is big endian is important because it tells you that data[0] corresponds to the most significant byte, and it must remain so in your result. This solution works no matter what the target machine's endianness is.
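A quick usage sketch (reusing convert and DATA_WIDTH from above): the input {01,02,03,04,05,06} should print 0000010203040506 on any host:

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    unsigned char data[DATA_WIDTH] = { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06 };

    printf("%016" PRIx64 "\n", convert(data)); /* prints 0000010203040506 */
    return 0;
}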
I don't understand why you think malloc is necessary. Why not just something like this?
long long convert(unsigned char data[])
{
    long long res = 0;

    for (int i = 0; i < 6; ++i)
        res = (res << 8) + data[i];
    return res;
}