I have to determine the output of this program for w = 33. I do not know how to do it. Does anyone have an idea how to solve this without writing out the binary representation of each number?
void notChicken(int w)
{
    unsigned int v1 = 0x12345678;
    unsigned int v2 = 0x87654785;
    unsigned int v3 = 0xffffffff;
    unsigned int tmp;

    tmp = (v1 >> 3) | (v2 << 3);
    tmp &= v3 & ~(v3 << (w >> 1));
    printf("%8x\n", tmp);
}
Thanks
Although not a good idea, let's try to break down your operation.
You have given w = 33
The last part:
v3 & ~(v3 << (w >> 1)) is going to evaluate as v3 & ~(v3 << 16), since w >> 1 = 33 >> 1 = 16.
v3 << 16 is 0xffff0000, and ~ of that is 0x0000ffff.
Since v3 is all ones, ANDing with it changes nothing, so the mask stays 0x0000ffff. This will mask off the upper 16 bits of the previous computation.
Now (v1 >> 3) | (v2 << 3);
We care only about the lower 16 bits.
>> 3 is dividing by 8 and << 3 is multiplying by 8.
So the result of the first part will be
0x02468ACF | 0x3B2A3C28
Keeping only the lower 16 bits
0x8ACF | 0x3C28
Finally, the OR. You don't need the full binary representation: since each hex digit maps to exactly 4 bits, you can OR the two values one hex digit at a time. I can help with the last hex digit: F | 8 = F.
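If you want to double-check the hand computation, here is a small self-contained version of the function with extra prints for the intermediate values (the prints and main are my additions, not part of the original exercise):

#include <stdio.h>

void notChicken(int w)
{
    unsigned int v1 = 0x12345678;
    unsigned int v2 = 0x87654785;
    unsigned int v3 = 0xffffffff;
    unsigned int tmp;

    tmp = (v1 >> 3) | (v2 << 3);
    printf("shifts: %08x | %08x\n", v1 >> 3, v2 << 3);  /* 02468acf | 3b2a3c28 */
    printf("mask:   %08x\n", v3 & ~(v3 << (w >> 1)));   /* 0000ffff for w = 33 */

    tmp &= v3 & ~(v3 << (w >> 1));
    printf("result: %08x\n", tmp);
}

int main(void)
{
    notChicken(33);
    return 0;
}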
I took this example from the following page. I am trying to convert a long into a 4-byte array. This is the original code from the page.
long n;
byte buf[4];
buf[0] = (byte) n;
buf[1] = (byte) n >> 8;
buf[2] = (byte) n >> 16;
buf[3] = (byte) n >> 24;
long value = (unsigned long)(buf[4] << 24) | (buf[3] << 16) | (buf[2] << 8) | buf[1];
I modified the code replacing
long value = (unsigned long)(buf[4] << 24) | (buf[3] << 16) | (buf[2] << 8) | buf[1];
for
long value = (unsigned long)(buf[3] << 24) | (buf[2] << 16) | (buf[1] << 8) | buf[0];
I tried the original code where n is 15000, and value returned 0. After modifying the line in question (I think there was an error in the indices in the original post?), value returns 152.
The objective is to have value return the same number as n. Also, n can be negative, so value should also return the same negative number.
Not sure what I am doing wrong. Thanks!
You were correct that the indices were wrong. A 4-byte array indexes from 0 to 3, not 1 to 4.
The rest of the issues were because you were using the signed 'long' type. Doing bit manipulation on signed types is not well defined, since it assumes something about how signed integers are stored (two's complement on most systems, although I don't think any standard requires it).
e.g. see here
You're then assigning between signed 'longs' and unsigned 'bytes'.
Someone else has posted an answer (possibly abusing casts) that I'm sure works. But without any explanation I feel it doesn't help much.
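To make the fix concrete, here is a sketch of the full round trip in plain C, using uint8_t/int32_t from <stdint.h> in place of Arduino's byte/long (my substitution). Note one more bug in the original decomposition: the cast binds tighter than the shift, so (byte) n >> 8 truncates n to a byte before shifting and always stores 0; the shift has to be parenthesized.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int32_t n = -15000;
    uint8_t buf[4];

    /* shift first, then truncate to a byte; indices run 0..3 */
    buf[0] = (uint8_t)((uint32_t)n);
    buf[1] = (uint8_t)((uint32_t)n >> 8);
    buf[2] = (uint8_t)((uint32_t)n >> 16);
    buf[3] = (uint8_t)((uint32_t)n >> 24);

    /* reassemble in unsigned arithmetic to avoid shifting signed values */
    uint32_t u = ((uint32_t)buf[3] << 24) | ((uint32_t)buf[2] << 16)
               | ((uint32_t)buf[1] << 8)  |  (uint32_t)buf[0];

    /* converting back to signed recovers negative values on two's
       complement systems (strictly, implementation-defined) */
    int32_t value = (int32_t)u;

    printf("%ld\n", (long)value); /* prints -15000 */
    return 0;
}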
I have a variable in C with a binary value of '10010100'
and I have another variable whose value is '1111'.
What I want to achieve is to keep bits 7,6,1,0 intact and insert the second variable in [5..2].
I have been told I could use a mask. I have done some research and I cannot find the right answer.
If I simply shift the bits, I lose part of the content.
Use a mask (bitwise AND) to set the bits that are to be replaced to zero.
Then use bitwise OR to put the new value into the zeroed area.
int a = 0x94; /* 10010100 */
int b = 0xf; /* 1111 */
/* do masking here */
/* | put the new value here */
/* | | */
/* v v */
a = (a & ~(0xf << 2)) | (b << 2);
The general solution to this problem is to clear the bits in the destination range with the & operator and an appropriate mask, and to set the bits from the second variable, shifted appropriately and masked too if it cannot be guaranteed that no other bits are set:
v1 = (v1 & ~(0xF << 2)) | ((v2 & 0xF) << 2);
If you know that v2 has all bits set in the destination range, you can simplify as:
v1 = v1 | (0xF << 2);
Note however that (v1 & ~(0xF << 2)) uses int arithmetic: the complemented mask is sign-extended to the width of v1 if its type is larger than int, which is what you want here. But if the destination range includes the sign bit of type int, shifting a 1 bit into that position is undefined behavior. Writing the mask as an explicit constant would not work either: 0xF0000000 has type unsigned int, so its complement zero-extends to the type of v1, which masks off the high-order bits of v1 if its type is larger than int. For example:
/* replacing bits 31,30,29,28 */
long long v1 = 0x987654321;
int v2 = 0xF;
v1 = (v1 & ~(0xF << 28)) | ((v2 & 0xF) << 28);
// v1 is now 0x9F7654321 but really undefined behavior
v1 = (v1 & ~0xF0000000) | ((v2 & 0xF) << 28);
// v1 is now 0xF7654321 instead of 0x9F7654321: the 0x9 nibble was masked off
A similar issue occurs if v2 has a type smaller than that of v1 and must be shifted beyond its own width.
A safer approach would use constants with type suffixes matching the type of v1, but this would still not work if a bit has to be shifted into the sign bit of type long long:
v1 = (v1 & ~(0xFLL << 60)) | ((v2 & 0xFLL) << 60); // undefined behavior
The general solution is to use unsigned long long constants:
v1 = (v1 & ~(0xFULL << 28)) | ((v2 & 0xFULL) << 28);
The behavior on obsolete non-two's-complement architectures is non-trivial and will be ignored here.
Here is how I would break it down into small pieces.
Note that the 0b prefix is a non-standard extension, but commonly found on several compilers.
#include <stdio.h>
#include <stdint.h>
int main(void) {
    uint8_t a = 0b10010100;
    uint8_t b = 0b00001111;
    uint8_t keep7610 = a & 0b11000011;     // Keep bits 7, 6, 1, and 0. Set the others to 0
    uint8_t insertb = keep7610 | (b << 2); // Add in variable b at positions 5-2
    printf("Final Answer: 0x%02X\n", insertb);
    return 0;
}
Output
Final Answer: 0xBC
(0xBC translates as 0b10111100, which is what I get when I follow your instructions manually)
Use functions!
unsigned replaceBits(unsigned val, int startBit, int nBits, unsigned newVal)
{
    // build a mask with nBits ones starting at startBit
    unsigned mask = ((1UL << nBits) - 1) << startBit;
    // clear the destination bits
    val &= ~mask;
    // set the new value (ANDing with the mask makes sure that we will not change any other bits)
    val |= (newVal << startBit) & mask;
    return val;
}
You can also add some checks to make sure that parameters have valid values.
Example:
https://godbolt.org/z/zofKqofx1
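In case the link goes stale, a minimal usage sketch (it assumes the replaceBits definition above; the values match the earlier example, so it should print 0xBC):

#include <stdio.h>

int main(void)
{
    unsigned a = 0x94; /* 10010100 */
    unsigned b = 0xf;  /* 1111 */

    /* replace 4 bits starting at bit 2 with b */
    printf("0x%02X\n", replaceBits(a, 2, 4, b)); /* prints 0xBC */
    return 0;
}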
[Part of a HW question]
Assume 2's complement, 32bit word-length. Only signed int and constants 0 through 0xFF allowed. I've been asked to implement a logical right shift by "n" bits (0 <= n <= 31) using ONLY the operators:
! ~ & ^ | + << >>
I figured I could store and clear the sign bit, perform the shift, and replace the stored sign bit in its new location.
I would like to implement the operation "31 - n" (w/out using the "-" operator) to find the appropriate location for the stored sign bit post shift.
If n were positive, I could use the expression: "31 + (~n + 1)", but I don't believe this will work in the case when n = 0.
Here's what I have so far:
int logicalShift(int x, int n) {
    /* Store & clear sign bit, perform shift, and replace stored sign bit
       in new location */
    int bit = (x >> 31) & 1;        // Store most significant bit
    x &= ~(1 << 31);                // Clear most significant bit
    x = x >> n;                     // Shift by n
    x &= ~((~bit) << (31 - n));     // Replace MSbit in new location
    return x;
}
Any help and/or hints are appreciated.
[EDIT: Solved]
Thanks to everyone for the help. ~n + 1 works to negate n in this situation, including for the case n = 0 (where it returns 0 as desired). Functional code is below (by no means the most elegant solution). Utility operations borrowed from: How do you set, clear, and toggle a single bit?
int logicalShift(int x, int n) {
    /* Store & clear sign bit, perform shift, and replace stored sign bit
       in new location */
    int bit = (x >> 31) & 1;                        // Store most significant bit
    x &= ~(1 << 31);                                // Clear most significant bit
    x = x >> n;                                     // Shift by n
    x ^= ((~bit + 1) ^ x) & (1 << (31 + (~n + 1))); // Replace MSbit in new location
    return x;
}
A simple solution is
int logicalShift(int x, int n) {
    return (x >> n) ^ (((x & 0x80000000) >> n) << 1);
}
Sadly, using the constant 0x80000000 is forbidden. We could calculate it as 1 << 31 (ignoring undefined behavior in C) or, to save an instruction, calculate 31 - n as n ^ 31 (these are equal for 0 <= n <= 31, since XOR with 31 just inverts the low five bits) and then use the following somewhat more contrived method:
int logicalShift(int x, int n) {
    int b = 1 << (n ^ 31);
    return b ^ ((x >> n) + b);
}
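Not part of either answer, but a quick way to convince yourself: a test driver comparing both versions against the obvious unsigned reference. It assumes 32-bit int, arithmetic right shift, and two's complement wraparound, just as the thread already does; 0x80000000 is written as 1 << 31 so the inner & stays in signed arithmetic.

#include <stdio.h>
#include <limits.h>

static int logicalShift1(int x, int n) {
    /* with the unsigned constant 0x80000000 the masked value would
       shift logically; 1 << 31 keeps it signed so >> sign-extends */
    return (x >> n) ^ (((x & (1 << 31)) >> n) << 1);
}

static int logicalShift2(int x, int n) {
    int b = 1 << (n ^ 31);     /* bit at position 31 - n */
    return b ^ ((x >> n) + b); /* the add clears the sign copies by carry */
}

int main(void)
{
    int tests[] = { 0, 1, -1, 0x12345678, INT_MIN, -15000 };
    for (int t = 0; t < 6; t++) {
        for (int n = 0; n < 32; n++) {
            unsigned expect = (unsigned)tests[t] >> n;
            if ((unsigned)logicalShift1(tests[t], n) != expect
             || (unsigned)logicalShift2(tests[t], n) != expect) {
                printf("FAIL x=%08x n=%d\n", (unsigned)tests[t], n);
                return 1;
            }
        }
    }
    printf("all tests passed\n");
    return 0;
}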
I think I confused myself with endianness and bit-shifting, please help.
I have 4 8-bit ints which I want to convert to a 32-bit int. This is what I am doing:
uint h;
t_uint8 ff[4] = {1,2,3,4};

if (BIG_ENDIAN) {
    h = ((int)ff[0] << 24) | ((int)ff[1] << 16) | ((int)ff[2] << 8) | ((int)ff[3]);
}
else {
    h = ((int)ff[0] >> 24) | ((int)ff[1] >> 16) | ((int)ff[2] >> 8) | ((int)ff[3]);
}
However, this seems to produce a wrong result. With a little experimentation I realised that it should be the other way round: in the case of big endian I am supposed to shift bits to the right, and otherwise to the left. However, I don't understand WHY.
This is how I understand it. Big endian means most significant byte first (first means leftmost, right? perhaps this is where I am wrong). So, converting an 8-bit int to a 32-bit int would prepend 24 zeros to my existing 8 bits. So, to make it the 1st byte I need to shift its bits 24 to the left.
Please point out where I am wrong.
You always have to shift the 8-bit-values left. But in the little-endian case, you have to change the order of indices, so that the fourth byte goes into the most-significant position, and the first byte into the least-significant.
if (BIG_ENDIAN) {
    h = ((int)ff[0] << 24) | ((int)ff[1] << 16) | ((int)ff[2] << 8) | ((int)ff[3]);
}
else {
    h = ((int)ff[3] << 24) | ((int)ff[2] << 16) | ((int)ff[1] << 8) | ((int)ff[0]);
}
I have an array as follows,
unsigned char A[16]
I am using this array to represent a 128-bit hardware register. Now I want to implement a linear feedback shift register (LFSR, Fibonacci implementation) using this long register. The polynomials (or taps) which connect to the feedback xnor gate of this LFSR are [128, 29, 27, 2, 1].
The implementation of a 16-bit LFSR (taps at [16, 14, 13, 11]) can be obtained from Wikipedia as the following.
unsigned short lfsr = 0xACE1u;
unsigned bit;

unsigned rand()
{
    bit = ((lfsr >> 0) ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1;
    return lfsr = (lfsr >> 1) | (bit << 15);
}
In my case, however, I need to shift bits from one byte element to another, e.g. the MSB of A[0] needs to be shifted into the LSB of A[1]. What is the minimum code to do this shift?
Thank you!
To calculate the bit to shift in you don't need to shift the whole array every time since you are only interested in one bit (note the & 1 at the end of the bit = line from Wikipedia).
The right shift amounts are:
128 - 128 = 0 => byte 0 bit 0
128 - 29 = 99 => byte 12 bit 3
128 - 27 = 101 => byte 12 bit 5
128 - 2 = 126 => byte 15 bit 6
128 - 1 = 127 => byte 15 bit 7
So,
bit = ((A[0] >> 0)
     ^ (A[12] >> 3)
     ^ (A[12] >> 5)
     ^ (A[15] >> 6)
     ^ (A[15] >> 7)) & 1;
Now you do need to actually shift in the bit:
A[0] = (A[0] >> 1) | (A[1] << 7);
A[1] = (A[1] >> 1) | (A[2] << 7);
// and so on, until
A[14] = (A[14] >> 1) | (A[15] << 7);
A[15] = (A[15] >> 1) | (bit << 7);
You can make this a bit more efficient by using uint32_t or uint64_t instead of unsigned chars (depending on your processor word size), but the principle is the same.
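Putting the pieces together, one full step of the 128-bit LFSR might look like this (lfsr128_step is my name for it, not anything standard; it follows the XOR feedback of the Wikipedia snippet — for the XNOR variant mentioned in the question, invert the feedback bit):

#include <stdint.h>

/* One step: returns the bit shifted out (A[0] bit 0).
   Bit k of the 128-bit register lives in A[k / 8], bit k % 8. */
unsigned lfsr128_step(uint8_t A[16])
{
    /* taps [128, 29, 27, 2, 1] => bit positions 0, 99, 101, 126, 127 */
    unsigned bit = ((A[0] >> 0)
                  ^ (A[12] >> 3)
                  ^ (A[12] >> 5)
                  ^ (A[15] >> 6)
                  ^ (A[15] >> 7)) & 1u;
    unsigned out = A[0] & 1u;

    /* shift the whole register right by one bit, then insert the
       feedback bit at the top */
    for (int i = 0; i < 15; i++)
        A[i] = (uint8_t)((A[i] >> 1) | (A[i + 1] << 7));
    A[15] = (uint8_t)((A[15] >> 1) | (bit << 7));

    return out;
}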