Sign extending to 32 bits, starting with n bits - C

I am new to C and getting some practice with bit manipulation.
Suppose I have an n-bit two's complement number such that n > 0 and n < 31. If I know n in advance, how can I sign extend it to 32 bits?
If n were 16 bits, I could write:
int32_t extendMe(int16_t n) {
    return (int32_t) n;
}
assuming I have the data definitions.
Suppose I have an n-bit value that I want to sign extend to 32 bits; how can I accomplish this?
Thank you.

If this really is about interpreting arbitrary bit patterns as numbers represented in n bits using two's complement, here's some sloppy example code doing that:
#include <stdio.h>
#include <inttypes.h>
// this assumes the number is in the least significant `bits`, with
// the most significant of these being the sign bit.
int32_t fromTwosComplement(uint32_t pattern, unsigned int bits)
{
    // read sign bit
    int negative = !!(pattern & (1U << (bits-1)));

    // bit mask for all bits *except* the sign bit
    uint32_t mask = (1U << (bits-1)) - 1;

    // extract value without sign
    uint32_t val = pattern & mask;

    if (negative)
    {
        // if negative, apply two's complement
        val ^= mask;
        ++val;
        return -(int32_t)val;
    }
    else
    {
        return val;
    }
}
int main(void)
{
    printf("%" PRId32 "\n", fromTwosComplement(0x1f, 5)); // output -1
    printf("%" PRId32 "\n", fromTwosComplement(0x01, 5)); // output 1
}

An n-bit 2's complement number is negative if bit n - 1 is 1. In that case you want to fill all the bits from n to 31 with 1's. If it's 0, for completeness, you might also want to fill the bits from n to 31 with 0. So you need a mask that you can use with bit operations to accomplish the above. This is easy to make. Assuming your n-bit 2's complement number is held in a uint32_t:
int32_t signExtend(uint32_t number, int n)
{
    uint32_t ret;
    uint32_t mask = 0xffffffff << n;

    if ((number & (1U << (n - 1))) != 0)
    {
        // number is negative
        ret = number | mask;
    }
    else
    {
        // number is positive
        ret = number & ~mask;
    }
    return (int32_t) ret;
}
Completely untested, and the conversion on the last line is implementation-defined rather than fully portable, but it should work on most implementations.
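For a quick sanity check of the above (the test values here are arbitrary, and the snippet assumes signExtend from above is in scope: 0x1f is all five bits set, i.e. -1, and 0x0f has the sign bit clear, i.e. 15):
#include <stdio.h>
#include <inttypes.h>

// assumes signExtend() from above is visible here
int main(void)
{
    printf("%" PRId32 "\n", signExtend(0x1f, 5));   // all 5 bits set         -> -1
    printf("%" PRId32 "\n", signExtend(0x0f, 5));   // sign bit (bit 4) clear -> 15
    printf("%" PRId32 "\n", signExtend(0x1f4, 10)); // 500 fits in 10 bits    -> 500
    return 0;
}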

Convert signed int of variable bit size

I have a number of bits (the number of bits can change) in an unsigned int (uint32_t). For example (12 bits in the example):
uint32_t a = 0xF9C;
The bits represent a signed int of that length.
In this case the number in decimal should be -100.
I want to store the variable in a signed variable and get its actual value.
If I just use:
int32_t b = (int32_t)a;
it will just be the value 3996, since it gets cast to 0x00000F9C, but it actually needs to be 0xFFFFFF9C.
I know one way to do it:
union test
{
    signed temp :12;
};
union test x;
x.temp = a;
int32_t result = (int32_t) x.temp;
Now I get the correct value -100.
But is there a better way to do it?
My solution is not very flexible; as I mentioned, the number of bits can vary (anything between 1 and 64 bits).
But is there a better way to do it?
Well, depends on what you mean by "better". The example below shows a more flexible way of doing it as the size of the bit field isn't fixed. If your use case requires different bit sizes, you could consider it a "better" way.
#include <stdio.h>

unsigned sign_extend(unsigned x, unsigned num_bits)
{
    unsigned f = ~((1 << (num_bits-1)) - 1);
    if (x & f) x = x | f;
    return x;
}

int main(void)
{
    int x = sign_extend(0xf9c, 12);
    printf("%d\n", x);

    int y = sign_extend(0x79c, 12);
    printf("%d\n", y);
}
Output:
-100
1948
A branch free way to sign extend a bitfield (Henry S. Warren Jr., CACM v20 n6 June 1977) is this:
// value i of bit-length len is a bitfield to sign extend
// i is right aligned and zero-filled to the left
sext = 1 << (len - 1);
i = (i ^ sext) - sext;
UPDATE based on @Lundin's comment
Here's tested code (prints -100):
#include <stdio.h>
#include <stdint.h>
int32_t sign_extend (uint32_t x, int32_t len)
{
    int32_t i = (x & ((1u << len) - 1)); // or just x if you know there are no extraneous bits
    int32_t sext = 1 << (len - 1);
    return (i ^ sext) - sext;
}

int main(void)
{
    printf("%d\n", sign_extend(0xF9C, 12));
    return 0;
}
This relies on the implementation-defined behavior of sign extension when right-shifting signed negative integers. First you shift your unsigned integer all the way left until the sign bit becomes the MSB, then you cast it to signed integer and shift back:
#include <stdio.h>
#include <stdint.h>
#define NUMBER_OF_BITS 12
int main(void) {
    uint32_t x = 0xF9C;
    int32_t y = (int32_t)(x << (32-NUMBER_OF_BITS)) >> (32-NUMBER_OF_BITS);
    printf("%d\n", y);
    return 0;
}
This is a solution to your problem:
#include <stdio.h>
#include <stdint.h>

int32_t sign_extend(uint32_t x, uint32_t bit_size)
{
    // (0xffffffff << bit_size) fills the upper bits to sign extend the number.
    // (-(x >> (bit_size-1))) is a mask that zeroes the previous expression in
    // case the number was positive (to avoid having an if statement).
    return ((0xffffffff << bit_size) & (-(x >> (bit_size-1)))) | x;
}

int main(void)
{
    printf("%d\n", sign_extend(0xf9c, 12)); // -100
    printf("%d\n", sign_extend(0x7ff, 12)); // 2047
    return 0;
}
The sane, portable and effective way to do this is simply to mask out the data part, then fill up everything else with 0xFF... to get a proper 2's complement representation. All you need to know is how many bits form the data part.
We can mask out the data with (1u << data_length) - 1.
In this case with data_length = 8, the data mask becomes 0xFF. Let's call this data_mask.
Thus the data part of the number is a & data_mask.
The rest of the number needs to be filled with ones, since the value here is negative. That is, everything not part of the data mask. Simply do ~data_mask to get that pattern.
C code: a = (a & data_mask) | ~data_mask. Now a is proper 32 bit 2's complement.
Example:
#include <stdio.h>
#include <inttypes.h>
int main(void)
{
    const uint32_t data_length = 8;
    const uint32_t data_mask = (1u << data_length) - 1;

    uint32_t a = 0xF9C;
    a = (a & data_mask) | ~data_mask;

    printf("%"PRIX32 "\t%"PRIi32, a, (int32_t)a);
}
Output:
FFFFFF9C -100
This relies on int being 32 bits 2's complement but is otherwise fully portable.

Iterate bits from left to right for any number

I am trying to implement the Modular Exponentiation (square and multiply, left to right) algorithm in C.
In order to iterate the bits from left to right, I can use masking, which is explained in this link.
In this example the mask used is 0x80, which can work only for a number with at most 8 bits.
In order to make it work for any number of bits, I need to assign the mask dynamically, but this makes it a bit complicated.
Is there any other solution by which it can be done?
Thanks in advance!
-------------EDIT-----------------------
long long base = 23;
long long exponent = 297;
long long mod = 327;
long long result = 1;
unsigned int mask;
for (mask = 0x80; mask != 0; mask >>= 1) {
    result = (result * result) % mod; // Square
    if (exponent & mask) {
        result = (base * result) % mod; // Mul
    }
}
As in this example, it will not work if I use the mask 0x80, but if I use 0x100 then it works fine.
Selecting the mask value at run time seems to be an overhead.
If you want to iterate over all bits, you first have to know how many bits there are in your type.
This is a surprisingly complicated matter:
sizeof gives you the number of bytes, but a byte can have more than 8 bits.
limits.h gives you CHAR_BIT to know the number of bits in a byte, but even if you multiply this by the sizeof your type, the result could still be wrong because unsigned types are allowed to contain padding bits that are not part of the number representation, while sizeof returns the storage size in bytes, which includes these padding bits.
Fortunately, this answer has an ingenious macro that can calculate the number of actual value bits based on the maximum value of the respective type:
#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
+ (m)%0x3fffffffL /((m)%31+1)/31%31*5 + 4-12/((m)%31+3))
The maximum value of an unsigned type is surprisingly easy to get: just cast -1 to your unsigned type.
So, all in all, your code could look like this, including the macro above:
#define UNSIGNED_BITS IMAX_BITS((unsigned)-1)

// [...]

unsigned int mask;
for (mask = 1u << (UNSIGNED_BITS-1); mask != 0; mask >>= 1) {
    // [...]
}
Note that applying this complicated macro has no runtime drawback at all, it's a compile-time constant.
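To see that it really is a compile-time constant, it can be used in a context that requires an integer constant expression. A small self-contained check (the macro is repeated here only so the snippet stands alone; C guarantees unsigned int has at least 16 value bits, so the assertion always holds):
#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
+ (m)%0x3fffffffL /((m)%31+1)/31%31*5 + 4-12/((m)%31+3))

// This compiles only because IMAX_BITS expands to an integer constant expression.
_Static_assert(IMAX_BITS((unsigned)-1) >= 16, "unsigned int has at least 16 value bits");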
Your algorithm seems unnecessarily complicated: bits from the exponent can be tested from the least significant to the most significant in a way that does not depend on the integer type nor its maximum value. Here is a simple implementation that does not need any special case for any size integers:
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv) {
    unsigned long long base = (argc > 1) ? strtoull(argv[1], NULL, 0) : 23;
    unsigned long long exponent = (argc > 2) ? strtoull(argv[2], NULL, 0) : 297;
    unsigned long long mod = (argc > 3) ? strtoull(argv[3], NULL, 0) : 327;
    unsigned long long y = exponent;
    unsigned long long x = base;
    unsigned long long result = 1;

    for (;;) {
        if (y & 1) {
            result = result * x % mod;
        }
        if ((y >>= 1) == 0)
            break;
        x = x * x % mod;
    }
    printf("expmod(%llu, %llu, %llu) = %llu\n", base, exponent, mod, result);
    return 0;
}
Without any command line arguments, it produces: expmod(23, 297, 327) = 185. You can try other numbers by passing the base, exponent and modulo as command line arguments.
EDIT:
If you must scan the bits in exponent from most significant to least significant, mask should be defined as the same type as exponent and initialized this way if the type is unsigned:
unsigned long long exponent = 297;
unsigned long long mask = 0;
mask = ~mask - (~mask >> 1);
If the type is signed, for complete portability, you must use the definition for its maximum value from <limits.h>. Note however that it would be more efficient to use the unsigned type.
long long exponent = 297;
long long mask = LLONG_MAX - (LLONG_MAX >> 1);
The loop will waste time running through all the most significant 0 bits, so a simpler loop could be used first to skip these bits:
while (mask > exponent) {
    mask >>= 1;
}
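Putting the pieces together, a left-to-right version of the loop from the question might look like the sketch below. It reuses the unsigned mask setup and the skip loop from above, with the values hard-coded from the question, and prints the same 185 as the LSB-first program:
#include <stdio.h>

int main(void)
{
    unsigned long long base = 23, exponent = 297, mod = 327;
    unsigned long long result = 1;
    unsigned long long mask = ~0ull - (~0ull >> 1);  /* most significant bit of the type */

    while (mask > exponent)        /* skip the leading zero bits */
        mask >>= 1;

    for (; mask != 0; mask >>= 1) {
        result = (result * result) % mod;            /* square */
        if (exponent & mask)
            result = (base * result) % mod;          /* multiply */
    }
    printf("%llu\n", result);      /* 185, matching expmod(23, 297, 327) above */
    return 0;
}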

CRC-15 giving wrong values

I am trying to create a CRC-15 check in C and the output is never correct for each line of the file. I am trying to output the CRC for each line cumulatively next to each line. I use #define POLYNOMIAL 0xA053 for the divisor and the text for the dividend. I need to represent numbers as 32-bit unsigned integers. I have tried printing out the hex values to keep track and flipping different shifts around. However, I just can't seem to figure it out! I have a feeling it has something to do with the way I am padding things. Is there a flaw in my logic?
The CRC is to be represented as four hexadecimal digits; the printed sequence will have four leading 0's. For example, it will look like 0000xxxx where the x's are the hexadecimal digits. The polynomial I use is 0xA053.
I thought about using a temp variable and doing four 16-bit chunks per line for every XOR; however, I'm not quite sure how I could use shifts to accomplish this, so I settled for a checksum of the letters on the line and then XORing that to try to calculate the CRC code.
I am testing my code using the following input, padding with '.' until the string is of length 504, because that is the pad character and length the requirements call for:
"This is the lesson: never give in, never give in, never, never, never, never - in nothing, great or small, large or petty - never give in except to convictions of honor and good sense. Never yield to force; never yield to the apparently overwhelming might of the enemy."
The CRC of the first 64-char line ("This is the lesson: never give in, never give in, never, never,") is supposed to be 000015fa and I am getting bfe6ec00.
My logic:
In crcCalculation I add each character to a 32-bit unsigned integer and after 64 characters (the length of one line) I send it into the XOR function.
If the top bit is not 1, I shift the number to the left by one, causing 0s to pad the right, and loop around again.
If the top bit is 1, I XOR the dividend with the divisor and then shift the dividend to the left by one.
After all calculations are done, I return the dividend shifted right by four (to add four zeros to the front) to the calculation function.
Add the result to the running total of the result.
Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <ctype.h>
#define POLYNOMIAL 0xA053
void crcCalculation(char *text, int length)
{
    int i;
    uint32_t dividend = atoi(text);
    uint32_t result;
    uint32_t sumText = 0;

    // Calculate CRC
    printf("\nCRC 15 calculation progress:\n");

    i = length;

    // padding
    if(i < 504)
    {
        for(; i!=504; i++)
        {
            // printf("i is %d\n", i);
            text[i] = '.';
        }
    }

    // Try calculating by first line of crc by summing the values then calcuating, then add in the next line
    for (i = 0; i < 504; i++)
    {
        if(i%64 == 0 && i != 0)
        {
            result = XOR(POLYNOMIAL, sumText);
            printf(" - %x\n",result);
        }
        sumText +=(uint32_t)text[i];
        printf("%c", text[i]);
    }
    printf("\n\nCRC15 result : %x\n", result);
}
uint32_t XOR(uint32_t divisor, uint32_t dividend)
{
    uint32_t divRemainder = dividend;
    uint32_t currentBit;

    // Note: 4 16 bit chunks
    for(currentBit = 32; currentBit > 0; --currentBit)
    {
        // if topbit is 1
        if(divRemainder & 0x80)
        {
            //divRemainder = (divRemainder << 1) ^ divisor;
            divRemainder ^= divisor;
            printf("%x %x\n", divRemainder, divisor);
        }
        // else
        //     divisor = divisor >> 1;
        divRemainder = (divRemainder << 1);
    }
    //return divRemainder; , have tried shifting to right and left, want to add 4 zeros to front so >>
    //return divRemainder >> 4;
    return divRemainder >> 4;
}
The first issue I see is the top bit check, it should be:
if(divRemainder & 0x8000)
The question doesn't state if the CRC is bit reflected (xor data into low order bits of CRC, right shift for cycle) or not (xor data into high order bits of CRC, left shift for cycle), so I can't offer help for the rest of the code.
The question doesn't state the initial value of CRC (0x0000 or 0x7fff), or if the CRC is post complemented.
The logic for a conventional CRC is:
xor a byte of data into the CRC (upper or lower bits)
cycle the CRC 8 times (or do a table lookup)
After generating the CRC for an entire message, the CRC can be appended to the message. If a CRC is generated for a message with the appended CRC and there are no errors, the CRC will be zero (or a constant value if the CRC is post complemented).
Here is a typical CRC16, extracted from <www8.cs.umu.se/~isak/snippets/crc-16.c>:
#define POLY 0x8408
/*
// 16 12 5
// this is the CCITT CRC 16 polynomial X + X + X + 1.
// This works out to be 0x1021, but the way the algorithm works
// lets us use 0x8408 (the reverse of the bit pattern). The high
// bit is always assumed to be set, thus we only use 16 bits to
// represent the 17 bit value.
*/
unsigned short crc16(char *data_p, unsigned short length)
{
    unsigned char i;
    unsigned int data;
    unsigned int crc = 0xffff;

    if (length == 0)
        return (~crc);

    do
    {
        for (i=0, data=(unsigned int)0xff & *data_p++;
             i < 8;
             i++, data >>= 1)
        {
            if ((crc & 0x0001) ^ (data & 0x0001))
                crc = (crc >> 1) ^ POLY;
            else crc >>= 1;
        }
    } while (--length);

    crc = ~crc;
    data = crc;
    crc = (crc << 8) | (data >> 8 & 0xff);

    return (crc);
}
Since you want to calculate a CRC15 rather than a CRC16, the logic will be more complex, as it cannot work with whole bytes, so there will be a lot of bit shifting and ANDing to extract the desired 15 bits.
Note: the OP did not mention if the initial value of the CRC is 0x0000 or 0x7FFF, nor if the result is to be complemented, nor certain other criteria, so this posted code can only be a guide.
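As a rough illustration only, a bit-at-a-time, non-reflected CRC over a 15-bit register could be structured like the sketch below. Everything specific here is an assumption: the function name crc15 is just for illustration, the polynomial 0x4599 (the well-known CAN CRC-15 polynomial) is used only as a stand-in because the question's 0xA053 does not fit in 15 bits, and the initial value of 0x0000 with no final XOR is likewise assumed, not specified by the question.
#include <stdint.h>
#include <stddef.h>

#define CRC15_POLY 0x4599u   /* assumption: CAN's CRC-15 polynomial, not the question's 0xA053 */

uint16_t crc15(const char *data, size_t length)
{
    uint16_t crc = 0x0000;                       /* assumption: initial value 0 */

    for (size_t i = 0; i < length; i++)
    {
        /* align the data byte with the top of the 15-bit register */
        crc ^= (uint16_t)((unsigned char)data[i]) << 7;

        for (int bit = 0; bit < 8; bit++)
        {
            if (crc & 0x4000)                    /* top bit of the 15-bit register */
                crc = (uint16_t)((crc << 1) ^ CRC15_POLY);
            else
                crc = (uint16_t)(crc << 1);
            crc &= 0x7FFF;                       /* keep the register at 15 bits */
        }
    }
    return crc;                                  /* assumption: no final XOR */
}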

Negative numbers: How can I change the sign bit in a signed int to a 0?

I was thinking this would work, but it does not:
int a = -500;
a = a << 1;
a = (unsigned int)a >> 1;
//printf("%d",a) gives me "2147483148"
My thought was that the left-shift would remove the leftmost sign bit, so right-shifting it as an unsigned int would guarantee that it's a logical shift rather than arithmetic. Why is this incorrect?
Also:
int a = -500;
a = a << 1;
//printf("%d",a) gives me "-1000"
TL;DR: the easiest way is to use the abs function from <stdlib.h>. The rest of the answer involves the representation of negative numbers on a computer.
Negative integers are (almost always) represented in 2's complement form. (see note below)
The method of getting the negative of a number is:
Take the binary representation of the whole number (including leading zeroes for the data type, except the MSB which will serve as the sign bit).
Take the 1's complement of the above number.
Add 1 to the 1's complement.
Prefix a sign bit.
Using 500 as an example,
Take the binary representation of 500: _000 0001 1111 0100 (_ is a placeholder for the sign bit).
Take the 1's-complement / inverse of it: _111 1110 0000 1011
Add 1 to the 1's complement: _111 1110 0000 1011 + 1 = _111 1110 0000 1100. This is the same as 2147483148 that you obtained, when you replaced the sign-bit by zero.
Prefix 0 to show a positive number and 1 for a negative number: 1111 1110 0000 1100. (This will be different from 2147483148 above. The reason you got the above value is because you nuked the MSB).
Inverting the sign is a similar process. You get leading ones if you use 16-bit or 32-bit numbers leading to the large value that you see. The LSB should be the same in each case.
Note: there are machines with 1's complement representation, but they are a minority. 2's complement is usually preferred because zero has a single representation, i.e., -0 and 0 are both represented as all-zeroes in 2's complement notation.
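As a small illustration of both routes described above (abs from <stdlib.h>, and the invert-and-add-one steps done by hand; the values are arbitrary):
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int a = -500;

    // library route: abs() from <stdlib.h>
    printf("%d\n", abs(a));                 // 500

    // manual route: two's complement negation is "invert all bits, add one"
    unsigned int u = (unsigned int)a;       // bit pattern of -500
    unsigned int negated = ~u + 1u;         // == 500
    printf("%u\n", negated);

    return 0;
}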
Left-shifting negative integers invokes undefined behavior, so you can't do that. You could have used your code if you did a = (unsigned int)a << 1;. You'd get -500 = 0xFFFFFE0C, left-shifted by 1 = 0xFFFFFC18.
a = (unsigned int)a >> 1; does indeed guarantee logical shift, so you get 0x7FFFFE0C. This is decimal 2147483148.
But this is needlessly complex. The best and most portable way to change the sign bit is simply a = -a. Any other code or method is questionable.
If you however insist on bit-twiddling, you could also do something like
(int32_t)a & ~(1u << 31)
This is portable to 32 bit systems, since (int32_t) guarantees two's complement, but 1u << 31 assumes a 32-bit unsigned int type.
Demo:
#include <stdio.h>
#include <stdint.h>
int main (void)
{
    int a = -500;

    a = (unsigned int)a << 1;
    a = (unsigned int)a >> 1;
    printf("%.8X = %d\n", a, a);

    _Static_assert(sizeof(int)>=4, "Int must be at least 32 bits.");

    a = -500;
    a = (int32_t)a & ~(1u << 31);
    printf("%.8X = %d\n", a, a);

    return 0;
}
As you noted in your "Also" section, after your first left shift of 1 bit, a DOES reflect -1000 as expected.
The issue is in your cast to unsigned int. As explained above, the negative number is represented as 2's complement, meaning the sign is determined by the leftmost bit (most significant bit). When cast to an unsigned int, that bit no longer represents the sign; it instead contributes its full positive weight to the value.
Assuming 32 bit ints, the MSB used to represent -2^31 (= -2147483648) and now represents positive 2147483648 in an unsigned int, for an increase of 2* 2147483648 = 4294967296. Add this to your original value of -1000 and you get 4294966296. Right shift divides this by 2 and you arrive at 2147483148.
Hoping this may be helpful: (modified printing func from Print an int in binary representation using C)
#include <stdio.h>

void int2bin(int a, char *buffer, int buf_size) {
    buffer += (buf_size - 1);

    for (int i = buf_size-1; i >= 0; i--) {
        *buffer-- = (a & 1) + '0';
        a >>= 1;
    }
}
int main() {
    int test = -500;
    int bufSize = sizeof(int)*8 + 1;
    char buf[bufSize];
    buf[bufSize-1] = '\0';

    int2bin(test, buf, bufSize-1);
    printf("%i (%u): %s\n", test, (unsigned int)test, buf);
    //Prints: -500 (4294966796): 11111111111111111111111000001100

    test = test << 1;
    int2bin(test, buf, bufSize-1);
    printf("%i (%u): %s\n", test, (unsigned int)test, buf);
    //Prints: -1000 (4294966296): 11111111111111111111110000011000

    test = 500;
    int2bin(test, buf, bufSize-1);
    printf("%i (%u): %s\n", test, (unsigned int)test, buf);
    //Prints: 500 (500): 00000000000000000000000111110100

    return 0;
}

How do I extract bits from 32 bit number

I do not have much knowledge of C and I'm stuck with a problem since one of my colleagues is on leave.
I have a 32-bit number and I have to extract bits from it. I did go through a few threads but I'm still not clear on how to do so. I would be highly obliged if someone could help me.
Here is an example of what I need to do:
Assume hex number = 0xD7448EAB.
In binary = 1101 0111 0100 0100 1000 1110 1010 1011.
I need to extract the 16 bits, and output that value. I want bits 10 through 25.
The lower 10 bits (Decimal) are ignored. i.e., 10 1010 1011 are ignored.
And the upper 6 bits (Overflow) are ignored. i.e. 1101 01 are ignored.
The remaining 16 bits of data need to be the output, which is 11 0100 0100 1000 11.
This was an example but I will keep getting different hex numbers all the time and I need to extract the same bits as I explained.
How do I solve this?
Thank you.
For this example you would output 1101 0001 0010 0011, which is 0xD123, or 53,539 decimal.
You need masks to get the bits you want. Masks are numbers that you can use to sift through bits in the manner you want (keep bits, delete/clear bits, modify numbers etc). What you need to know are the AND, OR, XOR, NOT, and shifting operations. For what you need, you'll only need a couple.
You know shifting: x << y moves the bits of x y positions to the left.
How to get x bits set to 1, in order: (1 << x) - 1
How to get x bits set to 1, in order, starting at position y (covering bits y to y + x - 1): ((1 << x) - 1) << y
The above is your mask for the bits you need. So for example if you want 16 bits of 0xD7448EAB, from 10 to 25, you'll need the above, for x = 16 and y = 10.
And now to get the bits you want, just AND your number 0xD7448EAB with the mask above and you'll get the masked 0xD7448EAB with only the bits you want. Later, if you want to go through each one, you'll need to shift your result by 10 to the right and process one bit at a time (at position 0).
The answer may be a bit longer, but it's better design than just hard coding with 0xff or whatever.
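A short sketch of exactly that, with x = 16 and y = 10 (the variable names here are only for illustration):
#include <stdio.h>

int main(void)
{
    unsigned int number = 0xD7448EAB;
    unsigned int x = 16, y = 10;                  // want 16 bits starting at bit 10

    unsigned int mask = ((1u << x) - 1u) << y;    // 0x03FFFC00
    unsigned int bits = (number & mask) >> y;     // shift back down to position 0

    printf("0x%X\n", bits);                       // prints 0xD123
    return 0;
}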
OK, here's how I wrote it:
#include <stdint.h>
#include <stdio.h>
int main(void) {
    uint32_t in = 0xd7448eab;
    uint16_t out = 0;

    out = in >> 10;   // Shift right 10 bits
    out &= 0xffff;    // Only lower 16 bits

    printf("%x\n", out);
}
The in >> 10 shifts the number right 10 bits; the & 0xffff discards all bits except the lower 16 bits.
I want bits 10 through 25.
You can do this:
unsigned int number = 0xD7448EAB;
unsigned int value = (number & 0x3FFFC00) >> 10;
Or this:
unsigned int number = 0xD7448EAB;
unsigned int value = (number >> 10) & 0xFFFF;
I combined the top 2 answers above to write a C program that extracts the bits for any range of bits (not just 10 through 25) of a 32-bit unsigned int. The way the function works is that it returns bits lo to hi (inclusive) of num.
#include <stdio.h>
#include <stdint.h>
unsigned extract(unsigned num, unsigned hi, unsigned lo) {
    uint32_t range = (hi - lo + 1); // number of bits to be extracted

    // shifting a number by the number of bits it has produces inconsistent
    // results across machines, so we need a special case for extract(num, 31, 0)
    if (range == 32)
        return num;

    uint32_t result = 0;
    // following the rule above, ((1 << x) - 1) << y makes the mask:
    uint32_t mask = ((1u << range) - 1) << lo;
    // AND num and mask to get only the bits in our range
    result = num & mask;
    result = result >> lo; // gets rid of trailing 0s
    return result;
}

int main() {
    unsigned int num = 0xd7448eab;
    printf("0x%x\n", extract(num, 25, 10)); // bits 10 through 25 -> 0xd123
}
