I was charged with the task of writing a method that "returns the word with all even-numbered bits set to 1." Being completely new to C, I find this really confusing and unclear. I don't understand how I can change the bits of a number with C. That seems like a very low-level operation, and I don't even know how I would do that in Java (my first language)! Can someone please help me? This is the method signature:
int evenBits(void){
return 0;
}
Any instruction on how to do this or even guidance on how to begin doing this would be greatly appreciated. Thank you so much!
Break it down into two problems.
(1) Given a variable, how do I set particular bits?
Hint: use a bitwise operator.
(2) How do I find out the representation of "all even-numbered bits" so I can use a bitwise operator to set them?
Hint: Use math. ;-) You could make a table (or find one) such as:
Decimal | Binary
--------+-------
0 | 0
1 | 1
2 | 10
3 | 11
... | ...
Once you know what operation to use to set particular bits, and you know a decimal (or hexadecimal) integer literal to use that with in C, you've solved the problem.
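For illustration only (not the full solution), here is a minimal sketch of how the bitwise OR operator sets particular bits; the bit positions chosen here are arbitrary examples:
int x = 0;
x = x | 0x4; /* turn on bit 2 (decimal 4, binary 100) */
x |= 1 << 6; /* turn on bit 6, building the mask with a shift */
/* x is now 0x44: only bits 2 and 6 are set */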
You must give a precise definition of all even numbered bits. Bits are numbered in different ways on different architectures. Hardware people like to number them from 1 to 32 from the least significant to the most significant bit, or sometimes the other way, from the most significant to the least significant bit... while software guys like to number bits by increasing order starting at 0 because bit 0 represents the number 2^0, i.e. 1.
With this latter numbering system, the bit pattern would be 0101...0101, thus a value in hex 0x555...555. If you number bits starting at 1 for the least significant bit, the pattern would be 1010...1010, in hex 0xAAA...AAA. But this representation actually encodes a negative value on current architectures.
I shall assume for the rest of this answer that even numbered bits are those representing even powers of 2: 1 (2^0), 4 (2^2), 16 (2^4)...
The short answer for this problem is:
int evenBits(void) {
return 0x55555555;
}
But what if int has 64 bits?
int evenBits(void) {
return 0x5555555555555555;
}
This would handle a 64-bit int, but it has implementation-defined behavior on systems where int is smaller, because the constant does not fit in an int there.
Using macros from <limits.h>, you could mask off the extra bits to handle 16, 32 and 64 bit ints:
#include <limits.h>
int evenBits(void) {
return 0x5555555555555555 & INT_MAX;
}
But this code still makes some assumptions:
int has at most 64 bits.
int has an even number of bits.
INT_MAX is a power of 2 minus 1.
These assumptions are valid for most current systems, but the C Standard allows for implementations where one or more are invalid.
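If those assumptions are a concern, a fully portable alternative (still assuming that "even-numbered bits" means bits 0, 2, 4, ...) is to build the pattern one bit at a time at run time. This is only a sketch, not the only way to do it:
#include <limits.h>

int evenBits(void)
{
    unsigned int pattern = 0;
    unsigned int bit;

    /* set bit 0, bit 2, bit 4, ... until the mask bit is shifted out */
    for (bit = 1; bit != 0; bit <<= 2)
        pattern |= bit;

    /* keep only the bits that fit in a non-negative int */
    return (int)(pattern & INT_MAX);
}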
So basically every other bit has to be set to one? This is what the bitwise operations in C are for. Imagine a regular bit array. You take the rightmost even bit and set it to 1 (this is the value 2, binary 0010), which is done with the OR operator (|) on the existing number. After doing that, you bit-shift the mask two places to the left (<< 2), which changes it from 0010 to 1000, and OR it in again. Keep repeating until every even bit is set. The code below describes it better.
#include <stdio.h>
unsigned char SetAllEvenBitsToOne(unsigned char x);
int IsAllEvenBitsOne(unsigned char x);
int main()
{
unsigned char x = 0; // char is a one-byte data type, i.e. 8 bits
x = SetAllEvenBitsToOne(x);
int check = IsAllEvenBitsOne(x);
if(check==1)
{
printf("shit works");
}
return 0;
}
unsigned char SetAllEvenBitsToOne(unsigned char x)
{
int i=0;
unsigned char y = 2; // mask for the first bit to set (binary 0010)
for(i=0; i < sizeof(char)*8/2; i++) // 4 iterations for an 8-bit char
{
x = x | y; // set the current bit
y = y << 2; // move the mask two positions to the left for the next bit
}
return x;
}
int IsAllEvenBitsOne(unsigned char x)
{
unsigned char y;
for(int i=0; i<(sizeof(char)*8/2); i++) // check bits 7, 5, 3, 1 in turn
{
y = x >> 7; // look at the current most significant bit
if(y > 0)
{
printf("x before: %d\t", x);
x = x << 2; // shift left by 2 so the next bit to check moves into the MSB position
printf("x after: %d\n", x);
continue;
}
else
{
printf("Not all even bits are 1\n");
return 0;
}
}
printf("All even bits are 1\n");
return 1;
}
Here is a link to Bitwise Operations in C
Purpose: Demonstrate the ability to manipulate bits using functions and to learn a little bit about parity bits.
Parity is a type of error detection where one of the bits in a bit string is used for this purpose. There are more complicated systems that can do more robust error detection as well as error correction. In this lab, we will use a simple version called odd parity. This reserves one bit as a parity bit. The other bits are examined, and the parity bit is set so that the number of 1 bits is odd. For example, if you have a 3-bit sequence, 110 and the rightmost bit is the parity bit, it would be set to 1 to make the number of 1s odd.
Notes: When referring to bit positions, bit 31 is the high-order bit (leftmost), and bit 0 is the low-order bit (rightmost). In order to work through these functions, you will likely have to map out bit patterns for testing to see how it all works. You may find using a converter that can convert between binary, hex, and decimal useful. Also, to assign bit patterns to integers, it might be easier to use hex notation. To assign a hex value in C, you can use the 0x????? where ????? are hex values. (There can be more or fewer than the number of ? here.) E.g.,
int i = 0x02A;
Would assign i = 42 in decimal.
Program Specifications: Write the functions below:
unsigned int leftRotate(unsigned int intArg, unsigned int rotAmt);
Returns an unsigned int that is intArg rotated left by rotAmt. Note: Rotate left is similar to shift left. The difference is that the bits shifted out at the left come back in on the right. Rotate is a common operation and often is a single machine instruction. Do not convert intArg to a string and operate on that. Do not use an array of ints (or other numbers). Use only integers or unsigned integers.
Example: Assuming you have 5-bit numbers, rotating the binary number 11000 left by 3 yields 00110
char *bitString(int intArg)
Returns a pointer to a character string containing the 32-bit pattern for the integer argument. The first character, index 0, should be the high-order bit and on down from there. For this function, you will need malloc. Can be used for printing bit patterns. E.g., if intArg = 24 the return string would be 00000000000000000000000000011000
unsigned int oddParitySet3(unsigned int intArg, unsigned int startBit);
This function will determine the odd parity for a 3-bit segment of intArg starting at bit startBit and set the parity bit (low-order bit) appropriately.
E.g., suppose intArg=3 and startBit = 2. The 32 bit representation, from high to low, would be 29 zeros then 110. So, bits 2 - 0 are 011. To make the parity odd, you would set bit zero to 0.
The return value is the modified intArg, in this case it would be 29 zeros then 010 or a value of 2.
Do not convert intArg to a string and operate on that. Use only integers or unsigned integers.
Note: If the start bit is greater than 31 or less than 2, this would present a problem (do you see this?). If this is the case, return a zero.
The compile command used by this zyLab is:
gcc main.c -Wall -Werror -Wextra -Wuninitialized -pedantic-errors -o a.out -lm
The program does not pass all of the tests (the zyLab output showing the failing tests was attached as a screenshot).
C code:
#include<stdio.h>
#include<string.h>
#include<stdlib.h>
char * bitString(int intArg);
unsigned int leftRotate(unsigned int n, unsigned int d);
unsigned int oddParitySet3(unsigned int intArg, unsigned int startBit);
int main() {
return 0;
}
char * bitString(int intArg)
{
char *bits = (char*)malloc(33 * sizeof(char));
bits[32] = '\0';
for(int i = 31; i >= 0; i--)
{
if(intArg & (1 << i))
bits[31 - i] = '1';
else
bits[31 - i] = '0';
}
return bits;
}
unsigned int leftRotate(unsigned int intArg, unsigned int rotAmt)
{
return (intArg << rotAmt) | (intArg >> (32 - rotAmt));
}
unsigned int oddParitySet3(unsigned int intArg, unsigned int startBit){
unsigned int mask = 0x00000007;
unsigned int shiftedMask = mask << startBit;
unsigned int temp = intArg & shiftedMask;
unsigned int result = intArg;
if(__builtin_popcount(temp) % 2 == 0)
result |= shiftedMask;
else
result &= ~shiftedMask;
return result;
}
I need help fixing the oddParitySet3 function so that it does not produce the errors shown in the screenshot.
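For reference, below is a sketch of one possible reading of the spec above: the 3-bit segment is taken to be bits startBit down to startBit-2, the two upper bits are counted, and the segment's low-order bit (startBit-2) is set or cleared so that the total number of 1s is odd. It reproduces the spec's example (intArg=3, startBit=2 returns 2), but it is only an illustration, not the graded solution:
unsigned int oddParitySet3(unsigned int intArg, unsigned int startBit)
{
    if (startBit > 31 || startBit < 2)
        return 0; /* spec: invalid start bit */

    unsigned int parityPos = startBit - 2; /* low-order bit of the 3-bit segment */
    unsigned int dataBits = (intArg >> (parityPos + 1)) & 0x3u; /* the two bits above it */
    unsigned int ones = (dataBits & 1u) + (dataBits >> 1); /* count the 1s among them */

    if (ones % 2 == 0)
        return intArg | (1u << parityPos); /* even so far: set the parity bit */
    else
        return intArg & ~(1u << parityPos); /* already odd: clear the parity bit */
}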
I need to write a macro named CountBitsM. This macro has one parameter and produces a value of type int. The parameter is any expression with an object data type, or the literal name of any object data type, so I used int. The macro determines the number of bits of storage used for the data type on any machine on which it is run, and I can use a macro from limits.h. Here is what I wrote; does this look right?
#ifndef COUNTBITSM_H
#define COUNTBITSM_H
#include <limits.h>
#define CountBitsM(int) ((int)*(CHAR_BIT))
#endif
The second question was to create a function CountIntBitsF that counts the number of bits used to represent a type int value on any machine. However, I can NOT USE any #define, any #include header files, or any macro. I also cannot use any multiplications or divisions. The hint that was given was to start with a value of 1 in a type unsigned int variable and left-shift it one bit at a time, keeping count of the number of shifts, until the variable's value becomes 0. Here is what I have so far:
int CountIntBitsF(void)
{
int IntgMax = 8;
unsigned int count = 1;
while (IntgMax = IntgMax>>2) count++;
return count;
}
First off, I am not supposed to use division or multiplication, so am I doing the shift properly? And I can't assume a char/byte contains 8 or any other specific number of bits, so how or what should I set my IntgMax to? Thanks for any help. I am new to C.
Macro for Bits in a Type
A macro to produce the number of bits used to represent a type in storage is:
#define CountBitsM(x) (sizeof (x) * CHAR_BIT)
However, this produces a result with type size_t (usually). If you really need an int result as stated in the question, convert it (but be aware overflow becomes possible):
#define CountBitsM(x) ((int) (sizeof (x) * CHAR_BIT))
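A quick usage sketch showing that the macro accepts either a type name or an object (the variable name d here is just an example):
#include <limits.h>
#include <stdio.h>

#define CountBitsM(x) ((int) (sizeof (x) * CHAR_BIT))

int main(void)
{
    double d = 0.0;
    printf("%d\n", CountBitsM(int)); /* a type name works, since sizeof (int) is valid */
    printf("%d\n", CountBitsM(d));   /* an object works too */
    return 0;
}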
Counting Bits
The second question asks to count the number of bits “to represent a type int value” by shifting bits in an unsigned value. There are two theoretical problems here. One is that the number of bits used to represent a value may include padding bits, and counting the bits by shifting a 1 through them only counts the value bits, not the padding bits. The second is that an int may have more padding bits than an unsigned; it may use fewer bits for the sign and value. Overwhelmingly, modern systems will not have these issues; the number of used bits in an int will be the same as the total number of bits used to store it and the same as the number of bits used in an unsigned.
That said, you can count the number of bits in an unsigned object with:
int count = 0;
for (unsigned u = 1; 0 != u; u <<= 1)
++count;
This repeatedly shifts the bit in u left until it is shifted out, while counting the number of iterations required to do this. Note that the bits in an int cannot properly be counted this way, because the behavior of left shift is not defined by the C standard when it overflows an int.
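Wrapped into the function the assignment asks for, a minimal sketch (assuming, as discussed above, that unsigned has the same number of value bits as int) would be:
int CountIntBitsF(void)
{
    int count = 0;
    unsigned u;

    /* shift the single 1 bit left until it falls off the top */
    for (u = 1; u != 0; u <<= 1)
        ++count;

    return count;
}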
Question one
#define NBITS(type_or_object) (sizeof(type_or_object) * CHAR_BIT)
or without multiplication
#define NBITS(type_or_object) (sizeof(type_or_object) << (CHAR_BIT == 8 ? 3 : CHAR_BIT == 16 ? 4 : CHAR_BIT == 32 ? 5 : 0))
Second question:
This is for the most popular representation, two's complement (but as I remember it should also work for sign-and-magnitude and for ones' complement). It is for signed types. Unsigned types are easy.
int CountIntBits(void)
{
int IntgMax = 1;
int count = 1;
while (IntgMax > 0 )
{
count++;
IntgMax <<= 1;
}
return count;
}
int main(void)
{
printf("%d\n", CountIntBits());
}
or (also no multiplication :) )
int CountIntBits(void)
{
int shift = CHAR_BIT == 8 ? 3 : CHAR_BIT == 16 ? 4 : CHAR_BIT == 32 ? 5 : 0;
return sizeof(int) << shift;
}
for unsigned types:
int CountIntBits(void)
{
unsigned IntgMax = 1;
int count = 0;
while (IntgMax)
{
count++;
IntgMax <<= 1;
}
return count;
}
My thoughts: if one declares an int it basically gets an unsigned int. So if I need a negative value I have to explicitly create a signed int.
I tried
int a = 0b10000101;
printf("%d", a); // i get 138 ,what i've expected
signed int b = 0b10000101; // here i expect -10, but i also get 138
printf("%d", b); // also tried %u
So am I wrong that a signed integer in binary is a negative value?
How can I create a negative value in binary format?
Edit: Even if I use 16/32/64 bits I get the same result. unsigned/signed doesn't seem to make a difference without manually shifting the bits.
If numbers are represented as two's complement you just need to have the sign bit set to ensure that the number is negative. That's the MSB. If an int is 32 bits, then 0b11111111111111111111111111111111 is -1, and 0b10000000000000000000000000000000 is INT_MIN.
To adjust for the size int(8|16|64)_t, just change the number of bits. The sign bit is still the MSB.
Keep in mind that, depending on your target, int could be 2 or 4 bytes. This means that int a=0b10000101 is not nearly enough bits to set the sign bit.
If your int is 4 bytes, you need 0b10000000 00000000 00000000 00000000 (spaces added for clarity).
For example on a 32-bit target:
int b = 0b11111111111111111111111111111110;
printf("%d\n", b); // prints -2
Because int a = 0b10000101 has only 8 bits, where you need 16 or 32. Try this:
int a = 0b10000000000000000000000000000101
That should create a negative number if your machine is 32 bits. If this does not work, try:
int a = 0b1000000000000101
There are other ways to produce negative numbers:
int a = (0b1 << 31) + 0b101
or, if you have a 16-bit system,
int a = (0b1 << 15) + 0b101
or this one, which would work for both 32 and 16 bits,
int a = ~0b0 * 0b101
or this is another one that would work on both if you want to get -5:
int a = ~0b101 + 1
So 0b101 is 5 in binary; ~0b101 gives -6, so to get -5 you add 1.
EDIT:
Since I now see that you are confused about what signed and unsigned numbers are, I will try to explain it as simply as possible.
So when you have:
int a = 5;
is the same as:
signed int a = 5;
and both of them would be positive. It would also be the same as:
unsigned int a = 5;
because 5 is a positive number.
On the other hand if you have:
int a = -5;
this would be the same as
signed int a = -5;
but it would not be the same as the following:
unsigned int a = -5;
The first two would be -5; the third one is not the same. In fact, it would be the same as if you had entered 4294967291, because the stored bit pattern is identical; the unsigned in front just means the compiler stores it the same way but treats that pattern as a positive value.
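A small demonstration of that point, assuming a 32-bit int and two's complement; the same stored bit pattern prints as -5 or as 4294967291 depending on how it is interpreted:
#include <stdio.h>

int main(void)
{
    unsigned int a = -5;    /* -5 converted to unsigned: stored as 0xFFFFFFFB */
    printf("%u\n", a);      /* prints 4294967291 */
    printf("%d\n", (int)a); /* prints -5 on a two's complement machine */
    return 0;
}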
How to create a negative binary number using signed/unsigned in C?
Simply negate a positive constant. Attempting to do so by writing out many 1's, like
... 1110110, assumes a bit width for int. Better to be portable.
#include <stdio.h>
int main(void) {
#define NEGATIVE_BINARY_NUMBER (-0b1010)
printf("%d\n", NEGATIVE_BINARY_NUMBER);
}
Output
-10
I have been given this problem and would like to solve it in C:
Assume you have a 32-bit processor and that the C compiler does not support long long (or long int). Write a function add(a,b) which returns c = a+b where a and b are 32-bit integers.
I wrote this code which is able to detect overflow and underflow
#define INT_MIN (-2147483647 - 1) /* minimum (signed) int value */
#define INT_MAX 2147483647 /* maximum (signed) int value */
int add(int a, int b)
{
if (a > 0 && b > INT_MAX - a)
{
/* handle overflow */
printf("Handle over flow\n");
}
else if (a < 0 && b < INT_MIN - a)
{
/* handle underflow */
printf("Handle under flow\n");
}
return a + b;
}
I am not sure how to implement the long value using 32-bit registers so that I can print the value properly. Can someone help me with how to use the underflow and overflow information so that I can store the result properly in the c variable, which I think should be two 32-bit locations? I think that is what the problem is hinting at when it says that long is not supported. Would the variable c be two 32-bit registers put together somehow to hold the correct result so that it can be printed? What action should I perform when the result overflows or underflows?
Since this is a homework question I'll try not to spoil it completely.
One annoying aspect here is that the result is bigger than anything you're allowed to use (I interpret the ban on long long to also include int64_t, otherwise there's really no point to it). It may be tempting to go for "two ints" for the result value, but interpreting that value is awkward. So I'd go for two uint32_t's and interpret them as two halves of a 64-bit two's complement integer.
Unsigned multiword addition is easy and has been covered many times (just search). The signed variant is really the same if the inputs are sign-extended: (not tested)
uint32_t a_l = a;
uint32_t a_h = -(a_l >> 31); // sign-extend a
uint32_t b_l = b;
uint32_t b_h = -(b_l >> 31); // sign-extend b
// todo: implement the addition
return some struct containing c_l and c_h
It can't overflow the 64 bit result when interpreted signed, obviously. It can (and should, sometimes) wrap.
To print that thing, if that's part of the assignment, first reason about which values c_h can have. There aren't many possibilities. It should be easy to print using existing integer printing functions (that is, you don't have to write a whole multiword-itoa, just handle a couple of cases).
As a hint for the addition: what happens when you add two decimal digits and the result is larger than 9? Why is the low digit of 7+6=13 a 3? Given only 7, 6 and 3, how can you determine the second digit of the result? You should be able to apply all this to base 2^32 as well.
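If it helps, here is one hedged sketch of that base-2^32 addition (not necessarily what was intended above, and it does give away part of the exercise): add the low words, detect the carry from the unsigned wrap-around, and propagate it into the high words:
#include <stdint.h>

/* c_h:c_l together hold the 64-bit two's complement sum of a and b */
struct wide_sum { uint32_t c_l, c_h; };

struct wide_sum add_wide(int a, int b)
{
    uint32_t a_l = (uint32_t)a;
    uint32_t a_h = -(a_l >> 31);    /* sign-extend a into the high word */
    uint32_t b_l = (uint32_t)b;
    uint32_t b_h = -(b_l >> 31);    /* sign-extend b into the high word */

    struct wide_sum c;
    c.c_l = a_l + b_l;              /* add the low "digits" in base 2^32 */
    uint32_t carry = (c.c_l < a_l); /* wrap-around means there was a carry out */
    c.c_h = a_h + b_h + carry;      /* propagate the carry into the high word */
    return c;
}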
First, the simplest solution that satisfies the problem as stated:
double add(int a, int b)
{
// this will not lose precision, as a double-precision float
// will have more than 33 bits in the mantissa
return (double) a + b;
}
More seriously, the professor probably expected the number to be decomposed into a combination of ints. Holding the sum of two 32-bit integers requires 33 bits, which can be represented with an int and a bit for the carry flag. Assuming unsigned integers for simplicity, adding would be implemented like this:
struct add_result {
unsigned int sum;
unsigned int carry:1;
};
struct add_result add(unsigned int a, unsigned int b)
{
struct add_result ret;
ret.sum = a + b;
ret.carry = b > UINT_MAX - a;
return ret;
}
The harder part is doing something useful with the result, such as printing it. As proposed by harold, a printing function doesn't need to do full division, it can simply cover the possible large 33-bit values and hard-code the first digits for those ranges. Here is an implementation, again limited to unsigned integers:
void print_result(struct add_result n)
{
if (!n.carry) {
// no carry flag - just print the number
printf("%d\n", n.sum);
return;
}
if (n.sum < 705032704u)
printf("4%09u\n", n.sum + 294967296u);
else if (n.sum < 1705032704u)
printf("5%09u\n", n.sum - 705032704u);
else if (n.sum < 2705032704u)
printf("6%09u\n", n.sum - 1705032704u);
else if (n.sum < 3705032704u)
printf("7%09u\n", n.sum - 2705032704u);
else
printf("8%09u\n", n.sum - 3705032704u);
}
Converting this to signed quantities is left as an exercise.
I have a big char *str where the first 8 chars (which equal 64 bits, if I'm not wrong) represent a bitmap. Is there any way to iterate through these 8 chars and see which bits are 0? I'm having a lot of trouble understanding the concept of bits, as you can't "see" them in the code, so I can't think of any way to do this.
Imagine you have only one byte, a single char my_char. You can test for individual bits using bitwise operators and bit shifts.
unsigned char my_char = 0xAA;
int what_bit_i_am_testing = 0;
while (what_bit_i_am_testing < 8) {
if (my_char & 0x01) {
printf("bit %d is 1\n", what_bit_i_am_testing);
}
else {
printf("bit %d is 0\n", what_bit_i_am_testing);
}
what_bit_i_am_testing++;
my_char = my_char >> 1;
}
The part that must be new to you is the >> operator. This operator will "insert a zero on the left and push every bit to the right, and the rightmost will be thrown away".
That was not a very technical description for a right bit shift of 1.
Here is a way to iterate over each of the set bits of an unsigned integer (use unsigned rather than signed integers for well-defined behaviour; unsigned of any width should be fine), one bit at a time.
Define the following macros:
#define LSBIT(X) ((X) & (-(X)))
#define CLEARLSBIT(X) ((X) & ((X) - 1))
Then you can use the following idiom to iterate over the set bits, LSbit first:
unsigned temp_bits;
unsigned one_bit;
temp_bits = some_value;
for ( ; temp_bits; temp_bits = CLEARLSBIT(temp_bits) ) {
one_bit = LSBIT(temp_bits);
/* Do something with one_bit */
}
I'm not sure whether this suits your needs. You said you want to check for 0 bits, rather than 1 bits — maybe you could bitwise-invert the initial value. Also for multi-byte values, you could put it in another for loop to process one byte/word at a time.
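As a concrete (made-up) usage example, the following prints the positions of the set bits of 0xB4, which are 2, 4, 5 and 7:
#include <stdio.h>

#define LSBIT(X) ((X) & (-(X)))
#define CLEARLSBIT(X) ((X) & ((X) - 1))

int main(void)
{
    unsigned temp_bits;
    unsigned one_bit;

    for (temp_bits = 0xB4u; temp_bits; temp_bits = CLEARLSBIT(temp_bits)) {
        one_bit = LSBIT(temp_bits);  /* isolate the lowest set bit */
        int pos = 0;
        while ((one_bit >>= 1) != 0) /* turn the isolated bit into a position */
            ++pos;
        printf("bit %d is set\n", pos);
    }
    return 0;
}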
This works if bit 0 of the bitmap is the least significant bit of its first byte (a little-endian bit order):
#define cBitmapSize 8
#define cBitsCount (cBitmapSize * 8)
const unsigned char cBitmap[cBitmapSize] = { /* some data */ };
for(int n = 0; n < cBitsCount; n++)
{
unsigned char Mask = 1 << (n % 8);
if(cBitmap[n / 8] & Mask)
{
// if n'th bit is 1...
}
}
In the C language, a char is a byte, which is 8 bits wide on virtually every modern system, and in general in computer science, data is organized around bytes as the fundamental unit.
In some cases, such as your problem, data is stored as boolean values in individual bits, so we need a way to determine whether a particular bit in a particular byte is on or off. There is already an SO solution for this explaining how to do bit manipulations in C.
To check a bit, the usual method is to AND it with the bit you want to check:
int isBitSet = bitmap & (1 << bit_position);
If the variable isBitSet is 0 after this operation, then the bit is not set. Any other value indicates that the bit is on.
For one char b you can simply iterate like this :
for (int i=0; i<8; i++) {
printf("This is the %d-th bit : %d\n",i,(b>>i)&1);
}
You can then iterate through the chars as needed.
What you should understand is that you cannot manipulate the bits directly; you can just use some arithmetic properties of numbers in base 2 to compute numbers that in some way represent the bits you want to know.
How does it work, for example? In a char there are 8 bits. A char can be seen as a number written with 8 bits in base 2. If the number in b is b7b6b5b4b3b2b1b0 (each being a binary digit), then b>>i is b shifted to the right by i positions (zeros are pushed in on the left). So, 10110111 >> 2 is 00101101, and then the operation &1 isolates the last bit (bitwise AND operator).
If you want to iterate through every char:
char *str = "MNO"; // M=01001101, N=01001110, O=01001111
int bit = 0;
for (int x = strlen(str)-1; x > -1; x--){ // Start from O, N, M
printf("Char %c \n", str[x]);
for(int y=0; y<8; y++){ // Iterate though every bit
// Shift right by y positions and mask off the last bit
if( str[x]>>y & 0b00000001 ){
printf("bit %d = 1\n", bit);
}else{
printf("bit %d = 0\n", bit);
}
bit++;
}
}
Output
Char O
bit 0 = 1
bit 1 = 1
bit 2 = 1
bit 3 = 1
bit 4 = 0
bit 5 = 0
bit 6 = 1
bit 7 = 0
Char N
bit 8 = 0
bit 9 = 1
bit 10 = 1
...