Trying to populate an array with random 32-bit binary numbers - c

I have an assignment to program the Game of Life in standard C, and I am only allowed to use 1D arrays. My first issue is populating my array with random 32-bit binary numbers. I tried writing code to do this (based on code supplied by my teacher, which is not commented at all, so I largely have no idea what I am doing). When I display the contents of the array, the values at every index are all the same. Can you help me populate my array with 32 random binary numbers, each 32 bits in length?
#include <stdio.h>
#include <stdlib.h>
#include <time.h>   /* for time() */

#define ARSZ 32

void displayBinary(unsigned long *, int);
unsigned long init32();

unsigned long Row[ARSZ];

int main(void) {
    int i, j;
    for (i = 0; i < ARSZ; i++) {
        Row[i] = init32();
    } /* populate array */
    for (j = 0; j < ARSZ; j++) {
        displayBinary(Row, j);
        printf("\n");
    } /* display array contents */
} /** End Main **/

void displayBinary(unsigned long array[], int x) {
    unsigned long MASK = 0x80000000;
    do {
        printf("%c", (array[x] & MASK) ? 'X' : 0x20);
    } while ((MASK >>= 1) != 0);
}

unsigned long init32() {
    unsigned long init32;
    srand(time(NULL));
    init32 = ((double)rand() / RAND_MAX) * 0xFFFFFFFF;
    return init32;
}

You need to change how you seed. Calling srand(time(NULL)) on every call to init32() reseeds the generator with the same value (time() only changes once per second), so rand() produces the same first value of the same sequence every time.
Take a cursory glance at how the pseudo-random number generator works in C and it'll make sense.

1) Repetitive calling of srand(time(NULL)). Rather than calling srand() in init32(), call it once in main(). @WhozCraig, @Albert Myers
2) Potentially too small RAND_MAX with init32 = ((double)rand()/RAND_MAX)*0xFFFFFFFF;. C only requires RAND_MAX to be at least 32767, which would give init32 at most 32768 different values. To reliably generate a 32-bit random number, combine multiple rand() calls:
unsigned long init32;
init32 = rand();
init32 <<= 15;
init32 ^= rand();
init32 <<= 15;
init32 ^= rand();
// if pedantic
init32 &= 0xFFFFFFFFL;
Note:
A small bias in 32-bit number generation occurs if RAND_MAX + 1 is not a power-of-2.
Also, rand() itself is sometimes not uniformly random - it depends on the platform.
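Putting the two fixes together - seed once in main(), and build the value from several rand() calls - a corrected sketch of the original program could look like this (the display code is unchanged, so it is omitted):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ARSZ 32

unsigned long Row[ARSZ];

unsigned long init32(void) {
    /* build 32 bits from several rand() calls, since RAND_MAX
       may be as small as 32767 (15 bits) */
    unsigned long r = rand();
    r <<= 15;
    r ^= rand();
    r <<= 15;
    r ^= rand();
    return r & 0xFFFFFFFFUL; /* keep only the low 32 bits */
}

int main(void) {
    srand(time(NULL)); /* seed once, not once per call */
    for (int i = 0; i < ARSZ; i++)
        Row[i] = init32();
    /* display as before */
}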


Generating random 64/32/16/8-bit integers in C

I'm hoping that somebody can give me an understanding of why the code works the way it does. I'm trying to wrap my head around things but am lost.
My professor has given us this code snippet which we have to use in order to generate random numbers in C. The snippet in question generates a 64-bit integer, and we have to adapt it to also generate 32-bit, 16-bit, and 8-bit integers. I'm completely lost on where to start, and I'm not necessarily asking for a solution, just an explanation of how the original snippet works, so that I can adapt it from there.
long long rand64()
{
    int a, b;
    long long r;

    a = rand();
    b = rand();
    r = (long long)a;
    r = (r << 31) | b;
    return r;
}
Questions I have about this code are:
Why is it shifted 31 bits? I thought rand() generated a number between 0-32767 which is 16 bits, so wouldn't that be 48 bits?
Why do we say | (or) b on the second to last line?
I'm making the relatively safe assumption that, in your computer's C implementation, long long is a 64-bit data type.
The key here is that, since long long r is signed, any value with the highest bit set will be negative. Therefore, the code shifts r by 31 bits to avoid setting that bit.
The | is the bitwise OR operator, which combines the two values by setting in r every bit that is set in b.
EDIT:
After reading some of the comments, I realized that my answer needs correction. rand() returns a value no more than RAND_MAX which is typically 2^31-1. Therefore, r is a 31-bit integer. If you shifted it 32 bits to the left, you'd guarantee that its 31st bit (0-up counting) would always be zero.
rand() generates a random value in [0...RAND_MAX]. Its quality is of questionable repute - but let us set that reputation aside, assume rand() is good enough, and assume RAND_MAX is a Mersenne number (a power of 2, minus 1).
Weakness in OP's code: if RAND_MAX == pow(2,31)-1, a common occurrence, then OP's rand64() only returns values in [0...pow(2,62)). @Nate Eldredge
Instead, loop as many times as needed.
To find how many random bits are returned with each call, we need the log2(RAND_MAX + 1). This fortunately is easy with an awesome macro from Is there any way to compute the width of an integer type at compile-time?
#include <stdlib.h>
/* Number of bits in inttype_MAX, or in any (1<<k)-1 where 0 <= k < 2040 */
#define IMAX_BITS(m) ((m)/((m)%255+1) / 255%255*8 + 7-86/((m)%255+12))
#define RAND_MAX_BITWIDTH (IMAX_BITS(RAND_MAX))
Example: rand_ul() returns a random value in the [0...ULONG_MAX] range, whether unsigned long is 32-bit, 64-bit, etc.
unsigned long rand_ul(void) {
    unsigned long r = 0;
    for (int i = 0; i < IMAX_BITS(ULONG_MAX); i += RAND_MAX_BITWIDTH) {
        r <<= RAND_MAX_BITWIDTH;
        r |= rand();
    }
    return r;
}
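To adapt this to the 32-, 16- and 8-bit cases from the question, the same accumulate-and-shift loop works; only the target width changes. A sketch along those lines, again assuming RAND_MAX is a Mersenne number as discussed above (the helper name rand_bits is mine, not from the original answer):

#include <stdint.h>
#include <stdlib.h>

#define IMAX_BITS(m) ((m)/((m)%255+1) / 255%255*8 + 7-86/((m)%255+12))
#define RAND_MAX_BITWIDTH (IMAX_BITS(RAND_MAX))

/* gather at least `bits` random bits, then mask to the requested width */
static uint64_t rand_bits(unsigned bits) {
    uint64_t r = 0;
    for (unsigned got = 0; got < bits; got += RAND_MAX_BITWIDTH) {
        r <<= RAND_MAX_BITWIDTH;
        r |= (unsigned)rand();
    }
    return bits >= 64 ? r : r & ((UINT64_C(1) << bits) - 1);
}

uint64_t rand64(void) { return rand_bits(64); }
uint32_t rand32(void) { return (uint32_t)rand_bits(32); }
uint16_t rand16(void) { return (uint16_t)rand_bits(16); }
uint8_t  rand8(void)  { return (uint8_t)rand_bits(8); }

The mask at the end discards the excess bits from the final rand() call, so each width gets exactly the number of bits it asked for.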

Function for binary conversion

I am trying to convert a decimal value to binary using the function I wrote in C below. I cannot figure out the reason why it is printing 32 zeroes rather than the binary value of 2.
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <limits.h>

int binaryConversion(int num) {
    int bin_buffer[32];
    int mask = INT_MIN;
    for (int i = 0; i < 32; i++) {
        if (num & mask) {
            bin_buffer[i] = 1;
            mask >> 1;
        }
        else {
            bin_buffer[i] = 0;
            mask >> 1;
        }
    }
    for (int j = 0; j < 32; j++) {
        printf("%d", bin_buffer[j]);
    }
}

int main() {
    binaryConversion(2);
}
Thanks
Two mistakes:
You use >> instead of >>=, so you're not actually ever changing mask.
You didn't declare mask as unsigned, so when you shift, it'll get sign-extended, which you don't want.
If you put a:
printf("%d %d\n", num, mask);
immediately inside your for loop, you'll see why:
2 -2147483648
2 -2147483648
2 -2147483648
2 -2147483648
:
2 -2147483648
The expression mask >> 1 does right shift the value of mask but doesn't actually assign it back to mask. I think you meant to use:
mask >>= 1;
On top of that (once you fix that problem), you'll see that the values in the mask are a bit strange because right-shifting a negative value can preserve the sign, meaning you will end up with multiple bits set.
You'd be better off using unsigned integers since the >> operator will act on them more in line with your expectations.
Additionally, there's little point in writing all those bits into a buffer just so you can print them out later. Unless you need to do some manipulation on the bits (and this appears to not be the case here), you can just output them directly as they're calculated (and get rid of the now unnecessary i variable).
So, taking all those points into account, you can greatly simplify your code such as with the following complete program:
#include <stdio.h>
#include <limits.h>

void binaryConversion(unsigned num) {
    for (unsigned mask = (unsigned)INT_MIN; mask != 0; mask >>= 1)
        putchar((num & mask) ? '1' : '0');
}

int main(void) {
    binaryConversion(2);
    putchar('\n');
}
And just one more note: the value of INT_MIN is not actually required to have just the top bit set. Because C currently allows ones' complement and sign-magnitude (as well as two's complement) representations for negative numbers, it is possible for INT_MIN to have multiple bits set (such as a value of -32767).
There are moves afoot to remove these little-used encodings from C (C++20 has already mandated two's complement) but, for maximum portability, you could opt instead for the following function:
void binaryConversion(unsigned int num) {
    // Done once to set topBit.
    static unsigned topBit = 0;
    if (topBit == 0) {
        topBit = 1;
        while (topBit << 1 != 0) topBit <<= 1;
    }

    // Loop to process all bits.
    for (unsigned mask = topBit; mask != 0; mask >>= 1)
        putchar(num & mask ? '1' : '0');
}
This calculates the value with the top bit set the first time you call the function, irrespective of the vagaries of negative encodings. Just watch out if you call it concurrently in a threaded program.
But, as mentioned, this probably isn't necessary, the number of environments that use the other two encodings would be countable on the fingers of a very careless/unlucky industrial machine operator.
You already have your primary question answered regarding the use of >> rather than >>=. However, from a fundamental standpoint, there is no need to buffer the 1s and 0s in an array of int (e.g. int bin_buffer[32];), and there is no need to use the variadic printf function to display int values if all you are doing is outputting the binary representation of the number.
Instead, all you need is putchar() to output '1' or '0' depending on whether any bit is set or clear. You can also make your output function a bit more useful by providing the size of the representation you want, e.g. a byte (8-bits), a word (16-bits), and so on.
For example, you could do:
#include <stdio.h>
#include <limits.h>

/** binary representation of 'v' padded to 'sz' bits.
 *  the padding amount is limited to the number of
 *  bits in 'v'. valid range: 0 - sizeof v * CHAR_BIT.
 */
void binaryConversion (const unsigned long v, size_t sz)
{
    if (!sz) { fprintf (stderr, "error: invalid sz.\n"); return; }
    if (!v)  { while (sz--) putchar ('0'); return; }

    if (sz > sizeof v * CHAR_BIT)
        sz = sizeof v * CHAR_BIT;

    while (sz--)
        putchar ((v >> sz & 1) ? '1' : '0');
}

int main (void) {
    fputs ("byte : ", stdout);
    binaryConversion (2, 8);

    fputs ("\nword : ", stdout);
    binaryConversion (2, 16);

    putchar ('\n');
}
Which allows you to set the number of bits you want displayed, e.g.
Example Use/Output
$ ./bin/binaryconversion
byte : 00000010
word : 0000000000000010
There is nothing wrong with your approach, but there may be a simpler way to arrive at the same output.
Let me know if you have further questions.
INT_MIN is a negative number, so when you shift it to the right with >>, the most significant bit typically stays 1 instead of becoming zero, and you end up with mask = 111...111 (all bits set). Also, mask >> 1 doesn't change mask; use >>= instead. Alternatively, you can mask against 0x1 and shift the actual value of num instead of the mask, like this:
void binaryConversion(int num) {
    char bin_buffer[32 + 1]; // +1 for the string terminator
    int shifted = num;

    for (int i = 31; i >= 0; --i, shifted >>= 1) { // loop 32x
        bin_buffer[i] = '0' + (shifted & 0x1);
    }
    bin_buffer[32] = 0; // terminate the string
    printf("%s", bin_buffer);
}
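For what it's worth, a minimal driver for the buffer-based version above (expected output shown in comments; the pattern for a negative input assumes the usual two's-complement representation with an arithmetic right shift):

#include <stdio.h>

void binaryConversion(int num); /* buffer-based version above */

int main(void)
{
    binaryConversion(2);    /* 00000000000000000000000000000010 */
    putchar('\n');
    binaryConversion(-2);   /* 11111111111111111111111111111110 */
    putchar('\n');
}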

Printing a long in binary 64-bit representation

I'm trying to print the binary representation of a long in order to practice bit manipulation and setting various bits in the long for a project I am working on. I successfully can print the bits on ints but whenever I try to print 64 bits of a long the output is screwy.
Here is my code:
#include <stdio.h>

void printbits(unsigned long n){
    unsigned long i;
    i = 1<<(sizeof(n)*4-1);
    while(i>0){
        if(n&1)
            printf("1");
        else
            printf("0");
        i >>= 1;
    }

int main(){
    unsigned long n=10;
    printbits(n);
    printf("\n");
}
My output is 0000000000000000000000000000111111111111111111111111111111111110.
Thanks for help!
4 isn’t the right number of bits in a byte
Even though you’re assigning it to an unsigned long, 1 << … is an int, so you need 1UL
n&1 should be n&i
There’s a missing closing brace
Fixes only:
#include <limits.h>
#include <stdio.h>

void printbits(unsigned long n){
    unsigned long i;
    i = 1UL << (sizeof(n)*CHAR_BIT - 1);
    while (i > 0) {
        if (n & i)
            printf("1");
        else
            printf("0");
        i >>= 1;
    }
}

int main(){
    unsigned long n = 10;
    printbits(n);
    printf("\n");
}
And if you want to print a 64-bit number specifically, I would hard-code 64 and use uint_least64_t.
The problem is that i = 1<<(sizeof(n)*4-1) is not correct, for a number of reasons:
sizeof(n)*4 is 32, not 64. You probably want sizeof(n)*8.
1<<63 may overflow, because the constant 1 is an int, which may be only 32 bits. You should use 1ULL<<(sizeof(n)*8-1).
unsigned long is not necessarily 64 bits. You should use unsigned long long.
If you want to be extra thorough, use sizeof(n) * CHAR_BIT (defined in <limits.h>).
In general, you should use stdint defines (e.g. uint64_t) whenever possible.
The following should do what you want:
#include <stdio.h>

void printbits(unsigned long number, unsigned int num_bits_to_print)
{
    if (number || num_bits_to_print > 0) {
        printbits(number >> 1, num_bits_to_print - 1);
        printf("%d", (int)(number & 1));
    }
}
We keep calling the function recursively until either we've printed enough bits, or we've printed the whole number, whichever takes more bits.
Wrapping this in another function gives exactly what you want:
void printbits64(unsigned long number) {
    printbits(number, 64);
}
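A quick sanity check of the recursive version (expected output shown in comments; printbits64 pads with leading zeros, while a small width stops once the number is exhausted):

#include <stdio.h>

void printbits(unsigned long number, unsigned int num_bits_to_print);
void printbits64(unsigned long number); /* both as defined above */

int main(void)
{
    printbits(10, 8);   /* prints 00001010 */
    printf("\n");
    printbits64(10);    /* prints 64 digits: sixty 0s then 1010 */
    printf("\n");
}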

Adding 32 bit signed in C

I have been given this problem and would like to solve it in C:
Assume you have a 32-bit processor and that the C compiler does not support long long (or long int). Write a function add(a,b) which returns c = a+b where a and b are 32-bit integers.
I wrote this code which is able to detect overflow and underflow
#include <stdio.h>

#define INT_MIN (-2147483647 - 1) /* minimum (signed) int value */
#define INT_MAX 2147483647        /* maximum (signed) int value */

int add(int a, int b)
{
    if (a > 0 && b > INT_MAX - a)
    {
        /* handle overflow */
        printf("Handle over flow\n");
    }
    else if (a < 0 && b < INT_MIN - a)
    {
        /* handle underflow */
        printf("Handle under flow\n");
    }
    return a + b;
}
I am not sure how to implement the long value using 32-bit registers so that I can print it properly. Can someone help me use the overflow and underflow information to store the result properly in the variable c, which I think should be two 32-bit locations? I think that is what the problem is hinting at when it says long long is not supported. Would c be two 32-bit registers put together somehow to hold the correct result so that it can be printed? What action should I perform when the result overflows or underflows?
Since this is a homework question I'll try not to spoil it completely.
One annoying aspect here is that the result is bigger than anything you're allowed to use (I interpret the ban on long long to also include int64_t, otherwise there's really no point to it). It may be tempting to go for "two ints" for the result value, but interpreting that is awkward. So I'd go for two uint32_t's and interpret them as the two halves of a 64-bit two's complement integer.
Unsigned multiword addition is easy and has been covered many times (just search). The signed variant is really the same if the inputs are sign-extended: (not tested)
uint32_t a_l = a;
uint32_t a_h = -(a_l >> 31); // sign-extend a
uint32_t b_l = b;
uint32_t b_h = -(b_l >> 31); // sign-extend b
// todo: implement the addition
// return some struct containing c_l and c_h
It can't overflow the 64 bit result when interpreted signed, obviously. It can (and should, sometimes) wrap.
To print that thing, if that's part of the assignment, first reason about which values c_h can have. There aren't many possibilities. It should be easy to print using existing integer printing functions (that is, you don't have to write a whole multiword-itoa, just handle a couple of cases).
As a hint for the addition: what happens when you add two decimal digits and the result is larger than 9? Why is the low digit of 7+6=13 a 3? Given only 7, 6 and 3, how can you determine the second digit of the result? You should be able to apply all this to base 2^32 as well.
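For readers checking their work after attempting the exercise, here is one possible completion of the sketch above (the struct name i64 is illustrative, not part of the original answer):

#include <stdint.h>

/* two uint32_t halves of a 64-bit two's-complement value */
struct i64 { uint32_t lo, hi; };

struct i64 add(int a, int b)
{
    uint32_t a_l = (uint32_t)a;
    uint32_t a_h = -(a_l >> 31); // sign-extend a
    uint32_t b_l = (uint32_t)b;
    uint32_t b_h = -(b_l >> 31); // sign-extend b

    struct i64 c;
    c.lo = a_l + b_l;
    // the low word wrapped exactly when the sum is smaller than an
    // operand - the base-2^32 analogue of carrying past 9 in decimal
    c.hi = a_h + b_h + (c.lo < a_l);
    return c;
}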
First, the simplest solution that satisfies the problem as stated:
double add(int a, int b)
{
    // this will not lose precision, as a double-precision float
    // has more than 33 bits in the mantissa
    return (double) a + b;
}
More seriously, the professor probably expected the number to be decomposed into a combination of ints. Holding the sum of two 32-bit integers requires 33 bits, which can be represented with an int and a bit for the carry flag. Assuming unsigned integers for simplicity, adding would be implemented like this:
struct add_result {
    unsigned int sum;
    unsigned int carry:1;
};

struct add_result add(unsigned int a, unsigned int b)
{
    struct add_result ret;
    ret.sum = a + b;
    ret.carry = b > UINT_MAX - a;
    return ret;
}
The harder part is doing something useful with the result, such as printing it. As proposed by harold, a printing function doesn't need to do full division, it can simply cover the possible large 33-bit values and hard-code the first digits for those ranges. Here is an implementation, again limited to unsigned integers:
void print_result(struct add_result n)
{
    if (!n.carry) {
        // no carry flag - just print the number
        printf("%u\n", n.sum);
        return;
    }

    if (n.sum < 705032704u)
        printf("4%09u\n", n.sum + 294967296u);
    else if (n.sum < 1705032704u)
        printf("5%09u\n", n.sum - 705032704u);
    else if (n.sum < 2705032704u)
        printf("6%09u\n", n.sum - 1705032704u);
    else if (n.sum < 3705032704u)
        printf("7%09u\n", n.sum - 2705032704u);
    else
        printf("8%09u\n", n.sum - 3705032704u);
}
Converting this to signed quantities is left as an exercise.
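As a starting point for that exercise, here is a hedged sketch combining this answer's printing idea with the sign-extension representation from the answer above, assuming a 32-bit unsigned int. For 32-bit signed inputs the exact sum lies in [-4294967296, 4294967294], so the high word can only be 0 (non-negative sum) or 0xFFFFFFFF (negative sum), and only three print cases arise (the names are illustrative):

#include <stdio.h>
#include <stdint.h>

struct i64 { uint32_t lo, hi; }; /* as in the sign-extension sketch above */

void print_i64(struct i64 c)
{
    if (c.hi == 0)            /* sum in 0 ... 4294967294 */
        printf("%lu\n", (unsigned long)c.lo);
    else if (c.lo == 0)       /* exactly INT_MIN + INT_MIN */
        printf("-4294967296\n");
    else                      /* negative: magnitude is 2^32 - lo */
        printf("-%lu\n", (unsigned long)(0u - c.lo));
}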

How does loop order affect bit manipulation for this case in C?

The code below ends up in a seemingly endless loop while printing some decimal numbers.
int main(){
    show(0xBADECAFU);
}

void show(unsigned a){
    unsigned pos=0;
    for(; pos<UINT_MAX; pos++)
        printf("%u", (1U<<pos) & a);
}
The code below actually shows the bits of the hex number. Why does the first program run improperly while the second does not?
int main(){
    show(0xBADECAFU);
}

void show(unsigned n){
    unsigned pos=31, count=1;
    for(; pos!=UINT_MAX; pos--, count++){
        printf("%u", n>>pos & 1U);
    }
}
There are not UINT_MAX bits in an unsigned int. There are, however, CHAR_BIT * sizeof(unsigned int) bits.
/* nb: this prints bits LSB first */
void show(unsigned a){
    unsigned pos = 0;
    for (; pos < CHAR_BIT*sizeof(unsigned); pos++)
        printf("%u", (1U<<pos) & a ? 1 : 0);
}
Consider your second case, where you loop until pos equals UINT_MAX. This will properly* print out 32 bits of unsigned, assuming underflow goes to ~0 and sizeof(unsigned) is at least 4.
Your second example could be improved slightly:
void show(unsigned n){
    int pos = (CHAR_BIT * sizeof(unsigned)) - 1;
    for (; pos >= 0; pos--) {
        printf("%u", (n>>pos) & 1U);
    }
}
* Your code which "prints" the bits was odd, and in my example I've fixed it up.
UINT_MAX is the maximum value which can be stored in an unsigned int variable. It is not directly related to the number of bits.
Your first loop is incrementing over a huge number of ints.
Your second loop is decrementing from 31 down to 0; pos is unsigned, so decrementing it past 0 wraps around to UINT_MAX (this is well-defined for unsigned types), which is what terminates the loop.
This is just a guess.
I think the problem with the first one is that (1U<<pos) invokes undefined behavior if pos >= (sizeof(unsigned int) * CHAR_BIT). In such cases, the compiler is free to do whatever it wants. It might assume that you would never create such a situation, hence pos must always be < (sizeof(unsigned int) * CHAR_BIT) < UINT_MAX, hence the loop condition can be optimized away. Unfortunately, this "optimization" leaves you with an infinite loop.
