C Program to convert centimeters to inches and feet

I am trying to write a C program to convert centimeters to inches and then to feet. I have most of the code written, but I'm not sure how to debug it. I keep getting "0" as all of my output. Where am I going wrong?
#include <stdio.h>
// Main Function
int main(void)
{
//Variable declarations
int centimeters = 0 ;
int inches = 0;
int feet = 0;
/* ... */
// Print a title
printf("Convert centimeters to inches and feet\n");
{
printf("Input Length in Centimeters: "); // Prompt user for input
scanf("%i", &centimeters); // Get input from user for length in cm
}
// conversion calcs
inches = (double)centimeters * 2.54;
feet = (double) inches / 12;
/* ... */
// Print output
printf("Statistics:\n");
printf("centimeters%12f\n", centimeters);
printf("inches%12f\n", inches);
printf("feet%12f\n", feet);
/* ... */
return 0;
}

In C, types never change themselves. Even though you're assigning a floating point number to an integer, the integer will still be an integer. For an application like this, I would recommend you just make everything a floating point number.
//Variable declarations
int centimeters = 0 ;
int inches = 0;
int feet = 0;
should be
//Variable declarations
double centimeters = 0 ;
double inches = 0;
double feet = 0;
This will also allow your printf statements to keep on using %f to print the numbers. You will need to change your scanf statement to use %lf though, since scanf needs %lf (not %f) to read into a double.
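To make the fix concrete, here is a minimal sketch of the whole program with those changes applied (my sketch, not the original poster's code; note the conversion itself should also divide by 2.54, since one inch is 2.54 centimeters):
#include <stdio.h>

int main(void)
{
    double centimeters = 0.0, inches = 0.0, feet = 0.0;

    printf("Convert centimeters to inches and feet\n");
    printf("Input Length in Centimeters: ");
    scanf("%lf", &centimeters); // %lf reads a double

    inches = centimeters / 2.54; // one inch is 2.54 cm
    feet = inches / 12.0;

    printf("Statistics:\n");
    printf("centimeters%12f\n", centimeters);
    printf("inches%12f\n", inches);
    printf("feet%12f\n", feet);
    return 0;
}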
As to why you were getting 0 as your results, it's actually pretty simple if you know how numbers are stored internally.
Most floating point implementations store the bits in this order: sign, exponent, mantissa. If your implementation is using 32 bit IEEE floats, the first 9 bits hold the sign and exponent, and the last 23 are the mantissa. This means that if your value fits completely within a single byte (between 0 and 255 inclusive), then read as a float the sign and exponent bits will all be zero, and the number is far too small to print as anything but 0.
This is because most Intel-based desktop computers store their bytes in little-endian format, which means the least significant byte comes first and the most significant is last. For example, on little-endian machines, a 32 bit number holding these values would be stored like:
0x01: 0x01 0x00 0x00 0x00
0xFF: 0xFF 0x00 0x00 0x00
0x1FF: 0xFF 0x01 0x00 0x00
0xA0F00D: 0x0D 0xF0 0xA0 0x00
And so on...
So if you have a value that's less than 0x100 (255 or lower), it only occupies the first byte of memory. It just so happens that on such a machine the first byte of a 32 bit float holds only the lowest bits of the mantissa and no exponent bits at all, so read as a float the value is a denormal so close to zero that it will be printed as 0.
A 64 bit float (a double) keeps its sign and exponent in the top 12 bits, leaving 52 bits of mantissa, so the same trick hides integer values all the way up to 2^52 - 1.
EDIT: Just an addendum, since someone up-voted this recently:
This post isn't totally incorrect; however, I failed to mention the fact that there is an implicit leading bit (a one, so to speak) in the mantissa whenever the exponent is not all zeros. When that is the case, the value is called normalized, and it can never be zero. Only if the number is denormalized (all the exponent bits are unset/zeroed out) can you have a true zero value.
If the user were to store a value between 0 and 15, or between 128 and 143, into the first 8 bits of a 32 bit float, that would give an exponent small enough that most display functions round the represented value to zero. This is because 0-15 only occupy the lower 4 bits, and 128-143 are the same values with the most significant bit set (which may end up in the sign bit), so either way the encoded exponent is very small. The specific mapping depends on how the machine orders its bits.
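A quick way to see this effect for yourself, sketched here with memcpy so the reinterpretation itself is well defined (assuming int and float are both 32 bits):
#include <stdio.h>
#include <string.h>

int main(void)
{
    int i = 200; // fits in a single byte
    float f;
    memcpy(&f, &i, sizeof f); // reinterpret the int's bits as a float
    printf("%f\n", f); // prints 0.000000
    printf("%g\n", f); // reveals the denormal, about 2.8e-43
    return 0;
}
The bits of 200 land in the low end of the mantissa while the exponent stays zero, so %f rounds the tiny denormal down to 0.000000.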

You are using the wrong format. Instead of %12f, use %12d.
printf("centimeters%12d\n", centimeters);
printf("inches%12d\n", inches);
printf("feet%12d\n", feet);
While it's true that you will lose fractional values, that might be your intention. However, passing an int to printf where a double is expected causes undefined behavior.
See the differences in the output when using %12f vs %12d at http://ideone.com/JdjE3z.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    float c = 0.0, i = 0.0; // c = centimeters, i = inches
    int f = 0; // f = feet
    printf("Input length in centimeters: ");
    scanf("%f", &c);
    f = (c / 2.54) / 12; // whole feet; the fractional part truncates
    i = (c / 2.54) - (f * 12); // inches left over after the whole feet
    printf("%.1f centimeters is %d feet %.1f inches.\n", c, f, i);
    exit(0);
}
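For example, an input of 170 prints "170.0 centimeters is 5 feet 6.9 inches."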

Related

what does float cf = *(float *)&ci; in C do?

I'm trying to find out exactly what this program prints.
#include <stdio.h>
int main() {
float bf = -62.140625;
int bi = *(int *)&bf;
int ci = bi+(1<<23);
float cf = *(float *)&ci;
printf("%X\n",bi);
printf("%f\n",cf);
}
This prints out:
C2789000
-124.281250
But what happens line by line? I do not understand.
Thanks in advance.
It is a convoluted way of doubling a 32-bit floating point number by adding one to its exponent. It also happens to be incorrect, because accessing an object of type float through an lvalue of type int violates the strict aliasing rule.
The exponent is located in bits 23 to 30. Adding 1<<23 increments the exponent by one, which works like multiplying the original number by 2.
If we rewrite this program to remove the pointer punning:
#include <stdio.h>
#include <string.h>

int main() {
    float bf = -62.140625;
    int bi;
    memcpy(&bi, &bf, sizeof(bi)); // copy the float's bits into an int
    for(int i = 0; i < 32; i += 8)
        printf("%02x ", ((unsigned)bi & (0xffu << i)) >> i); // dump the bytes, least significant first
    bi += (1<<23); // bump the exponent field by one
    memcpy(&bf, &bi, sizeof(bi)); // copy the modified bits back into the float
    printf("%f\n", bf);
}
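On a little-endian machine this prints 00 90 78 c2 -124.281250; the byte dump is the bit pattern 0xC2789000 shown least significant byte first, matching the C2789000 printed by the original program.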
Float numbers have the format:
(-1)^sign * 2^(exponent - 127) * 1.fraction
-62.140625 is 1.11110001001 (binary) * 2^5, so its exponent is 5, stored in the exponent field as 5 + 127 = 132.
bi += (1<<23);
increments the stored exponent by one, so the resulting float number will be -62.140625 * 2^1 and it is equal to -124.281250. If you change that line to
bi += (1<<24);
it will increment the exponent by two (1<<24 is two steps of 1<<23), so the resulting float number will be -62.140625 * 2^2 and it is equal to -248.562500.
float bf = -62.140625;
This creates a float object named bf and initializes it to −62.140625.
int bi = *(int *)&bf;
&bf takes the address of bf, which produces a pointer to a float. (int *) says to convert this to a pointer to an int. Then * says to access the pointed-to memory, as if it were an int.
The C standard does not define the behavior of this access, but many C implementations support it, sometimes requiring a command-line switch to enable support for it.
A float value is normally encoded in some way. −62.140625 is not an integer, so it cannot be stored as a binary numeral that represents an integer. It is encoded. Reinterpreting the bytes of memory as an int using * (int *) &bf is an attempt to get the bits into an int so they can be manipulated directly, instead of through floating-point operations.
int ci = bi+(1<<23);
The format most commonly used for the float type is IEEE-754 binary32, also called “single precision.” In this format, bit 31 is a sign bit, bits 30-23 encode an exponent and/or some other information, and bits 22-0 encode most of a significand (or, in the case of a NaN, other information). (The significand is the fraction part of a floating-point representation. A floating-point format represents a number as ±F·b^e, where b is a fixed base, F is a number with a fixed precision in a certain range, and e is an exponent in a certain range. F is the significand.)
1<<23 is 1 shifted 23 bits, so it is 1 in the exponent field, bits 30-23.
If the exponent field contains 1 to 253, then adding 1 to it increases the encoded exponent by 1. (The codes 0 and 255 have special meaning in the exponent field. 254 is a normal value, but adding 1 to it overflows into the special code 255, so it will not increase the exponent in a normal way.)
Since the base b of a binary floating-point format is 2, increasing the exponent by 1 multiplies the number represented by 2: ±F·b^e becomes ±F·b^(e+1).
float cf = *(float *)&ci;
This is the opposite of the previous reinterpretation: It says to reinterpret the bytes of the int as a float.
printf("%X\n",bi);
This says to print bi using a hexadecimal format. This is technically wrong; the %X format should be used with an unsigned int, not an int, but most C implementations let it pass.
printf("%f\n",cf);
This prints the new float value.

Using an unsigned int in a do-while loop

I'm new to coding in C and I've been trying to wrap my head around unsigned integers. This is the code I have:
#include <stdio.h>
int main(void)
{
unsigned int hours;
do
{
printf("Number of hours you spend sleeping a day: ");
scanf(" %u", &hours);
}
while(hours < 0);
printf("\nYour number is %u", hours);
}
However, when I run the code and enter -1, it does not ask the question again like it should; it prints "Your number is 4294967295" instead. If I change unsigned int to a normal int, the code works fine. Is there a way I can change my code to make the unsigned int work?
Appreciate any help!
Is there a way I can change my code to make the unsigned int work?
Various approaches are possible.
Read as int and then convert to unsigned.
Given that "Number of hours you spend sleeping a day: " implies a small legitimate range of about 0 to 24, read as an int and convert.
int input;
do {
puts("Number of hours you spend sleeping a day:");
if (scanf("%d", &input) != 1) {
Handle_non_text_input(); // TBD code for non-numeric input like "abc"
}
} while (input < 0 || input > 24);
unsigned hours = input;
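Reading into an int first also sidesteps a quirk of %u: scanf's %u matches an optionally signed integer (the conversion follows strtoul, which negates the result after a leading minus), so an input of -1 scans "successfully" as 4294967295 and hours < 0 can never be true.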
An unsigned int cannot hold negative numbers. It is useful since it can store values up to twice as large as a regular int can, but it cannot hold negative numbers. So when you try to read a negative number into your unsigned int, it is read as a large positive number. Although both int and unsigned int are 32-bit numbers, they are interpreted very differently.
I would try the following test (filling in the prompt and scanf from your own code):
do {
    printf("Number of hours you spend sleeping a day: ");
    scanf(" %u", &hours);
} while (hours > 24);
Why should it work?
An unsigned int in C is a binary number with 32 bits. That means its max value is 2^32 - 1.
Note that:
2^32 - 1 == 4294967295. That is no coincidence. Negative ints are usually represented using the "Two's complement" method.
A word about that method:
When I use a regular int, its most significant bit is reserved for the sign: 1 if negative, 0 if positive. A positive int then holds a 0 in its most significant bit, and 1's and 0's in the remaining positions in the ordinary binary manner.
Negative ints are represented differently:
Suppose K is a positive number, represented by N bits.
The number (-K) is represented using 1 in the most significant bit, with the positive number 2^(N-1) - K occupying the N-1 least significant bits.
Example:
Suppose N = 4, K = 7. Binary representation of 7 using 4 bits:
7 = 0111 (The most significant bit is reserved for sign, remember?)
-7 , on the other hand:
-7 = concat(1, 2^(4-1) - 7) == 1001
Another example:
1 = 0001, -1 = 1111.
Note that if we use 32 bits, -1 is 1...1 (altogether we have 32 1's). This is exactly the binary representation of the unsigned int 4294967295. When you use unsigned int, you instruct the compiler to treat that bit pattern as a positive number. This is where your unexpected "error" comes from.
Now, if you use while(hours > 24), you rule out most of the illegal input. I am not sure, though, that it rules out all illegal input: it might be possible to type a negative number that gets interpreted as a non-negative number in the range [0:24] when the sign is ignored and the most significant bit is treated as 'just another bit'.
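As a two-line demonstration of that reinterpretation (a standalone snippet, assuming the usual 32-bit int):
#include <stdio.h>

int main(void)
{
    int negative = -1; // bit pattern: thirty-two 1's
    unsigned int u = negative; // same bits, now read as a positive value
    printf("%d reinterpreted as unsigned is %u\n", negative, u); // prints 4294967295
    return 0;
}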

32-bits as hex number in C with simple function call

I would like to know if I can get away with using printf to print 32 bits of incoming binary data from a microcontroller as a hexadecimal number. I have already collected the bits into a large integer variable and I'm trying the "%x" option in printf, but all I seem to get are 8-bit values, although I can't tell whether that's a limitation of printf or whether my microcontroller is actually returning those values.
Here's my code to receive data from the microcontroller:
printf("Receiving...\n");
unsigned int n=0,b=0;
unsigned long lnum=0;
b=iolpt(1); //call to tell micro we want to read 32 bits
for (n=0;n<32;n++){
b=iolpt(1); //read bit one at a time
printf("Bit %d of 32 = %d\n",n,b);
lnum<<1; //shift bits in our big number left by 1 position
lnum+=b; //and add new value
}
printf("\n Data returned: %x\n",lnum); //always returns 8-bits
The iolpt() function always returns the bit read from the microcontroller and the value returned is a 0 or 1.
Is my idea of using %x acceptable for a 32-bit hexadecimal number, or should I attempt something like "%lx" instead of "%x" to represent long hex, even though I can't find it documented anywhere? Or is printf the wrong function for 32-bit hex? If it is the wrong function, is there a function that works, or am I forced to break up my long number into four 8-bit numbers first?
printf("Receiving...\n");
iolpt(1); // Tell micro we want to read 32 bits.
/* Is this correct? It looks pretty simple to be
initiating a read. It is the same as the calls
below, iolpt(1), so what makes it different?
Just because it is first?
*/
unsigned long lnum = 0;
for (unsigned n = 0; n < 32; n++)
{
unsigned b = iolpt(1); // Read bits one at a time.
printf("Bit %u of 32 = %u.\n", n, b);
lnum <<= 1; // Shift bits in our big number left by 1 position.
// Note this was changed to "lnum <<= 1" from "lnum << 1".
lnum += b; // And add new value.
}
printf("\n Data returned: %08lx\n", lnum);
/* Use:
0 to request leading zeros (instead of the default spaces).
8 to request a field width of 8.
l to specify long.
x to specify unsigned and hexadecimal.
*/
Fixed:
lnum<<1; to lnum <<= 1;.
%x in final printf to %08lx.
%d in printf in loop to %u, in two places.
Also, cleaned up:
Removed b= in initial b=iolpt(1); since it is unused.
Moved definition of b inside loop to limit its scope.
Moved definition of n into for to limit its scope.
Used proper capitalization and punctuation in comments to improve clarity and aesthetics.
Would something like that work for you?
printf("Receiving...\n");
unsigned int n=0,b=0;
unsigned long lnum=0;
b=iolpt(1); //call to tell micro we want to read 32 bits
for (n=0;n<32;n++){
b=iolpt(1); //read bit one at a time
printf("Bit %d of 32 = %d\n",n,b);
lnum <<= 1; //shift bits in our big number left by 1 position
lnum+=b; //and add new value
}
printf("\n Data returned: %#010lx\n",lnum); //now returns 32-bit
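Note that with %#010lx the # flag supplies the 0x prefix, and the prefix counts toward the field width of 10, so a 32-bit value comes out as 0x followed by exactly eight zero-padded hex digits.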

Hex remove leading digits

When you do something like 0x01AE1 - 0x01AEA, you get fffffff7, but I only want the last 3 digits. So I used the modulus trick to try to remove the extra digits. The displacement gets filled with hex values.
int extra_crap = 0;
int extra_crap1 = 0;
int displacement = 0;
int val1 = 0;
int val2 = 0;
displacement = val1 - val2;
extra_crap = displacement % 0x100;
extra_crap1 = displacement % 256;
printf(" extra_crap is %x \n", extra_crap);
printf(" extra_crap1 is %x \n", extra_crap1);
Unfortunately this is having no effect at all. Is there another way to remove all but the last 3 digits?
'Unfortunately this is having no effect at all.'
That's probably because you do your calculations on signed int. Try casting the value to unsigned, or simply forget the remainder operator % and use bitwise masking:
displacement & 0xFF;
displacement & 255;
for two hex digits or
displacement & 0xFFF;
displacement & 4095;
for three digits.
EDIT – some explanation
A detailed answer would be quite long... You need to learn about the data types used in C (especially int and unsigned int, two of the most used integral types), the range of values that can be represented in those types, and their internal representation in two's complement code. Also about integer overflow and the hexadecimal system.
Then you will easily see what happened to your data: subtracting 0x01AE1 - 0x01AEA, that is 6881 - 6890, gave the result -9, which in a 32-bit signed integer encoded with 2's complement and printed in hexadecimal is fffffff7. That minus nine divided by 256 gave a quotient of zero and a remainder of minus nine, so the remainder operator % gave you a precise and correct result. What you call 'no effect at all' is just a result of misunderstanding what the code was actually doing.
My answer above (variant 1) is not any kind of magic, but just a way to enforce calculation on positive numbers. Casting values to unsigned type makes the program interpret 0xFFFFFFF7 as 4294967287, which divided by 256 (0x100 in hex) gives a quotient of 16777215 (0xFFFFFF) and a remainder of 247 (0xF7). Variant 2 does no division at all and just 'masks' the necessary bits: the numbers 255 and 4095 consist of 8 and 12 low-order bits equal to 1 (0xFF and 0xFFF in hexadecimal, respectively), so bitwise AND does exactly what you want: it removes the higher part of the value, leaving just the required two or three low-order hex digits.
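As a sketch (mine, not part of the original answer), here is how the variants compare on the values from the question:
#include <stdio.h>

int main(void)
{
    int displacement = 0x01AE1 - 0x01AEA; // -9, i.e. fffffff7 in 32-bit two's complement

    printf("%x\n", (unsigned)(displacement % 0x1000)); // fffffff7: the remainder of -9 is -9
    printf("%x\n", (unsigned)displacement & 0xFFF); // ff7: masking keeps the low 12 bits
    printf("%x\n", (unsigned)displacement % 0x1000); // ff7: converting to unsigned first makes % behave
    return 0;
}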

Convert from binary to floating point

I'm doing some exercises for Computer Science university and one of them is about converting an int array of 64 bits into its double-precision floating point value.
Understanding the first bit, the sign +/-, is quite easy. Same for the exponent, since we know that the bias is 1023.
We are having problems with the significand. How can I calculate it?
In the end, I would like to obtain the real numbers that the bits meant.
Computing the significand from the given 64 bits is quite easy.
According to the wiki article on IEEE 754, the significand is made up of the first 53 bits (from bit 0 to bit 52).
Now if you need to round a longer value, say 67 bits, down to your 64-bit value, the dropped trailing bits are rounded into the last kept bit, which can flip it to 1 even if it was 0 before. For example, 11110000 11110010 11111 becomes 11110000 11110011 after rounding away the trailing bits.
Separately, there is no need to store the topmost (53rd) bit of the significand, because for a normalized number it always has the value one; that's why only 52 bits are stored explicitly instead of 53.
Now to compute it, you just need to take the bit range of the significand [bit 1 - bit 52] (bit 0 is always 1) and use it:
#include <math.h> // for pow()

int index_signf = 1; // starting at 1, not 0
int significand_length = 52;
int byteArray[53]; // bits of the significand, to be filled in first
double significand_endValue = 0;
for( ; index_signf <= significand_length ; index_signf++)
{
    significand_endValue += byteArray[index_signf] * pow(2, -index_signf);
}
significand_endValue += 1; // the implicit leading bit contributes 2^0
Now you just have to fill byteArray accordingly before computing it, using a function like this:
/* assumes array64bits[0] is the sign bit, [1..11] the exponent,
   and [12..63] the 52 fraction bits, most significant first */
int* getSignificandBits(int* array64bits){
    //returned array; static so it outlives the call (returning a local array would dangle)
    static int significandBitsArray[53];
    //set the first bit = 1 (the implicit leading bit)
    significandBitsArray[0] = 1;
    // fill it: entry i gets the fraction bit of weight 2^-i
    for(int i = 1; i <= 52; i++)
        significandBitsArray[i] = array64bits[11 + i];
    return significandBitsArray;
}
You could just load the bits into an unsigned integer of the same size as a double, take the address of that, cast it to a void* which you then cast to a double*, and dereference it.
Of course, this might be "cheating" if you really are supposed to parse the floating point standard, but this is how I would have solved the problem given the parameters you've stated so far.
If you have a byte representation of an object you can copy the bytes into the storage of a variable of the right type to convert it.
#include <stdint.h>
#include <string.h>

double convert_to_double(uint64_t x) {
    double result;
    memcpy(&result, &x, sizeof(x)); // copy the bit pattern into the double's storage
    return result;
}
You will often see code like *(double *)&x used to do the conversion, but while in practice it will usually work, it is undefined behavior in C.
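As a usage sketch, the well-known bit pattern of 1.5 round-trips through convert_to_double from above as expected:
#include <stdio.h>

int main(void)
{
    // sign 0, exponent 0x3FF (bias 1023, so 2^0), top fraction bit set: 1.1 in binary = 1.5
    printf("%f\n", convert_to_double(0x3FF8000000000000u)); // prints 1.500000
    return 0;
}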
