Multiply two 8-bit numbers - C

I am trying to multiply two 8-bit numbers and store the 16-bit result in two 8-bit variables, e.g.:
91*16 = 1456
High = 14 and Low = 56
This is the code I'm using, but I'm not getting the desired result. Can someone point out my error, please?
#include <stdio.h>
#include <math.h>
#include <string.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>
int main()
{
    uint8_t a = 91;
    uint8_t b = 16;
    uint16_t c = a*b;
    printf("Value at c is %d \n",c);
    uint8_t lo = c & 0x00FF;
    uint8_t hi = c >> 8;
    printf("Hi and lo value are %d and %d \n",hi,lo);
    return 0;
}
I get Hi = 5 and Lo = 176. Why is that?

The result of a*b is 1456, which is stored in binary as
0000 0101 1011 0000
(   5    )(  176  )
The high byte is 5 and the low byte is 176, and that is what you printed.
You mixed up the decimal representation with the binary one actually stored. The value is not stored as 0001 0100 0101 0110; if it were, you would indeed get your desired result (though it is debatable how useful that layout would be).
To get what you want, use c % 100 and c / 100 (that gives you the last two decimal digits in lo and the remaining leading digits in hi):
uint8_t lo = c % 100;
uint8_t hi = c / 100;
And print them properly (PRIu8 comes from <inttypes.h>):
printf("%" PRIu8 "\n", lo); //56
printf("%" PRIu8 "\n", hi); //14
or, to match your original output line, printf("Hi and lo value are %" PRIu8 " and %" PRIu8 " \n", hi, lo);.
Note that any two-digit positive integer fits in a uint8_t, but to be more generic, keep in mind that the product of two 8-bit numbers can have up to five decimal digits (up to 255*255 = 65025), in which case the high part no longer fits in a uint8_t.
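Putting it together, here is a minimal sketch of a corrected program (using uint16_t for the high part so that five-digit products also fit):
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
int main(void)
{
    uint8_t a = 91;
    uint8_t b = 16;
    uint16_t c = (uint16_t)(a * b);   // 1456
    uint16_t hi = c / 100;            // leading decimal digits: 14
    uint8_t lo = c % 100;             // last two decimal digits: 56
    printf("Value at c is %" PRIu16 "\n", c);
    printf("Hi and lo value are %" PRIu16 " and %" PRIu8 "\n", hi, lo);
    return 0;
}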

If you want a decimal representation (which doesn't really make sense, since decimal digit boundaries do not line up with byte values), you need to use /100 and %100 instead of bitwise operations.

Related

How to cast a 10-bit signed integer to a 16-bit integer in C

I am parsing data that contains a 10-bit signed integer. Since the only way to represent this data is to use an int or a short (for a signed 2-byte representation), I have to cast the 10-bit value to 16 bits.
I have already tried two methods, but they are either slow or compiler dependent.
The slow method is to use the pow() function:
value = pow(2,16) - pow(2,10) + value
The compiler-dependent method is:
value = (value << 6) >> 6 (the right shift replicates the MSB, which is a compiler-dependent operation and may shift in 0s with a different compiler)
Can someone help me find the standard way of casting non-standard types to standard types?
Here is the logic for the operations explicitly written out. Obviously you can do this with a one-liner, but I hope this explains why.
#include <stdio.h>
#include <string.h>
#include <stdint.h>
int main(int argc, char* argv[]) {
    //int16_t value = 0x3fdb; // random examples
    //int16_t value = 0x00f3;
    int16_t value = 0x3f3;
    printf("0x%04x (%i)\n", value, value); // in
    uint16_t mask = 0x3ff; // 0000 0011 1111 1111 in binary
    uint16_t masked = value & mask; // get only the 10 LSB
    uint16_t extension = (0x200 & value) ? 0xFC00 : 0x0; // extend with 1s or 0s
    printf("extension: %i\n", (extension)?1:0);
    int16_t extended = extension | masked; // do the extension
    printf("0x%04x (%i)\n", extended, extended); // out
    return 0;
}
Examples:

0x00f3 (243)
extension: 0
0x00f3 (243)

0x3fdb (16347)
extension: 1
0xffffffdb (-37)

0xfffffff3 (-13)
extension: 1
0xfffffff3 (-13)

0x03f3 (1011)
extension: 1
0xfffffff3 (-13)
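As noted above, this can be collapsed into a one-liner; here is a minimal sketch (the helper name sign_extend_10 is just for illustration):
#include <stdint.h>
// Sign-extend the low 10 bits of v to a full int16_t.
int16_t sign_extend_10(uint16_t v) {
    uint16_t masked = v & 0x03FF;                      // keep only the 10 LSBs
    return (int16_t)((v & 0x0200) ? (masked | 0xFC00)  // bit 9 set: fill the top 6 bits with 1s
                                  : masked);           // bit 9 clear: leave them 0
}
For example, sign_extend_10(0x3f3) yields -13, matching the last example output above.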
value = value & 0x03FF; //mask off the high 6 bits like you want.
There are no native 10-bit integer types; I assume value is a short, but you should add that relevant info.
Edit:
If you only want to mask when the 10th bit is set, then:
value = (value & 0x0200) ? (value & 0x03FF) : value;

Why C programming gives different output? [duplicate]

This question already has answers here:
How does C Handle Integer Literals with Leading Zeros, and What About atoi?
(8 answers)
Closed 3 years ago.
I got unexpected outputs, and I could not figure out the reason.
#include <stdio.h>
int main() {
    int a = 0100;
    int b = 010;
    int c = 1111;
    int d = 01111;
    printf("0100 => %d, 010 => %d, 1111 => %d, 01111=> %d\n", a, b, c, d);
}
Output:
0100 => 64, 010 => 8, 1111 => 1111, 01111=> 585
Why does such output occur?
In C, putting either 0x or 0 before an integer literal value will cause the compiler to interpret the number in base 16 (hexadecimal) and base 8 (octal), respectively.
Normally, we read and write numbers in base 10 (decimal). However, these other bases are sometimes used because they are powers of 2 that map directly onto groups of bits (1 hexadecimal digit = 4 bits, 1 octal digit = 3 bits), and bit manipulation is something C is designed to support. This is why you'll often see a char written with 2 hexadecimal digits (e.g. 0x12) to set it to a specific bit pattern.
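For example, the two surprising values in the output above follow directly from base-8 place values:
0100 (octal) = 1*64 = 64
01111 (octal) = 1*512 + 1*64 + 1*8 + 1 = 585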
If you wanted to verify this, printf also lets you output an int in hexadecimal or octal, using %x and %o respectively instead of %d.
#include <stdio.h>
int main()
{
    int a = 0100;
    int b = 010;
    int c = 1111;
    int d = 01111;
    printf("0100 => %o, 010 => %o, 1111 => %d, 01111=> %o\n", a, b, c, d);
}
If you run this program, you'll get the following:
gcc -ansi -O2 -Wall main.c && ./a.out
0100 => 100, 010 => 10, 1111 => 1111, 01111=> 1111
...which is exactly what you set the values to in the program, just without the prefixes. In the original code you simply assigned the values in one base (octal, because of the leading 0) and printed them in another (decimal).
0 prefix gives you octal, and you tried to print decimal.

Why does compound bit shifting not set leading nibble to 0? [duplicate]

This question already has answers here:
Bit shifting a byte by more than 8 bit
(2 answers)
Closed 3 years ago.
I am trying to clear the first nibble in a 16-bit unsigned integer (set it to 0).
One of the approaches I attempted was to use left bit shifting and then right bit shifting to clear the first four bits.
Here is my program.
#include <stdio.h>
#include <stdint.h>
int main() {
    uint16_t x = 0xabcd; // I want to print out "bcd"
    uint16_t a = (x << 4) >> 4;
    printf("After clearing first four bits: %x\n", a);
    return 0;
}
I am expecting the first bit shift (x << 4) to evaluate to 0xbcd0 and then the right bit shift to push a leading 0 in front - 0x0bcd. However, the actual output is 0xabcd.
What confuses me more is if I try to use the same approach to clear the last four bits,
uint16_t a = (x >> 4) << 4;
it works fine and I get expected output: abc0.
Why, in the program above, does the right bit shifting not push a leading 0?
This happens due to integer promotion. On systems where int is wider than 16 bits, your expression is effectively evaluated as
uint16_t a = (int)((int)x << 4) >> 4;
so the bits shifted up by << are kept in the wider int and the right shift brings them straight back; the upper bits are therefore never stripped.
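A minimal sketch of a fix, if you want the 0x0bcd result, is to force the intermediate value back to 16 bits before the right shift, or simply mask off the top nibble:
#include <stdio.h>
#include <stdint.h>
int main(void) {
    uint16_t x = 0xabcd;
    uint16_t a = (uint16_t)(x << 4) >> 4; // truncate to 16 bits first, then shift back
    uint16_t b = x & 0x0FFF;              // equivalent: mask off the top nibble directly
    printf("%x %x\n", a, b);              // prints: bcd bcd
    return 0;
}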
If you want to print out "bcd", you can do it like this:
#include <stdio.h>
#include <stdint.h>
int main() {
    uint16_t x = 0xabcd; // I want to print out "bcd" // step 1
    uint16_t c = x << 4; // step 2
    uint16_t a = c >> 4; // step 3
    printf("After clearing first four bits: %x\n", a);
    return 0;
}
The output of the above is:
After clearing first four bits: bcd
Explanation:
After step 1, x is 1010101111001101 (binary) // 1010(a) 1011(b) 1100(c) 1101(d)
After step 2, c is 1011110011010000 (binary) // 1011(b) 1100(c) 1101(d) 0000(0)
After step 3, a is 0000101111001101 (binary) // 0000(0) 1011(b) 1100(c) 1101(d)

Why does it show like this in hexadecimal notation?

I'm trying to do bit arithmetic with variables a and b. When I invert a, which has the value 0xAF, the result shows as 8 digits, unlike the others, which show as 2 digits.
I don't know why this happens, but I guess it is related to how %x displays values, or to little-endian representation?
Here is my code:
#include <stdio.h>
int main()
{
    int a = 0xAF; // 10101111
    int b = 0xB5; // 10110101
    printf("%x \n", a & b);  // a & b = 10100101
    printf("%x \n", a | b);  // a | b = 10111111
    printf("%x \n", a ^ b);  // a ^ b = 00011010
    printf("%x \n", ~a);     // ~a = 1....1 01010000
    printf("%x \n", a << 2); // a << 2 = 1010111100
    printf("%x \n", b >> 3); // b >> 3 = 00010110
    return 0;
}
Considering your int a is most likely 32 bits in size, your a actually looks like this:
int a = 0xAF; // 0000 0000 0000 0000 0000 0000 1010 1111
So if you flip all the bits on that, you have
1111 1111 1111 1111 1111 1111 0101 0000
Or, in hex
0xFFFFFF50
Which is exactly what you're getting. The others show fewer digits because leading zeroes are omitted when printing hex, and your other bit operations do not in fact change any of those leading zeroes.
---- Credit to @chqrlie for this ----
If you really only want to see 8 bits of the result, you can do
printf("%hhx \n", ~a); // ~a = 1....1 01010000 --> Output : 50
This restricts the printed value to unsigned char width (8 bits on essentially all modern systems, though not strictly guaranteed), which is enough for your purposes.
There are a lot of potential problems with code like this.
Most importantly, you should never use signed integer types like int when doing bitwise operations, because you can end up with unexpected results, or with undefined/implementation-defined behaviour, when operators like << and >> are applied to negative values.
So step one is to ensure you have an unsigned integer type. Preferably uint32_t or similar from stdint.h.
Another related problem is that small integer types used in an expression, such as uint8_t, char, short, and bool, are implicitly promoted to int, which is a signed type; this happens even when the original type is unsigned, like unsigned char or uint8_t. This is the source of many fatal bugs related to the bitwise operators.
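A small illustration of that promotion (assuming a 32-bit int; the variable names are just for illustration):
#include <stdio.h>
#include <stdint.h>
int main(void) {
    uint8_t u = 0xAF;
    unsigned full = (unsigned)~u; // u is promoted to (signed) int before ~, giving 0xffffff50
    unsigned byte = (uint8_t)~u;  // truncating back to 8 bits gives 0x50
    printf("%x %x\n", full, byte);
    return 0;
}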
And finally, the printf family of functions is dangerous to use when you need to be explicit about types. While these functions have no compile-time type safety at all, each conversion specifier still expects an argument of a specific type; if you pass the wrong type, you invoke undefined behaviour and the program may potentially crash and burn. Also, being variable-argument functions, they apply the default argument promotions to their arguments, which can cause further unforeseen bugs.
The "strange" output you experience is a combination of doing bitwise ~ on a signed type and printf expecting an unsigned int when you give it the %x conversion specifier.
In order to get more deterministic output, you could do something like this:
#include <stdio.h>
#include <inttypes.h>
int main()
{
    uint32_t a = 0xAF; // 10101111
    uint32_t b = 0xB5; // 10110101
    printf("%.8" PRIx32 "\n", a & b);  // a & b = 10100101
    printf("%.8" PRIx32 "\n", a | b);  // a | b = 10111111
    printf("%.8" PRIx32 "\n", a ^ b);  // a ^ b = 00011010
    printf("%.8" PRIx32 "\n", ~a);     // ~a = 1....1 01010000
    printf("%.8" PRIx32 "\n", a << 2); // a << 2 = 1010111100
    printf("%.8" PRIx32 "\n", b >> 3); // b >> 3 = 00010110
    return 0;
}

How do I extract bits from a 32-bit number?

I do not have much knowledge of C and I'm stuck with a problem since one of my colleagues is on leave.
I have a 32-bit number and I have to extract bits from it. I went through a few threads, but I'm still not clear on how to do it. I would be highly obliged if someone could help me.
Here is an example of what I need to do:
Assume hex number = 0xD7448EAB.
In binary = 1101 0111 0100 0100 1000 1110 1010 1011.
I need to extract 16 bits and output that value. I want bits 10 through 25.
The lower 10 bits (Decimal) are ignored, i.e., 10 1010 1011 are ignored.
And the upper 6 bits (Overflow) are ignored, i.e., 1101 01 are ignored.
The remaining 16 bits of data need to be the output, which is 11 0100 0100 1000 11 (these are the bits needed as the output).
This was an example but I will keep getting different hex numbers all the time and I need to extract the same bits as I explained.
How do I solve this?
Thank you.
For this example you would output 1101 0001 0010 0011, which is 0xD123, or 53,539 decimal.
You need masks to get the bits you want. Masks are numbers that you can use to sift through bits in the manner you want (keep bits, delete/clear bits, modify numbers etc). What you need to know are the AND, OR, XOR, NOT, and shifting operations. For what you need, you'll only need a couple.
You know shifting: x << y moves the bits of x y positions to the left.
How to get x bits set to 1 in order: (1 << x) - 1
How to get x bits set to 1, in order, starting at position y (i.e., bits y through y + x - 1): ((1 << x) - 1) << y
The above is your mask for the bits you need. So for example if you want 16 bits of 0xD7448EAB, from 10 to 25, you'll need the above, for x = 16 and y = 10.
And now to get the bits you want, just AND your number 0xD7448EAB with the mask above and you'll get the masked 0xD7448EAB with only the bits you want. Later, if you want to go through each one, you'll need to shift your result by 10 to the right and process each bit at a time (at position 0).
The answer may be a bit longer, but it's better design than just hard coding with 0xff or whatever.
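As a minimal sketch of that mask approach for this example (x = 16 bits starting at y = 10):
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
int main(void) {
    uint32_t number = 0xD7448EAB;
    uint32_t mask = ((1u << 16) - 1u) << 10; // 16 one-bits moved up to positions 10..25 (0x03FFFC00)
    uint32_t value = (number & mask) >> 10;  // keep bits 10..25, then shift them down to position 0
    printf("0x%" PRIx32 "\n", value);        // prints 0xd123
    return 0;
}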
OK, here's how I wrote it:
#include <stdint.h>
#include <stdio.h>
int main() {
    uint32_t in = 0xd7448eab;
    uint16_t out = 0;
    out = in >> 10; // Shift right 10 bits
    out &= 0xffff;  // Only lower 16 bits
    printf("%x\n", out);
    return 0;
}
The in >> 10 shifts the number right 10 bits; the & 0xffff discards all bits except the lower 16 bits.
I want bits 10 through 25.
You can do this:
unsigned int number = 0xD7448EAB;
unsigned int value = (number & 0x3FFFC00) >> 10;
Or this:
unsigned int number = 0xD7448EAB;
unsigned int value = (number >> 10) & 0xFFFF;
I combined the top two answers above to write a C program that extracts the bits for any range of bits (not just 10 through 25) from a 32-bit unsigned int. The function returns bits lo to hi (inclusive) of num.
#include <stdio.h>
#include <stdint.h>
unsigned extract(unsigned num, unsigned hi, unsigned lo) {
    uint32_t range = hi - lo + 1; //number of bits to be extracted
    //shifting a 32-bit value by 32 bits is undefined behaviour,
    //so extract(num, 31, 0) needs a special case
    if (range == 32)
        return num;
    //following the rule above, ((1 << x) - 1) << y makes the mask;
    //1u keeps the shifts in unsigned arithmetic
    uint32_t mask = ((1u << range) - 1u) << lo;
    //AND num and mask to get only the bits in our range
    uint32_t result = num & mask;
    result = result >> lo; //shift the extracted bits down to position 0
    return result;
}
int main() {
    unsigned int num = 0xd7448eab;
    printf("0x%x\n", extract(num, 25, 10)); //bits 10 through 25: prints 0xd123
    return 0;
}

Resources