masm byte range

I am having a hard time understanding this and hopefully someone can correct me. A BYTE is defined as 0 to 2^7, which would be 128, which is 8 bits, correct? But that can't be right, because I am now storing a value of 255 into a BYTE. Any kick in the right direction would be helpful.

An unsigned byte holds 2^8 = 256 distinct values, i.e. 0 to 255, so 255 fits. If you have to store a sign, you need to sacrifice a bit, and the range becomes -2^7 to 2^7 - 1, i.e. -128 to +127.
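To see the two interpretations of the same 8 bits side by side, here is a minimal C sketch (purely illustrative; note that the out-of-range conversion to signed char is implementation-defined, though it yields -1 on two's-complement machines):
#include <stdio.h>

int main(void)
{
    unsigned char u = 255;            /* unsigned byte: range 0..255 */
    signed char s = (signed char)u;   /* same bit pattern, signed: range -128..+127 */

    printf("unsigned: %d, signed: %d\n", u, s);   /* typically prints 255 and -1 */
    return 0;
}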

Related

Getting the integer value of bits inside an unsigned int

I want to write a kernel module which reads out a register (the return value is saved in an unsigned int), then reads bits 16 to 22 from this variable and converts them to an integer number. The conversion is no problem, but getting the bits out in the first place is.
As an example I have these two values in hex:
0x88290000d
0x005a0a00d
From these two values I want bits 16 to 22 as an integer. Any ideas how I can implement that in my kernel module?
Here is how you extract bits 16 through 22, inclusive (7 bits):
Read the number from the register into unsigned int reg = ...
Shift reg to the right by 16, so bit 16 is at the least significant position: reg >>= 16
Mask the number with 0000000001111111 in binary, which is 0x7F: reg &= 0x7F
Note: The above counts bits starting from zero (the traditional way of numbering bits).
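Put into code, the steps above might look like this (a minimal sketch; the register read is faked with a hard-coded placeholder value):
#include <stdio.h>

int main(void)
{
    unsigned int reg = 0x005A0A00u;    /* placeholder for the value read from the register */

    reg >>= 16;                        /* move bit 16 down to the least significant position */
    reg &= 0x7F;                       /* keep only the low 7 bits, i.e. original bits 16..22 */

    printf("bits 16..22 = %u\n", reg); /* prints 90 (0x5A) for this placeholder value */
    return 0;
}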
You can define a mask that covers only those 7 bits (bits 16 to 22). The mask and the masking operation would look like this:
unsigned int mask = 0x007F0000;    /* mask covering bits 16..22 */
unsigned int var2 = 0x00270F00;    /* the value you want to mask */
unsigned int result = var2 & mask; /* bitwise AND keeps only bits 16..22 */
result >>= 16;                     /* shift right so the field starts at bit 0 */
Hope this helps :)

When an int is cast to a short and truncated, how is the new value determined?

Can someone clarify what happens when an integer is cast to a short in C? I'm using a Raspberry Pi, so I'm aware that an int is 32 bits, and therefore a short must be 16 bits.
Let's say I use the following C code for example:
int x = 0x1248642;
short sx = (short)x;
int y = sx;
I get that x would be truncated, but can someone explain how exactly? Are shifts used? How exactly is a number truncated from 32 bits to 16 bits?
According to the ISO C standard, when you convert an integer to a signed type, and the value is outside the range of the target type, the result is implementation-defined. (Or an implementation-defined signal can be raised, but I don't know of any compilers that do this.)
In practice, the most common behavior is that the high-order bits are discarded. So assuming int is 32 bits and short is 16 bits, converting the value 0x1248642 will probably yield a bit pattern that looks like 0x8642. And assuming a two's-complement representation for signed types (which is used on almost all systems), the high-order bit is the sign bit, so the numeric value of the result will be -31166.
int y = sx;
This also involves an implicit conversion, from short to int. Since the range of int is guaranteed to cover at least the entire range of short, the value is unchanged. (Since, in your example, the value of sx happens to be negative, this change of representation is likely to involve sign extension, propagating the 1 sign bit to all 16 high-order bits of the result.)
As I indicated, none of these details are required by the language standard. If you really want to truncate values to a narrower type, it's probably best to use unsigned types (which have language-specified wraparound behavior) and perhaps explicit masking operations, like this:
unsigned int x = 0x1248642;
unsigned short sx = x & 0xFFFF;
If you have a 32-bit quantity that you want to shove into a 16-bit variable, the first thing you should do is decide how you want your code to behave if the value doesn't fit. Once you've decided that, you can figure out how to write C code that does what you want. Sometimes truncation happens to be what you want, in which case your task is going to be easy, especially if you're using unsigned types. Sometimes an out-of-range value is an error, in which case you need to check for it and decide how to handle the error. Sometimes you might want the value to saturate, rather than truncate, so you'll need to write code to do that.
Knowing how conversions work in C is important, but if you start with that question you just might be approaching your problem from the wrong direction.
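For instance, here is a minimal sketch of the saturating case mentioned above (the function name is purely illustrative): instead of truncating, the value is clamped to the short range.
#include <limits.h>

/* Clamp a 32-bit int into the range of a 16-bit short instead of truncating. */
short to_short_saturating(int v)
{
    if (v > SHRT_MAX) return SHRT_MAX;   /* too large: pin to 32767 */
    if (v < SHRT_MIN) return SHRT_MIN;   /* too small: pin to -32768 */
    return (short)v;                     /* in range: the value is preserved */
}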
The 32 bit value is truncated to 16 bits in the same way a 32cm long banana bread would be cut if you jam it into a 16cm long pan. Half of it would fit in and still be a banana bread, and the rest will be "gone".
Truncation happens in CPU registers. These have different sizes: 8/16/32/64 bits. Now, you can imagine a register like:
<--rax----------------------------------------------------------------> (64-bit)
                                    <--eax----------------------------> (32-bit)
                                                      <--ax-----------> (16-bit)
                                                      <--ah--> <--al--> (8-bit high & low)
01100011 01100001 01110010 01110010 01111001 00100000 01101111 01101110
x is first given the 32 bit value 0x1248642. In memory*, it'll look like:
-----------------------------
|  01  |  24  |  86  |  42  |
-----------------------------
 31..24 23..16 15..8   7..0
Now, the compiler loads x in a register. From it, it can simply load the least significant 16 bits (namely, ax) and store them into sx.
*Endianness is not taken into account for the sake of simplicity
Simply put, the high 16 bits are cut off from the integer. Therefore your short will become 0x8642, which is actually the negative number -31166.
Perhaps let the code speak for itself:
#include <stdio.h>
#define BYTETOBINARYPATTERN "%d%d%d%d%d%d%d%d"
#define BYTETOBINARY(byte) \
((byte) & 0x80 ? 1 : 0), \
((byte) & 0x40 ? 1 : 0), \
((byte) & 0x20 ? 1 : 0), \
((byte) & 0x10 ? 1 : 0), \
((byte) & 0x08 ? 1 : 0), \
((byte) & 0x04 ? 1 : 0), \
((byte) & 0x02 ? 1 : 0), \
((byte) & 0x01 ? 1 : 0)
int main()
{
    int x = 0x1248642;
    short sx = (short) x;
    int y = sx;

    printf("%d\n", x);
    printf("%hu\n", sx);
    printf("%d\n", y);

    printf("x: "BYTETOBINARYPATTERN" "BYTETOBINARYPATTERN" "BYTETOBINARYPATTERN" "BYTETOBINARYPATTERN"\n",
           BYTETOBINARY(x>>24), BYTETOBINARY(x>>16), BYTETOBINARY(x>>8), BYTETOBINARY(x));
    printf("sx: "BYTETOBINARYPATTERN" "BYTETOBINARYPATTERN"\n",
           BYTETOBINARY(y>>8), BYTETOBINARY(y));
    printf("y: "BYTETOBINARYPATTERN" "BYTETOBINARYPATTERN" "BYTETOBINARYPATTERN" "BYTETOBINARYPATTERN"\n",
           BYTETOBINARY(y>>24), BYTETOBINARY(y>>16), BYTETOBINARY(y>>8), BYTETOBINARY(y));
    return 0;
}
Output:
19170882
34370
-31166
x: 00000001 00100100 10000110 01000010
sx: 10000110 01000010
y: 11111111 11111111 10000110 01000010
As you can see, int -> short yields the lower 16 bits, as expected.
Casting short to int yields the short sign-extended into the high 16 bits. Despite appearances, this is well-defined rather than implementation-specific: since int can represent every value a short can hold, the conversion preserves the value, and on a two's-complement machine preserving a negative value means filling the upper 16 bits with copies of the sign bit (as the earlier answer explains).
I think it should be safe to do the following:
int y = 0x0000FFFF & sx;
Obviously you won't get back the lost bits, but this will guarantee that the high bits are properly zeroed.
For an authoritative reference, see the integer conversion rules in the C standard (C11 6.3.1.3): when the value can be represented by the new type, it is unchanged by the conversion.
Note: Binary macro adapted from this answer.
The sx value will be the same as the 2 least significant bytes of x; in this case it will be 0x8642, which (if interpreted as a 16-bit signed integer) gives -31166 in decimal.

What does hibyte = Value >> 8 mean?

I am using C for developing my program and I found the following in some example code:
unHiByte = unVal >> 8;
What does this mean? If unVal = 250, what would the value of unHiByte be?
In programming, >> is the bitwise right-shift operator.
So unVal >> 8 means: shift unVal right by 8 bits. For an unsigned value, shifting right by one bit is the same as dividing the value by 2 (discarding the remainder).
Hence, unHiByte = unVal >> 8 means unHiByte = unVal / 2^8 (divide unVal by 2 eight times, i.e. by 256).
Without going into the shift operator itself (since that is answered already), the assumption here is that unVal is a two-byte variable with a high byte (the upper 8 bits) and a low byte (the lower 8 bits). The intent is to obtain the value of ONLY the upper 8 bits, discarding the lower ones.
The shift operator itself should be easy to learn from any book or tutorial, which is perhaps why someone downvoted the question.
The >> is a bitwise right shift.
It operates on bits. Take unHiByte = unVal >> 8; with unVal = 250.
Its binary form is 11111010.
A right shift moves the bits to the right, so when you shift 1111 1010 by 8 digits to the right you get 0000 0000.
Note: you can easily determine the result of a right shift by dividing the number to the left of >> by 2^(the number to the right of >>).
So, 250 / 2^8 = 0.
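As a quick check of the question's numbers, a throwaway sketch:
#include <stdio.h>

int main(void)
{
    unsigned int unVal = 250;              /* 0x000000FA: the high byte is already 0 */
    unsigned int unHiByte = unVal >> 8;    /* 250 / 256 == 0 */

    printf("%u\n", unHiByte);              /* prints 0 */
    return 0;
}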
For example: if you have the hex value 0x2A63 and you want to extract the 2A or the 63 from it, you do the following.
0x2A63 in binary is 0010101001100011 (16 bits; the first 8 bits are 2A and the last 8 bits are 63).
The low bits sit at the right-hand end, so to read the high byte (2A) we have to shift it down to the right.
uint16_t hex = 0x2A63;
uint8_t part2A = (uint8_t)(hex >> 8); // Shifts the high byte (2A) down to the low
// position and discards the 63; the 16-bit value is now 0000000000101010,
// so this line gives us 0x2A.
// To get 63 we simply do:
uint8_t part63 = (uint8_t)hex; // 63 is already in the low byte, so the cast keeps it.
It is that simple.

converting little endian hex to big endian decimal in C

I am trying to understand and implement a simple file system based on FAT12. I am currently looking at the following snippet of code and it's driving me crazy:
int getTotalSize(char * mmap)
{
    int *tmp1 = malloc(sizeof(int));
    int *tmp2 = malloc(sizeof(int));
    int retVal;

    *tmp1 = mmap[19];
    *tmp2 = mmap[20];
    printf("%d and %d read\n", *tmp1, *tmp2);
    retVal = *tmp1 + ((*tmp2) << 8);
    free(tmp1);
    free(tmp2);
    return retVal;
};
From what I've read so far, the FAT12 format stores integers in little-endian format,
and the code above is getting the size of the file system, which is stored in the 19th and 20th bytes of the boot sector.
However, I don't understand why retVal = *tmp1 + ((*tmp2) << 8); works. Is the bitwise << 8 converting the second byte to decimal? Or to big-endian format?
Why is it only doing it to the second byte and not the first one?
The bytes in question are [in little-endian format]:
40 0B
I tried converting them manually by switching the order first to
0B 40
and then converting from hex to decimal, and I get the right output. I just don't understand how adding the first byte to the shifted second byte does the same thing?
Thanks
The use of malloc() here is seriously facepalm-inducing. Utterly unnecessary, and a serious "code smell" (makes me doubt the overall quality of the code). Also, mmap clearly should be unsigned char (or, even better, uint8_t).
That said, the code you're asking about is pretty straight-forward.
Given two byte-sized values a and b, there are two ways of combining them into a 16-bit value (which is what the code is doing): you can either consider a to be the least-significant byte, or b.
Using boxes, the 16-bit value can look either like this:
+---+---+
| a | b |
+---+---+
or like this, if you instead consider b to be the most significant byte:
+---+---+
| b | a |
+---+---+
The way to combine the lsb and the msb into a 16-bit value is simply:
result = (msb * 256) + lsb;
UPDATE: The 256 comes from the fact that that's the "worth" of each successively more significant byte in a multibyte number. Compare it to the role of 10 in a decimal number (to combine two single-digit decimal numbers c and d you would use result = 10 * c + d).
Consider msb = 0x01 and lsb = 0x00, then the above would be:
result = 0x1 * 256 + 0 = 256 = 0x0100
You can see that the msb byte ended up in the upper part of the 16-bit value, just as expected.
Your code is using << 8 to do a bitwise shift to the left, which is the same as multiplying by 2^8, i.e. 256.
Note that result above is a value, i.e. not a byte buffer in memory, so its endianness doesn't matter.
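Putting those points together, a cleaned-up sketch of the question's function might look like the following (no malloc, unsigned bytes; it assumes, as in the question, that the buffer holds the boot sector and the field is the little-endian 16-bit value at offsets 19 and 20):
#include <stdint.h>

/* Read the little-endian 16-bit value stored at bytes 19 and 20. */
int getTotalSize(const uint8_t *boot)
{
    return boot[19] + (boot[20] << 8);   /* lsb + msb * 256 */
}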
I see no problem combining individual digits or bytes into larger integers.
Let's do decimal with 2 digits: 1 (least significant) and 2 (most significant):
1 + 2 * 10 = 21 (10 is the system base)
Let's now do base-256 with 2 digits: 0x40 (least significant) and 0x0B (most significant):
0x40 + 0x0B * 0x100 = 0x0B40 (0x100=256 is the system base)
The problem, however, likely lies somewhere else: in how 12-bit integers are stored in FAT12.
A 12-bit integer occupies 1.5 8-bit bytes. And in 3 bytes you have 2 12-bit integers.
Suppose, you have 0x12, 0x34, 0x56 as those 3 bytes.
In order to extract the first integer you only need to take the first byte (0x12) and the 4 least significant bits of the second (0x04) and combine them like this:
0x12 + ((0x34 & 0x0F) << 8) == 0x412
In order to extract the second integer you need to take the 4 most significant bits of the second byte (0x03) and the third byte (0x56) and combine them like this:
(0x56 << 4) + (0x34 >> 4) == 0x563
If you read the official Microsoft's document on FAT (look up fatgen103 online), you'll find all the FAT relevant formulas/pseudo code.
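As a sketch of that packing in C (the function name and buffer layout here are assumptions, not part of the question's code): every pair of 12-bit FAT12 entries occupies 3 bytes, and whether the entry index is even or odd decides which half-bytes belong to it.
#include <stdint.h>

/* Extract the n-th 12-bit FAT12 entry from the FAT byte buffer. */
uint16_t fat12_entry(const uint8_t *fat, unsigned n)
{
    unsigned off = n + n / 2;                            /* n * 1.5 bytes into the table */

    if (n % 2 == 0)
        return fat[off] | ((fat[off + 1] & 0x0F) << 8);  /* whole byte + low nibble of next */
    else
        return (fat[off] >> 4) | (fat[off + 1] << 4);    /* high nibble + whole next byte */
}
For the bytes 0x12, 0x34, 0x56 above, this returns 0x412 for the first entry and 0x563 for the second, matching the hand calculation.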
The << operator is the left shift operator. It takes the value to the left of the operator and shifts it by the number given on the right side of the operator.
So in your case, it shifts the value of *tmp2 eight bits to the left, and combines it with the value of *tmp1 to generate a 16 bit value from two eight bit values.
For example, let's say you have the integer 1. This is, in 16-bit binary, 0000000000000001. If you shift it left by eight bits, you end up with the binary value 0000000100000000, i.e. 256 in decimal.
The presentation (i.e. binary, decimal or hexadecimal) has nothing to do with it. All integers are stored the same way on the computer.

How to zero out bottom n bits in a C unsigned int?

I am trying to write my own C floor function. I am stuck on this code detail. I would just like to know how I can zero out the bottom n bits of an unsigned int.
For example, to round 51.5 to 51.0, I need to zero out the bottom 18 bits, and keep the top 14. Since it's a floor function, I want to make a mask to zero out the bottom (23 minus exponent) bits from the float representation. I know how to make a mask for individual cases like that, but I'm not sure how to code it so that it will work for all. Please help.
A much simpler way is to do just this:
value = (value >> bits) << bits
because the shift left will fill it in with zeroes, not whatever was in there.
Shift 1 left by N bits, subtract one, and invert the bits. Then AND the result with the number you need to mask. For N = 14:
1 << 14 = 00000000000000000100000000000000
- 1     = 00000000000000000011111111111111
~       = 11111111111111111100000000000000
When you AND with this mask, a 1 in the mask will preserve the input bit, and a 0 in the mask will set the result bit to 0.
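To connect this back to the original floor question, here is a rough sketch of the idea applied to a nonnegative IEEE-754 float (illustrative only: it assumes 32-bit single precision and does not handle negative inputs, NaN, or infinity):
#include <stdint.h>
#include <string.h>

/* floor() for nonnegative finite floats, done by zeroing fraction bits. */
float my_floor(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);                    /* reinterpret the float's bits */

    int exponent = (int)((bits >> 23) & 0xFF) - 127;   /* unbiased exponent */
    if (exponent < 0)
        return 0.0f;                                   /* 0 <= f < 1 floors to 0 */
    if (exponent >= 23)
        return f;                                      /* already an integer */

    int n = 23 - exponent;                             /* number of low fraction bits to clear */
    bits = (bits >> n) << n;                           /* zero out the bottom n bits */
    memcpy(&f, &bits, sizeof f);
    return f;
}
For 51.5 the exponent is 5, so n is 18, which matches the question's "zero out the bottom 18 bits" example.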
