Bitwise expression to clear low byte of a value - c

I am writing code that may run on architectures of different word size (32-bit, 64-bit, etc) and I want to clear the low byte of a value. There is a macro (MAX) that is set to the maximum value of a word. So, for example, on a 32-bit system MAX = 0xFFFFFFFF and on a 64-bit system MAX = 0xFFFFFFFFFFFFFFFF for unsigned values. If I have a word-sized variable that may be signed or unsigned, how can I clear the low byte of the variable with a single expression (no branching)?
My first idea was:
value & ~( MAX - 0xFF )
but this does not appear to work for signed values. My other thought was:
value = value - (value & 0xFF)
which has the disadvantage that it requires a stack operation.

Trying to clear the low byte without knowing the width of the integer type can result in incorrect code, so care is needed.
Consider the below, where value is wider than int/unsigned. 0xFF is an int constant with the value 255. ~0xFF is then that value with its bits inverted. With common two's complement, that would be -256, with its upper bits set: FF...FF00. -256 converted to a wider signed type retains its value and pattern FF...FF00. -256 converted to a wider unsigned type becomes Uxxx_MAX + 1 - 256, again with the bit pattern FF...FF00. In both cases, the & will retain the upper bits and clear the lower 8.
value_low_8bits_cleared = value & ~0xFF;
An alternative is to do all masking operations with unsigned math, to avoid the unexpected properties of int math and int encodings.
The below has no concerns about sign extension or int overflow. An optimizing compiler will certainly emit efficient code with a simple AND mask. Further, there is no need to code the correct max value matching the type of value.
value_low_8bits_cleared = (value | 0xFFu) ^ 0xFFu;
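As a quick sketch of both forms above (the types and values below are illustrative assumptions, not taken from the question):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int64_t  s = -1234;        /* signed, wider than int on most systems */
    uint32_t u = 0xDEADBEEFu;  /* unsigned */

    printf("%lld\n", (long long)(s & ~0xFF));            /* -1280      */
    printf("%lld\n", (long long)((s | 0xFFu) ^ 0xFFu));  /* -1280      */
    printf("0x%08X\n", (unsigned)((u | 0xFFu) ^ 0xFFu)); /* 0xDEADBE00 */
    return 0;
}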

Here is the easy way to clear the low-order 8 bits:
value &= ~0xFF;

I am writing code that may run on architectures of different word size
(32-bit, 64-bit, etc) and I want to clear the low byte of a value.
There is a macro (MAX) that is set to the maximum value of a word. So,
for example, on a 32-bit system MAX = 0xFFFFFFFF and on a 64-bit
system MAX = 0xFFFFFFFFFFFFFFFF for unsigned values.
Although C is designed so that implementations can take machine word size into account, the language itself has no inherent sense of machine words. C cares instead about types, and that makes a difference.
Anyway, I take you exactly at your word that you arrange for the replacement text of macro MAX to be one of the two alternatives you give, depending on the architecture of the machine. Note well that when that replacement text is interpreted as an integer constant, its type may vary between C implementations, and maybe even depending on compiler options.
If I have a
word-sized variable that may be signed or unsigned, how can I clear
the low byte of the variable with a single expression (no branching)?
The only reason I see for needing a single expression that cannot take the actual type of value explicitly into account is that you want to use the expression in a macro itself. In that case, you need to take great care around type conversions, especially when you have to account for signed types. This makes your MAX macro uncomfortable to work with for your purpose.
I'm inclined to suggest a different approach:
(value | 0xFF) ^ 0xFF
The constant 0xFF will be interpreted as a (signed) int with a positive value. Provided that value's type is not smaller than int, both appearances of 0xFF will be converted to that type without change in value, whether that type is signed or unsigned. Furthermore, the result of each operation and of the overall expression then has the same type as value, so no unexpected conversions occur.
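If the reason is indeed a macro, a minimal sketch under that assumption (CLEAR_LOW_BYTE is a hypothetical name, and the macro assumes its argument is no narrower than int) could be:
#define CLEAR_LOW_BYTE(v) (((v) | 0xFF) ^ 0xFF)

long sval = -1234;                              /* signed use   */
unsigned long uval = 0x12345678uL;              /* unsigned use */
long scleared = CLEAR_LOW_BYTE(sval);           /* -1280        */
unsigned long ucleared = CLEAR_LOW_BYTE(uval);  /* 0x12345600   */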

How about
value & ~((intptr_t)0xFF)

First you want a mask that has all bits on except those of the low-order byte:
MAX ^ 0xFF
This converts 0xFF to the same type as MAX and then does the exclusive or with that value. Because MAX has all of its low-order bits set to 1, these become 0, and the high-order bits stay as they are, that is, 1.
Then you have to pull that mask over the value that interests you:
value & ( MAX ^ 0xFF )
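For instance, on a 32-bit system (a sketch with illustrative values):
#define MAX 0xFFFFFFFFu

unsigned int value = 0x12345678u;
/* MAX ^ 0xFF == 0xFFFFFF00, so the AND clears only the low byte */
unsigned int cleared = value & (MAX ^ 0xFF);  /* 0x12345600 */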

shift count greater than width of type

I have a function that takes an int data_length and does the following:
unsigned char *message = (unsigned char*)malloc(65535 * sizeof(char));
message[2] = (unsigned char)((data_length >> 56) & 255);
I'm getting the following:
warning: right shift count >= width of type [-Wshift-count-overflow]
message[2] = (unsigned char)((data_length >> 56) & 255);
The program works as expected, but how can I remove the compiler warning (without disabling it)?
Similar questions didn't seem to use a variable as the data to be inserted, so it seemed the solution was to cast them to int or such.
Shifting by an amount greater than or equal to the bit width of the type in question is not allowed by the standard, and doing so invokes undefined behavior.
This is detailed in section 6.5.7p3 of the C standard regarding bitwise shift operators.
The integer promotions are performed on each of the operands. The
type of the result is that of the promoted left operand. If
the value of the right operand is negative or is greater than
or equal to the width of the promoted left operand, the behavior is
undefined.
If the program appears to be working, it is by luck. You could make an unrelated change to your program or simply build it on a different machine and suddenly things will stop working.
If the size of data_length is 32 bits or less, then shifting right by 56 is too big. You can only shift by 0 - 31.
The problem is simple. You're using data_length as int when it should be unsigned, as negative lengths hardly make sense. Also, to be able to shift by 56 bits the value must be at least 57 bits wide. Otherwise the behaviour is undefined.
In practice processors are known to do wildly different things. In one, shifting a 32-bit value right by 32 bits will clear the variable. In another, the value is shifted by 0 bits (32 % 32!). And then in some, perhaps the processor considers it an invalid opcode and the OS kills the process.
Simple solution: declare uint64_t data_length.
If you really have limited yourself to 32-bit datatypes, then you can just assign 0 to the bytes that represent the most significant part of the value. Or just cast to uint64_t or unsigned long long before the shift.
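For example, a sketch of the cast-before-shift fix (the function name and field layout here are assumptions for illustration, not from the question):
#include <stdint.h>

void pack_length(unsigned char *message, int data_length)
{
    /* Widen first so that every shift count up to 56 is well defined.
       Going through uint32_t zero-extends instead of sign-extending. */
    uint64_t len = (uint64_t)(uint32_t)data_length;
    message[2] = (unsigned char)((len >> 56) & 255);  /* always 0 for 32-bit lengths */
    message[3] = (unsigned char)((len >> 48) & 255);
    /* ... and so on for the remaining bytes ... */
}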

Subtracting 0x8000 from an int

I am reverse engineering some old C code, running under Win95 (yes, in production), that appears to have been compiled with a Borland compiler (I don't have the tool chain).
There is a function which does (among other things) something like this:
static void unknown(int *value)
{
    int v = *value;
    v-=0x8000;
    *value = v;
}
I can't quite work out what this does. I assume 'int' in this context is signed 32-bit. I think 0x8000 would be an unsigned 32-bit int, and outside the range of a signed 32-bit int. (edit - this is wrong, it is outside of a signed 16-bit int)
I am not sure if one of these would be cast first, and how the casting would handle overflows, and/or how the subtraction would handle the overflow.
I could try on a modern system, but I am also unsure if the results would be the same.
Edit for clarity:
1: 'v-=0x8000;' is straight from the original code, this is what makes little sense to me. v is defined as an int.
2: I have the code, this is not from asm.
3: The original code is very, very bad.
Edit: I have the answer! The answer below wasn't quite right, but it got me there (fix up and I'll mark it as the answer).
The data in v is coming from an ambiguous source, which actually seems to be sending unsigned 16-bit data, but it is being stored as a signed int. Later on in the program all values are converted to floats and normalised to an average 0 point, so the actual value doesn't matter, only the order. Because we are looking at an unsigned int as a signed one, values over 32767 are incorrectly placed below 0, so this hack leaves the value as signed, but swaps the negative and positive numbers around (not changing the order). The end result is that all numbers have the same order (but different values) as if they were unsigned in the first place.
(...and this is not the worst code example in this program)
In Borland C 3.x, int and short were the same: 16 bits. long was 32 bits.
A hex literal has the first type in which the value can be represented: int, unsigned int, long int or unsigned long int.
In the case of Borland C, 0x8000 is a decimal value of 32768 and won't fit in an int, but will in an unsigned int. So unsigned int it is.
The statement v -= 0x8000 ; is identical to v = v - 0x8000 ;
On the right-hand side, the int value v is implicitly cast to unsigned int, per the rules, the arithmetic operation is performed, yielding an rval that is an unsigned int. That unsigned int is then, again per the rules, implicitly cast back to the type of the lval.
So, by my estimation, the net effect is to toggle the sign bit — something that could be more easily and clearly done via simple bit-twiddling: *value ^= 0x8000 ;.
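A small sketch of that equivalence, using int16_t to stand in for Borland's 16-bit int (the conversions back to int16_t are implementation-defined in C, but behave as shown on two's complement systems):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int16_t v = 12345;
    int16_t sub  = (int16_t)(v - 0x8000);  /* subtract, then truncate back */
    int16_t flip = (int16_t)(v ^ 0x8000);  /* toggle the sign bit directly */
    printf("%d %d\n", sub, flip);          /* -20423 -20423                */
    return 0;
}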
There is possibly a clue on this page http://www.ousob.com/ng/borcpp/nga0e24.php - Guide to Borland C++ 2.x ( with Turbo C )
There is no such thing as a negative numeric constant. If
a minus sign precedes a numeric constant it is treated as
the unary minus operator, which, along with the constant,
constitutes a numeric expression. This is important with
-32768, which, while it can be represented as an int,
actually has type long int, since 32768 has type long. To
get the desired result, you could use (int) -32768,
0x8000, or 0177777.
This implies the use of two's complement for negative numbers. Interestingly, the two's complement of 0x8000 is 0x8000 itself (as the value +32768 does not fit in the range for signed 2 byte ints).
So what does this mean for your function? Bit-wise, this has the effect of toggling the sign bit; here are some examples:
f(0) = f(0x0000) = 0x8000 = -32768
f(1) = f(0x0001) = 0x8001 = -32767
f(0x8000) = 0
f(0x7fff) = 0xffff
It seems like this could be represented as val ^= 0x8000, but perhaps the XOR operator was not implemented in Borland back then?

Bitwise AND on signed chars

I have a file that I've read into an array of data type signed char. I cannot change this fact.
I would now like to do this: !((c[i] & 0xc0) & 0x80) where c[i] is one of the signed characters.
Now, I know from section 6.5.10 of the C99 standard that "Each of the operands [of the bitwise AND] shall have integral type."
And Section 6.5 of the C99 specification tells me:
Some operators (the unary operator ~, and the binary operators <<, >>, &, ^, and |,
collectively described as bitwise operators) shall have operands that have integral type.
These operators return
values that depend on the internal representations of integers, and
thus have implementation-defined aspects for signed types.
My question is two-fold:
Since I want to work with the original bit patterns from the file, how can I convert/cast my signed char to unsigned char so that the bit patterns remain unchanged?
Is there a list of these "implementation-defined aspects" anywhere (say for MVSC and GCC)?
Or you could take a different route and argue that this produces the same result for both signed and unsigned chars for any value of c[i].
Naturally, I will reward references to relevant standards or authoritative texts and discourage "informed" speculation.
As others point out, in all likelihood your implementation is based on two's complement, and will give exactly the result you expect.
However, if you're worried about the results of an operation involving a signed value, and all you care about is the bit pattern, simply cast directly to an equivalent unsigned type. The results are defined under the standard:
6.3.1.3 Signed and unsigned integers
...
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or
subtracting one more than the maximum value that can be represented in the new type
until the value is in the range of the new type.
This is essentially specifying that the result will be the two's complement representation of the value.
Fundamental to this is that in two's complement maths the result of a calculation is taken modulo some power of two (namely 2 raised to the number of bits in the type), which in turn is exactly equivalent to masking off the relevant number of bits. And the complement of a number is the number subtracted from that power of two.
Thus adding a negative value is the same as adding any value which differs from the value by a multiple of that power of two.
i.e.:
(0 + signed_value) mod (2^N)
==
(2^N + signed_value) mod (2^N)
==
(7 * 2^N + signed_value) mod (2^N)
etc. (if you know modulo, that should be pretty self-evidently true)
So if you have a negative number, adding a power of two will make it positive (-5 + 256 = 251), but the bottom 'N' bits will be exactly the same (0b11111011), and it will not affect the outcome of a mathematical operation. As values are then truncated to fit the type, the result is exactly the binary value you expected, even if the result 'overflows' (i.e. what you might think happens if the number was positive to start with - this wrapping is also well-defined behaviour).
So in 8-bit two's complement:
-5 is the same as 251 (i.e. 256 - 5) - 0b11111011
If you add 30 and 251, you get 281. But that's larger than 256, and 281 mod 256 equals 25. Exactly the same as 30 - 5.
251 * 2 = 502. 502 mod 256 = 246. 246 and -10 are both 0b11110110.
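Those examples translate directly to C; a quick sketch with uint8_t (values chosen purely for illustration):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t a = (uint8_t)-5;            /* 251: -5 reduced modulo 256  */
    printf("%d\n", a);                  /* 251                         */
    printf("%d\n", (uint8_t)(a + 30));  /* 25, same as 30 - 5          */
    printf("%d\n", (uint8_t)(a * 2));   /* 246, the bit pattern of -10 */
    return 0;
}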
Likewise if you have:
unsigned int a;
int b;
a - b == a + (unsigned int) -b;
Under the hood, this cast is unlikely to be implemented with arithmetic and will certainly be a straight assignment from one register/value to another, or just optimised out altogether as the maths does not make a distinction between signed and unsigned (interpretation of CPU flags is another matter, but that's an implementation detail). The standard exists to ensure that an implementation doesn't take it upon itself to do something strange instead, or I suppose, for some weird architecture which isn't using two's complement...
unsigned char UC = *(unsigned char*)&C - this is how you can convert signed C to unsigned keeping the "bit pattern". Thus you could change your code to something like this:
!(( (*(unsigned char*)(c+i)) & 0xc0) & 0x80)
Explanation (with references):
761 When a pointer to an object is converted to a pointer to a character type, the result points to the lowest addressed byte of the object.
1124 When applied to an operand that has type char, unsigned char, or signed char, (or a qualified version thereof) the result is 1.
These two imply that the unsigned char pointer points to the same byte as the original signed char pointer.
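A minimal demonstration of that pointer cast (the value is illustrative; the signed reading assumes two's complement):
#include <stdio.h>

int main(void)
{
    signed char C = (signed char)0x9C;       /* -100 on two's complement */
    unsigned char UC = *(unsigned char*)&C;  /* same byte, reinterpreted */
    printf("%d %d\n", C, UC);                /* -100 156                 */
    return 0;
}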
You appear to have something similar to:
signed char c[] = "\x7F\x80\xBF\xC0\xC1\xFF";
for (int i = 0; c[i] != '\0'; i++)
{
    if (!((c[i] & 0xC0) & 0x80))
        ...
}
You are (correctly) concerned about sign extension of the signed char type. In practice, however, (c[i] & 0xC0) will convert the signed character to a (signed) int, but the & 0xC0 will discard any set bits in the more significant bytes; the result of the expression will be in the range 0x00 .. 0xFF. This will, I believe, apply whether you use sign-and-magnitude, one's complement or two's complement binary values. The detailed bit pattern you get for a specific signed character value varies depending on the underlying representation; but the overall conclusion that the result will be in the range 0x00 .. 0xFF is valid.
There is an easy resolution for that concern — cast the value of c[i] to an unsigned char before using it:
if (!(((unsigned char)c[i] & 0xC0) & 0x80))
The value c[i] is converted to an unsigned char before it is promoted to an int (or, the compiler might promote to int, then coerce to unsigned char, then promote the unsigned char back to int), and the unsigned value is used in the & operations.
Of course, the code is now merely redundant. Using & 0xC0 followed by & 0x80 is entirely equivalent to just & 0x80.
If you're processing UTF-8 data and looking for continuation bytes, the correct test is:
if (((unsigned char)c[i] & 0xC0) == 0x80)
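Putting that together, a sketch that scans a signed char buffer for UTF-8 continuation bytes might look like this (count_continuation is a hypothetical helper name):
#include <stddef.h>

size_t count_continuation(const signed char *c, size_t n)
{
    size_t count = 0;
    for (size_t i = 0; i < n; i++) {
        /* Continuation bytes have the form 10xxxxxx. */
        if (((unsigned char)c[i] & 0xC0) == 0x80)
            count++;
    }
    return count;
}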
"Since I want to work with the original bit patterns from the file,
how can I convert/cast my signed char to unsigned char so that the bit
patterns remain unchanged?"
As someone already explained in a previous answer to your question on the same topic, any small integer type, be it signed or unsigned, will get promoted to the type int whenever used in an expression.
C11 6.3.1.1
"If an int can represent all values of the original type (as
restricted by the width, for a bit-field), the value is converted to
an int; otherwise, it is converted to an unsigned int. These are
called the integer promotions."
Also, as explained in the same answer, integer literals are always of the type int.
Therefore, your expression will boil down to the pseudo code (int) & (int) & (int). The operations will be performed on three temporary int variables and the result will be of type int.
Now, if the original data contained bits that may be interpreted as sign bits for the specific signedness representation (in practice this will be two's complement on all systems), you will get problems. Because these bits will be preserved upon promotion from signed char to int.
And then the bit-wise & operator performs an AND on every single bit regardless of the contents of its integer operand (C11 6.5.10/3), be it signed or not. If you had data in the sign bits of your original signed char, it will now be lost. Because the integer literals (0xC0 or 0x80) have no bits set that correspond to the sign bits.
The solution is to prevent the sign bits from getting transferred to the "temporary int". One solution is to cast c[i] to unsigned char, which is completely well-defined (C11 6.3.1.3). This will tell the compiler that "the whole contents of this variable is an integer, there are no sign bits to be concerned about".
Better yet, make a habit of always using unsigned data in every form of bit manipulations. The purist, 100% safe, MISRA-C compliant way of re-writing your expression is this:
if ( (((uint8_t)c[i] & 0xc0u) & 0x80u) > 0u )
The u suffix forces the expression to be of type unsigned int, but it is good practice to always cast to the intended type. It tells the reader of the code "I actually know what I am doing and I also understand all the weird implicit promotion rules in C".
And then if we know our hex, (0xc0 & 0x80) is pointless; it is always true. And x & 0xC0 & 0x80 is always the same as x & 0x80. Therefore simplify the expression to:
if ( ((uint8_t)c[i] & 0x80u) > 0u)
"Is there a list of these "implementation-defined aspects" anywhere"
Yes, the C standard conveniently lists them in Annex J.3. The only implementation-defined aspect you encounter in this case, though, is the signedness implementation of integers. Which in practice is always two's complement.
EDIT:
The quoted text in the question is concerned with that the various bit-wise operators will produce implementation-defined results. This is just briefly mentioned as implementation-defined even in the appendix with no exact references. The actual chapter 6.5 doesn't say much regarding impl.defined behavior of & | etc. The only operators where it is explicitly mentioned is the << and >>, where left shifting a negative number is even undefined behavior, but right shifting it is implementation-defined.

How do I byte-swap a signed number in C?

I understand that casting from an unsigned type to a signed type of equal rank produces an implementation-defined value:
C99 6.3.1.3:
Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
This means I don't know how to byte-swap a signed number. For instance, suppose I am receiving two-byte, twos-complement signed values in little-endian order from a peripheral device, and processing them on a big-endian CPU. The byte-swapping primitives in the C library (like ntohs) are defined to work on unsigned values. If I convert my data to unsigned so I can byte-swap it, how do I reliably recover a signed value afterward?
As you say in your question, the result is implementation-defined or an implementation-defined signal is raised - i.e. what happens depends on the platform/compiler.
To byte-swap a signed number while avoiding as much implementation-defined behavior as possible, you can make use of a wider signed intermediate, one that can represent the entire range of the unsigned type with the same width as the signed value you wanted to byte-swap. Taking your example of little-endian, 16-bit numbers:
// Code below assumes CHAR_BIT == 8, INT_MAX is at least 65536, and
// signed numbers are twos complement.
#include <stdint.h>

int16_t
sl16_to_host(unsigned char b[2])
{
    unsigned int n = ((unsigned int)b[0]) | (((unsigned int)b[1]) << 8);
    int v = n;
    if (n & 0x8000) {
        v -= 0x10000;
    }
    return (int16_t)v;
}
Here's what this does. First, it converts the little-endian value in b to a host-endian unsigned value (regardless of which endianness the host actually is). Then it stores that value in a wider, signed variable. Its value is still in the range [0, 65535], but it is now a signed quantity. Because int can represent all the values in that range, the conversion is fully defined by the standard.
Now comes the key step. We test the high bit of the unsigned value, which is the sign bit, and if it's true we subtract 65536 (0x10000) from the signed value. That maps the range [32768, 65535] to [-32768, -1], which is precisely how a twos-complement signed number is encoded. This is still happening in the wider type and therefore we are guaranteed that all the values in the range are representable.
Finally, we truncate the wider type to int16_t. This step involves unavoidable implementation-defined behavior, but with probability one, your implementation defines it to behave as you would expect. In the vanishingly unlikely event that your implementation uses sign-and-magnitude or ones-complement representation for signed numbers, the value -32768 will be mangled by the truncation, and may cause the program to crash. I wouldn't bother worrying about it.
Another approach, which may be useful for byteswapping 32-bit numbers when you don't have a 64-bit type available, is to mask out the sign bit and handle it separately:
int32_t
sl32_to_host(unsigned char b[4])
{
    uint32_t mag = ((((uint32_t)b[0]) & 0xFF) <<  0) |
                   ((((uint32_t)b[1]) & 0xFF) <<  8) |
                   ((((uint32_t)b[2]) & 0xFF) << 16) |
                   ((((uint32_t)b[3]) & 0x7F) << 24);
    int32_t val = mag;
    if (b[3] & 0x80) {
        val = (val - 0x7fffffff) - 1;
    }
    return val;
}
I've written (val - 0x7fffffff) - 1 here, instead of just val - 0x80000000, to ensure that the subtraction happens in a signed type.
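Assuming the two functions above are in scope, they could be exercised like this (byte sequences chosen for illustration):
#include <stdio.h>

int main(void)
{
    unsigned char neg1[2]  = {0xFF, 0xFF};             /* -1, little-endian        */
    unsigned char min32[4] = {0x00, 0x00, 0x00, 0x80}; /* INT32_MIN, little-endian */

    printf("%d\n", (int)sl16_to_host(neg1));    /* -1          */
    printf("%ld\n", (long)sl32_to_host(min32)); /* -2147483648 */
    return 0;
}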
I understand that casting from an unsigned type to a signed type of equal rank produces an implementation-defined value.
It will be implementation-defined only because the signedness format in C is implementation-defined. For example, two's complement is one such implementation-defined format.
So the only issue here is if either side of the transmission would not be two's complement, which is not likely going to happen in the real world. I would not bother to design programs to be portable to obscure, extinct one's complement computers from the dark ages.
This means I don't know how to byte-swap a signed number. For instance, suppose I am receiving two-byte, twos-complement signed values in little-endian order from a peripheral device, and processing them on a big-endian CPU
I suspect a source of confusion here is that you think a generic two's complement number will be transmitted from a sender that is either big or little endian and received by one which is either big/little. Data transmission protocols don't work like that though: they explicitly specify endianness and signedness format. So both sides have to adapt to the protocol.
And once that's specified, there's really no rocket science here: you are receiving 2 raw bytes. Store them in an array of raw data. Then assign them to your two's complement variable. Suppose the protocol specified little endian:
int16_t val;
uint8_t little[2];
val = (little[1]<<8) | little[0];
Bit shifting has the advantage of being endian-independent. So the above code will work no matter if your CPU is big or little. So although this code contains plenty of ugly implicit promotions, it is 100% portable. C is guaranteed to treat the above as this:
val = (int16_t)( ((int)((int)little[1]<<8)) | (int)little[0] );
The result type of the shift operator is that of its promoted left operand. The result type of | is the balanced type (usual arithmetic conversions).
Shifting signed negative numbers would give undefined behavior, but we get away with the shift because the individual bytes are unsigned. When they get implicitly promoted, the numbers are still treated as positive.
And since int is guaranteed to be at least 16 bits, the code will work on all CPUs.
Alternatively, you could use pedantic style that completely excludes all implicit promotions/conversions:
val = (int16_t) ( ((uint32_t)little[1] << 8) | (uint32_t)little[0] );
But this comes at the cost of readability.
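Wrapped up as a helper (le16_to_i16 is a hypothetical name; the final conversion of values above INT16_MAX to int16_t is implementation-defined, as discussed elsewhere on this page):
#include <stdint.h>

int16_t le16_to_i16(const uint8_t little[2])
{
    /* All arithmetic is done in unsigned types; no signed shifts occur. */
    return (int16_t)(((uint32_t)little[1] << 8) | little[0]);
}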

Why does (1 >> 0x80000000) == 1?

The number 1, right shifted by anything greater than 0, should be 0, correct? Yet I can type in this very simple program which prints 1.
#include <stdio.h>
int main()
{
    int b = 0x80000000;
    int a = 1 >> b;
    printf("%d\n", a);
}
Tested with gcc on linux.
6.5.7 Bitwise shift operators:
If the value of the right operand is negative or is greater than or equal to the width of the promoted left operand, the behavior is undefined.
The compiler has license to do anything, obviously, but the most common behaviors are to optimize the expression (and anything that depends on it) away entirely, or simply let the underlying hardware do whatever it does for out-of-range shifts. Many hardware platforms (including x86 and ARM) mask some number of low-order bits to use as a shift-amount. The actual hardware instruction will give the result you are observing on either of those platforms, because the shift amount is masked to zero. So in your case the compiler might have optimized away the shift, or it might be simply letting the hardware do whatever it does. Inspect the assembly if you want to know which.
According to the standard, shifting by more than the number of bits that actually exist in the type can result in undefined behavior. So we cannot blame the compiler for that.
The motivation probably resides in the "border meaning" of 0x80000000, which sits on the boundary between the maximum positive and negative values (it is "negative", having the highest bit set), and in certain checks that should be done but that the compiled program omits, to avoid wasting time verifying "impossible" things (do you really want the processor to shift bits 3 billion times?).
It's very probably not attempting to shift by some large number of bits.
INT_MAX on your system is probably 2**31-1, or 0x7fffffff (I'm using ** to denote exponentiation). If that's the case, then in the declaration:
int b = 0x80000000;
(which was missing a semicolon in the question; please copy-and-paste your exact code) the constant 0x80000000 is of type unsigned int, not int. The value is implicitly converted to int. Since the result is outside the bounds of int, the result is implementation-defined (or, in C99, may raise an implementation-defined signal, but I don't know of any implementation that does that).
The most common way this is done is to reinterpret the bits of the unsigned value as a 2's-complement signed value. The result in this case is -2**31, or -2147483648.
So the behavior isn't undefined because you're shifting by a value that equals or exceeds the width of type int, it's undefined because you're shifting by a (very large) negative value.
Not that it matters, of course; undefined is undefined.
NOTE: The above assumes that int is 32 bits on your system. If int is wider than 32 bits, then most of it doesn't apply (but the behavior is still undefined).
If you really wanted to attempt to shift by 0x80000000 bits, you could do it like this:
unsigned long b = 0x80000000;
unsigned long a = 1 >> b; // *still* undefined
unsigned long is guaranteed to be big enough to hold the value 0x80000000, so you avoid part of the problem.
Of course, the behavior of the shift is just as undefined as it was in your original code, since 0x80000000 is greater than or equal to the width of unsigned long. (Unless your compiler has a really big unsigned long type, but no real-world compiler does that.)
The only way to avoid undefined behavior is not to do what you're trying to do.
It's possible, but vanishingly unlikely, that your original code's behavior is not undefined. That can only happen if the implementation-defined conversion of 0x80000000 from unsigned int to int yields a value in the range 0 .. 31. If int is smaller than 32 bits, the conversion is likely to yield 0.
Well, reading this may help you:
expression1 >> expression2
The >> operator masks expression2 to avoid shifting expression1 by too much.
That's because if the shift amount exceeded the number of bits in the data type of expression1, all the original bits would be shifted away to give a trivial result.
Now, to ensure that each shift leaves at least one of the original bits,
the shift operators use the following formula to calculate the actual shift amount:
mask expression2 (using the bitwise AND operator) with one less than the number of bits in expression1.
Example
var x : byte = 15;
// A byte stores 8 bits.
// The bits stored in x are 00001111
var y : byte = x >> 10;
// Actual shift is 10 & (8-1) = 2
// The bits stored in y are 00000011
// The value of y is 3
print(y); // Prints 3
That "8-1" is because x is 8 bytes so the operacion will be with 7 bits. that void remove last bit of original chain bits
