How does the prescaler work in the M4?
The prescaler contains the least significant bits of the count when counting down and the most significant bits of the count when counting up.
I have a 16-bit number: the low 4 bits are used as bit flags to check settings, and the high 12 bits are used as a number that is incremented.
I know that tempNum = (data_bits >> 4) will get me the number out of the larger one. If I want to increment that tempNum by 1 and then put that back into the overall 16-bit number as a replacement without affecting the lower 4 bits, how would I go about doing this? I want to do this using bitwise operations only.
The simplest way to do this would be to increment starting after 4 bits, i.e.:
data_bits += 1 << 4;
This leaves the lower 4 bits unchanged.
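If you do want the explicit extract/increment/reinsert sequence described in the question, a minimal sketch (assuming data_bits is a uint16_t; the function name is just for illustration) could look like:
#include <stdint.h>

uint16_t bump_counter(uint16_t data_bits) {
    uint16_t counter = data_bits >> 4;                        /* upper 12 bits */
    counter = (counter + 1) & 0x0FFF;                         /* increment, wrap at 12 bits */
    return (uint16_t)((counter << 4) | (data_bits & 0x000F)); /* reinsert, keep the 4 flag bits */
}
Either way the low 4 bits are untouched; the explicit form only matters if you need the counter to wrap within its 12 bits.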
I have a sample set of 32-bit data in the format 24-bit, 2’s complement, MSB first. The data precision is 18 bits; unused bits are zeros.
I want to process the numbers in this sample set to find their average value.
However, I am not sure how to convert all the numbers to the same type and then use them for calculating the average.
One way is to shift all the numbers in the sample set right by 14. This way I directly get the 18 useful bits (since it is 24-bit data with 18-bit precision, only 18 bits are useful).
Then I can directly use these numbers to calculate their average.
Here is an example set of data samples:
0xF9AFC000
0xF9AFC000
0xF9AE4000
0xF9AE0000
0xF9AE0000
0xF9AD0000
0xF9AC8000
0xF9AC8000
0xF9AC4000
0xF9AB4000
0xF9AB8000
0xF9AB4000
0xF9AA4000
0xF9AA8000
0xF9A98000
0xF9A8C000
0xF9A8C000
0xF9A8C000
0xF9A88000
0xF9A84000
However, the 18-bit number still has the sign bit (MSB). This bit is not always set and might be 0 or 1 depending upon the data.
Should I just mask off the sign bit by ANDing all the numbers with 0x1FFFF and use the results for calculating the average?
Or should I first convert them from 2's complement by inverting the bits and adding 1?
Please suggest a proper way to extract and process a "24-bit, 2's complement, MSB first" number from a 32-bit word.
Thanks in advance!
Well, providing sample data isn't a complete spec, but let's look at
F9AFC000
It looks like the data are in the high order 3 bytes. That's a guess. If they're indeed 24 bits of 2's complement, then getting the true value into a 32 bit int is just
int32_t get_value_from_datum(uint32_t datum) {
    return (int32_t) datum >> 8;
}
On the sample, this will sign extend the high bit of the leading F. The result will be FFF9AFC0. As a 2's complement integer written in base 10, this is -413760.
Or perhaps you mean that the 18 bits of interest are fully left-justified in the 32-bit word. Then it's
int32_t get_value_from_datum(uint32_t datum) {
    return (int32_t) datum >> 14;
}
This results in -6465.
As I said in the comment, you need to more clearly explain the data format.
A precise spec is most easily shown as a picture of the 32-bit word, MSB to LSB, which identifies which 18 bits are the data bits.
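In the meantime, assuming the second (left-justified) interpretation, a rough sketch of decoding a few of the samples and averaging them might look like this; the sample array and the 64-bit accumulator are just for illustration:
#include <stdint.h>
#include <stdio.h>

/* Decode one raw word, assuming the 18 data bits are fully left-justified
 * and that signed right shift sign-extends (true on common compilers). */
static int32_t get_value_from_datum(uint32_t datum) {
    return (int32_t) datum >> 14;
}

int main(void) {
    const uint32_t samples[] = {
        0xF9AFC000, 0xF9AFC000, 0xF9AE4000, 0xF9AE0000, 0xF9AE0000
    };
    const size_t n = sizeof samples / sizeof samples[0];

    int64_t sum = 0;                       /* wide accumulator, no overflow risk */
    for (size_t i = 0; i < n; i++)
        sum += get_value_from_datum(samples[i]);

    printf("average = %.2f\n", (double) sum / (double) n);
    return 0;
}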
The ISO C standard states that a "plain" int object "has the natural size suggested by the architecture of the execution environment".
However, it is also guaranteed that int is at least as large as short, which is at least 16 bits in size.
The natural size suggested by an 8-bit processor, such as a 6502 or 8080, would seem to be an 8-bit int; however, that would make int shorter than 16 bits.
So, how large would int be on one of these 8 bit processors?
The 6502 had only the program counter as a 16-bit register; 16-bit integers were handled 8 bits at a time with multiple instructions. For example, a 16-bit c = a + b becomes:
clc      ; clear the carry flag
lda A_lo ; load the low byte of A into the accumulator
adc B_lo ; add the low byte of B (plus carry); carry out sets the carry flag
sta C_lo ; store the result to the low byte of C
lda A_hi ; load the high byte of A into the accumulator
adc B_hi ; add the high byte of B plus the carry from the low bytes
sta C_hi ; store the result to the high byte of C
The 8080 and Z80 CPUs of that era had 16-bit registers as well.
The Z80 was still an 8-bit architecture. Its 16-bit registers were really pairs of 8-bit registers, such as BC and DE. Operations on them were much slower than on the 8-bit registers because the underlying architecture was 8 bits wide, but this way 16-bit registers and 16-bit operations were provided.
The 8088 architecture was mixed: it had an 8-bit data bus, but it had 16-bit registers, AX, BX, etc., whose low and high bytes were also separately usable as 8-bit registers, AL, AH, etc.
So there were different ways to handle 16-bit integers, but an 8-bit int is simply not very useful. That's why C and C++ also settled on at least 16 bits for int.
From Section 6.2.5 Types, p5
5 An object declared as type signed char occupies the same amount of storage as a ''plain'' char object. A ''plain'' int object has the natural size suggested by the architecture of the execution environment (large enough to contain any value in the range INT_MIN to INT_MAX as defined in the header <limits.h>).
And 5.2.4.2.1 Sizes of integer types <limits.h> p1
Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.
...
minimum value for an object of type int
INT_MIN -32767 // -(2^15 - 1)
maximum value for an object of type int
INT_MAX +32767 // 2^15 - 1
So on those platforms, int must still be at least 16 bits.
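As a quick sanity check on any particular implementation, a minimal sketch using only the standard headers prints the actual width and range:
#include <limits.h>
#include <stdio.h>

int main(void) {
    /* sizeof reports bytes; CHAR_BIT is the number of bits per byte */
    printf("int: %zu bits, range %d .. %d\n",
           sizeof(int) * CHAR_BIT, INT_MIN, INT_MAX);
    return 0;
}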
I have a card game where I need to display the value of each card after shuffling. I display the value of a card using values[(x >> 1) & 0xf], where x iterates through the list of 13 cards; this works because bits 1-4 hold the value of the card.
The above is for the card type.
But when it comes to finding the highest pair among the cards, it only works when I use values[(afterfindingpairs[a] & 0xf0) >> 4].
This is laid out so that bits 0-3 hold the number of pairs, whereas bits 4-7 hold the value of the pair in the pair-type byte.
It just displays the highest pair as Ace when I use
values[(afterfindingpairs[a]&0xf)>>4].
I'm confused: wouldn't the hexadecimal 0xf0 cover 8 bits rather than just bits 4-7 of the pair type, which I expected to be extracted by values[(afterfindingpairs[a] & 0xf) >> 4]? Yet that expression turns out to be incorrect.
Explanation as to why this happens would be much appreciated.
You appear to want to manipulate 8-bit values, extracting various ranges of bits. However in some cases you're doing so in such a way as to discard all the bits.
The 8 bits are arranged from the least significant (bit 0, with place value 1) to the most significant (bit 7, with place value 128).
So if we had the binary number 10010110, this would represent the number (128 + 16 + 4 + 2), or 150, or 0x96 in hex.
If you apply a right-shift to such a number, the bits will be moved to the right by the appropriate number of places. So if we did >>4 to the number above, the result will be 00001001 - or 9. I have assumed we are dealing with unsigned values here, so the upper bits will be filled in with '0'. Note that the result is that the original bits 4-7 are now bits 0-3, and the original bits 0-3 have been discarded.
If you AND two numbers, only the bits which are set in both will be set in the result, so effectively this is masking bits. If you mask with 0xf0, which is 11110000 in binary, only the upper bits 4-7 will remain in the result, and the lower bits 0-3 will be set to zero.
Take your statement:
values[(afterfindingpairs[a]&0xf0)>>4]
The expression afterfindingpairs[a]&0xf0, as per my explanation above, will simply set bits 0-3 to zero, retaining bits 4-7.
The next part of the expression, >>4 will shift those remaining bits down so they become bits 0-3 of the result. Note that this also discards the original bits 0-3, making the previous mask operation redundant (unless we are not dealing with 8-bit values...)
Your other statement:
values[(afterfindingpairs[a]&0xf)>>4]
Is more problematic. You first apply a mask (0xf) that retains only bits 0-3, setting all others to zero. Then you apply a shift that throws away bits 0-3 by shifting bits 4-7 (which are already zero) down into their place.
In other words, this latter expression is always zero.
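To see the difference concretely, here is a small sketch using a made-up pair-type byte with the layout described in the question (low nibble = number of pairs, high nibble = value of the pair):
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t pair = 0x96;   /* binary 1001 0110: high nibble 9, low nibble 6 */

    unsigned num_pairs   = pair & 0x0f;          /* bits 0-3 -> 6 */
    unsigned pair_value  = (pair & 0xf0) >> 4;   /* bits 4-7 -> 9 */
    unsigned always_zero = (pair & 0x0f) >> 4;   /* the buggy expression -> always 0 */

    printf("%u %u %u\n", num_pairs, pair_value, always_zero);   /* prints: 6 9 0 */
    return 0;
}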
I'm not so good with bitwise operators so please excuse the question but how would I clear the lower 16 bits of a 32-bit integer in C/C++?
For example I have an integer: 0x12345678 and I want to make that: 0x12340000
To clear any particular set of bits, you can use bitwise AND with the complement of a number that has 1s in those places. In your case, since the number 0xFFFF has its lower 16 bits set, you can AND with its complement:
b &= ~0xFFFF; // Clear lower 16 bits.
If you wanted to set those bits, you could instead use a bitwise OR with a number that has those bits set:
b |= 0xFFFF; // Set lower 16 bits.
And, if you wanted to flip those bits, you could use a bitwise XOR with a number that has those bits set:
b ^= 0xFFFF; // Flip lower 16 bits.
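For instance, a quick demo of all three on the example value (a sketch assuming a 32-bit unsigned integer):
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t b = 0x12345678u;

    printf("%#x\n", b & ~0xFFFF);   /* clear lower 16 bits: 0x12340000 */
    printf("%#x\n", b |  0xFFFF);   /* set lower 16 bits:   0x1234ffff */
    printf("%#x\n", b ^  0xFFFF);   /* flip lower 16 bits:  0x1234a987 */
    return 0;
}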
Hope this helps!
To take another path, you can try
x = ((x >> 16) << 16);
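Note that this assumes x is unsigned, since right-shifting a negative signed value is implementation-defined. A quick sketch of the round trip:
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t x = 0x12345678u;
    x = (x >> 16) << 16;    /* low 16 bits shifted out, then zero-filled back in */
    printf("%#x\n", x);     /* prints 0x12340000 */
    return 0;
}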
One way would be to bitwise AND it with 0xFFFF0000, e.g. value = value & 0xFFFF0000;
Use an AND (&) with a mask whose top 16 bits are all ones (that leaves the top bits as they are) and whose bottom 16 bits are all zeros (that clears the bottom bits of the number).
So it'll be
0x12345678 & 0xffff0000
If the size of the type isn't known and you want to mask out only the lower 16 bits, you can also build the mask another way: start with a mask that lets through only the lower 16 bits
0xffff
and invert it with the bitwise NOT (~), so it becomes a mask that clears only the lower 16 bits:
0x12345678 & ~0xffff
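A small sketch of why the inverted-mask form is handy when the value's width isn't fixed (assuming the usual two's-complement int, so that ~0xffff equals -0x10000):
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t a = 0x12345678u;
    uint64_t b = 0x123456789ABCDEF0u;

    /* ~0xffff is a negative int; converting it to the width of the other
     * operand fills the upper bits with ones, so the same expression
     * clears only the low 16 bits of either value. */
    printf("%llx\n", (unsigned long long)(a & ~0xffff));   /* 12340000 */
    printf("%llx\n", (unsigned long long)(b & ~0xffff));   /* 123456789abc0000 */
    return 0;
}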
unsigned int x = 0x12345678;
unsigned int mask = 0xffff0000;
x &= mask;
Assuming the value you want to clear bits from has an unsigned type not of "small rank", this is the safest, most portable way to clear the lower 16 bits:
b &= -0x10000;
The value -0x10000 will be converted to the type of b (an unsigned type) by modular arithmetic, resulting in all high bits being set and the low 16 bits being zero.
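For completeness, a tiny illustration (assuming b is, say, a uint32_t):
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t b = 0x12345678u;
    b &= -0x10000;          /* -0x10000 converts to 0xFFFF0000 for this type */
    printf("%#x\n", b);     /* prints 0x12340000 */
    return 0;
}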
Edit: Actually James' answer is the safest (broadest use cases) of all, but the way his answer and mine generalize to other similar problems is a bit different and mine may be more applicable in related problems.