I'm programming an STM8S microcontroller and I'm using the STVD IDE and the COSMIC compiler.
The result of subtracting two uint32_t variables is saved in another uint32_t variable. Sometimes a weird value results from this process. This weird value is always the expected value but with the most significant bits set to 1s.
Here is a snippet of my code:
static uint32_t lastReceivedLed = 0;
uint32_t timeSinceLast = 0;
timeSinceLast = IL_TimTimeNow() - lastReceivedLed;
if(timeSinceLast > 2500U)
{
Inhibitor = ACTIVE; // HERE IS MY BREAKPOINT
}
Here is how IL_TimTimeNow() is defined:
volatile uint32_t IL_TimNow = 0;
uint32_t IL_TimTimeNow(void)
{
return IL_TimNow; // Incremented in timer ISR
}
Here are some real values from a debugging session:
timeSinceLast should be 865280 - 865055 = 225 = 0xE1
However, the result calculated by the compiler is 4294967265 = 0xFFFFFFE1
Notice that the least significant byte is correct while the rest of the bytes are set to 1s in the compiler's result!!
Also notice that this situation only happens once in a while. Otherwise, it works perfectly as expected.
Is this an overflow? What can cause this situation?
The values shown in the debugger are:
IL_TimNow = 865280
lastReceivedLed = 865055
timeSinceLast = 4294967265
Note that 4294967265 is also what you get when you convert -31 to a uint32_t. This suggests that the value of IL_TimNow returned by IL_TimTimeNow() just before the subtraction was actually lastReceivedLed - 31, which is 865055 - 31, which is 865024.
The difference between the value of IL_TimNow shown in the debugger (865280), and the value of IL_TimNow just before the subtraction (865024), is 256. Moreover, the least-significant 8 bits of both values are all zero. This suggests that the value was being read just as the least-significant byte was wrapping round to 0 and the next byte was being incremented. The comment in IL_TimTimeNow() says // Incremented in timer ISR. Since the 8-bit microcontroller can only read one byte at a time, it seems that the timer ISR occurred while the four bytes of IL_TimNow were being read by the function.
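Written out in hex, one plausible interleaving looks like this (the order in which the four bytes are read is an assumption; the exact order doesn't change the argument):
true value of IL_TimNow before the tick: 0x000D33FF (865279)
true value of IL_TimNow after the tick:  0x000D3400 (865280)
upper three bytes read before the tick:  0x00 0x0D 0x33
low byte read after the tick:            0x00
value returned by IL_TimTimeNow():       0x000D3300 (865024)
865024 - 865055 = -31, which stored in a uint32_t is 4294967265 = 0xFFFFFFE1.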
There are two ways to solve the problem. The first way is to disable the timer interrupt in IL_TimTimeNow() while the value of IL_TimNow is being read. So the IL_TimTimeNow() function can be changed to something like this:
uint32_t IL_TimTimeNow(void)
{
uint32_t curTime;
disable_timer_interrupt();
curTime = IL_TimNow;
enable_timer_interrupt();
return curTime;
}
However, you will need to check that disabling the timer interrupt temporarily only results in the interrupt being delayed, and not skipped altogether (otherwise you will lose timer ticks).
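For example, with the COSMIC compiler on the STM8 the critical section could be built from the sim/rim instructions via inline assembly. This is a minimal sketch, and note the caveats: it masks all maskable interrupts for a few cycles, not just the timer interrupt, and it assumes interrupts were enabled on entry:
uint32_t IL_TimTimeNow(void)
{
    uint32_t curTime;
    _asm("sim");          /* set interrupt mask: no ISR can run during the 4-byte read */
    curTime = IL_TimNow;
    _asm("rim");          /* reset interrupt mask: interrupts enabled again */
    return curTime;
}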
The other way to solve the problem is to keep reading IL_TimNow in IL_TimTimeNow() until you get two identical values. So the IL_TimTimeNow() function can be changed to something like this:
uint32_t IL_TimTimeNow(void)
{
uint32_t prevTime, curTime;
curTime = IL_TimNow;
do
{
prevTime = curTime;
curTime = IL_TimNow;
} while (curTime != prevTime);
return curTime;
}
There will usually be a single iteration of the do ... while loop, reading IL_TimNow twice. Occasionally, there will be two iterations of the loop, reading IL_TimNow three times. In practice, I wouldn't expect more than two iterations of the loop, but the function can handle that as well.
A less safe, but possibly slightly faster version of the above would be to only read IL_TimNow twice when the least-significant byte is 0:
uint32_t IL_TimTimeNow(void)
{
uint32_t curTime;
curTime = IL_TimNow;
if ((curTime & 0xFF) == 0)
{
// Least significant byte possibly just wrapped to 0
// so remaining bytes may be stale. Read it again to be sure.
curTime = IL_TimNow;
}
return curTime;
}
If performance is not an issue, use one of the safer versions.
Related
Having trouble understanding what happens when the 32-bit system tick on an STM32 MCU rolls over, using the ST-supplied HAL.
Suppose the MCU has been running long enough that HAL_GetTick() reaches its maximum of 2^32 - 1 = 0xFFFFFFFF. With a 1 ms tick, that is 4,294,967,295 / 1000 / 60 / 60 / 24 = approx 49 days, the maximum duration that can be measured.
What happens if you have a timer that is running across the rollover point?
Example code creating 100ms delay on a rollover event:
uint32_t start = HAL_GetTick(); // start = 0xFFFF FFFF (in this example)
--> Interrupt increments systick which rolls it over to 0 at this point
while ((HAL_GetTick() - start) < 100);
So when the expression in the loop is first evaluated, HAL_GetTick() = 0x0000 0000 and start = 0xFFFF FFFF. Hence 0x0000 0000 - 0xFFFF FFFF = ? (This number doesn't exist as it's negative and we are doing unsigned arithmetic.)
However, when I run the following code on my STM32, compiled with GCC for ARM:
uint32_t a = 0xFFFFFFFFUL;
uint32_t b = 0x00000000UL;
uint32_t c = b - a;
printf("a =%lu b=%lu c=%lu\r\n", a, b, c);
The output is:
a =4294967295 b=0 c=1
The fact that c=1 is good from the point of view of the code functioning properly across the overflow but I don't understand what is actually happening here at the low level. How does 0 - 4294967295 = 1 ?? How would I calculate this on paper to show what the arithmetic logic unit inside the MCU is doing when this situation is encountered?
This is a characteristic of modular arithmetic: modulo wrapping is what happens when an unsigned integer overflows.
When working with a fixed number of digits/bits, arithmetic operations can overflow that fixed width. The overflowed portion cannot be represented in the fixed number of digits/bits and is essentially masked away. That overflowed portion is a multiple of the modulus, and the portion that fits within the fixed number of digits/bits is the remainder, or modulo. Given the modulus, the modulo value remains correct/congruent after the operation that caused the overflow.
The best way to understand is to do a few operations with a pen on paper. Choose a base. Hexadecimal is great but it works for decimal, binary, and every base. Choose a fixed number of digits/bits. For uint32_t you have 8 hex digits or 32 bits. Choose two values that will overflow the fixed number of digits when you add them. Do the math on paper and include any overflow into an extra digit. Now perform the modulo operation by covering the overflow with your hand. Your CPU does this modulo operation automatically by virtue of having a fixed number of digits (i.e., uint32_t). Repeat this with different numbers and repeat with a subtraction/underflow. Eventually you'll start to trust that it works.
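For instance, with just 2 hex digits (a uint8_t-sized example), both directions look like this on paper:
0xF0 + 0x20 = 0x110 -> cover the extra digit with your hand and you are left with 0x10
0x10 - 0xF0 = ?     -> borrow the modulus: 0x110 - 0xF0 = 0x20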
You do have to be careful when setting up this operation. Use unsigned types and subtract the start ticks value from the current ticks value, as is done in your example code. (Do not, for example, add the delay to the start ticks and compare with the current ticks.) Raymond Chen's article, Using modular arithmetic to avoid timing overflow problems, has more information.
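A minimal sketch contrasting the two patterns, using HAL_GetTick() from your example (the 100-tick delay is just illustrative):
uint32_t start = HAL_GetTick();

/* Safe: the unsigned subtraction wraps correctly even if the tick counter rolls over. */
while ((HAL_GetTick() - start) < 100U)
{
    /* still waiting */
}

/* Risky: start + 100 can itself wrap to a small number, so this comparison can
   terminate immediately (or never terminate) when start is near the rollover point. */
uint32_t deadline = start + 100U;
while (HAL_GetTick() < deadline)
{
    /* still waiting */
}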
How does 0 - 4294967295 = 1?? How would I calculate this on paper to show what the arithmetic logic unit inside the MCU is doing when this situation is encountered?
First write it in hex like this:
0000_0000
- FFFF_FFFF
_____________
Then realize that you can add one full modulus, 0x1_0000_0000, to the first value (the minuend) without changing what it represents, because 0x0_0000_0000 and 0x1_0000_0000 are congruent modulo 0x1_0000_0000. Then it should become obvious that the difference is 1.
1_0000_0000
- 0_FFFF_FFFF
_____________
0_0000_0001
Nothing bad will happen. It will work the same as before the wraparound.
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
uint32_t start = UINT32_MAX - 20;
uint32_t current = start;
for(uint32_t x = 0; x < 100; x++)
{
printf("start = 0x%08"PRIx32" current = 0x%08"PRIx32 " current - start = %"PRIu32"\n", start, current, current-start);
current++;
}
}
You can see it here:
https://godbolt.org/z/jx4T4fhsW
0x00000000 - 0xffffffff will be 1 as 1 needs to be added to 0xffffffff to get 0x00000000. Same with other numbers.
BTW, it is much easier to understand if you use hex numbers instead of decimals, which are of very limited use in this kind of programming.
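For example, 0x00000005 - 0xFFFFFFF0 = 0x00000015 (decimal 21), because adding 0x15 to 0xFFFFFFF0 wraps around to 0x00000005.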
Given a counter/timer that increases and simply wraps at a given bit width, a well-known solution to the problem of finding the difference between two captured values of the counter (where the counter might have wrapped between the two points) is simply to perform unsigned subtraction on the counter (possibly then interpreting the result as signed if it's not known which one is larger).
For example given a 32-bit timer, code like this can be used to determine the length of time some code takes to run:
uint32_t start = GetSomePlatformSpecificTimer();
RunSomeOtherCode();
uint32_t end = GetSomePlatformSpecificTimer();
uint32_t platformTicksTakenByCode = end - start;
Or alternatively to check if some time limit has been reached:
uint32_t limit = GetSomePlatformSpecificTimer() + timeLimitInTicks;
while (true)
{
bool finished = DoSomethingSmall();
if (finished)
break;
if ((int32_t)(GetSomePlatformSpecificTimer() - limit) >= 0)
return ERROR_TIMEOUT;
}
This works great if the timer is known to be 32 bits wide. It also can be adjusted for 16-bit or 8-bit timers by changing the types used.
Is there a similarly simple way to do the same thing where the timer size does not match a type size? For example, a 24-bit timer, or an 18-bit timer.
Assume that the bit size is <= 32 and is specified by a #define COUNTER_WIDTH in some external header (and might change).
Is the best solution to sign-extend the two counter values from COUNTER_WIDTH to 32-bits and then use the code above? I can see that possibly working for the FF -> 00 rollover but I think it would break the 7F -> 80 rollover, so presumably there would have to be some sort of check for this (perhaps sign-extending if the values are near zero and zero-extending if the values are near the midpoint). I think this also means that the difference between two values should be no more than a quarter of the counter range, otherwise it could cause issues.
Or is there a better way to do this?
Instead of sign-extending, you could multiply up so that the full range becomes the same size as your arithmetic type. In other words, use fixed-point arithmetic to fill the integer. In your case, with uint32_t, that would look like
uint32_t start = GetSomePlatformSpecificTimer();
RunSomeOtherCode();
uint32_t end = GetSomePlatformSpecificTimer();
start <<= 32-COUNTER_WIDTH;
end <<= 32-COUNTER_WIDTH;
uint32_t platformTicksTakenByCode = end - start;
platformTicksTakenByCode >>= 32-COUNTER_WIDTH;
Obviously you'd want to encapsulate that arithmetic:
const uint32_t start = GetScaledTimer();
RunSomeOtherCode();
const uint32_t end = GetScaledTimer();
const uint32_t platformTicksTakenByCode = RescaleDuration(end - start);
with
uint32_t GetScaledTimer()
{
return GetSomePlatformSpecificTimer() << (32 - COUNTER_WIDTH);
}
uint32_t RescaleDuration(uint32_t d)
{
return d >> (32 - COUNTER_WIDTH);
}
You then have much the same behaviour as for your full-width timer, and the same option to use signed types if necessary.
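Under the same assumptions, the timeout check from the first snippet would be written against the scaled values, with the limit scaled by the same shift; a minimal sketch:
const uint32_t scaledLimit = GetScaledTimer() + (timeLimitInTicks << (32 - COUNTER_WIDTH));
while (true)
{
    bool finished = DoSomethingSmall();
    if (finished)
        break;
    /* The signed comparison works because the scaled counter wraps at the full 32-bit width. */
    if ((int32_t)(GetScaledTimer() - scaledLimit) >= 0)
        return ERROR_TIMEOUT;
}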
I am trying to get the remainder when two integers are divided, and also the quotient. My variables are as follows:
const uint16_t key = 1000;
uint8_t remainder;
uint16_t temp;
temp = somefunction(); //This returns a uint32_t
while((UCSR0A&(1<<RXC0)) == 0); //WAIT FOR CHAR
//Wait for a char from serial
remainder = temp % key;
quotient = (temp/key);
//Now I check to see if I got the correct remainder
while((UCSR0A&(1<<UDRE0)) == 0); //wait until empty
UDR0 = remainder;
//The remainder I get in minicom is something I am not expecting.
//I checked the result of somefunction() and it is correct
Please help!
Based on your comments:
The value which is being returned from somefunction() - 101010 - is beyond the range of the uint16_t variable temp which you are assigning it to. It is being truncated to 35474 (101010 mod 65536) when it is assigned to that variable, which would cause the results of the division and modulo to be 35 and 474, respectively.
You will need to change the type of temp to uint32_t, and change the type of remainder to uint32_t as well to avoid truncating the result.
The value you write to UDR0 is being sent over serial as a character code, not as a human-readable number. If you want something you can read, you will need to do formatting yourself, or link the C library and use something like printf() or itoa().
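A rough sketch of the "do the formatting yourself" option, reusing the UART wait/write pattern from your snippet (the digit loop is only illustrative):
uint32_t temp;       /* was uint16_t: must hold the full value from somefunction() */
uint32_t remainder;  /* was uint8_t: temp % 1000 can be as large as 999 */
uint32_t quotient;

temp = somefunction();
remainder = temp % key;
quotient = temp / key;

/* Send the remainder as human-readable decimal digits, most significant first. */
{
    char digits[10];
    uint8_t n = 0;
    do {
        digits[n++] = '0' + (char)(remainder % 10);  /* extract digits, least significant first */
        remainder /= 10;
    } while (remainder != 0);
    while (n > 0) {
        while ((UCSR0A & (1 << UDRE0)) == 0);        /* wait until the data register is empty */
        UDR0 = digits[--n];                          /* send them back in reading order */
    }
}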
I am trying to convert a bit of code from Python to C. I have got it all working other than the section below. All the variables have been defined as ints. I believe the problem has to do with pointers and addresses but I can't work it out.
for(j=0; j<12; j++)
{
digitalWrite(CLOCK, 1);
sleep(0.001);
bit = digitalRead(DATA_IN);
sleep(0.001);
digitalWrite(CLOCK, 0);
value = bit * 2 ** (12-j-1); // error
anip = anip + value;
printf("j:%i bit:%i value:%i anip:%i", j, bit, value, anip);
}
The error is: invalid type argument of unary '*' (have 'int')
C has no exponentiation operator, which is what I guess you intended ** to be.
You can use e.g. pow if it's okay to typecast the result from a floating point value back to an integer.
In C, 1<<i is the best way to compute 2 raised to the power of i.
Do not use ints for bit manipulation, because they vary in size by platform. Use uint32_t from <stdint.h>.
The sleep() function takes an integer argument and waits for the specified number of seconds, so the argument 0.001 becomes 0, which is probably not what you want. Instead, try usleep(), which takes an argument in microseconds (so the equivalent of 0.001 s is usleep(1000)).
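Putting those points together, the two problem lines from your loop could presumably become:
value = bit << (12 - j - 1);   /* bit * 2^(12-j-1), done with a shift instead of ** */
usleep(1000);                  /* 1000 microseconds, i.e. the 0.001 s from the Python code */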
The other answers solve the generic problem of raising an arbitrary number to a power, or computing a power of 2, but this is a very specific case.
The purpose of the loop is to read 12 bits serially, MSB first, and convert them into an integer. The implementation you've shown attempts to do this by reading a bit, shifting it to the correct position, and accumulating the result into anip. But there's an easier way:
anip = 0;
for (j=0; j<12; ++j) {
// Pulse the CLOCK line and read one bit, MSB first.
digitalWrite(CLOCK, 1);
usleep(1000); /* ~1 ms, matching the original sleep(0.001) */
bit = digitalRead(DATA_IN);
usleep(1000);
digitalWrite(CLOCK, 0);
// Accumulate the bits.
anip <<= 1; // Shift to make room for the new bit.
anip += bit; // Add the new bit.
printf("j:%i bit:%i anip:%i", j, bit, anip);
}
As an example, suppose the first 4 bits are 1,0,0,1. Then anip takes the values 1, 2, 4, 9 after each iteration (1, 10, 100, 1001 in binary), so the output will be
j:0 bit:1 anip:1
j:1 bit:0 anip:2
j:2 bit:0 anip:4
j:3 bit:1 anip:9
When the loop completes, anip will contain the value of the entire sequence of bits. This is a fairly standard idiom for reading data serially.
Although the advice to use uint32_t is generally appropriate, the C standard defines int to be at least 16 bits, which is more than the 12 you need (including the sign bit, if anip is signed). Moreover, you're probably writing this for a specific platform and therefore aren't worried about portability.
I am trying to write a function in C that will shift out the individual bits of a byte based on a clock signal. So far I have come up with this...
void ShiftOutByte (char Data)
{
int Mask = 1;
int Bit = 0;
while(Bit < 8)
{
while(ClkPin == LOW);
DataPin = Data && Mask;
Mask = Mask * 2;
Bit++;
}
}
where DataPin represents the port pin that I want to shift data out on and ClkPin is the clock port pin.
I want the device to shift out 8 bits, starting on the LSB of the byte. For some reason my output pin stays high all the time. I am certain that the port pins are configured properly so it is purely a logical issue.
You want to use &, not &&. && is the logical and, whereas & is the bitwise and.
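For example:
int a = 0x04 && 0x02;  /* logical and: both operands are non-zero, so a == 1 */
int b = 0x04 & 0x02;   /* bitwise and: the operands share no set bits, so b == 0 */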
Your solution is almost there, but there are a few problems:
When the clock goes high, you will pump out all the data in a burst before the clock has a chance to go low again, so you need to wait while the clock is high before you check for it being low (unless the hardware pauses execution until the clock goes low). This can be done either at the end of the loop or at the beginning. I chose the beginning in the sample below, because it allows you to return from the function while the clock is still high and do some processing during that time.
You used the logical and (&&) instead of bitwise and (&) as others have pointed out
I am not familiar with your architecture, so I can't say whether DataPin can only accept 0 or 1, but that is another point where things may go wrong: Data & Mask yields the tested bit still in its shifted position (1, 2, 4, 8, ...), not just 0 or 1.
So, in summary, I would do something like:
void ShiftOutByte (unsigned char Data)
{
int Bit;
for(Bit=0; Bit < 8; ++Bit)
{
while(ClkPin == HIGH);
while(ClkPin == LOW);
DataPin = Data & 1;
Data >>= 1;
}
}
Note: I have used a for loop instead of the while ... Bit++ pattern for clarity of purpose.
You have a typo that has replaced one operator with another that has a similar function: && is guaranteed to produce the answer 1 if both of its operands are logically true. Since you are almost certainly testing with Data not zero, this is the case for all eight iterations of your loop.
This may be masking a more subtle issue. I don't know what your target architecture is, but if DataPin is effectively a one-bit wide register, then you probably need to be careful to not try to assign values other than 0 or 1 to it. One way to achieve that is to write DataPin = !!(Data & Mask);. Another is to shift the data the other direction and use a fixed mask of 1.
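A minimal sketch of that mask-based variant, keeping the LSB-first order of the original (and borrowing the wait-for-high/wait-for-low pattern from the answer above):
void ShiftOutByte(unsigned char Data)
{
    unsigned char Mask;
    for (Mask = 0x01; Mask != 0; Mask <<= 1)   /* 0x01, 0x02, ... 0x80, then the loop ends */
    {
        while (ClkPin == HIGH);                /* wait for the previous clock-high phase to end */
        while (ClkPin == LOW);                 /* wait for the next clock edge */
        DataPin = !!(Data & Mask);             /* force the assigned value to 0 or 1 */
    }
}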