How are conflicts in struct tm members resolved?

time.h declares struct tm that has (amongst other members) the following:
int tm_mon; /* Month [0, 11] (January = 0) */
int tm_mday; /* Day of the month [1, 31] */
int tm_wday; /* Day of the week [0, 6] (Sunday = 0) */
int tm_yday; /* Day of the year [0, 365] (Jan/01 = 0) */
A structure like this allows you to get into impossible situations... for example:
tm_mon=0 ; tm_mday=1 ; tm_yday=360
or
tm_year=0 ; tm_yday=1 ; tm_wday=5 ; // January 1, 1900 is on a Monday
I've had good luck memsetting the structure to 0s and then only setting the fields I want to set.
My question is: Is there a deterministic way that struct tm should be interpreted?
I've been experimenting with this for a while, so this isn't a "how can I get my code to work" question. Mostly, I'm asking about other experienced programmers' experience with struct tm, and to fish for any gotchas.

Is there a deterministic way that struct tm should be interpreted?
Yes, mostly.
Call mktime() to resolve those impossible situations in struct tm.
As mentioned by @pm100, calling mktime() ignores the members .tm_wday and .tm_yday and proceeds to resolve other member combinations that are out of the normal range as if the time were a local time.
Unusual "gotcha" time stamp examples include:
One member is out of primary range: reduce to the primary range and add the excess to the next most significant member. Repeat as needed. See later exception.
February 29 in a non-leap year.
.tm_min = 30, .tm_isdst < 0 and .tm_hour is in the missing hour of a 23-hour day when the zone goes on daylight time (DST).
.tm_min = 30, .tm_isdst < 0 and .tm_hour is in the added hour of a 25-hour day when the zone goes off daylight time.
I have doubts about the reliability of mktime() in pathological cases like .tm_year = INT_MAX/12 + 100, .tm_mon = INT_MIN. Such extremes may show different resolutions (or an error return value of -1) on various platforms and reflect a quality-of-implementation difference.
mktime() returns -1 to indicate an error. Unfortunately it can also, rarely, return -1 to indicate a valid time. The C spec offers no clear way to distinguish the two.
struct tm may have other members than the specified 9. I have seen .tm_nsecs, .tm_usec, .tm_timezone, .tm_tzoffset or equivalents. That is why it is best to initialize with { 0 } or first memset(0) the entire object when populating a struct tm with custom code.
The trickiest ones - when .tm_year, .tm_mon, .tm_mday are all out of range: Which to resolve first? Depending on order, the result differs. C does specify:
the final value of tm_mday is not set until tm_mon and tm_year are determined.
ISO 8601 does allow 24:00:00 to refer to the instant at the end of a calendar day, which is the same as 0:00:00 of the next day. So reducing via mktime() is not always desirable.
For simplicity and sanity's sake, discussion about leap seconds ignored.
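As a quick illustration of the normalization (a minimal sketch; the printed date assumes the "C" locale and your local time zone), mktime() resolves an "impossible" February 30 into early March and fills in .tm_wday and .tm_yday:
#include <stdio.h>
#include <time.h>
int main(void)
{
    struct tm tm = {0};          // zeroed, so any extra non-standard members are 0 too
    tm.tm_year = 2021 - 1900;    // years since 1900
    tm.tm_mon = 1;               // February
    tm.tm_mday = 30;             // does not exist in 2021
    tm.tm_hour = 12;
    tm.tm_isdst = -1;            // let mktime() determine DST
    time_t t = mktime(&tm);      // normalizes the struct; ignores tm_wday/tm_yday on input
    if (t == (time_t)-1) {
        puts("mktime() could not resolve this calendar time");
        return 1;
    }
    char buf[64];
    strftime(buf, sizeof buf, "%Y-%m-%d (%A), day %j of the year", &tm);
    puts(buf);                   // prints "2021-03-02 (Tuesday), day 061 of the year"
    return 0;
}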

As per the man page for mktime() (the only thing that reads a tm):
The values of the members tm_wday and tm_yday of timeptr are ignored,
When reading a struct tm, from localtime() say, all fields are set. It's up to you to choose which ones are interesting.

Related

Uniform Distribution: Bug or Paradox

Imagine 10 cars randomly, uniformly distributed on a round track of length 1. If the positions are represented by a C double in the range [0, 1), then they can be sorted, and the gap between two cars is the position of the car in front minus the position of the car behind. The last gap needs 1 added to account for the discontinuity.
In the program output, the last column has very different statistics and distribution from the others. The rows correctly add to 1. What's going on?
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int compare(const void *a, const void *b)
{
    if (*(double *)a > *(double *)b) return 1;
    else if (*(double *)a < *(double *)b) return -1;
    else return 0;
}

double grand_f_0_1()
{
    static FILE *fp = NULL;
    uint64_t bits;
    if (fp == NULL) fp = fopen("/dev/urandom", "r");
    fread(&bits, sizeof(bits), 1, fp);
    return (double)bits * 5.421010862427522170037264004349e-020; // https://stackoverflow.com/a/26867455
}

int main()
{
    const int n = 10;
    double values[n];
    double diffs[n];
    int i, j;
    for (j = 0; j < 10000; j++) {
        for (i = 0; i < n; i++) values[i] = grand_f_0_1();
        qsort(values, n, sizeof(double), compare);
        for (i = 0; i < (n - 1); i++) diffs[i] = values[i + 1] - values[i];
        diffs[n - 1] = 1. + values[0] - values[n - 1];
        for (i = 0; i < n; i++) printf("%.5f%s", diffs[i], i < (n - 1) ? "\t" : "\n");
    }
    return 0;
}
Here is a sample of the output. The first column represents the gap between the first and second car. The last column represents the gap between the 10th car and the first car, across the start/finish line. Large numbers like .33 and .51 are much more common in the last column, and very small numbers are relatively rare.
0.13906 0.14241 0.24139 0.29450 0.01387 0.07906 0.02905 0.03160 0.00945 0.01962
0.01826 0.36875 0.04377 0.05016 0.05939 0.02388 0.10363 0.04640 0.03538 0.25037
0.04496 0.05036 0.00536 0.03645 0.13741 0.00538 0.24632 0.04452 0.07750 0.35176
0.00271 0.15540 0.03399 0.05654 0.00815 0.01700 0.24275 0.25494 0.00206 0.22647
0.34420 0.03226 0.01573 0.08597 0.05616 0.00450 0.05940 0.09492 0.05545 0.25141
0.18968 0.34749 0.07375 0.01481 0.01027 0.00669 0.04306 0.00279 0.08349 0.22796
0.16135 0.02824 0.07965 0.11255 0.05570 0.05550 0.05575 0.05586 0.07156 0.32385
0.12799 0.18870 0.04153 0.16590 0.02079 0.06612 0.08455 0.14696 0.13088 0.02659
0.00810 0.06335 0.13014 0.06803 0.01878 0.10119 0.00199 0.06656 0.20922 0.33263
0.00715 0.03261 0.05779 0.47221 0.13998 0.11044 0.06397 0.00238 0.04157 0.07190
0.33703 0.02945 0.06164 0.01555 0.03444 0.14547 0.02342 0.03804 0.16088 0.15407
0.10912 0.14419 0.04340 0.09204 0.23033 0.09240 0.14530 0.00960 0.03412 0.09950
0.20165 0.09222 0.04268 0.17820 0.19159 0.02074 0.05634 0.00237 0.09559 0.11863
0.09296 0.01148 0.20442 0.07070 0.05221 0.04591 0.08455 0.25799 0.01417 0.16561
0.08846 0.07075 0.03732 0.11721 0.03095 0.24329 0.06630 0.06655 0.08060 0.19857
0.06225 0.10971 0.10978 0.01369 0.13479 0.17539 0.17540 0.02690 0.00464 0.18744
0.09431 0.10851 0.05079 0.07846 0.00162 0.00463 0.06533 0.18752 0.30896 0.09986
0.23214 0.11937 0.10215 0.04040 0.02876 0.00979 0.02443 0.21859 0.15627 0.06811
0.04522 0.07920 0.02432 0.01949 0.03837 0.10967 0.11123 0.01490 0.03846 0.51915
0.13486 0.02961 0.00818 0.11947 0.17204 0.08967 0.09767 0.03349 0.08077 0.23426
Your code is OK. The mean value of the last difference is two times larger than the others.
The paradox comes from the fact that rather than selecting 10 points on a unit interval, one actually tries to divide it into 11 sub-intervals with 10 cuts. Therefore the expected length of each sub-interval is 1/11.
The difference between consecutive points averages 1/11, except for the last pair, because that gap contains both the last sub-interval (between the last point and 1) and the first one (between 0 and the first point).
Thus the mean of the last difference is 2/11.
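A quick way to check this is to accumulate per-column means over many trials. A minimal sketch, reusing grand_f_0_1() and compare() from the question's program (so it assumes those are in scope):
// Average each sorted-gap column over many trials.
void column_means(void)
{
    enum { N = 10, TRIALS = 100000 };
    double values[N], mean[N] = {0};
    for (int t = 0; t < TRIALS; t++) {
        for (int i = 0; i < N; i++) values[i] = grand_f_0_1();
        qsort(values, N, sizeof(double), compare);
        for (int i = 0; i < N - 1; i++) mean[i] += values[i + 1] - values[i];
        mean[N - 1] += 1.0 + values[0] - values[N - 1];  // wrap-around gap
    }
    for (int i = 0; i < N; i++)
        printf("column %d mean: %.4f\n", i, mean[i] / TRIALS);
    // Expect roughly 1/11 = 0.0909 for columns 0..8 and 2/11 = 0.1818 for column 9.
}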
"There is no special points on the circle"
The thing is that, on a circle, every car looks the same, so there is no need to measure from zero: you can just measure from the first car. This means that you can fix the first car at zero and treat the random positions of the other cars as relative to it (measured from it).
So the convenient solution is to fix the first car at zero and think of the 9 numbers you still generate as positions relative to the first one.
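In code, that viewpoint might look like the following sketch (again reusing grand_f_0_1() and compare() from the question); with car 1 fixed at 0, all 10 gaps around the circle have the same distribution, each with mean 1/10:
// Generate the other 9 positions relative to car 1, sort, and print the
// 10 gaps around the circle.
void gaps_relative_to_first_car(void)
{
    enum { N = 10 };
    double pos[N];
    pos[0] = 0.0;                                // car 1 defines the origin
    for (int i = 1; i < N; i++) pos[i] = grand_f_0_1();
    qsort(pos, N, sizeof(double), compare);      // pos[0] stays at 0
    for (int i = 0; i < N - 1; i++)
        printf("%.5f\t", pos[i + 1] - pos[i]);
    printf("%.5f\n", 1.0 + pos[0] - pos[N - 1]); // wrap-around gap back to car 1
}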
Hope it's a satisfying answer :-)
IDENTITY (or Which diff is first?)
If 10 cars with labels ("1","2" and so on) are placed randomly on a circle, the difference from the "1" to the next will average 1/10.
While sorting, the first diff "loses its identity": what it refers to changes. It is similar to how, if you chose the 1st diff to be the longest one, it would average more. Choosing it based on the cars' relation to zero skews (or, in nicer terms, changes) things in a similar manner.
The first difference (2nd, 3rd, etc.) simply becomes something different. Defining it as the difference from a given car is more intuitive and gives you the option of using that car as a reference (which plays nicely with the circle's symmetry); the distribution of the rest of the cars with respect to it is uniform. Dealing with the smallest of the random points is not that simple.
Summary: define what you're calculating, know your definitions, and remember that probability is non-intuitive.
After 3 months of puzzling over this, I have an explanation that is intuitive, at least to me. It builds on the answers provided by @wojand and @tstanisl.
My original code is correct: it uniformly distributes points on the interval, and the forward differences of all points have the same statistical distribution. The paradox is that the forward difference of the highest-value point, the one that crosses the 0-1 discontinuity, is on average twice the others, and its distribution has a different shape.
The reason this forward difference has a different distribution is that it contains the value 0. Larger forward differences (gaps) are more likely to contain any fixed value, simply because they are larger.
We could search for the gap that contains 1/pi, for example, and it too would have the same atypical distribution.
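A sketch of that experiment (the choice of 1/pi is arbitrary; any fixed point in (0, 1) works), once more reusing grand_f_0_1() and compare():
// Over many trials, find the gap that contains 1/pi and average its length.
void gap_containing_fixed_point(void)
{
    enum { N = 10, TRIALS = 100000 };
    const double target = 0.31830988618379067;       // 1/pi
    double values[N], mean = 0.0;
    for (int t = 0; t < TRIALS; t++) {
        for (int i = 0; i < N; i++) values[i] = grand_f_0_1();
        qsort(values, N, sizeof(double), compare);
        double gap = 1.0 + values[0] - values[N - 1]; // wrap-around gap by default
        for (int i = 0; i < N - 1; i++)
            if (values[i] <= target && target < values[i + 1]) {
                gap = values[i + 1] - values[i];
                break;
            }
        mean += gap;
    }
    printf("mean length of the gap containing 1/pi: %.4f\n", mean / TRIALS);
    // Expect roughly 2/11, the same as the gap that contains 0.
}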

Led bar indicator flickering within adjacent values and how to avoid this (embedded-C)

I am designing a measurement instrument that has a visible user output on a 30-LED bar. The program logic acts in this fashion (pseudo-code):
while(1)
{
    (1) Sensor_Read();
    (2) Transform_counts_into_leds();
    (3) Send_to_bar();
}
The relevant function (2) is a simple algorithm that transforms the counts from the I2C sensor into a value serially sent to the shift registers that control the individual LEDs.
The variable sent to function (3) is simply the number of LEDs that have to stay on (0 for all LEDs off, 30 for all LEDs on).
uint8_t Transform_counts_into_leds(uint16_t counts)
{
    float on_leds;
    on_leds = (uint8_t)(counts * 0.134); /* 0.134 is a dummy value */
    return on_leds;
}
Using this program logic, when the counts value is on the threshold between two LEDs, the next LED flickers.
I think this is a bad user experience for my device and I want the LEDs, once lit, to stay stable in a small range of values.
QUESTION: How could a solution to this problem be implemented in my project?
Hysteresis is useful for a number of applications, but I would suggest it is not appropriate in this instance. The problem is that if the level genuinely falls from, say, 8 to 7, you would not see any change until there was at least one sample at 6 (at which point the display would jump to 6), and there would have to be a sample at 8 before it went back up to 7.
A more appropriate solution in this case is a moving average, although it is simpler and more useful to use a moving sum and work with the higher resolution that gives. For example, a moving sum of 16 effectively adds (almost) 4 bits of resolution, making an 8-bit sensor effectively 12-bit - at the cost of bandwidth of course; you don't get something for nothing. In this case lower bandwidth (i.e. less responsiveness to higher frequencies) is exactly what you need.
Moving sum:
#define BUFFER_LEN 16
#define SUM_MAX (255 * BUFFER_LEN)
#define LED_MAX 30

uint8_t buffer[BUFFER_LEN] = {0};
int index = 0;
uint16_t sum = 0;

for (;;)
{
    uint8_t sample = Sensor_Read();

    // Maintain sum of buffered values by
    // subtracting the oldest buffered value and
    // adding the new sample
    sum -= buffer[index];
    sum += sample;

    // Replace the oldest sample with the new sample
    // and increment the index to the next oldest sample
    buffer[index] = sample;
    index = (index + 1) % BUFFER_LEN;

    // Transform to LED bar level
    int led_level = (LED_MAX * sum) / SUM_MAX;

    // Show level
    setLedBar(led_level);
}
The underlying problem -- displaying sensor data in a human-friendly way -- is very interesting. Here's my approach in pseudocode:
Loop:
Read sensor
If sensor outside valid range:
Enable warning LED
Sleep in a low-power state for a while
Restart loop
Else:
Disable warning LED
Filter sensor value
Compute display value from sensor value with extra precision:
If new display value differs sufficiently from current value:
Update current displayed value
Update display with scaled-down display value
Filtering deals with noise in the measurements. Filtering smoothes out any sudden changes in the measurement, removing sudden spikes. It is like erosion, turning sharp and jagged mountains into rolling fells and hills.
Hysteresis hides small changes, but does not otherwise filter the results. Hysteresis won't affect noisy or jagged data, it only hides small changes.
Thus, the two are separate, but complementary methods, that affect the readout in different ways.
Below, I shall describe two different filters, and two variants of simple hysteresis implementation suitable for numeric and bar graph displays.
If possible, I'd recommend you write some scripts or test programs that output the input data and the variously filtered output data, and plot it in your favourite plotting program (mine is Gnuplot). Or, better yet, experiment! Nothing beats practical experiments for human interface stuff (at least if you use existing suggestions and known theory as your basis, and leap forward from there).
Moving average:
You create an array of N sensor readings, updating them in a round-robin fashion, and using their average as the current reading. This produces very nice (as in human-friendly, intuitive) results, as only the N latest sensor readings affect the average.
When the application is first started, you should copy the very first reading into all N entries in the averaging array. For example:
#define SENSOR_READINGS 32

int sensor_reading[SENSOR_READINGS];
int sensor_reading_index;

void sensor_init(const int reading)
{
    int i;
    for (i = 0; i < SENSOR_READINGS; i++)
        sensor_reading[i] = reading;
    sensor_reading_index = 0;
}

int sensor_update(const int reading)
{
    int i, sum;
    sensor_reading_index = (sensor_reading_index + 1) % SENSOR_READINGS;
    sensor_reading[sensor_reading_index] = reading;
    sum = sensor_reading[0];
    for (i = 1; i < SENSOR_READINGS; i++)
        sum += sensor_reading[i];
    return sum / SENSOR_READINGS;
}
At start-up, you call sensor_init() with the very first valid sensor reading, and sensor_update() with the following sensor readings. The sensor_update() will return the filtered result.
The above works best when the sensor is regularly polled, and SENSOR_READINGS can be chosen large enough to properly filter out any unwanted noise in the sensor readings. Of course, the array requires RAM, which may be in short supply in some microcontrollers.
Exponential smoothing:
When there is not enough RAM to use a moving average to filter data, an exponential smoothing filter is often applied.
The idea is that we keep an average value, and recalculate the average using each new sensor reading using (A * average + B * reading) / (A + B). The effect of each sensor reading on the average decays exponentially: the weight of the most current sensor reading is always B/(A+B), the weight of the previous one is A*B/(A+B)^2, the weight of the one before that is A^2*B/(A+B)^3, and so on (^ indicating exponentiation); the weight of the n'th sensor reading in the past (with current one being n=0) is A^n*B/(A+B)^(n+1).
The code corresponding to the previous filter is now
#define SENSOR_AVERAGE_WEIGHT 31
#define SENSOR_CURRENT_WEIGHT 1

int sensor_reading;

void sensor_init(const int reading)
{
    sensor_reading = reading;
}

int sensor_update(const int reading)
{
    return sensor_reading = (sensor_reading * SENSOR_AVERAGE_WEIGHT +
                             reading * SENSOR_CURRENT_WEIGHT) /
                            (SENSOR_AVERAGE_WEIGHT + SENSOR_CURRENT_WEIGHT);
}
Note that if you choose the weights so that their sum is a power of two, most compilers optimize the division into a simple bit shift.
Applying hysteresis:
(This section, including example code, edited on 2016-12-22 for clarity.)
Proper hysteresis support involves keeping the displayed value in higher precision than is used for output. Otherwise, your output value with hysteresis applied will never change by a single unit, which I would consider a bad design in a user interface. (I'd much prefer a value to flicker between two consecutive values every few seconds, to be honest -- and that's what I see in e.g. the weather stations I like best, with good temperature sensors.)
There are two typical variants in how hysteresis is applied to readouts: fixed, and dynamic. Fixed hysteresis means that the displayed value is updated whenever the value differs by a fixed limit; dynamic means the limits are set dynamically. (The dynamic hysteresis is much rarer, but it may be very useful when coupled with the moving average; one can use the standard deviation (or error bars) to set the hysteresis limits, or set asymmetric limits depending on whether the new value is smaller or greater than the previous one.)
The fixed hysteresis is very simple to implement. First, because we need to apply the hysteresis to a higher-precision value than the output, we choose a suitable multiplier. That is, display_value = value / DISPLAY_MULTIPLIER, where value is the possibly filtered sensor value, and display_value is the integer value displayed (number of bars lit, for example).
Note that below, display_value and the value returned by the functions refer to the integer value displayed, for example the number of lit LED bars. value is the (possibly filtered) sensor reading, and saved_value contains the sensor reading that is currently displayed.
#define DISPLAY_HYSTERESIS 10
#define DISPLAY_MULTIPLIER 32

int saved_value;

void display_init(const int value)
{
    saved_value = value;
}

int display_update(const int value)
{
    const int delta = value - saved_value;
    if (delta < -DISPLAY_HYSTERESIS ||
        delta > DISPLAY_HYSTERESIS)
        saved_value = value;
    return saved_value / DISPLAY_MULTIPLIER;
}
The delta is just the difference between the new sensor value, and the sensor value corresponding to the currently displayed value.
The effective hysteresis, in units of displayed value, is DISPLAY_HYSTERESIS/DISPLAY_MULTIPLIER = 10/32 = 0.3125 here. It means that the displayed value can be updated three times before a visible change is seen (if e.g. slowly decreasing or increasing; more if the value is just fluctuating, of course). This eliminates rapid flickering between two visible values (when the value is in the middle of two displayed values), but ensures the error of the reading is less than half display units (on average; half plus effective hysteresis in the worst case).
In a real-life application, you usually use a more complete form, return (saved_value * DISPLAY_SCALE + DISPLAY_OFFSET) / DISPLAY_MULTIPLIER, which scales the filtered sensor value by DISPLAY_SCALE/DISPLAY_MULTIPLIER and moves the zero point by DISPLAY_OFFSET/DISPLAY_MULTIPLIER, both evaluated at 1.0/DISPLAY_MULTIPLIER precision, but using only integer operations. However, for simplicity, I'll just assume that to derive the display value, say the number of lit LED bars, you just divide the sensor value by DISPLAY_MULTIPLIER. In either case, the hysteresis is DISPLAY_HYSTERESIS/DISPLAY_MULTIPLIER of the output unit. Ratios of about 0.1 to 0.5 work fine; the test values below, 10 and 32, yield 0.3125, which is about midway in the range of ratios that I believe work best.
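A minimal sketch of that more complete form under the same fixed-hysteresis scheme; display_update_scaled and the DISPLAY_SCALE/DISPLAY_OFFSET values here are illustrative assumptions, not tuned numbers:
#define DISPLAY_HYSTERESIS 10
#define DISPLAY_MULTIPLIER 32
#define DISPLAY_SCALE       3   /* assumed example: scale reading by 3/32 */
#define DISPLAY_OFFSET     16   /* assumed example: shift zero point by 16/32 = 0.5 */

static int saved_value;

int display_update_scaled(const int value)
{
    const int delta = value - saved_value;
    if (delta < -DISPLAY_HYSTERESIS || delta > DISPLAY_HYSTERESIS)
        saved_value = value;
    /* Scale and offset are applied at 1/DISPLAY_MULTIPLIER precision,
       using integer arithmetic only. */
    return (saved_value * DISPLAY_SCALE + DISPLAY_OFFSET) / DISPLAY_MULTIPLIER;
}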
Dynamic hysteresis is very similar to above:
#define DISPLAY_MULTIPLIER 32

int saved_value_below;
int saved_value;
int saved_value_above;

void display_init(const int value, const int below, const int above)
{
    saved_value_below = below;
    saved_value = value;
    saved_value_above = above;
}

int display_update(const int value, const int below, const int above)
{
    if (value < saved_value - saved_value_below ||
        value > saved_value + saved_value_above) {
        saved_value_below = below;
        saved_value = value;
        saved_value_above = above;
    }
    return saved_value / DISPLAY_MULTIPLIER;
}
Note that if DISPLAY_HYSTERESIS*2 <= DISPLAY_MULTIPLIER, the displayed value is always within a display unit of the actual (filtered) sensor value. In other words, hysteresis can easily deal with flickering, but it does not need to add much error to the displayed value.
In many practical cases, the best amount of hysteresis to apply depends on the amount of short-term variation in the sensor samples. This includes not only noise, but also the types of signals that are to be measured. A hysteresis of just 0.3 (relative to the output unit) is sufficient to completely eliminate the flicker when sensor readings flip the filtered sensor value between two consecutive integers that map to different integer outputs, as it ensures that the filtered sensor value must change by at least 0.3 (in output display units) before it effects a change in the display.
The maximum error with hysteresis is half a display unit plus the current hysteresis. The half unit is the minimum error possible (since consecutive units are one unit apart, so when the true value is in the middle, either value shown is correct to within half a unit). With dynamic hysteresis, you can start with some fixed hysteresis value whenever a reading changes by more than the current hysteresis, and simply decrease the hysteresis (if greater than zero) whenever the reading stays within it. This approach leads to a changing sensor value being tracked correctly (maximum error being half a unit plus the initial hysteresis), while a relatively static value is displayed as accurately as possible (at half a unit maximum error). I don't show an example of this, because it adds another tunable (how the hysteresis decays towards zero), and requires that you verify (calibrate) the sensor (including any filtering) first; otherwise it's like polishing a turd: possible, but not useful.
Also note that if you have 30 bars in the display, you actually have 31 states (zero bars, one bar, .., 30 bars), and thus the proper range for the value is 0 to 31*DISPLAY_MULTIPLIER - 1, inclusive.

Get unique random number every hour

I was wondering what the best way to get a unique random number every hour in C was. I have integers of hour, month, day of month, and day of week and want to get a random number between 0 and 8 every hour. Initially I tried doing (hour* month* day_month*week_month)%8 but I think it repeats certain numbers often. Would there be a better way of doing this?
One easy way would be to do something like the following:
#include <stdlib.h>
#include <time.h>

int hourly_random(void)
{
    srand(time(NULL) / 3600);   /* same seed for the whole hour */
    return rand() % 8;
}
Just simply multiplying non-random integers will not give you something random. Just to illustrate some of the many problems: if one of the values is 0, the result will be 0. If you swap the value of two input variables, the result will still be the same.
This is also the case if you use that result as the seed for a random generator. Because the seed has definite pattern, the resulting random numbers will follow the same pattern.
If you want to use the variables you mentioned, you should combine them such that they don't "interfere" with each other. An obvious way is to multiply the higher-order parts by the range of the value you are adding, as in a mixed-radix number. That is, something like hours + 24*(months + 12*(...)).
When making your own way to generate random numbers, perhaps you should look at how existing random number generators work. That said, in general you really don't want to make such a thing yourself because of all the pitfalls. It's probably better to rely on the work that has already been done before.
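A minimal sketch of that mixed-radix combination used as a seed; the function names and field ranges here are assumptions for illustration only:
#include <stdlib.h>

/* Combine the fields as digits of a mixed-radix number so that
   different inputs do not collide with each other.
   Assumed ranges: hour 0..23, mday 1..31, month 0..11. */
unsigned combine_fields(int hour, int mday, int month, int year)
{
    return (unsigned)(hour + 24 * (mday - 1 + 31 * (month + 12 * year)));
}

int hourly_random_from_fields(int hour, int mday, int month, int year)
{
    srand(combine_fields(hour, mday, month, year));
    return rand() % 8;
}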
Rather than reseeding the random number generator each time, you can simply keep track of when the last call was done. If it was in the same hour, return the same value, otherwise get a new random value.
#include <stdlib.h>
#include <time.h>
#include <unistd.h>     /* for getpid() */

int hourly_random(void)
{
    static time_t last = 0;
    static int rand_val = 0;
    time_t current;

    current = time(NULL);
    if (!last) {
        srand(current + getpid());
    }
    if ((current / 3600) > (last / 3600)) {
        last = current;
        rand_val = rand() % 8;
    }
    return rand_val;
}

Parse time periods

I have time format, that looks like this:
10:30-12:30,18:00-00:30 Mo-We,Th,Sa
and
Mo-Fr 08:00-13:30; Sa 08:00-12:30
Is there any easy and fast way to parse this? I need a function to compare the current time to dates in this format.
Is this a standard format?
No, it's not a standard format. I'll try to split your problem in a few key parts:
1) Data storage:
Since the problem is limited to the days of the week, you could create an array of pairs of tm structs for each day of the week:
#include <time.h>
#include <stdlib.h>

typedef struct tm interval[2];
typedef interval *daysofweek[7];

int main(void)
{
    int number_of_intervals = 2; /* this must be calculated for each day, but to exemplify how you could store your data I initialised it to 2 */
    interval *intervals;
    intervals = malloc(number_of_intervals * sizeof *intervals);

    daysofweek d;
    d[0] = intervals; /* you must set the intervals for each day */

    return 0;
}
2) Parsing
You should define a set of rules based on the syntax of the strings. From the two strings you exemplified, I would define the following rules:
split and treat separately all strings based on the ; character
split the resulting strings into a part with numerals (plus -, : and ,) and a part with characters
parse the part with numerals and create the time intervals
parse the part with characters and populate the array for each day; keep track of all the days so that you can initialize the remaining days with zero-second intervals
3) Comparison
After you have populated your data structure, you can walk it and decide whether you are inside an interval or not by using the time comparison functions found in <time.h>. A rough sketch of the time-range parsing and comparison is given below.
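A hedged, partial sketch (not a full parser): it pulls "HH:MM-HH:MM" ranges out of one day's portion of the string with sscanf() and tests whether a given time of day, expressed as minutes since midnight, falls inside any of them; day-of-week handling and error checking are left out:
#include <stdio.h>

/* Minutes since midnight for hh:mm. */
static int to_minutes(int hh, int mm) { return hh * 60 + mm; }

/* Returns 1 if now (minutes since midnight) lies in any "HH:MM-HH:MM"
   range found in ranges, e.g. "10:30-12:30,18:00-00:30". */
int in_any_range(const char *ranges, int now)
{
    int h1, m1, h2, m2, consumed;
    while (sscanf(ranges, "%d:%d-%d:%d%n", &h1, &m1, &h2, &m2, &consumed) == 4) {
        int start = to_minutes(h1, m1);
        int end   = to_minutes(h2, m2);
        if (start <= end) {                    /* normal range */
            if (now >= start && now < end) return 1;
        } else {                               /* wraps past midnight */
            if (now >= start || now < end) return 1;
        }
        ranges += consumed;
        if (*ranges == ',') ranges++; else break;
    }
    return 0;
}
For the current time of day, you could compute now from localtime(), e.g. now = t->tm_hour * 60 + t->tm_min.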
HTH,
JP.
P.S. If you were to use C++ and the STL instead of plain C, the task would be a bit easier.

Arrival-time handling with wrap around in C

I'm currently writing software in ANSI C and am struggling to get one of the basic pieces of functionality to work.
The software will receive messages over a CAN network, and when these messages arrive, I need to make sure that they are delivered before an expected time and after the previous message.
Only unsigned variables are allowed to be used, so there will be problems with wrap-around when the timers reach their maximum value (255 in my case).
It is easy to verify that messages arrive before the expected time, since I know the maximum time between two messages.
This example handles wrap around and discovers messages that are late:
UC_8 arrival = 250;
UC_8 expected = 15;
UC_8 maxInterArrTime = 30;
UC_8 result;

result = expected - arrival;
if (result <= maxInterArrTime) {
    // ON TIME!
}
else {
    // DELAYED
}
This is the easy part, but I must also check that the arrived message actually have arrived after the previous message. My problem is that I do not know how to solve this with the wrap around problem. I tried to mimic the solution that finds delayed messages, but without any luck.
UC_8 arrival = 10; // Wrapped around
UC_8 lastArrival = 250;
UC_8 expected = 15;
UC_8 maxInterArrTime = 30;
UC_8 result, result2;

result = expected - arrival;
result2 = lastArrival - arrival; // Problem
if (result2 >= ???) { // How should I compare, and with what?
    // Message received after previous msg
    if (result <= maxInterArrTime) {
        // ON TIME!
    }
    else {
        // DELAYED
    }
}
else {
    // Message received before previous msg - ERROR
}
My problem is when the arrival time value is lower than the previous arrival time, but is actually "larger" since it has wrapped around. I guess I might need to do it in several steps.
Any suggestions on how I can solve this? I need to keep the number of if statements low; the code will be analysed for complexity and other things.
If you can GUARANTEE that the delay between packets will not be 256 ticks or more, then the following will account for the wrap-around:
if (newerTime >= olderTime)
    delay = newerTime - olderTime;
else
    delay = 256 - olderTime + newerTime;
If you can't guarantee the delay is less than 256 then unwind is correct, and you can't do what you want to do.
Huh? You can't magically code your way around a case of missing information. If you only have 8-bit unsigned timestamps, then you will not be able to differentiate between something that happened 3 ticks ago, and something that happened 259 ticks ago, and so on.
Look into making larger (more bits) timestamps available.
If you can ensure that the absolute value of the time delta is less than 1/2 of the maximum measurable time span then you can determine the time delta.
int8_t delta_u8(uint8_t a, uint8_t b) {
    int8_t delta = a - b;
    return delta;
}
...
delta = delta_u8(newerTime, olderTime);
delay = abs( (int) delta ); // or you could have a byte version of abs, since
// I suspect that you may be doing embedded stuff
// and care about such things.
If you can ensure that time always move forward then you can do better. By time moving forward I mean that in your case newerTime is always greater than olderTime, regardless of how they numerically compare. In this case you can measure deltas up to the maximum measurable time span -- which really goes without saying.
uint8_t delta_i8(uint8_t a, uint8_t b) {
    return a - b;
}
If you know that two events can't happen during the same tick you can do even better by 1. If you know that two events can't happen closer together than a certain amount of time then you can calculate deltas up to maximum time span representable by your time stamp + the amount of time that must be between events, but then you have to use a larger variable size to do the actual math.
All of these work because the values wrap around when you do math on them. You can very easily think of this as turning one of your known time values into the new origin (0) and adjusting the other time value to match this shift.
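Applying that idea directly to the question's variables, here is a minimal sketch. It assumes consecutive messages are always less than 128 ticks apart, so a wrapped difference above 127 can be read as "not after the previous message"; uint8_t stands in for the question's UC_8 typedef:
#include <stdint.h>

/* 0 = on time, 1 = delayed, 2 = error (not after the previous message). */
int classify_arrival(uint8_t arrival, uint8_t lastArrival, uint8_t maxInterArrTime)
{
    /* Unsigned subtraction wraps, so this is the elapsed time since the
       previous message even across the 255 -> 0 rollover. */
    uint8_t sincePrev = (uint8_t)(arrival - lastArrival);

    if (sincePrev == 0 || sincePrev > 127)
        return 2;                       /* before (or same tick as) the previous msg */
    if (sincePrev <= maxInterArrTime)
        return 0;                       /* ON TIME */
    return 1;                           /* DELAYED */
}
With the question's numbers (lastArrival = 250, arrival = 10, maxInterArrTime = 30), sincePrev comes out as 16, so the message counts as on time.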
