How can I cap the framerate at 60fps?

Alright, so I'm trying to cap my framerate at 60 frames per second, but the method I'm using is slowing it down to like 40.
#define TICK_INTERVAL 30

Uint32 TimeLeft(void) {
    static Uint32 next_time = 0;
    Uint32 now;

    now = SDL_GetTicks();
    if (next_time <= now) {
        next_time = now + TICK_INTERVAL;
        return 0;
    }
    return next_time - now;
}
Then I call it like this: SDL_Delay(TimeLeft());
How can I cap my framerate without going over it, or having it cap it too soon?

You need to record the time before drawing the current frame, and then delay the appropriate amount from then.
For example, some pseudocode to do it would be:
markedTime = currentTime();
drawFrame();
delayFrom(markedTime, 1/60);
markedTime is the time recorded before drawFrame() was called. delayFrom() is a function that delays from a given time instead of "now". 1/60 is the amount of time to delay from the first argument, in seconds.
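To make that concrete, here is a minimal sketch using SDL's millisecond timer (delayFrom and FRAME_MS are names invented for this example, not SDL API). Note that 60fps needs roughly 16.7 ms per frame, so a TICK_INTERVAL of 30 ms would cap closer to 33fps:

#define FRAME_MS (1000 / 60) // ~16 ms per frame at 60fps

// Hypothetical helper: delay until frame_ms have elapsed since 'marked'.
void delayFrom(Uint32 marked, Uint32 frame_ms) {
    Uint32 elapsed = SDL_GetTicks() - marked; // time already spent drawing
    if (elapsed < frame_ms)
        SDL_Delay(frame_ms - elapsed); // sleep only for the remainder
}

// In the main loop:
// Uint32 marked = SDL_GetTicks();
// drawFrame();
// delayFrom(marked, FRAME_MS);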

Related

Creating a Program that slowly Increases the brightness of an LED as a start-up

I want to create a program that uses a for loop to slowly increase the brightness of an LED as a "start-up" sequence when I press a button.
I have basically no knowledge of for loops. I've tried looking at similar programs and potential solutions, but I couldn't get it working.
This is my starting code; I have to use PWMperiod to achieve the effect.
if (SW3 == 0) {
    for (unsigned char PWMperiod = 255; PWMperiod != 0; PWMperiod--) {
        if (TonLED4 == PWMperiod) {
            TonLED4 += 1;
        }
        __delay_us(20);
    }
}
How would I start this/do it?
For pulse width modulation, you'd want to turn the LED off for a certain amount of time, then turn it on for a certain amount of time, where the amounts of time depend on how bright you want the LED to appear and the total period ("on time + off time") is constant.
In other words, you want a relationship like period = on_time + off_time where period is constant.
You also want the LED to increase brightness slowly. E.g. maybe go from off to max. brightness over 10 seconds. This means you'll need to loop total_time / period times.
How bright the LED should be, and therefore how long the on_time should be, will depend on how much time has passed since the start of the 10 seconds (e.g. 0 microseconds at the start of the 10 seconds and period microseconds at the end of the 10 seconds). Once you know the on_time you can calculate off_time by rearranging that "period = on_time + off_time" formula.
In C it might end up something like:
#define TOTAL_TIME 10000000              // 10 seconds, in microseconds
#define PERIOD 1000                      // 1 millisecond, in microseconds
#define LOOP_COUNT (TOTAL_TIME / PERIOD)

int on_time;
int off_time;

for (int t = 0; t < LOOP_COUNT; t++) {
    on_time = PERIOD * t / LOOP_COUNT; // ramps from 0 toward PERIOD
    off_time = PERIOD - on_time;
    turn_LED_off();
    __delay_us(off_time);
    turn_LED_on();
    __delay_us(on_time);
}
Note: on_time = PERIOD * t / LOOP_COUNT; is a little tricky. You can think of it as on_time = PERIOD * (t / LOOP_COUNT);, where t / LOOP_COUNT is a fraction that goes from 0.000000 to 0.999999, representing the fraction of the period that the LED should be turned on. But if you wrote it like that, the compiler would truncate the result of t / LOOP_COUNT to an integer (round it towards zero), so the result would always be zero. Written as shown, C does the multiplication first, so it behaves like on_time = (PERIOD * t) / LOOP_COUNT; and truncation (or rounding) isn't a problem.
Sadly, doing the multiplication first solves one problem while possibly causing another: PERIOD * t might be too big for an int and overflow (especially on small embedded systems where an int can be 16 bits). You'll have to figure out how big an int is on your system (changing TOTAL_TIME or PERIOD will change the maximum value that PERIOD * t can reach) and use something larger (e.g. a long) if an int isn't enough.
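For example (a sketch, assuming an int is 16 bits, in which case PERIOD * t would overflow with the values above), a wider intermediate type avoids the problem:

on_time = (int)((long)PERIOD * t / LOOP_COUNT); // long is at least 32 bits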
You should also be aware that the timing won't be exact, because it ignores time spent executing your code and ignores anything else the OS might be doing (IRQs, other programs using the CPU); so the "10 seconds" might actually be 10.5 seconds (or worse). To fix that you need something more complex than a __delay_us() function (e.g. some kind of __delay_until(absolute_time) maybe).
Also, you might find that the LED doesn't increase brightness linearly (e.g. it might slowly go from off to dull in 8 seconds, then go from dull to max. brightness in 2 seconds). If that happens, you might need a lookup table and/or more complex maths to correct it.
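As a sketch of that correction (the table and its x² curve are assumptions for illustration, not part of the original answer):

// Hypothetical perceptual-correction table: index by progress through the
// fade, read out on_time in microseconds (assumes PERIOD = 1000). The
// values follow a rough x^2 curve as a stand-in for a proper gamma curve.
#define STEPS 16
static const int on_time_table[STEPS] = {
      0,   4,  18,  40,  71, 111, 160, 218,
    284, 360, 444, 538, 640, 751, 871, 1000
};
// In the loop: on_time = on_time_table[(long)t * STEPS / LOOP_COUNT];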

Protecting against overflow in a delay function

In a project of mine, I have a small delay function that I wrote myself using a timer peripheral of my MCU:
static void delay100Us(void)
{
    uint64_t ctr = TIMER_read(0); // 10 ns resolution
    uint64_t ctr2 = ctr + 10000;
    while (ctr <= ctr2) // wait 100 microseconds (10000 ticks)
    {
        ctr = TIMER_read(0);
    }
}
The counter is a free-running hardware counter with 10 ns resolution, so I wrote the function to give approximately a 100 us delay.
I think this should work in principle, but there could be a situation where the timer is less than 10000 ticks away from overflowing, so ctr2 gets assigned a value that ctr can never reach, and I end up stuck in an infinite loop.
I need to generate a delay using this timer, so I need to somehow make sure that I always get the same delay (100 us) while protecting myself from getting stuck.
Is there any way to do this, or is this just a limitation that I can't get past?
Thank you!
Edit:
ctr_start = TimerRead(); // get initial value for the counter
interval = TimerRead() - ctr_start;
while (interval <= 10000)
{
    interval = (TimerRead() - ctr_start + countersize) % countersize;
}
Where countersize = 0xFFFFFFFFFFFFFFFF;
It can be dangerous to wait for a specific timer value in case an interrupt happens at just that moment and you miss the required count. So it is better to wait until the counter has reached at least the target value. But as noticed, comparing the timer with a target value creates a problem when the target is lower than the initial value.
One way to avoid this problem is to consider the interval that has elapsed with unsigned variables and arithmetic. Their behaviour is well defined when values wrap.
A hardware counter is almost invariably of size 8, 16, 32 or 64 bits, so choose a variable type to suit. Suppose the counter is 32-bit:
void delay(uint32_t period)
{
    uint32_t mark = TIMER_read(0);
    uint32_t interval;

    do {
        interval = TIMER_read(0) - mark; // unsigned wrap-around is well defined
    } while (interval < period);
}
Obviously, the required period must be less than the counter's period. If not, either scale the timer's clock, or use another method (such as a counter maintained by interrupt).
Sometimes a one-shot timer is used to count down the required period, but using a free-run counter is easy, and using a one-shot timer means it can't be used by another process at the same time.
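With the 10 ns tick from the question, the original 100 us delay then becomes a one-line call (assuming the same TIMER_read interface):

delay(10000); // 100 us = 10000 ticks at 10 ns per tick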

How to engineer a power-loss safe RTC time switch?

I'm using an ESP32 with a DS3231 real-time clock. The system should automatically switch an output on and off based on a user-programmable time (HH:MM) on a daily basis. The on and off hours/minutes are stored in flash, so they are non-volatile. The duration the output stays on is hardcoded.
I'm trying to develop a function, called each second, that checks whether the output should be turned on or off based on the current time provided by the DS3231 RTC. This should be safe against misbehaviour if the power fails: if, for example, power is temporarily lost during an on-interval, the output should be turned on again for the remainder of the interval once power is restored.
How can I relatively calculate if the current time is in between the on-time interval?
const int8_t light_ontime_h = 2;  // Hour part of how long the output should stay on
const int8_t light_ontime_m = 42; // Minute part of how long the output should stay on

struct tm currenttime; // Current time, refreshed elsewhere in the program from the RTC
struct tm ontime;      // Hours and minutes to turn on, loaded from NVS on each reboot or on
                       // change. The struct only holds valid HH:MM info; date etc. is invalid

// This is called each second
void checkTime() {
    struct tm offtime;
    offtime.tm_hour = ontime.tm_hour + light_ontime_h;
    offtime.tm_min = ontime.tm_min + light_ontime_m;
    // Normalize time
    mktime(&offtime);
    // Does not work if power is lost and the exact hour/minute was missed
    if ((currenttime.tm_hour == ontime.tm_hour) && (currenttime.tm_min == ontime.tm_min)) {
        // Turn output on
    }
    if ((currenttime.tm_hour == offtime.tm_hour) && (currenttime.tm_min == offtime.tm_min)) {
        // Turn output off
    }
}
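One common way to make this power-loss safe (a sketch, not an answer from the thread: derive the output state from the current time on every tick instead of matching exact switch moments; the names mirror the question's variables):

#define MINUTES_PER_DAY (24 * 60)

// Called each second: the state is recomputed from the current time, so a
// power loss mid-interval simply re-enters the correct state after reboot.
void checkTime(void) {
    int now = currenttime.tm_hour * 60 + currenttime.tm_min;
    int on  = ontime.tm_hour * 60 + ontime.tm_min;
    int len = light_ontime_h * 60 + light_ontime_m;
    // Minutes elapsed since the last on-time, wrapped across midnight:
    int since_on = (now - on + MINUTES_PER_DAY) % MINUTES_PER_DAY;
    if (since_on < len) {
        // Turn output on (idempotent)
    } else {
        // Turn output off (idempotent)
    }
}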

What does sprintf do? (was: FPS Calculation in OpenGL)

For FPS calculation, I use some code I found on the web and it's working well. However, I don't really understand it. Here's the function I use:
void computeFPS()
{
    numberOfFramesSinceLastComputation++;
    currentTime = glutGet(GLUT_ELAPSED_TIME);
    if (currentTime - timeSinceLastFPSComputation > 1000)
    {
        char fps[256];
        sprintf(fps, "FPS: %.2f", numberOfFramesSinceLastComputation * 1000.0 / (currentTime - timeSinceLastFPSComputation));
        glutSetWindowTitle(fps);
        timeSinceLastFPSComputation = currentTime;
        numberOfFramesSinceLastComputation = 0;
    }
}
My question is: how does the value calculated in the sprintf call end up stored in the fps array, since I never assign to it?
This is not a question about OpenGL, but the C standard library. Reading the reference documentation of s(n)printf helps:
man s(n)printf: http://linux.die.net/man/3/sprintf
In short, snprintf takes a pointer to a user-supplied buffer and a format string, and fills the buffer according to the format string and the values given in the additional parameters.
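For instance (a minimal illustration, not from the original answer):

char buf[32];
snprintf(buf, sizeof buf, "FPS: %.2f", 59.94); // buf now contains "FPS: 59.94"

sprintf/snprintf write into the memory the first argument points to, which is why fps holds the text afterwards without an explicit assignment.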
Here's my suggestion: If you have to ask about things like that, don't tackle OpenGL yet. You need to be fluent in the use of pointers and buffers when it comes to supplying buffer object data and shader sources. If you plan on using C for this, get a book on C and thoroughly learn that first. And unlike C++ you can actually learn C to some good degree over the course of a few months.
This function is presumably called at every redraw of your main loop, i.e. once per frame. It increments a frame counter and fetches the time at which the current frame is displayed. Once per second (1000 ms), it takes the counter value, displays it as the window title, and resets the counter to 0.
/**
 * This function has to be called at every frame redraw.
 * It will update the window title with the fps value at most once per second.
 */
void computeFPS()
{
    // increase the number of frames
    numberOfFramesSinceLastComputation++;
    // get the current time in order to check if it has been one second
    currentTime = glutGet(GLUT_ELAPSED_TIME);
    // the code in this if will be executed just once per second (1000 ms)
    if (currentTime - timeSinceLastFPSComputation > 1000)
    {
        // format the fps value into a char string
        char fps[256];
        sprintf(fps, "FPS: %.2f", numberOfFramesSinceLastComputation * 1000.0 / (currentTime - timeSinceLastFPSComputation));
        // use fps to set the window title
        glutSetWindowTitle(fps);
        // save the current time in order to know when the next second will occur
        timeSinceLastFPSComputation = currentTime;
        // reset the frame counter
        numberOfFramesSinceLastComputation = 0;
    }
}
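For completeness, this code assumes globals along these lines (an assumption; the question doesn't show their declarations):

int numberOfFramesSinceLastComputation = 0;
int currentTime = 0;
int timeSinceLastFPSComputation = 0;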

UTC time stamp on Windows

I have a buffer with a UTC timestamp in C, and I broadcast that buffer every ten seconds. The problem is that the time difference between two packets is not consistent: after 5 to 10 iterations the difference becomes 9, then 11, and then 10 again. Kindly help me to sort out this problem.
I am using <time.h> for UTC time.
If your time stamp has only 1 second resolution then there will always be +/- 1 uncertainty in the least significant digit (i.e. +/- 1 second in this case).
Clarification: if you only have a resolution of 1 second, then your time values are quantized. The real time, t, represented by such a quantized value lies in the range t..t+0.999... If you take the difference of two such times, t0 and t1, the error in t1-t0 ranges over -0.999..+0.999, which after quantization is +/- 1 second. For example, real times 4.99 and 15.01 stamp as 4 and 15: the true difference is 10.02 s, but the stamped difference is 11. So in your case you would expect to see difference values in the range 9..11 seconds.
A thread that sleeps for X milliseconds is not guaranteed to sleep for precisely that many milliseconds. I am assuming that you have a statement that goes something like:
while (1) {
    ...
    sleep(10); // Sleep for 10 seconds.
    // fetch timestamp and send
}
You will get a more accurate gauge of time if you sleep for shorter periods (say 20 milliseconds) in a loop checking until the time has expired. When you sleep for 10 seconds, your thread gets moved further out of the immediate scheduling priority of the underlying OS.
You might also take into account that the time taken to send the timestamps may vary, depending on network conditions etc. If you use a sleep(10) -> send -> sleep(10) style loop, the time taken to send is effectively added onto the next sleep(10).
Try something like this (forgive me, my C is a little rusty):
// Caveat: clock() returns wall time on Windows but CPU time on POSIX (where
// usleep lives), so treat this as a sketch of the approach, not portable code.
// Needs <stdbool.h> and <time.h>.
bool expired = false;
double last, current;
double t1, t2;
double difference = 0;

while (1) {
    ...
    last = (double)clock();
    while (!expired) {
        usleep(20000); // sleep for 20 milliseconds
        current = (double)clock();
        if (((current - last) / (double)CLOCKS_PER_SEC) >= (10.0 - difference))
            expired = true;
    }
    t1 = (double)clock();
    // Set and send the timestamp.
    t2 = (double)clock();
    //
    // Calculate how long it took to send the stamps,
    // and take that away from the next sleep cycle.
    //
    difference = (t2 - t1) / (double)CLOCKS_PER_SEC;
    expired = false;
}
If you are not restricted to the standard C library, you could look at the high-resolution timer functionality of Windows, such as the QueryPerformanceFrequency/QueryPerformanceCounter functions.
LARGE_INTEGER freq;
LARGE_INTEGER t1, t2;

//
// Get the resolution of the timer.
//
QueryPerformanceFrequency(&freq);

// Start task.
QueryPerformanceCounter(&t1);
... Do something ....
QueryPerformanceCounter(&t2);

// Very accurate duration in seconds.
double duration = (double)(t2.QuadPart - t1.QuadPart) / (double)freq.QuadPart;
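Applied to the question's ten-second broadcast interval, a sketch might look like this (sleeping briefly inside the loop rather than spinning flat out):

QueryPerformanceCounter(&t1);
do {
    Sleep(1); // yield instead of busy-waiting
    QueryPerformanceCounter(&t2);
} while (t2.QuadPart - t1.QuadPart < 10 * freq.QuadPart); // 10 seconds elapsed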
