I'm sorry if this is a basic question.
I am working on an embedded project. Somewhere in the project, in some driver files, there is a function, _time returnTime(), that is said to return the MCU time with a 10 microsecond resolution. I guess it reads the registers of a timer module of the MCU and returns them, and that this timer has a 10 microsecond resolution.
I also have a user-defined function, and I want to measure how long it takes to execute.
So I have decided to use returnTime() for this by calling it before the first and after the last instruction of my function.
In the function whose execution time I want to measure, I do something like this:
myFunction()
{
    time1 = returnTime();
    ...
    time2 = returnTime();
    time_elapsed = time2 - time1;
}
When I do this, time_elapsed is sometimes 5 and sometimes 4. Is that possible? Since returnTime() returns a time with 10 microsecond resolution, shouldn't time_elapsed be a multiple of 10 (like 0, 10, 20, etc.)?
The timer may have a resolution of 10 microseconds, but what are the units of the value returned by returnTime()? Perhaps returnTime() returns a value in units of "ticks", where one tick is equal to 10 microseconds. (A hardware timer module typically counts one tick at a time.)
Let's assume your function takes 42 microseconds to execute. If the function starts executing one microsecond before the next tick of the timer, then the timer will tick five times before the function finishes. But if the function starts one microsecond after a timer tick, then the timer will tick only four times during the function's execution. So sometimes the difference between the two returnTime() values will be 5 and sometimes it will be 4.
The above are the simplest explanations. The hardware timer module could also be set up incorrectly for the MCU clock speed. And if interrupts are enabled, an interrupt could occur during execution of the function, which would make the execution time appear to vary.
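If the tick interpretation is correct, converting the measured difference to microseconds is just a multiplication. A minimal sketch, assuming returnTime() really does return a free-running tick count and that one tick equals 10 microseconds (the uint32_t return type and the factor are assumptions here, not taken from the driver):

    #include <stdint.h>

    #define US_PER_TICK 10u               /* assumed: one timer tick = 10 us */

    extern uint32_t returnTime(void);     /* assumed prototype of the driver function */

    uint32_t measure_us(void)
    {
        uint32_t time1 = returnTime();    /* tick count at entry */
        /* ... code being measured ... */
        uint32_t time2 = returnTime();    /* tick count at exit */
        return (time2 - time1) * US_PER_TICK;   /* elapsed ticks -> microseconds */
    }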
Related
I know clock() does not give wall-clock time, and CLOCKS_PER_SEC is a system-dependent value. Let's say I don't care about system-dependent issues: can I assume the code below always waits 1 second on both GNU/Linux and Windows? I tested it and it seemed OK to me. Or should I stick with the system-dependent sleep()/usleep() functions?
#include <time.h>

void wait_a_sec()
{
    clock_t start = clock();
    do
    {
        // wait 1 sec
        /**
         * Second call to clock() gives the processor time in clock ticks
         * since the first call to it.
         * CLOCKS_PER_SEC: clock ticks per sec.
         */
    } while ( clock() < (start + CLOCKS_PER_SEC) );
}
No. clock() gives you the CPU time used up by your program. Not time passed.
If for example you have one CPU and launch two instances of the program, it will wait until each has burned 1 second of active CPU time, so 2 seconds.
Also busy-waiting is generally a bad idea. Stick with sleep.
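For instance, a minimal sketch of the same helper using a real wall-clock wait (assuming a POSIX system; on Windows you would use Sleep() from windows.h, which takes milliseconds):

    #include <unistd.h>

    void wait_a_sec(void)
    {
        sleep(1);   /* suspends the calling thread for ~1 second of wall-clock time */
    }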
I am making an emulator for the 6502 CPU and I want to control the frequency at which it is running.
Without any delay, my max average frequency is about 70-80 MHz. However, when I try to use nanosleep to control the frequency, it only works until I reach a 1 ms delay (1,000,000 ns).
If I try to set the delay any lower, for example to 0.1 ms, it does not change the average execution timing. What is strange to me is that if I set the delay to 0.1 ms, the actual delay turns out to be ~0.008 ms, and it does not change for any value in the range 1-99999 ns.
I have tried compiling the code in both debug and release mode.
Am I using nanosleep wrong? It is my first time using nanosleep; I have only used sleep and usleep before, so I am a bit confused.
Is there any better way to control the frequency of code execution?
How to correctly use nanosleep() to run code at a specific frequency?
I'd say it's impossible to use nanosleep to run code at a specific frequency (or rather, it's possible only up to a certain accuracy, with some drift). From man nanosleep (emphasis mine, but the whole passage is relevant):
The nanosleep() function shall cause the current thread to be suspended from execution until either the time interval specified by the rqtp argument has elapsed or a signal is delivered to the calling thread, and its action is to invoke a signal-catching function or to terminate the process. The suspension time may be longer than requested because the argument value is rounded up to an integer multiple of the sleep resolution or because of the scheduling of other activity by the system. But, except for the case of being interrupted by a signal, the suspension time shall not be less than the time specified by rqtp, as measured by the system clock CLOCK_REALTIME.
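If some lateness per period is acceptable but you don't want it to accumulate, one option (not from the quoted man page, just a sketch) is to sleep until absolute deadlines with clock_nanosleep(); each wake-up can still be late, but the error does not add up. Assumes a POSIX system; emulate_one_step() is a hypothetical placeholder for the emulator's work:

    #include <time.h>

    static void emulate_one_step(void) { /* hypothetical emulator work */ }

    void run_at_1khz(void)
    {
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);
        for (;;) {
            emulate_one_step();
            next.tv_nsec += 1000000;            /* next deadline: +1 ms */
            if (next.tv_nsec >= 1000000000L) {  /* normalize the timespec */
                next.tv_nsec -= 1000000000L;
                next.tv_sec++;
            }
            /* sleep until the absolute deadline, so lateness does not accumulate */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }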
Is there any better way to control the frequency of code execution?
Use timers and try to depend on the kernel to properly schedule periodic execution, rather than doing it yourself. Use timer_create to request an interrupt after a specified timeout and run your code at that timeout; you may also consider making your process a real-time process. In pseudocode, I think I would do something like this:
#include <signal.h>
#include <time.h>
#include <unistd.h>

void do_your_code(void) { /* periodic work goes here */ }

void timer_thread(union sigval s) {
    do_your_code();
}

int main() {
    timer_t t;
    timer_create(CLOCK_MONOTONIC, &(struct sigevent){
        .sigev_notify = SIGEV_THREAD,
        .sigev_notify_function = timer_thread
    }, &t);
    // arm the timer: first expiry after 1 s, then repeat every 1 s
    timer_settime(t, 0, &(struct itimerspec){
        .it_value    = { .tv_sec = 1 },
        .it_interval = { .tv_sec = 1 }
    }, NULL);
    while (1) {
        // just do nothing - code will be triggered in a separate thread by the timer
        pause();
    }
}
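As for the real-time process suggestion, here is a hedged sketch of requesting a real-time scheduling class on Linux (requires root or CAP_SYS_NICE; the priority value 50 is an arbitrary choice for illustration):

    #include <sched.h>

    int make_realtime(void)
    {
        struct sched_param sp = { .sched_priority = 50 };
        return sched_setscheduler(0, SCHED_FIFO, &sp);   /* 0 = calling process */
    }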
I am working on the STM32F4 and am pretty new at it. I know the basics of C, but even after more than a day of research I still have not found a solution to this.
I simply want to make a delay function myself. The processor runs at 168 MHz (HCLK), so my intuition says that it produces 168x10^6 clock cycles each second. So the method should be something like this:
1- Store the current clock count in a variable.
2- Time diff = (clock value at any time - stored starting clock value) / 168000000
This flow should give me the time difference in seconds, and then I can convert it to whatever I want.
But, unfortunately, despite it seeming so easy, I just can't implement any such method on the MCU.
I tried time.h but it did not work properly. For example, clock() gave the same result over and over, and time() (the one that returns seconds since 1970) gave hexadecimal 0xFFFFFFFF (-1, which I guess means an error).
Thanks.
Edit: While writing, I assumed that some function like clock() would return the total clock count since the start of the program, but now I think that after 4 billion / 168 million seconds it will overflow a uint32_t. I am really confused.
The answer depends on the required precision and intervals.
For shorter intervals with sub-microsecond precision there is a cycle counter. Your suspicion is correct, it would overflow after 2^32 / (168x10^6) ~ 25.5 seconds.
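A minimal sketch of enabling and reading that cycle counter (the DWT unit), assuming the CMSIS device header for the part is available; the register and bit names below come from CMSIS:

    #include "stm32f4xx.h"   /* CMSIS device header, pulls in core_cm4.h */

    static void cycle_counter_init(void)
    {
        CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;  /* enable the trace block */
        DWT->CYCCNT = 0;                                 /* reset the counter */
        DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;            /* start counting CPU cycles */
    }

    static uint32_t cycles_since(uint32_t start)
    {
        return DWT->CYCCNT - start;   /* unsigned subtraction handles wrap-around */
    }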
For longer intervals there are timers that can be prescaled to support any possible subdivision of the 168 MHz clock. The most commonly used setup is the SysTick timer set to generate an interrupt at 1 kHz frequency, which increments a software counter. Reading this counter would give the number of milliseconds elapsed since startup. As it is usually a 32 bit counter, it would overflow after 49.7 days. The HAL library sets SysTick up this way, the counter can then be queried using the HAL_GetTick() function.
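For example, a millisecond delay built on that counter might look like this; a sketch assuming the STM32 HAL is in use and HAL_Init() has already configured SysTick at 1 kHz:

    #include "stm32f4xx_hal.h"

    void delay_ms(uint32_t ms)
    {
        uint32_t start = HAL_GetTick();        /* milliseconds since startup */
        while ((HAL_GetTick() - start) < ms)   /* unsigned math is overflow-safe */
        {
            /* busy-wait; real firmware might enter sleep mode here instead */
        }
    }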
For even longer or more specialized timing requirements you can use the RTC peripheral which keeps calendar time, or the TIM peripherals (basic, general and advanced timers), these have their own prescalers, and they can be arranged in a master-slave setup to give almost arbitrary precision and intervals.
On this link:
https://os.mbed.com/handbook/Ticker
it says, "Note that timers are based on 32-bit int microsecond counters, so can only time up to a maximum of 2^31-1 microseconds i.e. 30 minutes. They are designed for times between microseconds and seconds. For longer times, you should consider the time()/Real time clock."
My question is: does this 30 minute limit only apply when an interval of 1 us is being used? Something like this:
flipper.attach_us(&flip, 1);
If I have to call an interrupt every 1 ms, does this mean the counter can now go up to 30000 minutes? Something like this:
flipper.attach_us(&flip, 1000);
Also, what happens to the timer after it fills up: does it clear itself on its own and restart, or does it throw an error?
This is the function declaration:
void attach_us(Callback<void()> func, us_timestamp_t t)
"t" is the time between calls. The warning you found in the linked page (https://os.mbed.com/handbook/Ticker) says that the max interval time you can set to is about 30 minutes because "t" is a 32-bit int.
(I think it is 64-bit in the latest API though. https://os.mbed.com/docs/latest/reference/ticker.html).
When the timer reaches the value specified by "t", it overflows and triggers the Callback function. It repeats that until you detach it.
If your interval is 1ms, you don't need to worry about the 30mins max limitation.
The page you're linking to is old. Timers are no longer 32-bits, but are 64-bits now; so this is no longer an issue. See the latest version of the Ticker docs at https://os.mbed.com/docs/latest/reference/ticker.html .
So I am new to C and have been assigned the task of making a game. I'll be using a Gameboy emulator and have been discouraged from importing any libraries beyond the basics.
I would like to come up with a way to run a second counter (that will display on screen), but, unable to use the time.h library, I feel a bit stuck.
Is there any way I could go about doing this?
I was told that Gameboy runs rather slow, and if I could get it caught in a 'busy loop' I could 'approximate a second' and count that way.
I've considered the sleep function to do this, but that's in the unistd.h library.
I've also considered setting up a loop and counting up to 10 thousand (or whatever number may take a second to compute), but all of this will be happening simultaneously with the game at hand, and I am afraid things like that will delay the gameplay and the other things happening.
Any recommendations?
Edit: I think anything beyond stdlib.h and stdio.h is disallowed.
The gameboy has a hardware timer.
Sometimes it's useful to have a timer that interrupts at regular intervals for routines that require periodic or precise updates. The timer in the GameBoy has a selectable frequency of 4096, 16384, 65536, or 262144 Hertz. This frequency increments the Timer Counter (TIMA). When it overflows, it generates an interrupt. It is then loaded with the contents of Timer Modulo (TMA). The following are examples:
;This interval timer interrupts 4096 times per second
ld a,-1
ld ($FF06),a ;Set TMA to divide clock by 1
ld a,4
ld ($FF07),a ;Set clock to 4096 Hertz
So write your interrupt handler to keep track of the number of interrupts and update the displayed clock every 4096 (=0x1000) interrupts.
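A sketch of that bookkeeping in C; how the handler gets hooked up to the timer interrupt depends on your toolchain and emulator, so timer_isr and seconds_elapsed here are hypothetical names:

    #include <stdint.h>

    static volatile uint16_t tick_count = 0;      /* interrupts seen so far this second */
    static volatile uint16_t seconds_elapsed = 0; /* read by the main loop for the display */

    void timer_isr(void)           /* called 4096 times per second by the timer interrupt */
    {
        if (++tick_count >= 4096)  /* 4096 interrupts = one second at this clock setting */
        {
            tick_count = 0;
            seconds_elapsed++;
        }
    }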
Implementations of some (not all) delay functions are blocking: nothing else in the code runs until the delay function returns. They are also susceptible to run-time options such as whether the code executes in debug or release mode (i.e. they will run for inconsistent times depending on these modes).
An implementation that is not blocking, i.e. one where system events are given time-slices to continue during the delay, would likely require multi-threading. But since C is not intrinsically a multi-threaded language, you would have to use additional non-standard C libraries that you are not allowed to use.
Given you understand this and would be OK with a simple technique (i.e. blocking, with susceptibility to execution mode), then just use a simple timed loop. Follow these steps:
1) Create a development (test) function that is simply a loop with a hard-coded count:
void sec_delay(void)
{
    //loop counter value: change it during testing to adjust the duration.
    //e.g. for 60 calls, 60 seconds should elapse; this value, run on the PC
    //with the test program shown below, provided a 1 second duration.
    //volatile keeps the compiler from optimizing the empty loop away.
    volatile int i = 334000000;
    while(i-- > 0);
}
2) Characterize sec_delay().
Run a program that calls sec_delay() 60 times and time its execution against a clock or stopwatch.
#include <stdio.h>

void sec_delay(void);

int main(void)
{
    int i;
    for(i = 0; i < 60; i++)  //target 60 seconds; change to 600 for 10 min. or 10 for 10 sec.
    {
        sec_delay();
    }
    printf("Done");  //uses stdio.h
    getchar();       //uses stdio.h
    return 0;
}
3) Use the execution time of this executable to adjust the loop counter value so you get as close to 1 minute as possible. For higher accuracy, loop the main program 600 times and adjust the counter in sec_delay() so that the elapsed time is 10 minutes.
Once you have characterized the sec_delay() function as described, you essentially have something you can use for a 1 second timer.
4) Now that you have a value for a 1 second elapsed time, create a new prototype:
void delay(float);
And create a #define:
#define SEC 334000000.0 //enter your value here in float
Finally, define delay() function:
void delay(float secs)//note, fractions of seconds can be called
{
if(secs < 0) break;//leave for negative values;
int i = (int)secs*SEC ;
while(i-- > 0);
}