Understanding the calculation of the time (jiffies) - c

I want to sleep in the kernel for a specific amount of time, and I use time_before and jiffies to calculate how long to sleep, but I don't understand how the calculation actually works. I know that HZ is 250 and that jiffies is a huge, constantly changing value. I know what they both are and what they are used for.
I calculate the time with jiffies + (10 * HZ).
static unsigned long j1;

static int __init sys_module_init(void)
{
    j1 = jiffies + (10 * HZ);

    while (time_before(jiffies, j1))
        schedule();

    printk("Hello World - %d - %ld\n", HZ, jiffies); // Hello World - 250 - 4296485594 (dynamic)
    return 0;
}
How does the calculation work and how many seconds will I sleep? I want to know that because in the future I'll probably want to sleep for a specific time.

HZ is the number of timer ticks in a second, so multiplying it by 10 gives the number of ticks in 10 seconds. The calculation jiffies + 10 * HZ therefore yields the expected value of jiffies 10 seconds from now.
However, calling schedule() in a loop until that value is hit is not the recommended way to go. If you want to sleep in the kernel, you don't need to reinvent the wheel: there is already a set of APIs just for this purpose, documented here, which will make your life a lot easier. The simplest way to sleep in your specific case is to use msleep(), passing the number of milliseconds you want to sleep for:
#include <linux/delay.h>

static int __init sys_module_init(void)
{
    msleep(10 * 1000);
    return 0;
}
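If you specifically want an interruptible wait, or want to keep working in jiffies, the same 10-second delay can be expressed with other helpers from this API family; a minimal sketch (not from the original answer), using kernel functions from linux/delay.h, linux/jiffies.h and linux/sched.h:

#include <linux/delay.h>     /* ssleep() */
#include <linux/jiffies.h>   /* msecs_to_jiffies() */
#include <linux/sched.h>     /* schedule_timeout_interruptible() */

/* Sleep about 10 s, but return early if a signal arrives; the return
 * value is the number of jiffies that were left when it woke up. */
schedule_timeout_interruptible(msecs_to_jiffies(10 * 1000));

/* Or, for an uninterruptible 10-second sleep: */
ssleep(10);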

Related

Creating a Program that slowly Increases the brightness of an LED as a start-up

I want to create a program using a for loop that slowly increases the brightness of an LED as a "start-up" sequence when I press a button.
I have basically no knowledge of for loops. I've tried experimenting by looking at similar programs and potential solutions, but I was unable to do it.
This is my starting code; I have to use PWMperiod to achieve the effect.
if (SW3 == 0) {
    for (unsigned char PWMperiod = 255; PWMperiod != 0; PWMperiod--) {
        if (TonLED4 == PWMperiod) {
            TonLED4 += 1;
        }
        __delay_us(20);
    }
}
How would I start this/do it?
For pulse width modulation, you'd want to turn the LED off for a certain amount of time, then turn the LED on for a certain amount of time; where the amounts of time depend on how bright you want the LED to appear and the total period ("on time + off time") is constant.
In other words, you want a relationship like period = on_time + off_time where period is constant.
You also want the LED to increase brightness slowly. E.g. maybe go from off to max. brightness over 10 seconds. This means you'll need to loop total_time / period times.
How bright the LED should be, and therefore how long the on_time should be, will depend on how much time has passed since the start of the 10 seconds (e.g. 0 microseconds at the start of the 10 seconds and period microseconds at the end of the 10 seconds). Once you know the on_time you can calculate off_time by rearranging that "period = on_time + off_time" formula.
In C it might end up something like:
#define TOTAL_TIME 10000000            // 10 seconds, in microseconds
#define PERIOD     1000                // 1 millisecond, in microseconds
#define LOOP_COUNT (TOTAL_TIME / PERIOD)

int on_time;
int off_time;

for (int t = 0; t < LOOP_COUNT; t++) {
    on_time = PERIOD * t / LOOP_COUNT;
    off_time = PERIOD - on_time;
    turn_LED_off();
    __delay_us(off_time);
    turn_LED_on();
    __delay_us(on_time);
}
Note: on_time = PERIOD * t / LOOP_COUNT; is a little tricky. You can think of it as on_time = PERIOD * (t / LOOP_COUNT);, where t / LOOP_COUNT is a fraction that goes from 0.00000 to 0.999999 representing the fraction of the period that the LED should be turned on; but if you wrote it like that, the compiler would truncate the result of t / LOOP_COUNT to an integer (round it towards zero) and the result would be zero. Written as it is, C does the multiplication first, so it behaves like on_time = (PERIOD * t) / LOOP_COUNT; and truncation (or rounding) isn't a problem. Sadly, doing the multiplication first solves one problem while possibly causing another: PERIOD * t might be too big for an int and might overflow (especially on small embedded systems where an int can be 16 bits). You'll have to figure out how big an int is on your target (for the values you use - changing TOTAL_TIME or PERIOD will change the maximum value that PERIOD * t can reach) and use something larger (e.g. a long) if an int isn't enough.
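For example, a hypothetical overflow-safe version of that one line, which keeps on_time as an int but widens the intermediate multiplication explicitly:

/* PERIOD * t can reach 9,999,000 with these values, which doesn't fit in a
 * 16-bit int, so force the multiplication to be done in unsigned long: */
on_time = (int)(((unsigned long)PERIOD * (unsigned long)t) / LOOP_COUNT);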
You should also be aware that the timing won't be exact, because it ignores time spent executing your code and ignores anything else the OS might be doing (IRQs, other programs using the CPU); so the "10 seconds" might actually be 10.5 seconds (or worse). To fix that you need something more complex than a __delay_us() function (e.g. some kind of __delay_until(absolute_time) maybe).
Also; you might find that the LED doesn't increase brightness linearly (e.g. it might slowly go from off to dull in 8 seconds then go from dull to max. brightness in 2 seconds). If that happens; you might need a lookup table and/or more complex maths to correct it.
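A minimal sketch of that lookup-table idea, assuming 16 brightness steps that roughly follow a gamma-2 curve for PERIOD = 1000 (the table values are illustrative, not measured for any particular LED, and reuse the placeholders from the code above):

/* on_time values for 16 steps, approximately (step/15)^2 * PERIOD */
static const unsigned int on_time_table[16] = {
    0, 4, 18, 40, 71, 111, 160, 218,
    284, 360, 444, 538, 640, 751, 871, 1000
};

for (int step = 0; step < 16; step++) {
    unsigned int on_time  = on_time_table[step];
    unsigned int off_time = PERIOD - on_time;

    /* hold each brightness step for TOTAL_TIME / 16 microseconds */
    for (long t = 0; t < (TOTAL_TIME / 16) / PERIOD; t++) {
        turn_LED_off();
        __delay_us(off_time);
        turn_LED_on();
        __delay_us(on_time);
    }
}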

Why are udelay and ndelay not accurate in the Linux kernel?

I wrote a function like this:
trace_printk("111111");
udelay(4000);
trace_printk("222222");
and the log shows 4.01 ms, which is OK.
But when I call it like this:
trace_printk("111111");
ndelay(10000);
ndelay(10000);
ndelay(10000);
ndelay(10000);
....
....    // 400 ndelay calls in total
trace_printk("222222");
the log shows 4.7 ms, which is not acceptable.
Why is the error of ndelay so huge?
Looking deep into the kernel code, I found the implementation of these two functions:
void __udelay(unsigned long usecs)
{
    __const_udelay(usecs * 0x10C7UL); /* 2**32 / 1000000 (rounded up) */
}

void __ndelay(unsigned long nsecs)
{
    __const_udelay(nsecs * 0x5UL); /* 2**32 / 1000000000 (rounded up) */
}
I thought udelay would be 1000 times ndelay, but it's not. Why?
As you've already noticed, the nanosecond delay implementation is quite a coarse approximation compared to the microsecond delay, because of the 0x5 constant factor used. 0x10C7 / 0x5 is approximately 859. Using 0x4 would be closer to 1000 (approximately 1073).
However, using 0x4 would cause the ndelay to be less than the number of nanoseconds requested. In general, delay functions aim to provide a delay at least as long as requested by the user (see here: http://practicepeople.blogspot.jp/2013/08/kernel-programming-busy-waiting-delay.html).
Every time you call it, a rounding error is added. Note the comment 2**32 / 1000000000. That value is really ~4.29, but it was rounded up to 5. That's a pretty hefty error.
By contrast the udelay error is small: (~4294.97 versus 4295 [0x10c7]).
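That rounding factor alone accounts for most of the numbers in the question. A rough back-of-the-envelope check (plain userspace arithmetic, not kernel code):

#include <stdio.h>

int main(void)
{
    double exact = 4294967296.0 / 1000000000.0;   /* 2^32 / 10^9 ~= 4.295 */
    double per_call_ns = 10000.0 * 5.0 / exact;   /* ~11642 ns per ndelay(10000) */
    double total_ms = 400.0 * per_call_ns / 1e6;  /* ~4.66 ms for 400 calls */

    printf("per call: %.0f ns, total: %.2f ms\n", per_call_ns, total_ms);
    return 0;
}

That already gives roughly 4.66 ms for the 400 calls; the remaining ~0.04 ms is plausibly per-call overhead.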
You can use ktime_get_ns() to get high-precision time since boot, so you can use it not only for high-precision delays but also as a high-precision timer. Here is an example:
u64 t;
int i;

t = ktime_get_ns();                 // Get current nanoseconds since boot
for (i = 0; i < 24; i++)            // Send 24 1200ns-1300ns pulses via GPIO
{
    gpio_set_value(pin, 1);         // Drive GPIO or do something else
    t += 1200;                      // Now we have the absolute time of the next step
    while (ktime_get_ns() < t);     // Wait for it
    gpio_set_value(pin, 0);         // Do something, again
    t += 1300;                      // Now we have the time of the next step, again
    while (ktime_get_ns() < t);     // Wait for it, again
}

Self-correcting periodic timer using gettimeofday()

I have a loop which runs every X usecs, which consists of doing some I/O then sleeping for the remainder of the X usecs. To (roughly) calculate the sleep time, all I'm doing is taking a timestamp before and after the I/O and subtracting the difference from X. Here is the function I'm using for the timestamp:
long long getus ()
{
    struct timeval time;
    gettimeofday(&time, NULL);
    return (long long) (time.tv_sec + time.tv_usec);
}
As you can imagine, this starts to drift pretty fast and the actual time between I/O bursts is usually quite a few ms longer than X.
To try and make it a little more accurate, I thought maybe if I keep a record of the previous starting timestamp, every time I start a new cycle I can calculate how long the previous cycle took (the time between this starting timestamp and the previous one). Then, I know how much longer than X it was, and I can modify my sleep for this cycle to compensate.
Here is how I'm trying to implement it:
long long start, finish, offset, previous, remaining_usecs;
long long delaytime_us = 1000000;

/* Initialise previous timestamp as 1000000us ago */
previous = getus() - delaytime_us;

while (1)
{
    /* starting timestamp */
    start = getus();

    /* here is where I would do some I/O */

    /* calculate how much to compensate */
    offset = (start - previous) - delaytime_us;
    printf("(%lld - %lld) - %lld = %lld\n",
            start, previous, delaytime_us, offset);
    previous = start;
    finish = getus();

    /* calculate to our best ability how long we spent on I/O.
     * We'll try and compensate for its inaccuracy next time around! */
    remaining_usecs = (delaytime_us - (finish - start)) - offset;
    printf("start=%lld,finish=%lld,offset=%lld,previous=%lld\nsleeping for %lld\n",
            start, finish, offset, previous, remaining_usecs);
    usleep(remaining_usecs);
}
It appears to work on the first iteration of the loop, however after that things get messed up.
Here's the output for 5 iterations of the loop:
(1412452353 - 1411452348) - 1000000 = 5
start=1412452353,finish=1412458706,offset=5,previous=1412452353
sleeping for 993642
(1412454788 - 1412452353) - 1000000 = -997565
start=1412454788,finish=1412460652,offset=-997565,previous=1412454788
sleeping for 1991701
(1412454622 - 1412454788) - 1000000 = -1000166
start=1412454622,finish=1412460562,offset=-1000166,previous=1412454622
sleeping for 1994226
(1412457040 - 1412454622) - 1000000 = -997582
start=1412457040,finish=1412465861,offset=-997582,previous=1412457040
sleeping for 1988761
(1412457623 - 1412457040) - 1000000 = -999417
start=1412457623,finish=1412463533,offset=-999417,previous=1412457623
sleeping for 1993507
The first line of output shows how the previous cycle time was calculated. It appears that the first two timestamps are basically 1000000us apart (1412452353 - 1411452348 = 1000005). However after this the distance between starting timestamps starts looking not so reasonable, along with the offset.
Does anyone know what I'm doing wrong here?
EDIT: I would also welcome suggestions of better ways to get an accurate timer and be able to sleep during the delay!
After some more research I've discovered two things wrong here-
Firstly, I'm calculating the timestamp wrong. getus() should return this instead:
return (long long) time.tv_sec * 1000000 + time.tv_usec;
And secondly, I should be storing the timestamp in unsigned long long or uint64_t.
So getus() should look like this:
uint64_t getus ()
{
    struct timeval time;
    gettimeofday(&time, NULL);
    return (uint64_t) time.tv_sec * 1000000 + time.tv_usec;
}
I won't actually be able to test this until tomorrow, so I will report back.
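Regarding the EDIT asking for a better approach: one way to avoid accumulating drift at all is to sleep until an absolute deadline instead of for a relative interval. A minimal sketch using clock_nanosleep() with TIMER_ABSTIME (the I/O call is just a placeholder and error handling is omitted):

#define _POSIX_C_SOURCE 200112L
#include <time.h>

#define PERIOD_NS 1000000000L   /* 1 second, matching delaytime_us */

void periodic_loop(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    while (1)
    {
        /* here is where the I/O would go */

        /* advance the deadline by exactly one period */
        next.tv_nsec += PERIOD_NS % 1000000000L;
        next.tv_sec  += PERIOD_NS / 1000000000L;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }

        /* sleep until the absolute deadline; late wakeups don't accumulate */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}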

Trying to get the time of an operation and receiving time 0 seconds

I am trying to see how long it takes for about 10000 names to be inserted into a BST (written in C).
I am reading these names from a txt file using fscanf. I declare a file pointer (fp) in the main function and call a function in another .c file, passing fp through its arguments. I want to measure the time needed for 2, 4, 8, 16, 32, ..., 8192 names to be inserted, saving the times in a long double array. I have included the time.h library in the .c file where the function is located.
Code:
void myfunct(BulkTreePtr *Bulktree, FILE* fp, long double time[])
{
    double tstart, tend, ttemp;
    TStoixeioyTree datainput;
    int error = 0, counter = 0, index = 0, num = 2, i;

    tstart = ((double) clock())/CLOCKS_PER_SEC;
    while (!feof(fp))
    {
        counter++;
        fscanf(fp, "%s %s", datainput.lname, datainput.fname);
        Tree_input(&((*Bulktree)->TreeRoot), datainput, &error);
        if (counter == num)
        {
            ttemp = ((double) clock())/CLOCKS_PER_SEC;
            time[index] = ttemp - tstart;
            num = num * 2;
            index++;
        }
    }
    tend = ((double) clock())/CLOCKS_PER_SEC;
    printf("Last value of ttemp is %f\n", ttemp - tstart);
    time[index] = (tend - tstart);
    num = 2;
    for (i = 0; i < 14; i++)
    {
        printf("Time after %d names is %f sec \n", num, (float)time[i]);
        num = num * 2;
    }
}
I am getting this:
Last value of ttemp is 0.000000
Time after 2 names is 0.000000 sec
Time after 4 names is 0.000000 sec
Time after 8 names is 0.000000 sec
Time after 16 names is 0.000000 sec
Time after 32 names is 0.000000 sec
Time after 64 names is 0.000000 sec
Time after 128 names is 0.000000 sec
Time after 256 names is 0.000000 sec
Time after 512 names is 0.000000 sec
Time after 1024 names is 0.000000 sec
Time after 2048 names is 0.000000 sec
Time after 4096 names is 0.000000 sec
Time after 8192 names is 0.000000 sec
Time after 16384 names is 0.010000 sec
What am I doing wrong? :S
Use clock_getres() and clock_gettime(). Most likely you will find your system doesn't have a very fast clock. Note that the system might return different numbers when calling gettimeofday() or clock_gettime(), but oftentimes (depending on the kernel) the numbers at finer than HZ resolution are lies generated to simulate time advancing.
You might find it better to do fixed-time tests: find out how many inserts you can do in 10 seconds. Or have some kind of fast reset method (memset?) and find out how many groups of 1024-name inserts you can do in 10 seconds.
[EDIT]
Traditionally, the kernel gets interrupted at HZ frequency by the hardware. Only when it gets this hardware interrupt does it know that time has advanced by 1/HZ of a second. The traditional tick period was 1/100 of a second (HZ = 100). Surprise, surprise: you saw a 1/100th-of-a-second increment in time. Some systems and kernels have recently started providing other methods of getting higher-resolution time, such as looking at the RTC device.
However, you should use the clock_gettime() function I pointed you to, along with clock_getres(), to find out how often you will get accurate time updates. Make sure your test runs for many, many multiples of the clock_getres() value unless you want it to be a total lie.
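A minimal sketch of what that looks like in practice (Linux; on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res, t0, t1;

    clock_getres(CLOCK_MONOTONIC, &res);
    printf("clock resolution: %ld ns\n", res.tv_nsec);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* ... the inserts you want to time ... */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double elapsed = (double)(t1.tv_sec - t0.tv_sec)
                   + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("elapsed: %.9f s\n", elapsed);
    return 0;
}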
clock() returns the number of "ticks"; there are CLOCKS_PER_SEC ticks per second. For any operation which takes less than 1/CLOCKS_PER_SEC seconds, the return value of clock() will either be unchanged or changed by 1 tick.
From your results it looks like even 16384 insertions take no more than 1/100 seconds.
If you want to know how long a certain number of insertions take, try repeating them many, many times so that the total number of ticks is significant, and then divide that total time by the number of times they were repeated.
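A rough sketch of that repeat-and-divide idea, assuming hypothetical reset_tree() and insert_names() helpers (not part of the question's code):

#define REPS 1000

clock_t t0 = clock();
for (int r = 0; r < REPS; r++) {
    reset_tree(Bulktree);     /* hypothetical: clear the BST between runs */
    insert_names(Bulktree);   /* hypothetical: the insertions being timed */
}
double per_run = ((double)(clock() - t0) / CLOCKS_PER_SEC) / REPS;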
clock returns the amount of CPU time used, not the amount of actual time elapsed, but that might be what you want here. Note that the Unix standard requires CLOCKS_PER_SEC to be exactly one million (1000000), but the resolution can be much worse (e.g. it might jump by 10000 at a time). You should use clock_gettime with the CPU-time clock if you want to measure CPU time spent, or with the monotonic clock to measure real time spent.
ImageMagick includes stopwatch functions such as these.
#include "magick/MagickCore.h"
#include "magick/timer.h"
TimerInfo *timer_info;
timer_info = AcquireTimerInfo();
<your code>
printf("elapsed=%.6f sec", GetElapsedTime(timer_info));
But that only seems to have a resolution of 1/100 second. Plus it requires installing ImageMagick. I suggest this instead. It's simple and has usec resolution on Linux.
#include <sys/time.h>

double timer(double start_secs)
{
    static struct timeval tv;
    static struct timezone tz;
    gettimeofday(&tv, &tz);
    double now_secs = (double)tv.tv_sec + (double)tv.tv_usec/1000000.0;
    return now_secs - start_secs;
}
double t1 = timer(0);
<your code>
printf("elapsed=%.6f sec", timer(t1));

C GetTickCount (windows function) to Time (nanoseconds)

I'm testing some code provided by a colleague and I need to measure the execution time of a routine that performs a context switch (of threads).
What's the best way to measure the time? I know high-resolution timers are available, like
QueryPerformanceCounter
QueryPerformanceFrequency
but how can I translate those timer values to milliseconds or nanoseconds?
LARGE_INTEGER lFreq, lStart;
LARGE_INTEGER lEnd;
double d;
QueryPerformanceFrequency(&lFreq);
QueryPerformanceCounter(&lStart);
/* do something ... */
QueryPerformanceCounter(&lEnd);
d = ((double)lEnd.QuadPart - (double)lStart.QuadPart) / (double)lFreq.QuadPart;
d is the time interval in seconds.
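Since the question specifically asks for milliseconds or nanoseconds, you can scale the tick difference before dividing by the frequency; a quick sketch using the same variables:

double ms = (double)(lEnd.QuadPart - lStart.QuadPart) * 1000.0
            / (double)lFreq.QuadPart;           /* milliseconds */
double ns = (double)(lEnd.QuadPart - lStart.QuadPart) * 1000000000.0
            / (double)lFreq.QuadPart;           /* nanoseconds */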
As the operation I am executing is on the order of 500 nanos and the timers don't have that precision, what I did was: I saved the current time with GetTickCount() (which has a precision of ~12 millis) and let the routine execute repeatedly, counting N_TIMES (the number of times the routine executed), until I press something on the console.
Then I calculate the time again and divide the difference by N_TIMES, something like this:
static int counter;

void routine()
{
    // Operations here..
    counter++;
}

int main()
{
    DWORD start = GetTickCount();
    HANDLE handle = CreateThread(....., routine, ...);
    ResumeThread(handle);
    getchar();
    WaitForSingleObject(handle, INFINITE);
    double elapsed = ((GetTickCount() - start) * 1000000.0) / counter;
    printf("Nanos: %f", elapsed);
}
:)
