Sampling with a Raspberry Pi in C

I need to call analogRead every 4 ms, but when I timed my code, the printed intervals were not 4 ms.
My code:
#include <stdio.h>
#include <time.h>
#include <wiringPi.h>   /* assumed source of analogRead() and delay() */

clock_t start, end;
double tempo;
int i, x;

for (i = 1; i <= 20; i++) {
    start = clock();
    x = analogRead(BASE + chan);   /* BASE and chan defined elsewhere */
    printf("%d\n", x);
    delay(4);
    end = clock();
    tempo = ((double)(end - start)) / CLOCKS_PER_SEC;
    printf("%f\n", tempo);
}

It does not matter which function you use: Linux is not an RTOS, so you can forget about hard real-time behavior unless you patch the kernel with PREEMPT_RT. There is a lot of information about this topic online.
This is too complex a topic for an SO answer, but I hope this points you in the right direction. As a starting point, a common pattern for low-jitter periodic work on stock Linux is to sleep until an absolute deadline rather than for a relative delay, as sketched below.
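A minimal sketch, assuming the wiringPi setup from the question (analogRead, BASE, chan): clock_nanosleep() with TIMER_ABSTIME sleeps until an absolute deadline, so the period does not drift by the loop's own execution time the way delay(4) does. Scheduling jitter still remains without PREEMPT_RT.

#include <stdio.h>
#include <time.h>   /* may need -D_POSIX_C_SOURCE=200112L with a strict -std=c99 */

#define PERIOD_NS 4000000L              /* 4 ms period */

void sample_loop(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int i = 0; i < 20; i++) {
        /* int x = analogRead(BASE + chan);  -- the question's sampling call */

        next.tv_nsec += PERIOD_NS;      /* advance the absolute deadline */
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        /* sleep until the deadline, not for a relative 4 ms */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}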

Related

How do I run a timer in C

I am making a trivia game and want to make a countdown timer where the person playing only has a certain time to answer the question. I am fairly new to C and am looking for a basic way to set a timer.
If you don't mind the player typing an answer and only then being told they are too late, you could use something like this.
time.h lets us read and compare calendar time. It also gives us some nifty functions like double difftime(time_t timer2, time_t timer1), which returns the difference between two times in seconds.
#include <stdio.h>
#include <time.h>

int main(void) {
    time_t start_time;
    time_t current_time;
    double delay = 5;       /* seconds the player gets to answer */
    int answer = 0;
    double diff = 0;

    time(&start_time);
    time(&current_time);

    while (diff < delay) {
        diff = difftime(current_time, start_time);
        scanf("%d", &answer);
        printf("Nope not %d\n", answer);
        time(&current_time);
    }
    printf("Too late\n");
    return 0;
}
The only problem is that scanf will block, so a reply will have to be given before the loop can stop. If this isn't what you are looking for, you should look into non-blocking input or threads, which is OS-dependent; a POSIX sketch follows.
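On POSIX systems (this sketch will not work on Windows), one way is to wait on stdin with select() and a timeout before reading:

#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void) {
    fd_set readfds;
    struct timeval timeout = { .tv_sec = 5, .tv_usec = 0 };  /* 5 s to answer */
    int answer;

    printf("Your answer? ");
    fflush(stdout);

    FD_ZERO(&readfds);
    FD_SET(STDIN_FILENO, &readfds);

    /* select() returns 0 on timeout, > 0 when stdin has data to read */
    if (select(STDIN_FILENO + 1, &readfds, NULL, NULL, &timeout) > 0
            && scanf("%d", &answer) == 1) {
        printf("You answered %d in time\n", answer);
    } else {
        printf("Too late\n");
    }
    return 0;
}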

Why does the run time of this simple program double if it is run quickly in succession?

I have been working through the introductory OpenMP examples, and on the first multithreaded example, a numerical integration to compute pi, I knew the bit about false sharing would be coming, so I implemented the following:
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include "omp.h"

#define STEPS 100000000.0
#define MAX_THREADS 4

void pi(double start, double end, double **sum);

int main(){
    double *sum[MAX_THREADS];
    omp_set_num_threads(MAX_THREADS);
    double inc;
    bool set_inc = false;
    double start = omp_get_wtime();

    #pragma omp parallel
    {
        int ID = omp_get_thread_num();
        #pragma omp critical
        if (!set_inc) {
            int num_threads = omp_get_num_threads();
            printf("Using %d threads.\n", num_threads);
            inc = 1.0 / num_threads;
            set_inc = true;
        }
        pi(ID * inc, (ID + 1) * inc, &sum[ID]);
    }

    double end = omp_get_wtime();
    double tot = 0.0;
    for (int i = 0; i < MAX_THREADS; i++) {
        tot = tot + *sum[i];
    }
    tot = tot / STEPS;
    printf("The value of pi is: %.8f. Took %f secs.\n", tot, end - start);
    return 0;
}

void pi(double start, double end, double **sum_ptr){
    double *sum = (double *) calloc(1, sizeof(double));
    for (double i = start; i < end; i = i + 1 / STEPS) {
        *sum = *sum + 4.0 / (1.0 + i * i);
    }
    *sum_ptr = sum;
}
My idea was that by using calloc, the chance of the returned pointers being contiguous, and thus pulled into the same cache line, was virtually zero. (I was initially unsure why there would be false sharing at all, since a double is 64 bits and I thought my cache lines were 8 bytes as well; I have since realized cache lines are typically 64 bytes, not 64 bits, so several adjacent doubles can share one line.)
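(I know the textbook fix is instead to pad each per-thread accumulator out to a full cache line, assuming a typical 64-byte line, something like the sketch below, but here I wanted to try the calloc approach.)

#define CACHE_LINE 64
typedef struct {
    double value;                          /* the per-thread partial sum */
    char pad[CACHE_LINE - sizeof(double)]; /* pad out to a full line     */
} padded_double;

padded_double sums[MAX_THREADS];  /* neighbours now live on separate lines */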
For fun, after compiling I ran the program in quick succession, and here's a short example of what I got (I was definitely pressing up-arrow and Enter more than once every half second):
user@user-kubuntu:~/git/openmp-practice$ ./pi_mp.exe
Using 4 threads.
The value of pi is: 3.14159273. Took 0.104703 secs.
user@user-kubuntu:~/git/openmp-practice$ ./pi_mp.exe
Using 4 threads.
The value of pi is: 3.14159273. Took 0.196900 secs.
I thought that maybe something was happening because of the way I tried to avoid the false sharing, and since I am still hazy on what exactly happens across the levels of the memory hierarchy, I chalked it up to that. So I followed the method prescribed in the tutorial, using a critical section like so:
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include "omp.h"

#define STEPS 100000000.0
#define MAX_THREADS 4

double pi(double start, double end);

int main(){
    double sum = 0.0;
    omp_set_num_threads(MAX_THREADS);
    double inc;
    bool set_inc = false;
    double start = omp_get_wtime();

    #pragma omp parallel
    {
        int ID = omp_get_thread_num();
        #pragma omp critical
        if (!set_inc) {
            int num_threads = omp_get_num_threads();
            printf("Using %d threads.\n", num_threads);
            inc = 1.0 / num_threads;
            set_inc = true;
        }
        double temp = pi(ID * inc, (ID + 1) * inc);
        #pragma omp critical
        sum += temp;
    }

    double end = omp_get_wtime();
    sum = sum / STEPS;
    printf("The value of pi is: %.8f. Took %f secs.\n", sum, end - start);
    return 0;
}

double pi(double start, double end){
    double sum = 0.0;
    for (double i = start; i < end; i = i + 1 / STEPS) {
        sum = sum + 4.0 / (1.0 + i * i);
    }
    return sum;
}
The doubling in run time is virtually identical. What's the explanation for this? Does it have anything to do with low-level memory? Can you also answer my intermediate question about cache lines?
Thanks a lot.
Edit:
The compiler is gcc 7 on Kubuntu 17.10; the options used were -fopenmp -W -o (in that order).
The system has an i5-6500 @ 3.2 GHz and 16 GB of DDR4 RAM (though I forget its clock speed).
As some have asked: the program's time does not continue to double when run more than twice in quick succession. After the initial doubling it stays at around the same time (~0.2 secs) for as many successive runs as I have tested (5+). After waiting a second or two, the run time returns to the lower value. However, when the runs are chained in a single command line such as ./pi_mp.exe;./pi_mp.exe;./pi_mp.exe;./pi_mp.exe;./pi_mp.exe;./pi_mp.exe; I get:
The value of pi is: 3.14159273. Took 0.100528 secs.
Using 4 threads.
The value of pi is: 3.14159273. Took 0.097707 secs.
Using 4 threads.
The value of pi is: 3.14159273. Took 0.098078 secs.
...
Adding gcc optimization options (-O3) had no effect on any of the results.

Finding time taken by the code

I am trying to measure the time taken by the memmove function in C using the time.h library. However, when I execute the code, I get zero. Is there any way to measure the time taken by the memmove function?
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <time.h>

int main(void){
    uint64_t start, end;
    char source[5000];
    char dest[5000];
    uint64_t j = 0;
    for (j = 0; j < 5000; j++) {
        source[j] = j;
    }
    start = clock();
    memmove(dest, source, 5000);
    end = clock();
    printf("%f", ((double)end - start));
}
As I wrote in my comment, memmoving 5000 bytes is far too fast to be measurable with clock. If you do your memmove 100000 times, it becomes measurable.
The code below prints 12 on my computer, but this is platform-dependent; the number you get on yours might be quite different.
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <string.h>
#include <time.h>

int main(void) {
    uint64_t start, end;
    char source[5000];
    char dest[5000];
    uint64_t j = 0;
    for (j = 0; j < 5000; j++) {
        source[j] = j;
    }
    start = clock();
    for (int i = 0; i < 100000; i++)
    {
        memmove(dest, source, 5000);
    }
    end = clock();
    printf("%" PRIu64 "\n", end - start); // no need to convert to double:
                                          // (end - start) is a uint64_t, so
                                          // print it with the matching
                                          // PRIu64 format
}
If you want to know the time it takes on a BeagleBone or another device with GPIO pins, you can toggle a GPIO before and after your routine. You will have to hook up an oscilloscope or something similar that can sample voltage quickly.
I don't know much about BeagleBones, but it seems the libpruio library allows fast GPIO toggling. As a generic illustration, something like the sketch below works on any Linux board that exposes the legacy sysfs GPIO interface.
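This is only a sketch: it assumes the pin (number 60 here is a placeholder) has already been exported and configured as an output, and the pulse width on the scope then spans the routine under test.

#include <stdio.h>
#include <string.h>

int main(void) {
    char source[5000] = {0}, dest[5000];

    /* keep the value file open so the toggles themselves stay cheap */
    FILE *gpio = fopen("/sys/class/gpio/gpio60/value", "w");
    if (!gpio) return 1;
    setbuf(gpio, NULL);   /* unbuffered: each fputs hits the pin immediately */

    fputs("1", gpio);                      /* rising edge: routine starts */
    memmove(dest, source, sizeof dest);    /* routine under test          */
    fputs("0", gpio);                      /* falling edge: routine ends  */

    fclose(gpio);
    return 0;
}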
Also, what is your exact goal here? Comparing speed on different hardware? As someone suggested, you could increase the number of loops so it becomes more easily measurable with time.h.

How does time.h clock() work under Windows?

I am trying to create a simple queue schedule for an embedded system in C.
The idea is that, within a round robin, some functions are called based on the time constraints declared in the Tasks[] array.
#include <time.h>
#include <stdio.h>
#include <windows.h>
#include <stdint.h>

// Constants
#define SYS_TICK_INTERVAL 1000UL
#define INTERVAL_0MS 0
#define INTERVAL_10MS (100000UL / SYS_TICK_INTERVAL)
#define INTERVAL_50MS (500000UL / SYS_TICK_INTERVAL)

// Function prototypes
void task_1(clock_t tick);
void task_2(clock_t tick);
uint8_t get_NumberOfTasks(void);

// Define the schedule structure
typedef struct
{
    double Interval;
    double LastTick;
    void (*Function)(clock_t tick);
} TaskType;

// Creating the schedule itself
TaskType Tasks[] =
{
    {INTERVAL_10MS, 0, task_1},
    {INTERVAL_50MS, 0, task_2},
};

int main(void)
{
    // Get the number of tasks to be executed
    uint8_t task_number = get_NumberOfTasks();

    // Initializing the clocks
    for (int i = 0; i < task_number; i++)
    {
        clock_t myClock1 = clock();
        Tasks[i].LastTick = myClock1;
        printf("Task %d clock has been set to %f\n", i, myClock1);
    }

    // Round robin
    while (1)
    {
        // Go through all tasks in the schedule
        for (int i = 0; i < task_number; i++)
        {
            // Check if it is time to execute it
            if ((Tasks[i].LastTick - clock()) > Tasks[i].Interval)
            {
                // Execute it
                clock_t myClock2 = clock();
                (*Tasks[i].Function)(myClock2);
                // Update the last tick
                Tasks[i].LastTick = myClock2;
            }
        }
        Sleep(SYS_TICK_INTERVAL);
    }
}

void task_1(clock_t tick)
{
    printf("%f - Hello from task 1\n", tick);
}

void task_2(clock_t tick)
{
    printf("%f - Hello from task 2\n", tick);
}

uint8_t get_NumberOfTasks(void)
{
    return sizeof(Tasks) / sizeof(*Tasks);
}
The code compiles without a single warning, but I guess I don't understand how clock() works.
Here you can see what I get when I run the program:
F:\AVR Microcontroller>timer
Task 0 clock has been set to 0.000000
Task 1 clock has been set to 0.000000
I tried changing Interval and LastTick from float to double just to make sure this was not a precision error, but it still does not work.
%f is not the right format specifier for printing myClock1, since clock_t is likely not double, and you shouldn't assume that it is. If you want to print myClock1 as a floating-point number, you have to convert it to double manually:
printf("Task %d clock has been set to %f\n", i, (double)myClock1);
Alternatively, use the macro CLOCKS_PER_SEC to turn myClock1 into a number of seconds:
printf("Task %d clock has been set to %f seconds\n", i,
(double)myClock1 / CLOCKS_PER_SEC);
Additionally, your subtraction in the scheduler loop is the wrong way around. Think about it: clock() grows larger with time, so Tasks[i].LastTick - clock() always yields a value less than or equal to zero. You want clock() - Tasks[i].LastTick instead.
The behavior of the clock function depends on the operating system: on Windows it essentially tracks wall-clock time, while on e.g. Linux it measures process CPU time.
Also, the result of clock by itself is useless; its only use is in comparing two readings (e.g. clock_end - clock_start).
Finally, the clock_t type (which clock returns) is an integer type; you only get floating-point values if you cast a difference (like the one above) to e.g. double and divide by CLOCKS_PER_SEC. Attempting to print a clock_t with the "%f" format leads to undefined behavior.
Reading a clock reference might help.
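Putting the fixes together, the check inside the scheduler loop might look like this (a sketch that keeps the question's field names):

clock_t now = clock();
/* elapsed ticks since this task last ran, compared against its interval */
if ((now - (clock_t)Tasks[i].LastTick) > Tasks[i].Interval) {
    (*Tasks[i].Function)(now);   /* execute the task          */
    Tasks[i].LastTick = now;     /* remember when it last ran */
}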

Measuring processor ticks in C

I wanted to measure the difference in execution time when executing the same code inline versus inside a function. To my surprise, however, the clock difference is sometimes 0 when I use clock()/clock_t for the start and stop timers. Does this mean that clock()/clock_t does not actually return the number of ticks the processor spent on the task?
After a bit of searching, it seemed to me that clock_gettime() would return more fine-grained results. And indeed it does, but I instead end up with an arbitrary number of nano(?)seconds. It gives a hint of the difference in execution time, but it's hardly accurate as to exactly how many ticks the difference amounts to. What would I have to do to find this out?
#include <math.h>
#include <stdio.h>
#include <time.h>

#define M_PI_DOUBLE (M_PI * 2)

void rotatetest(const float *x, const float *c, float *result) {
    float rotationfraction = *x / *c;
    *result = M_PI_DOUBLE * rotationfraction;
}

int main() {
    int i;
    long test_total = 0;
    int test_count = 1000000;
    struct timespec test_time_begin;
    struct timespec test_time_end;
    float r = 50.f;
    float c = 2 * M_PI * r;
    float x = 3.f;
    float result_inline = 0.f;
    float result_function = 0.f;

    for (i = 0; i < test_count; i++) {
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &test_time_begin);
        float rotationfraction = x / c;
        result_inline = M_PI_DOUBLE * rotationfraction;
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &test_time_end);
        /* include the seconds field, in case an interval crosses a
           second boundary and tv_nsec wraps */
        test_total += (test_time_end.tv_sec - test_time_begin.tv_sec) * 1000000000L
                    + (test_time_end.tv_nsec - test_time_begin.tv_nsec);
    }
    printf("Inline clocks %li, avg %f (result is %f)\n",
           test_total, test_total / (float)test_count, result_inline);

    test_total = 0;  /* reset the accumulator for the second measurement */
    for (i = 0; i < test_count; i++) {
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &test_time_begin);
        rotatetest(&x, &c, &result_function);
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &test_time_end);
        test_total += (test_time_end.tv_sec - test_time_begin.tv_sec) * 1000000000L
                    + (test_time_end.tv_nsec - test_time_begin.tv_nsec);
    }
    printf("Function clocks %li, avg %f (result is %f)\n",
           test_total, test_total / (float)test_count, result_function);
    return 0;
}
I am using gcc version 4.8.4 on Linux 3.13.0-37-generic (Linux Mint 16)
First of all: as already mentioned in the comments, clocking single executions one by one will probably do you no good. In the worst case, the call to get the time might take longer than the operation you are measuring.
Please clock multiple runs of the operation (including a warm-up phase so everything is paged in) and calculate the average running time, e.g. as in the sketch below.
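A minimal sketch of that approach, timing the whole batch once instead of each iteration (the volatile sink is there only to keep the compiler from optimizing the loop away):

#include <stdio.h>
#include <time.h>

#define WARMUP 1000
#define RUNS   1000000

int main(void) {
    volatile float sink = 0.f;   /* prevents the loop from being optimized out */
    float x = 3.f, c = 2.f * 3.14159265f * 50.f;

    for (int i = 0; i < WARMUP; i++)     /* warm-up phase */
        sink = (x / c) * 6.2831853f;

    struct timespec t0, t1;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t0);
    for (int i = 0; i < RUNS; i++)       /* one timed batch, not RUNS timings */
        sink = (x / c) * 6.2831853f;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t1);

    long ns = (t1.tv_sec - t0.tv_sec) * 1000000000L
            + (t1.tv_nsec - t0.tv_nsec);
    printf("avg %.2f ns per iteration (sink = %f)\n", (double)ns / RUNS, sink);
    return 0;
}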
clock() isn't guaranteed to be monotonic, and it isn't the number of processor ticks (however you define those) the program has run. The best description of clock()'s result is probably "a best-effort estimate of the time any one of the CPUs has spent on calculation for the current process". For benchmarking purposes, clock() is thus mostly useless.
As per specification:
The clock() function returns the implementation's best approximation to the processor time used by the process since the beginning of an implementation-dependent time related only to the process invocation.
And additionally
To determine the time in seconds, the value returned by clock() should be divided by the value of the macro CLOCKS_PER_SEC.
So, if you call clock() more often than its resolution allows, you are out of luck.
For profiling and benchmarking, you should, if possible, use one of the performance clocks available on modern hardware. The prime candidates are probably
The HPET
The TSC
Whether either (or both) is available depends on the hardware and is also operating-system-specific.
Edit: The question now references CLOCK_PROCESS_CPUTIME_ID, which is Linux's way of exposing the TSC.
After googling a little, I can see that the clock() function can be used as a standard mechanism to find the time taken for execution, but be aware that the measured time will vary depending on the load on your processor.
You can use the code below for the calculation:
clock_t begin, end;
double time_spent;
begin = clock();
/* here, do your time-consuming job */
end = clock();
time_spent = (double)(end - begin) / CLOCKS_PER_SEC;
