I've got a general header where I declare it (in std.h):
static volatile unsigned int timestamp;
I've got an interrupt handler where I increment it (in main.c):
void ISR_Pit(void) {
    unsigned int status;
    /// Read the PIT status register
    status = PIT_GetStatus() & AT91C_PITC_PITS;
    if (status != 0) {
        /// 1 = The Periodic Interval timer has reached PIV since the last read of PIT_PIVR.
        /// Read the PIVR to acknowledge interrupt and get number of ticks
        /// Returns the number of occurrences of periodic intervals since the last read of PIT_PIVR.
        timestamp += (PIT_GetPIVR() >> 20);
        //printf(" --> TIMERING :: %u \n\r", timestamp);
    }
}
In another module I've got a procedure where I must use it (in meta.c):
void Wait(unsigned long delay) {
    volatile unsigned int start = timestamp;
    unsigned int elapsed;
    do {
        elapsed = timestamp;
        elapsed -= start;
        //printf(" --> TIMERING :: %u \n\r", timestamp);
    } while (elapsed < delay);
}
The first printf shows the timestamp increasing correctly, but the printf in Wait always shows 0. Why?
You declare your variable as static, which means it's local to each translation unit that includes the header. The timestamp in main.c is a different variable than the one in meta.c.
You can fix that by declaring timestamp in main.c like so:
volatile unsigned int timestamp = 0;
and in meta.c like so:
extern volatile unsigned int timestamp;
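For completeness, a minimal sketch of the usual pattern, assuming the shared declaration lives in std.h as in the question: the header declares the variable extern, and exactly one .c file defines it.

/* std.h */
#ifndef STD_H
#define STD_H
extern volatile unsigned int timestamp; /* declaration only, no storage */
#endif

/* main.c */
#include "std.h"
volatile unsigned int timestamp = 0;    /* the single definition */

/* meta.c */
#include "std.h"                        /* Wait() now reads the same variable */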
I am using C to program a microcontroller (NHS3152) to calculate resistance using this formula:
RES = (V1-V2)/I
I want to write a program that:
Updates the values of 3 floats and 1 integer (V1, V2, I, RES)
Updates uint8_t text[] = "Char to store values of V_1, V_2, I, Res"; with a string containing the values of these 3 floats and 1 integer
Commits text to memory using the function: "Example1_Creating_an_NDEF..." (provided below)
The issue I am having is with updating text with the values of the 3 floats and 1 integer. I am able to do it with sprintf; however, I cannot use this function with my microcontroller (I don't know why, but it stops me from flashing it to memory).
The code below is an example of what I want to achieve from point 2 above; in the real code the float and integer values are updated by reading values from sensors on the microcontroller:
#include <stdio.h>
#include <stdint.h>

static volatile float V_1 = 5.0; // store value of voltage 1
static volatile float V_2 = 2.0; // store value of voltage 2
static volatile int I = 5;       // store current value
static volatile float RES;       // calculated resistance
int i = 1;
uint8_t text[96] = "Char to store values of V_1, V_2, I, Res"; // sized so the formatted string below fits

int main(void)
{
    printf("%s\n", (char *)text);
    // updates values for ADC_1, ADC_2, I, Res
    while (i <= 2) { // real while loop is infinite
        V_1++; // usually the value is updated from a microcontroller sensor
        V_2++;
        I++;
        RES = (V_1 - V_2) / I; // calculating resistance
        sprintf((char *)text, "Updated Text V1: %6.2f V2: %6.2f I: %8d resistance: %e",
                V_1, V_2, I, RES); // what I want to do, but without sprintf
        printf("%s\n", (char *)text);
        i++;
    }
    printf("END");
}
OUT:
Char to store values of V_1, V_2, I, Res
Updated Text V1:   6.00 V2:   3.00 I:        6 resistance: 5.000000e-01
Updated Text V1:   7.00 V2:   4.00 I:        7 resistance: 4.285714e-01
END
Here is the code, including the function Example1_Creating_an_NDEF.. from point 3. This working code manages to commit the text to memory (it works). All I need is to be able to update the text without sprintf, as I believe these functions aren't allowed when I'm not in debug mode.
#include "board.h"
// From ndeft2t
#include "ndeft2t/ndeft2t.h"
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
uint8_t instance[NDEFT2T_INSTANCE_SIZE] __attribute__((aligned (4)));
uint8_t buffer[NFC_SHARED_MEM_BYTE_SIZE] __attribute__((aligned (4)));
uint8_t locale[] = "en";
uint8_t text[] = "Char to store values of V_1, V_2, I, Res";
// I want to add these to the text message
static volatile float V_1 = 5.0;
static volatile float V_2 = 2.0;
static volatile int I = 5;
static volatile float Res = 0;
static void Example1_Creating_an_NDEF_Message_with_a_single_record_of_type_TEXT(void)
{
    Chip_NFC_Init(NSS_NFC); /* Is normally already called during board initialization. */
    NDEFT2T_Init();
    NDEFT2T_CreateMessage(instance, buffer, NFC_SHARED_MEM_BYTE_SIZE, true);
    NDEFT2T_CREATE_RECORD_INFO_T recordInfo = {.shortRecord = true, .pString = locale};
    if (NDEFT2T_CreateTextRecord(instance, &recordInfo)) {
        /* The payload length to pass excludes the NUL terminator. */
        if (NDEFT2T_WriteRecordPayload(instance, text, sizeof(text) - 1)) {
            NDEFT2T_CommitRecord(instance);
        }
    }
    NDEFT2T_CommitMessage(instance);
    /* The return value of the commit function is ignored here. */
}
int main(void)
{
    Board_Init();
    //NDEFT2T_Init();
    /* Optional feature: send the ARM clock to PIO0_1 */
    Chip_IOCON_SetPinConfig(NSS_IOCON, IOCON_PIO0_1, IOCON_FUNC_1);
    Chip_Clock_Clkout_SetClockSource(CLOCK_CLKOUTSOURCE_SYSTEM);
    /* Blink & attach a message to memory */
    while (1) {
        // updates values for V_1, V_2, I, Res
        V_1++;
        V_2++;
        I++;
        Res = (V_1 - V_2) / I; // calculating resistance, per RES = (V1-V2)/I
        // Update the text message to contain "(V_1, V_2, I, Res)"
        LED_Toggle(LED_RED);
        Example1_Creating_an_NDEF_Message_with_a_single_record_of_type_TEXT();
        Chip_Clock_System_BusyWait_ms(500);
    }
    return 0;
}
EDIT: responding to KamilCuk's comment: "Do you really need float? Does your MCU have floating point support? Consider using integers only."
The functions I use to get V1, V2 & I all return an integer (e.g. V1 = 2931; it's a 12-bit converter, so between 0 and 4096). However, to convert the integer value to the real value I need to use the following conversion:
V1_real= (V1* 1.2) / 2825 + 0.09; // CONVERSION added line
Without the conversion I cannot calculate Res.
An acceptable compromise is to commit to memory the values V1, V2, I without conversion to real values. I can then calculate RES at a later moment, once I retrieve the data from the MCU (I am using my phone as an NFC reader; the chip is an NFC tag).
So the question is:
How do I convert the message in text to a message containing the integers (V_1, V_2, I)?
Something like:
uint8_t text[] = "Char to store values of V_1, V_2, I";
text = "Message with V_1: V_1value, V_2: V_2value, I: IValue
Below is the code I use to extract the value of V_1 & V_2:
void adc(void)
{
    Chip_IOCON_SetPinConfig(NSS_IOCON, IOCON_ANA0_1, IOCON_FUNC_1);
    Chip_ADCDAC_SetMuxADC(NSS_ADCDAC0, ADCDAC_IO_ANA0_1);
    Chip_ADCDAC_SetInputRangeADC(NSS_ADCDAC0, ADCDAC_INPUTRANGE_WIDE);
    Chip_ADCDAC_SetModeADC(NSS_ADCDAC0, ADCDAC_SINGLE_SHOT);
    Chip_ADCDAC_StartADC(NSS_ADCDAC0);
    // Getting the data
    while (!(Chip_ADCDAC_ReadStatus(NSS_ADCDAC0) & ADCDAC_STATUS_ADC_DONE)) {
        ; /* Wait until measurement completes. For single-shot mode only! */
    }
    // Data V1 stored here
    adcInput_1 = Chip_ADCDAC_GetValueADC(NSS_ADCDAC0);
    // Usually I use this line to then convert it to the real value of V1
    //adcInput_1 = (adcInput_1 * 1.2) / 2825 + 0.09; // CONVERSION added line
}
Maybe someone has a better solution that allows me to actually arrive at having RES calculated as well, knowing that:
RES = (V1_real - V2_real) / I_real
V1_real = (V1 * 1.2) / 2825 + 0.09, with 0 < V1 < 4000 (12-bit converter for V1)
V2_real = (V2 * 1.2) / 2825 + 0.09, with 0 < V2 < 4000
I_real = I * 1e-12, with 0 < I < 4000
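No answer is recorded for this question here, but the narrowed problem (writing raw ADC integers into text without sprintf) has a common manual solution. The following is a minimal sketch; the helper names (append_str, append_uint) and the buffer size are assumptions for illustration, not part of the NXP SDK.

#include <stdint.h>

static char msg[64]; /* assumed large enough for the final message */

/* Append a NUL-terminated string; returns the new write position. */
static int append_str(char *dst, int pos, const char *s)
{
    while (*s) { dst[pos++] = *s++; }
    dst[pos] = '\0';
    return pos;
}

/* Append an unsigned value in decimal, without sprintf. */
static int append_uint(char *dst, int pos, unsigned v)
{
    char tmp[10];
    int n = 0;
    do { tmp[n++] = (char)('0' + v % 10); v /= 10; } while (v);
    while (n--) { dst[pos++] = tmp[n]; } /* digits were collected in reverse */
    dst[pos] = '\0';
    return pos;
}

/* Build e.g. "V1: 2931, V2: 1042, I: 37" from the raw 12-bit readings. */
static int build_message(unsigned v1, unsigned v2, unsigned i_raw)
{
    int pos = 0;
    pos = append_str(msg, pos, "V1: ");
    pos = append_uint(msg, pos, v1);
    pos = append_str(msg, pos, ", V2: ");
    pos = append_uint(msg, pos, v2);
    pos = append_str(msg, pos, ", I: ");
    pos = append_uint(msg, pos, i_raw);
    return pos;
}

The value returned by build_message is the exact payload length, so it could take the place of sizeof(text) - 1 in the NDEFT2T_WriteRecordPayload call.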
Using an 8-bit AVR micro, I arrived at a simple situation which might not be that easy to solve.
Consider the following snippet:
static volatile uint8_t counter;

// fires often, and I need all the values of the counter
void isr(void) {
    counter++;
}

int main(void) {
    while (1) {
        send_uart(counter);
        counter = 0;
        delay_ms(1000); // 1 sec pause
    }
    return 0;
}
1.) It can happen that send_uart is followed by an ISR which increments the counter, and then the next statement zeroes it out.
Therefore I'll miss one count.
2.) If I use ATOMIC_BLOCK(ATOMIC_RESTORESTATE) in the main fn, I can avoid the problem described in (1), but then it can happen that I miss an ISR, because interrupts are disabled for a short time.
Is there a better way to pass information between the main fn and the ISR?
If the counter is sampled rather than reset, there won't be any timing issues. Increments happening while sending will be accounted for in the next iteration. The unsigned type of the counter variables guarantees well-defined wraparound behavior.
uint8_t cs = 0; // counter sample at time of sending
uint8_t n = 0;  // counter as last reported
while (1) {
    cs = counter;                 // sample the counter
    send_uart((uint8_t)(cs - n)); // report difference between sample and last report
    n = cs;                       // update last reported value
    delay_ms(1000);
}
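Note that (uint8_t)(cs - n) remains correct even if counter wraps past 255 between samples: unsigned subtraction is performed modulo 256, so the only requirement is that fewer than 256 increments occur per reporting period.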
To preface, I am on a Unix (Linux) system using gcc.
What I am stuck on is how to accurately implement a way to run a section of code for a certain amount of time.
Here is an example of something I have been working with:
struct timeb start, check;
int64_t duration = 10000;
int64_t elapsed = 0;
ftime(&start);
while (elapsed < duration) {
    // do a set of tasks
    ftime(&check);
    elapsed += ((check.time - start.time) * 1000) + (check.millitm - start.millitm);
}
I was thinking this would have carried on for 10000 ms, or 10 seconds, but it didn't; the loop exited almost instantly. I was basing this off other questions such as How to get the time elapsed in C in milliseconds? (Windows). But then I thought that if, upon the first call of ftime, the struct is time = 1, millitm = 999 and on the second call time = 2, millitm = 01, it would calculate the elapsed time as 1002 milliseconds. Is there something I am missing?
Also, the suggestions in the various Stack Overflow questions, ftime() and gettimeofday(), are listed as deprecated or legacy.
I believe I could convert the start time into milliseconds, and the check time into milliseconds, then subtract start from check. But milliseconds since the epoch requires 42 bits and I'm trying to keep everything in the loop as efficient as possible.
What approach could I take towards this?
The code calculates the elapsed time incorrectly.
// elapsed += ((check.time - start.time) * 1000) + (check.millitm - start.millitm);
elapsed = ((check.time - start.time) * (int64_t)1000) + (check.millitm - start.millitm);
There is some concern about check.millitm - start.millitm. On systems where millitm is an unsigned short, it will be promoted to int before the subtraction occurs, so the difference will be in the range [-999 ... 999].
struct timeb {
    time_t time;
    unsigned short millitm;
    short timezone;
    short dstflag;
};
IMO, more robust code would handle ms conversion in a separate helper function. This matches OP's "I believe I could convert the start time into milliseconds, and the check time into millseconds, then subtract start from check."
int64_t timeb_to_ms(struct timeb *t) {
    return (int64_t)t->time * 1000 + t->millitm;
}

struct timeb start;
ftime(&start);
int64_t start_ms = timeb_to_ms(&start);
int64_t duration = 10000 /* ms */;
int64_t elapsed = 0;
while (elapsed < duration) {
    // do a set of tasks
    struct timeb check;
    ftime(&check);
    elapsed = timeb_to_ms(&check) - start_ms;
}
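Since OP calls out ftime() and gettimeofday() as deprecated, the same helper idea can be expressed with clock_gettime(), which is current POSIX. A minimal sketch, assuming CLOCK_MONOTONIC is available (as on Linux); unlike CLOCK_REALTIME it is immune to wall-clock adjustments:

#include <stdint.h>
#include <time.h>

static int64_t now_ms(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts); /* current POSIX replacement for ftime() */
    return (int64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}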
If you want efficiency, let the system send you a signal when a timer expires.
Traditionally, you can set a timer with a resolution in seconds with the alarm(2) syscall.
The system then sends you a SIGALRM when the timer expires. The default disposition of that signal is to terminate.
If you handle the signal, you can longjmp(3) from the handler to another place.
I don't think it gets much more efficient than SIGALRM + longjmp (with an asynchronous timer, your code basically runs undisturbed without having to do any extra checks or calls).
Below is an example for you:
#define _GNU_SOURCE /* sysv_signal is a GNU extension */
#include <unistd.h>
#include <stdio.h>
#include <signal.h>
#include <setjmp.h>

static jmp_buf jmpbuf;

void hndlr(int sig);
void loop(void);

int main(void) {
    /* sysv_signal handlers get reset after a signal is caught and handled */
    if (SIG_ERR == sysv_signal(SIGALRM, hndlr)) {
        perror("couldn't set SIGALRM handler");
        return 1;
    }
    /* the handler will jump you back here */
    setjmp(jmpbuf);
    alarm(3 /* seconds */); /* cannot fail; returns seconds left on a previous alarm */
    loop();
    return 0;
}

void hndlr(int sig) {
    (void)sig;
    puts("Caught SIGALRM");
    puts("RESET");
    longjmp(jmpbuf, 1);
}

void loop(void) {
    int i;
    for (i = 0; ; i++) {
        // print every 100-millionth iteration
        if (0 == i % 100000000) {
            printf("%d\n", i);
        }
    }
}
If alarm(2) isn't enough, you can use timer_create(2) as EOF suggests.
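For illustration, a minimal sketch of the timer_create route, assuming a POSIX.1b system (older glibc needs -lrt); it delivers one SIGALRM after 3 seconds:

#define _POSIX_C_SOURCE 199309L
#include <signal.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    timer_t timerid;
    struct sigevent sev = { .sigev_notify = SIGEV_SIGNAL, .sigev_signo = SIGALRM };
    struct itimerspec its = { .it_value = { .tv_sec = 3 } }; /* one-shot, 3 s */

    if (timer_create(CLOCK_MONOTONIC, &sev, &timerid) == -1) {
        perror("timer_create");
        return 1;
    }
    if (timer_settime(timerid, 0, &its, NULL) == -1) {
        perror("timer_settime");
        return 1;
    }
    pause(); /* default SIGALRM disposition terminates the process */
    return 0;
}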
I am trying to create a simple queue schedule for an embedded system in C.
The idea is that within a Round Robin some functions are called based on the time constraints declared in the Tasks[] array.
#include <time.h>
#include <stdio.h>
#include <windows.h>
#include <stdint.h>
//Constants
#define SYS_TICK_INTERVAL 1000UL
#define INTERVAL_0MS 0
#define INTERVAL_10MS (100000UL / SYS_TICK_INTERVAL)
#define INTERVAL_50MS (500000UL / SYS_TICK_INTERVAL)
//Function calls
void task_1(clock_t tick);
void task_2(clock_t tick);
uint8_t get_NumberOfTasks(void);
//Define the schedule structure
typedef struct
{
    double Interval;
    double LastTick;
    void (*Function)(clock_t tick);
} TaskType;

//Creating the schedule itself
TaskType Tasks[] =
{
    {INTERVAL_10MS, 0, task_1},
    {INTERVAL_50MS, 0, task_2},
};
int main(void)
{
    //Get the number of tasks to be executed
    uint8_t task_number = get_NumberOfTasks();
    //Initializing the clocks
    for (int i = 0; i < task_number; i++)
    {
        clock_t myClock1 = clock();
        Tasks[i].LastTick = myClock1;
        printf("Task %d clock has been set to %f\n", i, myClock1);
    }
    //Round Robin
    while (1)
    {
        //Go through all tasks in the schedule
        for (int i = 0; i < task_number; i++)
        {
            //Check if it is time to execute it
            if ((Tasks[i].LastTick - clock()) > Tasks[i].Interval)
            {
                //Execute it
                clock_t myClock2 = clock();
                (*Tasks[i].Function)(myClock2);
                //Update the last tick
                Tasks[i].LastTick = myClock2;
            }
        }
        Sleep(SYS_TICK_INTERVAL);
    }
}
void task_1(clock_t tick)
{
    printf("%f - Hello from task 1\n", tick);
}

void task_2(clock_t tick)
{
    printf("%f - Hello from task 2\n", tick);
}

uint8_t get_NumberOfTasks(void)
{
    return sizeof(Tasks) / sizeof(*Tasks);
}
The code compiles without a single warning, but I guess I don't understand how clock() works.
Here you can see what I get when I run the program:
F:\AVR Microcontroller>timer
Task 0 clock has been set to 0.000000
Task 1 clock has been set to 0.000000
I tried changing Interval and LastTick from float to double just to make sure this was not a precision error, but still it does not work.
%f is not the right format specifier to print myClock1, as clock_t is likely not double; you shouldn't assume that it is. If you want to print myClock1 as a floating-point number, you have to convert it to double explicitly:
printf("Task %d clock has been set to %f\n", i, (double)myClock1);
Alternatively, use the macro CLOCKS_PER_SEC to turn myClock1 into a number of seconds:
printf("Task %d clock has been set to %f seconds\n", i,
(double)myClock1 / CLOCKS_PER_SEC);
Additionally, the subtraction in your scheduler loop is reversed. Think about it: clock() grows larger over time, so Tasks[i].LastTick - clock() always yields a negative value. I think you want clock() - Tasks[i].LastTick instead.
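With the operands swapped, the check in the loop becomes (a sketch reusing the question's own fields; clock_t minus double promotes to double):

//Check if enough ticks have elapsed since the task last ran
if ((clock() - Tasks[i].LastTick) > Tasks[i].Interval)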
The behavior of the clock function depends on the operating system. On Windows it basically runs off the wall clock, while on e.g. Linux it measures the process's CPU time.
Also, the result of clock by itself is useless; its only use is in the comparison between two clocks (e.g. clock_end - clock_start).
Finally, the clock_t type (which clock returns) is an integer type; you only get floating-point values if you cast a difference (like the one above) to e.g. double and divide by CLOCKS_PER_SEC. Attempting to print a clock_t using the "%f" format leads to undefined behavior.
Reading a clock reference might help.
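A minimal sketch of the resulting usage pattern:

#include <stdio.h>
#include <time.h>

int main(void) {
    clock_t start = clock();
    /* ... work to be measured ... */
    clock_t end = clock();
    /* Only the difference is meaningful; CLOCKS_PER_SEC converts it to seconds. */
    printf("elapsed: %f s\n", (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}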
I am using the following code to compute execution time in nanoseconds.
struct timespec tp;
if (clock_gettime(CLOCK_REALTIME, &tp) == 0)
    return ((tp.tv_sec * 1000000000) + tp.tv_nsec);
else
    return ;
Can you please tell me whether this is correct?
Let's name this function comptime_nano().
Now, I write the following code in main() to check execution times of following operations.
unsigned long int a, b, s1, s3;
a = (unsigned long int)(1) << 63;
b = (unsigned long int)(1) << 63;

btime = comptime_nano();
s1 = b >> 30;
atime = comptime_nano();
printf("Time =%ld for %lu\n", (atime - btime), s1);

btime = comptime_nano();
s3 = a >> 1;
atime = comptime_nano();
printf("Time =%ld for %lu\n", (atime - btime), s3);
To my surprise, the first operation takes roughly four times as long as the second. And if I change the relative order of the two operations, the respective timings change drastically.
Please comment...
clock_gettime is not accurate enough for that kind of measurement. If you need to measure operations like that, do the operation several thousand (or several million) times in a loop and compare the totals. The two operations above should take the same amount of time, but the second one in your example does not pay the overhead of loading a, b, s1, and s3 into the processor's cache.
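A sketch of that loop approach (the iteration count is arbitrary, and the volatile qualifiers keep the compiler from optimizing the shifts away):

#include <stdio.h>
#include <time.h>

int main(void) {
    enum { N = 100000000 };
    volatile unsigned long long b = 1ULL << 63;
    volatile unsigned long long s1;
    struct timespec t0, t1;

    clock_gettime(CLOCK_REALTIME, &t0);
    for (int i = 0; i < N; i++)
        s1 = b >> 30; /* the operation under test */
    clock_gettime(CLOCK_REALTIME, &t1);

    long long ns = (long long)(t1.tv_sec - t0.tv_sec) * 1000000000LL
                 + (t1.tv_nsec - t0.tv_nsec);
    printf("avg %.2f ns per shift (loop overhead included)\n", (double)ns / N);
    (void)s1;
    return 0;
}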
Also, what's going on here?
struct timespec tp;
if (clock_gettime(CLOCK_REALTIME, &tp) == 0)
    return ((tp.tv_sec * 1000000000) + tp.tv_nsec);
else
    return ;
The first return is illegal if the function returns void, and the second is illegal if it does not return void....
EDIT: tp.tv_sec * 1000000000 also overflows when the arithmetic is done in a 32-bit type; force 64-bit arithmetic, e.g. (int64_t)tp.tv_sec * 1000000000.
If your resolution isn't good enough and you are running on an Intel PC, try reading the CPU's time-stamp counter (RDTSC). I found this code for using it on Ubuntu:
#include <sys/time.h>
#include <time.h>

typedef unsigned long long ticks;

static __inline__ ticks getticks(void)
{
    unsigned a, d;
    /* cpuid serializes the pipeline; tell the compiler which registers it clobbers */
    asm volatile("cpuid" ::: "eax", "ebx", "ecx", "edx");
    asm volatile("rdtsc" : "=a" (a), "=d" (d));
    return (((ticks)a) | (((ticks)d) << 32));
}