PIC Microcontroller using C

I am trying to get this code to work in MM:SS:FFFFFF format, where MM is minutes, SS is seconds and FFFFFF is microseconds, but my minutes are not working properly. Instead of getting something like 01:05:873098 I get 00:65_873098. Thanks for any tip.
#include <prototype.h>

int16 overflow_count;

#int_timer1
void timer1_isr() {
    overflow_count++;
}

void main() {
    int32 time;

    setup_timer_1(T1_INTERNAL | T1_DIV_BY_1);
    enable_interrupts(int_timer1);

    while (TRUE) {
        enable_interrupts(global);
        while (input(PUSH_BUTTON));    // Wait for press
        set_timer1(0);
        overflow_count = 0;
        while (!input(PUSH_BUTTON));   // Wait for release
        disable_interrupts(global);

        time = get_timer1();
        time = time + ((int32)overflow_count << 16);
        time -= 15;                    // Subtract overhead

        printf("Time is %02lu:%02lu.%06lu minutes.\r\n",
               time/1000000000, (time/6000000), (time/5)%1000000);
    }
}

I would suggest that you introduce some intermediate variables like "ticks", "microsecs", "secs", and "mins". Do the calculations step by step, from smallest unit to largest, remembering to subtract off each part before converting the next larger part. Make sure the units work out at each step of the conversion (e.g. don't add or subtract values that have different units). Think about how you'd do it with pencil and paper: probably not the way you've written it! When you break it down like that, it will be easier to get the logic correct.
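A minimal sketch along those lines (assuming, as the original (time/5) implies, that Timer1 ticks 5 times per microsecond; the variable names are only illustrative):

int32 ticks, microsecs, secs, mins;

ticks = get_timer1();
ticks += ((int32)overflow_count << 16);   // add in the overflowed periods
ticks -= 15;                              // subtract overhead, as in the original
microsecs = ticks / 5;                    // 5 ticks per microsecond (assumption)

secs = microsecs / 1000000;               // whole seconds
microsecs -= secs * 1000000;              // microseconds left over

mins = secs / 60;                         // whole minutes
secs -= mins * 60;                        // seconds left over

printf("Time is %02lu:%02lu.%06lu\r\n", mins, secs, microsecs);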

You aren't ever subtracting out the whole parts.
If time is your elapsed time in milliseconds, something like:
time = time % (24 * 60 * 60 * 1000);   // mod out any extra days
int hours = time / (60 * 60 * 1000);
time = time % (60 * 60 * 1000);        // or time -= hours * (60 * 60 * 1000)
int min = time / (60 * 1000);
time = time % (60 * 1000);             // or time -= min * (60 * 1000)
...

Related

Creating a Program that slowly Increases the brightness of an LED as a start-up

I want to create a program, using a for loop, that slowly increases the brightness of an LED as a "start-up" when I press a button.
I have basically no knowledge of for loops. I've tried messing around by looking at similar programs and potential solutions, but I was unable to get it working.
This is my starting code; I have to use PWMperiod to achieve it.
if (SW3 == 0) {
    for (unsigned char PWMperiod = 255; PWMperiod != 0; PWMperiod--) {
        if (TonLED4 == PWMperiod) {
            TonLED4 += 1;
        }
        __delay_us(20);
    }
}
How would I start this/do it?
For pulse width modulation, you'd want to turn the LED off for a certain amount of time, then turn the LED on for a certain amount of time; where the amounts of time depend on how bright you want the LED to appear and the total period ("on time + off time") is constant.
In other words, you want a relationship like period = on_time + off_time where period is constant.
You also want the LED to increase brightness slowly. E.g. maybe go from off to max. brightness over 10 seconds. This means you'll need to loop total_time / period times.
How bright the LED should be, and therefore how long the on_time should be, will depend on how much time has passed since the start of the 10 seconds (e.g. 0 microseconds at the start of the 10 seconds and period microseconds at the end of the 10 seconds). Once you know the on_time you can calculate off_time by rearranging that "period = on_time + off_time" formula.
In C it might end up something like:
#define TOTAL_TIME 10000000   // 10 seconds, in microseconds
#define PERIOD     1000       // 1 millisecond, in microseconds
#define LOOP_COUNT (TOTAL_TIME / PERIOD)

int on_time;
int off_time;

for (int t = 0; t < LOOP_COUNT; t++) {
    on_time = PERIOD * t / LOOP_COUNT;
    off_time = PERIOD - on_time;
    turn_LED_off();
    __delay_us(off_time);
    turn_LED_on();
    __delay_us(on_time);
}
Note: on_time = PERIOD * t / LOOP_COUNT; is a little tricky. You can think of it as on_time = PERIOD * (t / LOOP_COUNT);, where t / LOOP_COUNT is a fraction that goes from 0.000000 to 0.999999 representing the fraction of the period that the LED should be turned on; but if you wrote it like that, the compiler would truncate the result of t / LOOP_COUNT to an integer (round it towards zero), so the result would always be zero. Written as it is, C does the multiplication first, so it behaves like on_time = (PERIOD * t) / LOOP_COUNT; and truncation (or rounding) isn't a problem. Sadly, doing the multiplication first solves one problem while possibly causing another: PERIOD * t might be too big for an int and overflow (especially on small embedded systems where an int can be 16 bits). You'll have to figure out how big an int is for your compiler and for the values you use (changing TOTAL_TIME or PERIOD will change the maximum value that PERIOD * t can reach) and use something larger (e.g. a long) if an int isn't enough.
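For example (just a sketch, assuming the compiler's long is at least 32 bits), forcing the multiplication to happen in a wider type avoids that overflow:
on_time = (int)((long)PERIOD * t / LOOP_COUNT);   // multiply in 32 bits; the result still fits an int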
You should also be aware that the timing won't be exact, because it ignores time spent executing your code and ignores anything else the OS might be doing (IRQs, other programs using the CPU); so the "10 seconds" might actually be 10.5 seconds (or worse). To fix that you need something more complex than a __delay_us() function (e.g. some kind of __delay_until(absolute_time) maybe).
Also, you might find that the LED doesn't increase brightness linearly (e.g. it might slowly go from off to dull in 8 seconds, then go from dull to max. brightness in 2 seconds). If that happens, you might need a lookup table and/or more complex maths to correct it.
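If it comes to that, the usual trick is a small lookup table that compensates for the eye's non-linear response (gamma correction). A rough sketch - the 16 entries below approximate a 2.2 gamma curve and are purely illustrative, as is the step variable counting 0..15 over the ramp:

const unsigned char gamma_table[16] = {
    0, 1, 3, 7, 14, 23, 34, 48, 64, 83, 105, 129, 156, 186, 219, 255
};

unsigned char duty = gamma_table[step];   // map a linear step to a perceptually even duty cycle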

Self-correcting periodic timer using gettimeofday()

I have a loop which runs every X usecs, which consists of doing some I/O then sleeping for the remainder of the X usecs. To (roughly) calculate the sleep time, all I'm doing is taking a timestamp before and after the I/O and subtracting the difference from X. Here is the function I'm using for the timestamp:
long long getus()
{
    struct timeval time;
    gettimeofday(&time, NULL);
    return (long long) (time.tv_sec + time.tv_usec);
}
As you can imagine, this starts to drift pretty fast and the actual time between I/O bursts is usually quite a few ms longer than X.
To try and make it a little more accurate, I thought maybe if I keep a record of the previous starting timestamp, every time I start a new cycle I can calculate how long the previous cycle took (the time between this starting timestamp and the previous one). Then, I know how much longer than X it was, and I can modify my sleep for this cycle to compensate.
Here is how I'm trying to implement it:
long long start, finish, offset, previous, remaining_usecs;
long long delaytime_us = 1000000;

/* Initialise previous timestamp as 1000000us ago */
previous = getus() - delaytime_us;

while (1)
{
    /* starting timestamp */
    start = getus();

    /* here is where I would do some I/O */

    /* calculate how much to compensate */
    offset = (start - previous) - delaytime_us;
    printf("(%lld - %lld) - %lld = %lld\n",
           start, previous, delaytime_us, offset);

    previous = start;
    finish = getus();

    /* calculate to our best ability how long we spent on I/O.
     * We'll try and compensate for its inaccuracy next time around! */
    remaining_usecs = (delaytime_us - (finish - start)) - offset;
    printf("start=%lld,finish=%lld,offset=%lld,previous=%lld\nsleeping for %lld\n",
           start, finish, offset, previous, remaining_usecs);
    usleep(remaining_usecs);
}
It appears to work on the first iteration of the loop, but after that things get messed up.
Here's the output for 5 iterations of the loop:
(1412452353 - 1411452348) - 1000000 = 5
start=1412452353,finish=1412458706,offset=5,previous=1412452353
sleeping for 993642
(1412454788 - 1412452353) - 1000000 = -997565
start=1412454788,finish=1412460652,offset=-997565,previous=1412454788
sleeping for 1991701
(1412454622 - 1412454788) - 1000000 = -1000166
start=1412454622,finish=1412460562,offset=-1000166,previous=1412454622
sleeping for 1994226
(1412457040 - 1412454622) - 1000000 = -997582
start=1412457040,finish=1412465861,offset=-997582,previous=1412457040
sleeping for 1988761
(1412457623 - 1412457040) - 1000000 = -999417
start=1412457623,finish=1412463533,offset=-999417,previous=1412457623
sleeping for 1993507
The first line of output shows how the previous cycle time was calculated. The first two timestamps are basically 1000000us apart (1412452353 - 1411452348 = 1000005). However, after this the distance between starting timestamps no longer looks reasonable, and neither does the offset.
Does anyone know what I'm doing wrong here?
EDIT: I would also welcome suggestions of better ways to get an accurate timer and be able to sleep during the delay!
After some more research I've discovered two things wrong here.
Firstly, I'm calculating the timestamp wrong: getus() should scale the seconds to microseconds before adding, like this:
return (long long) time.tv_sec * 1000000 + time.tv_usec;
And secondly, I should be storing the timestamp in unsigned long long or uint64_t.
So getus() should look like this:
uint64_t getus()
{
    struct timeval time;
    gettimeofday(&time, NULL);
    return (uint64_t) time.tv_sec * 1000000 + time.tv_usec;
}
I won't actually be able to test this until tomorrow, so I will report back.
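On the "better ways" question from the edit: the usual drift-free pattern is to sleep until an absolute deadline rather than for a relative duration, so any overshoot never accumulates. A minimal sketch, assuming a POSIX system where clock_nanosleep() is available (the 1-second period and the function name are just illustrative):

#include <time.h>

void periodic_loop(void)
{
    const long period_ns = 1000000000L;        /* 1 second, in nanoseconds */
    struct timespec next;

    clock_gettime(CLOCK_MONOTONIC, &next);     /* first deadline = now */
    while (1) {
        /* ... do the I/O here ... */

        /* advance the deadline by exactly one period */
        next.tv_nsec += period_ns;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_sec += next.tv_nsec / 1000000000L;
            next.tv_nsec %= 1000000000L;
        }

        /* sleep until the absolute deadline; late wake-ups don't accumulate */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}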

converting integers to minutes

I am new to C programming, but experienced in Java. I am creating a simple console application to calculate the time between two chosen values. I am storing the chosen values in an int array like this:
static int timeVals[] = {748,800,815,830,845,914,929,942,953,1001,1010,1026,1034,1042,1048};
I am calling a method diff to calculate the time between two values like this:
int diff(int start, int slut) {
    /* slut means end in danish. not using rude words, but I am danish */
    int minutes = 0;
    int time = 0;
    double hest;

    hest = (100/60);
    printf("hest = %f", hest);

    if (timeVals[start] > timeVals[slut])
    {
        minutes = timeVals[start] - timeVals[slut];
        /* printing hest to see what the value is */
        printf("t = %f", hest);
        time = minutes * (100/60);
        printf("minut diff: %d\n", time);
    }
    else
    {
        minutes = timeVals[slut] - timeVals[start];
        time = time + (minutes * (100/60));
    }

    return time;
}
The weird thing is that when I print out the hest value I get 1.000000, which I don't think is right... I have been struggling with this for hours now and I can't find the issue... maybe I'm just bad at math :P
Hope you can help me.
The issue is
hest = (100/60)
This result will be 1 because 100 / 60 = 1.6666..., but this is integer division, so you lose the decimals, leaving hest = 1. Use
hest = (100.0 / 60.0)
Same with
time = minutes * (100/60);
Changed to
time = minutes * (100.0 / 60.0);
In this case again, you will lose your decimals because time is an int.
Some would recommend, if speed is an issue, that you perform all the calculations in integers and store all your values as ints in 1/100ths of a second (i.e. 60 minutes in 1/100ths of a second = 60 minutes * 60 seconds * 100).
EDIT: Just to clarify, that advice is usually given in a C++ context, but the same principles apply in C. On most x86-based systems this isn't as big a deal as it is on power-limited embedded systems.
The following statement is effectively just time = minutes:
time = minutes * (100/60);
(100 / 60) == 1 because 100 and 60 are integers. You should write this instead:
time = (minutes * 100) / 60;
For instance, if minutes == 123, time will be calculated as (123 * 100) / 60, which is 12300 / 60, which is 205.
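A tiny standalone demo of the difference (not from the original code, just to illustrate):

#include <stdio.h>

int main(void)
{
    printf("%d\n", 100 / 60);           /* integer division: prints 1        */
    printf("%f\n", 100.0 / 60.0);       /* floating point:   prints 1.666667 */
    printf("%d\n", (123 * 100) / 60);   /* multiply first:   prints 205      */
    return 0;
}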
As stated, you are mixing floating point and integer arithmetic. When you divide two integers, the result is an integer, but you are trying to print that result as a float. You might consider using the modulus operator (%) and computing the quotient and remainder:
int   // you might want to return float, since you are comingling int and float for time
diff (int start, int slut)
{
    /* slut means end in danish. not using rude words, but I am danish */
    int minutes = 0, seconds = 0, diff;
    int time = 0;
    double hest = (100.0) / (60.0);   // here

    printf("hest = %f", hest);

    if (timeVals[start] > timeVals[slut])
    {
        diff = timeVals[start] - timeVals[slut];
        time = diff * (100.0/60.0);
        minutes = diff / 60;
        seconds = diff % 60;   // modulus (remainder after division by 60)
    }
    else
    {
        diff = timeVals[slut] - timeVals[start];
        time = time + diff * (100.0/60.0);
        minutes = diff / 60;
        seconds = diff % 60;   // modulus (remainder after division by 60)
    }

    /* printing hest to see what the value is */
    printf("t = %f", hest);
    printf("minut:seconds %d:%02d\n", minutes, seconds);
    printf("minut diff: %d\n", time);

    return time;
}

Of subtitles & lag times (yet another C overflow doubt)

I'm trying to calculate the time offset to be added to subtitle files to correct the lag. The part shown below runs after tokenizing the hh:mm:ss,uuu timestamp (uuu being the fractional part, in milliseconds here) into the time[] array. I'm converting the time into milliseconds, then adding the actual & lag time to get the final time.
The program computes the actual & lag time properly. However, it gives the wrong final hour time. Have I hit upon some overflow condition that can't be handled by the code below?
Edit: I have realized the error. I should be dividing rather than taking remainder for hour time.
int i;
int time[4];
unsigned long totalTime, totalLagTime;
...
for (i = 0; i < 4; i++)
{
    printf("time[%d] = %d\n", i, time[i]);
}
for (i = 0; i < 4; i++)
{
    printf("lag time[%d] = %d\n", i, lagTime[i]);
}

totalTime = 1000*(3600*time[0] + 60*time[1] + time[2]) + time[3];
printf("total time is %lu in milliseconds\n", totalTime);

totalLagTime = 1000*(3600*lagTime[0] + 60*lagTime[1] + lagTime[2]) + lagTime[3];
printf("total lag time is %lu in milliseconds\n", totalLagTime);

totalTime += totalLagTime;
printf("Now, total time is %lu in milliseconds\n", totalTime);

time[0] = totalTime % 3600000;
printf("hour time is %d\n", time[0]);
Test case:
00:01:24,320
time[0] = 0
time[1] = 1
time[2] = 24
time[3] = 320
lag time[0] = 10
lag time[1] = 10
lag time[2] = 10
lag time[3] = 10
total time is 84320 in milliseconds
total lag time is 36610010 in milliseconds
Now, total time is 36694330 in milliseconds
hour time is 694330
Shouldn't that be
time[0] = totalTime / 3600000;
You have a logic error: 36694330 mod 3600000 really is 694330.
What are you trying to do, exactly?
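For completeness, the decomposition back into hh:mm:ss,mmm divides out each whole part and keeps the remainder before converting the next smaller one; something like this sketch (the temporary t is just illustrative):

unsigned long t = totalTime;      /* total, in milliseconds */

time[0] = t / 3600000;            /* whole hours            */
t = t % 3600000;
time[1] = t / 60000;              /* whole minutes          */
t = t % 60000;
time[2] = t / 1000;               /* whole seconds          */
time[3] = t % 1000;               /* leftover milliseconds  */

With the test case above that gives 10:11:34,330.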

are these msec<->timeval functions correct?

I have a bug in this program, and I keep coming back to these two functions, but they look right to me. Anything wrong here?
long visual_time_get_msec(VisTime *time_)
{
    visual_log_return_val_if_fail(time_ != NULL, 0);

    return time_->tv_sec * 1000 + time_->tv_usec / 1000;
}

int visual_time_set_from_msec(VisTime *time_, long msec)
{
    visual_log_return_val_if_fail(time_ != NULL, -VISUAL_ERROR_TIME_NULL);

    long sec = msec / 1000;
    long usec = 0;

    visual_time_set(time_, sec, usec);

    return VISUAL_OK;
}
Your first function is rounding down, so that 1.000999 seconds is rounded to 1000ms, rather than 1001ms. To fix that (make it round to nearest millisecond), you could do this:
long visual_time_get_msec(VisTime *time_)
{
    visual_log_return_val_if_fail(time_ != NULL, 0);

    return time_->tv_sec * 1000 + (time_->tv_usec + 500) / 1000;
}
Fuzz has already pointed out the truncation in your second example - the only thing I would add is that you can simplify it a little using the modulo operator:
long sec = msec / 1000;
long usec = (msec % 1000) * 1000;
(The above all assume that you're not dealing with negative timevals - if you are, it gets more complicated).
visual_time_set_from_msec doesn't look right...
If someone calls visual_time_set_from_msec(time, 999), your struct will be set to zero, rather than to 999,000us.
What you should do is:
// Calculate the number of whole seconds
long sec = msec / 1000;
// Calculate the remaining microseconds after the whole seconds are taken into account
long usec = (msec - 1000*sec) * 1000;
It really depends on your inputs, but that's my 2 cents :-)
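Putting the two suggestions together, the corrected pair might look like this (a sketch, assuming VisTime mirrors struct timeval's tv_sec/tv_usec and that visual_time_set() takes seconds and microseconds, as the original code implies):

long visual_time_get_msec(VisTime *time_)
{
    visual_log_return_val_if_fail(time_ != NULL, 0);

    /* round to the nearest millisecond instead of truncating */
    return time_->tv_sec * 1000 + (time_->tv_usec + 500) / 1000;
}

int visual_time_set_from_msec(VisTime *time_, long msec)
{
    visual_log_return_val_if_fail(time_ != NULL, -VISUAL_ERROR_TIME_NULL);

    long sec = msec / 1000;             /* whole seconds                  */
    long usec = (msec % 1000) * 1000;   /* leftover milliseconds, in usec */

    visual_time_set(time_, sec, usec);
    return VISUAL_OK;
}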
