I know there are a bunch of questions on here about timers and how to configure and use them. I have looked through all I could find, but I can't figure out what I am doing wrong.
I need a class that provides essentially the same functionality as the Arduino micros() function, but I want to stay with straight AVR. Here is what I have so far. I am using Timer4 (a 16-bit timer on the Mega2560) so I don't step on any toes, with a prescaler of 8, which should give me 0.5 µs per timer count. Wouldn't this equate to TCNT4 = 2 = 1 µs?
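(Checking my arithmetic: t_tick = prescaler / F_CPU = 8 / 16,000,000 = 0.5 µs per count, so two counts of TCNT4 should indeed span 1 µs.)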
To verify that my timing functions are correct, I created a simple program that contains only the timer and a couple of delays from <util/delay.h>. The resulting output is not what I expected, so I am not sure whether _delay_us is actually delaying the right amount of time or whether my timer/math is off.
I realize that there are no checks for overflows or anything, I am focusing on simply getting the timer to output the correct values first.
SystemTime:
class SystemTime {
    unsigned long ovfCount = 1;
public:
    SystemTime();
    void Overflow();
    uint32_t Micro();
    void Reset();
};

/**
 * Constructor
 */
SystemTime::SystemTime() {
    TCCR4B |= (1 << CS41);  //Set Prescale to 8
    TIMSK4 |= (1 << TOIE4); //Enable the Overflow Interrupt
}

/**
 * Increase the Overflow count
 */
void SystemTime::Overflow() {
    this->ovfCount++;
}

/**
 * Returns the number of Microseconds since start
 */
uint32_t SystemTime::Micro() {
    uint32_t t;
    t = (TCNT4 * 2) * this->ovfCount;
    return t;
}

/**
 * Resets the SystemTimer
 */
void SystemTime::Reset() {
    this->ovfCount = 0;
    TCNT4 = 0;
}

SystemTime sysTime;

ISR(TIMER4_OVF_vect) {
    sysTime.Overflow();
}
Main:
#include "inttypes.h"
#include "USARTSerial.h"
#include "SystemTime.h"
#include "util/delay.h"
#define debugSize 50
void setup(){
char debug1[debugSize];
char debug2[debugSize];
char debug3[debugSize];
char debug4[debugSize];
uSerial.Baudrate(57600);
uSerial.Write("Ready ...");
uint32_t test;
sysTime.Reset();
test = sysTime.Micro();
sprintf(debug1, "Time 1: %lu", test);
_delay_us(200);
test = sysTime.Micro();
sprintf(debug2, "Time 2: %lu", test);
_delay_us(200);
test = sysTime.Micro();
sprintf(debug3, "Time 3: %lu", test);
_delay_us(200);
test = sysTime.Micro();
sprintf(debug4, "Time 4: %lu", test);
uSerial.Write(debug1);
uSerial.Write(debug2);
uSerial.Write(debug3);
uSerial.Write(debug4);
}
void loop(){
}
Output:
Ready ...
Time 1: 0
Time 2: 144
Time 3: 306
Time 4: 464
Update:
Thanks for helping me out. I wanted to post the working code in case someone else is having problems or needs to know how this can be done. One thing to keep in mind is the time it takes to do the Micro() calculation: on my Mega2560 it takes around 36 µs, so either the timer prescale needs to be adjusted or the math reworked to eliminate the floating-point multiplications. Nonetheless, this class works as is, but it is by no means optimized.
#define F_CPU 16000000L
#include <stdio.h>
#include <avr/interrupt.h>
class SystemTime {
private:
    unsigned long ovfCount = 0;
public:
    SystemTime();
    void Overflow();
    uint32_t Micro();
    void Reset();
};

/*
 * Constructor, Initializes the System Timer for keeping track
 * of the time since start.
 */
SystemTime::SystemTime() {
    TCCR4B |= (1 << CS41);  //Set Prescale to 8
    TIMSK4 |= (1 << TOIE4); //Enable the Overflow Interrupt
    //Enable Interrupts
    sei();
}

/**
 * Increase the Overflow count
 */
void SystemTime::Overflow() {
    this->ovfCount++;
}

/**
 * Resets the SystemTimer
 */
void SystemTime::Reset() {
    this->ovfCount = 0;
    TCNT4 = 0;
}

/**
 * Returns the number of Microseconds since start
 */
uint32_t SystemTime::Micro() {
    uint32_t t;
    //65536 timer counts elapse per overflow, each count is 0.5 us
    t = (TCNT4 * 0.5) + ((this->ovfCount * 65536.0) * 0.5);
    return t;
}

SystemTime sysTime;

ISR(TIMER4_OVF_vect) {
    sysTime.Overflow();
}
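If anyone wants to shave that 36 µs down, here is a sketch of the same math done with shifts instead of floating point (each overflow is 65536 counts × 0.5 µs = 32768 µs, so it becomes a shift by 15; I have not profiled this on hardware):

uint32_t SystemTime::Micro() {
    // ovfCount * 32768 us per overflow == ovfCount << 15
    // TCNT4 * 0.5 us per count        == TCNT4 >> 1
    return ((uint32_t)this->ovfCount << 15) + (TCNT4 >> 1);
}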
Assuming that your MCU really runs at 16 MHz, I would change the following things in your code.
If one timer increment is 0.5 μs, then you should divide TCNT4's value by 2, not multiply, because the elapsed time is TCNT4 times 0.5 μs.
The this->ovfCount usage is wrong as well. The elapsed microseconds since startup are: TCNT4 * 0.5 + this->ovfCount * 65535 * 0.5. That is, the current count (TCNT4) multiplied by 0.5 μs, plus the overflow count (this->ovfCount) multiplied by the maximum count (2^16 − 1 = 65535) multiplied by 0.5 μs.
uint32_t SystemTime::Micro() {
    uint32_t t;
    t = (TCNT4 * 0.5) + this->ovfCount * 65535 * 0.5;
    return t;
}
Finally, I cannot see you enabling global interrupts anywhere with sei().
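Put together, a minimal sketch of those fixes, assuming avr-gcc (<util/atomic.h> provides ATOMIC_BLOCK). It snapshots TCNT4 and ovfCount atomically and checks the pending-overflow flag, the same trick the Arduino core's micros() uses; it counts 65536 ticks per overflow, and the result wraps after roughly 35 minutes:

#include <avr/interrupt.h>
#include <util/atomic.h>

uint32_t SystemTime::Micro() {
    uint16_t tcnt;
    uint32_t ovf;
    ATOMIC_BLOCK(ATOMIC_RESTORESTATE) {
        tcnt = TCNT4;
        ovf  = this->ovfCount;
        // an overflow may have fired after interrupts were blocked;
        // its ISR has not run yet, so account for it by hand
        if ((TIFR4 & (1 << TOV4)) && tcnt < 0xFFFF)
            ovf++;
    }
    // 0.5 us per count: total counts divided by 2
    return (ovf * 65536UL + tcnt) / 2;
}

And somewhere before the first measurement, sei() must run (in the updated code above the constructor does it), otherwise TIMER4_OVF_vect never fires.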
Related
My problem is related to timers on the ATmega32 in general. I am using Timer0 as a delay timer, with an interrupt every unit of time specified by the caller; for example, if the application user asks for an interrupt every 1 second, I initialize Timer0 and, based on some equations, delay for one second and then call the application user's ISR.
My problem is in the equations themselves: they require floating-point variables, while the ATmega32 has no floating-point unit, so the compiler increases the code size.
By the way, I am using the timer in Normal mode; this is described in the Timer0 section of the datasheet (page 69).
Here are the equations I use:
T(tick) = prescaler / freq(CPU), where T(tick) is the time taken by one timer tick and freq(CPU) is the frequency of the MCU.
T(max_delay) = 2^8 * T(tick), where T(max_delay) is the maximum delay the timer can provide until the first overflow, and 2^8 = 256 is the maximum number of ticks Timer0 can make before overflowing.
Timer(init_value) = (T(max_delay) - T(delay)) / T(tick), where Timer(init_value) is the initial value loaded into the TCNT0 register at the start and after every overflow, and T(delay) is the delay the user requires.
N(of_overflows) = ceil(T(delay) / T(max_delay)), where N(of_overflows) is the number of overflows needed to achieve the application user's delay if it is greater than T(max_delay).
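(Worked example, assuming freq(CPU) = 1 MHz, matching the 1000000.0 in the code below, with prescaler = 1024 and T(delay) = 1000 ms: T(tick) = 1024 / 1,000,000 = 1.024 ms; T(max_delay) = 256 · 1.024 ms ≈ 262.1 ms; N(of_overflows) = ceil(1000 / 262.1) = 4; and the initial value from the code's multi-overflow branch is 256 − (1000 / 1.024) / 4 ≈ 256 − 244 = 12.)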
And this is the code I wrote, just for reference:
/*
* #fn -: -calculatInitValueForTimer0
*
* #params[0] -: -a number in milliseconds to delay for
*
* #brief -: -calculate the initial value needed for timer0 to be inserted into the timer
*
* #return -: -the initial value to be in the timer0
*/
static uint8_t calculatInitValueForTimer0(uint32_t args_u32TimeInMilliSeconds, uint16_t args_u8Prescalar)
{
    /*local variable for time in seconds*/
    double volatile local_f64TimerInSeconds = args_u32TimeInMilliSeconds / 1000.0;
    /*local variable that will contain the value for init timer*/
    uint8_t volatile local_u8TimerInit = 0;
    /*local variable that will contain the time for one tick*/
    double volatile local_f64Ttick;
    /*local variable that will contain the time for max delay*/
    double volatile local_f64Tmaxdelay;

    /*get the tick time*/
    local_f64Ttick = args_u8Prescalar / 1000000.0;
    /*get the max delay*/
    local_f64Tmaxdelay = 256 * local_f64Ttick;

    /*see which init time to be used*/
    if (local_f64TimerInSeconds == (uint32_t) local_f64Tmaxdelay)
    {
        /*only one overflow needed*/
        global_ValueToReachCount = 1;
        /*begin counting from the start*/
        local_u8TimerInit = 0;
    }
    else if (local_f64TimerInSeconds < (uint32_t) local_f64Tmaxdelay)
    {
        /*only one overflow needed*/
        global_ValueToReachCount = 1;
        /*preload so that exactly one overflow gives the delay*/
        local_u8TimerInit = (uint8_t)((local_f64Tmaxdelay - local_f64TimerInSeconds) / local_f64Ttick);
    }
    else if (local_f64TimerInSeconds > (uint32_t) local_f64Tmaxdelay)
    {
        /*many overflows needed*/
        global_ValueToReachCount = ((local_f64TimerInSeconds / local_f64Tmaxdelay) == ((uint32_t)local_f64TimerInSeconds / (uint32_t)local_f64Tmaxdelay))
                                       ? (uint32_t)(local_f64TimerInSeconds / local_f64Tmaxdelay)
                                       : (uint32_t)(local_f64TimerInSeconds / local_f64Tmaxdelay) + 1;
        /*spread the ticks evenly over the overflows*/
        local_u8TimerInit = 256 - (uint8_t)((local_f64TimerInSeconds / local_f64Ttick) / global_ValueToReachCount);
    }

    /*return the calculated value*/
    return local_u8TimerInit;
}
Currently I am not handling the case where only one overflow is required, but that isn't the issue here.
My problem is that calculating the timer initial value and the number of overflows needed for a long delay is all done with double or float variables, and since the ATmega32 has no FPU the compiler generates a lot of extra code. Is there any other way to calculate the timer initial value and the number of overflows without double or float variables?
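One way is to work entirely in timer ticks with integer math. Here is a sketch of the idea; the function name and the compile-time F_CPU constant are mine, not from the original code, and it assumes the same 1 MHz clock implied by the 1000000.0 above:

#include <stdint.h>

#define F_CPU 1000000UL /* assumed MCU clock, matching the /1000000.0 above */

extern volatile uint32_t global_ValueToReachCount;

static uint8_t calcTimer0InitInteger(uint32_t delay_ms, uint16_t prescaler)
{
    /* total timer ticks for the delay: ms * ticks-per-ms; 32-bit math is
       exact here for delays up to about 71 minutes at 1 MHz */
    uint32_t ticks = (delay_ms * (F_CPU / 1000UL)) / prescaler;

    /* overflows needed, rounded up: ceil(ticks / 256) without floats */
    global_ValueToReachCount = (ticks + 255UL) / 256UL;

    /* spread the ticks evenly over the overflows and preload TCNT0 so
       each overflow period counts exactly ticks/N steps */
    return (uint8_t)(256UL - ticks / global_ValueToReachCount);
}

For 1000 ms with a 1024 prescaler this gives ticks = 976, four overflows, and an initial value of 12, matching the floating-point version.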
Coming from this question, I wonder how to calculate the maximum time an ATmega328 timer can run before triggering an interrupt. I want it to trigger every hour or so in my project, but given the size limits of C integer types and the OCR1A register, getting an hour out of it seems far-fetched.
Is it possible to modify the code from my last question to get a delay of about an hour?
It should be, depending on the frequency of your microcontroller and the prescaler you're going to use.
The ATmega328P runs at up to 20 MHz. At 20 MHz, 20,000,000 / 1024 = 19531, which is the number of prescaled timer ticks in 1 second when using the /1024 prescaler.
You can find this in the datasheet section on the 16-bit timer:
volatile uint16_t counter = 0; // must count to 3600, which does not fit in a uint8_t

void set_up_timer() {
    TCCR1B = (1 << WGM12);               // CTC mode (from the datasheet)
    OCR1A = 19531;                       // number of /1024 ticks in ~1 second
    TIMSK1 = (1 << OCIE1A);              // enable compare match A interrupt
    TCCR1B |= (1 << CS12) | (1 << CS10); // /1024 prescaler, starts the timer
}
You can set a global variable and increment it in the ISR until the desired value is reached, something along the lines of:
ISR(TIMER1_COMPA_vect) { // note: the vector name is TIMER1_COMPA_vect
    counter++;
    if (counter >= 3600) {
        counter = 0;
        // do whatever needs to be done
    }
}
The comment by Jabberwocky translates to this code (based on the other question to which you have posted the link):
... includes

/* In milliseconds */
const unsigned int ISR_QUANTUM = 1000; // once a second

/* How much time should pass (1 hour = 3600 seconds = 3600 * 1000 ms).
   Note the UL suffix: a plain 1000 * 3600 is evaluated in 16-bit int
   arithmetic on AVR and would overflow. */
const unsigned long LONG_TIME_INTERVAL = 1000UL * 3600;

volatile unsigned long time_counter;

void once_an_hour() {
    ... do what needs to be done
}

int main(void) {
    ... setup your timer interrupt (high-precision, "short-range")

    // reset your 'seconds' time counter
    time_counter = 0;

    while (1)
    {
        // busy-wait for the time counter passing the interval
        if (time_counter > LONG_TIME_INTERVAL) {
            // reset global timer
            time_counter = 0;
            once_an_hour();
        }
        // do other things
        ...
    }
}

ISR (TIMER1_COMPA_vect)
{
    // action to be done every X ms - just increment the time_counter
    time_counter += ISR_QUANTUM;
}
This way you just increment a "global" time counter in a "local" interrupt handler.
I am using an ADM00308 development board from Microchip. The board has a PIC16F883 processor; the code example can be downloaded from their website. I am using a Nanotec stepper motor ST4118M0706 with a 1.8° step angle. I've calculated the max speed of the stepper motor:
from this website
Maximum step rate = 24 V / (2 · 0.032 H · 0.5 A) = 750 steps/s; with 200 steps per revolution (1.8° per step) that is 3.75 rev/s (3.75 · 60 = 225 rpm).
Minimum time per step = (2 · 0.032 H · 0.5 A) / 24 V = 0.00133 seconds.
So in theory, the stepper motor should be able to handle 225rpm without oscillating. Now the software.
The code example provides a variable speed up to around 45rpm. This is too slow, since my target is 130rpm. Here is the original code:
Pulse width, Max speed and scan
// Prescale: Must change both values together - PRESCALE Divisor and BIT MASK
// 1 = 0b00000000, 2 = 0b00010000, 4 = 0b00100000, 8 = 0b00110000
//#define REF_PWM_PRESCALE 8
//#define REF_PWM_PRESCALE_MASK 0b00110000
// The Rollover Count is the period of the timer.
// The timer period = 0 - Rollover Count.
//#define REF_PWM_PERIOD = ((float) ((1.0 / (((float)_XTAL_FREQ) / 4.0)) * (REF_PWM_PRESCALE * REF_RWM_ROLLOVER_COUNT)))
//#define REF_FREQ ((float) (1.0 / REF_PERIOD) )
// Set minimum speed by (65535 - MAX_SPEED_COUNT) * usec per bit
// 12 msec =
#define MIN_MOTOR_PULSEWIDTH (0.015)
#define MAX_SPEED_COUNT ((unsigned int) (65535.0 - ((float) ((MIN_MOTOR_PULSEWIDTH / (1.0 / (((float)_XTAL_FREQ) / 4.0))) / (ROTATION_PRESCALE * 2)))))
#define SPEED_INPUT_SPAN ((unsigned int) 900)
#define SPEED_INPUT_COUNTS_PER_BIT ((unsigned int) (MAX_SPEED_COUNT / SPEED_INPUT_SPAN))
#define ROTATION_ROLLOVER_COUNT (MAX_SPEED_COUNT + 100)
#define ROTATION_PRESCALE 8
#define ROTATION_PRESCALE_MASK 0b00110000
#define ROTATION_PERIOD ((float) ((1.0 / (((float)_XTAL_FREQ) / 4.0)) * (ROTATION_PRESCALE * ROTATION_ROLLOVER_COUNT)))
#define ROTATION_FREQ ((float) (1.0 / ROTATION_PERIOD) )
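For orientation, this is how those macros evaluate under an assumed 8 MHz crystal (_XTAL_FREQ = 8000000 is an assumption here; check the actual firmware). The instruction cycle is 4 / 8 MHz = 0.5 µs, so:

/* MIN_MOTOR_PULSEWIDTH / instruction cycle = 0.015 / 0.5e-6 = 30000
 * divided by (ROTATION_PRESCALE * 2) = 16   -> 1875 timer counts
 * MAX_SPEED_COUNT  = 65535 - 1875           = 63660
 * SPEED_INPUT_COUNTS_PER_BIT = 63660 / 900  = 70 (integer division)
 */

That is why lowering MIN_MOTOR_PULSEWIDTH raises the top speed: it pushes MAX_SPEED_COUNT closer to 65535, which shrinks the smallest timer rollover and therefore the shortest step period.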
SpeedUpdate
FaultTypeEnum SpeedUpdate(void)
{
    FaultTypeEnum Fault;
    unsigned int Speed;

    Fault = NO_FAULT;
    if (SpeedInput < 65)
    {
        /* open/shorted(GND) speed input */
        Speed = 0;
        Fault = SPEED_INPUT_LOW;
    }
    else if (SpeedInput < 100)
    {
        Speed = 0;
        Fault = NO_FAULT;
    }
    else if (SpeedInput > 950)
    {
        Speed = 0;
        Fault = SPEED_INPUT_HIGH;
    }
    else if (SpeedInput > 900)
    {
        /* open/shorted(VDD) speed input */
        Speed = MAX_SPEED_COUNT;
        Fault = NO_FAULT;
    }
    else
    {
        Speed = (SpeedInput - 100) * SPEED_INPUT_COUNTS_PER_BIT;
        Fault = NO_FAULT;
    }
    RotationTimerRolloverCount = MAX_SPEED_COUNT - Speed;

    /* setup the next timer reload value */
    T1CON = 0b00000000; /* Temporarily pause the PWM timer */
    /* use variables to reload timer faster in interrupt routine */
    RotationTimerReloadHi = (unsigned)(RotationTimerRolloverCount >> 8);
    RotationTimerReloadLo = (unsigned short)RotationTimerRolloverCount;
    T1CON = 0b00000001 | ROTATION_PRESCALE_MASK; // Re-enable PWM timer, set prescale

    return Fault;
}
Timer
/* Rotation Timer. Must be fast. */
if (PIR1bits.TMR1IF)
{
    PIR1bits.TMR1IF = 0;
    TMR1H = RotationTimerReloadHi;
    TMR1L = RotationTimerReloadLo;

    /* Calculate next stepper rotation state */
    /* HOLD switch sets min_rotation_state = max_rotation_state */
    /* SINGLE STEP switch sets min_rotation_state = max_rotation_state */
    if (System.Bits.Stop)
    {
        /* no current output */
        RotationData.All = ROTATION_STOP;
    }
    else
    {
        /* update stepper driver with last calculated data */
        PORTB = ((PORTB & 0b11000000) | RotationData.All);
I managed to change the speed to 146rpm by changing:
#define MIN_MOTOR_PULSEWIDTH (0.015)
to, for testing:
#define MIN_MOTOR_PULSEWIDTH (0.006)
The stepper rotates at 98 rpm with the potentiometer turned fully to the left. Lowering the value to 0.004 gives a speed of 146 rpm with good torque on a 24 V supply (torque suffers on a 12 V supply). Lowering the value even more makes the motor oscillate (you can hear it, but it no longer rotates), which is strange since the max rpm is supposed to be 225. However, the main problem is that I can't seem to reach 130 rpm: changing the value to 0.005, 0.0045, etc. doesn't increase the speed beyond 98 until 0.004. It also seems the potentiometer has some kind of presets: turning it jumps from 146 rpm to 98 rpm to 73 rpm, etc., rather than changing the speed fluently, if you get my point. Hence I am getting the idea the speeds are programmed as presets, which I also tried to change.
Firmware and information can be downloaded on the original site
I am using an STM32F2 controller and I am interfacing with an ST7036 LCD display via an 8-bit parallel interface.
The datasheet says there should be a 20-nanosecond delay between the address hold and setup times.
How do I generate a 20 nanosecond delay in C?
Use stopwatch_delay(4) below to accomplish approximately 24ns of delay. It uses the STM32's DWT_CYCCNT register, which is specifically designed to count actual clock ticks, located at address 0xE0001004.
To verify the delay accuracy (see main), you can call STOPWATCH_START, run stopwatch_delay(ticks), then call STOPWATCH_STOP and verify with CalcNanosecondsFromStopwatch(m_nStart, m_nStop). Adjust ticks as needed.
uint32_t m_nStart; //DEBUG Stopwatch start cycle counter value
uint32_t m_nStop; //DEBUG Stopwatch stop cycle counter value
#define DEMCR_TRCENA 0x01000000
/* Core Debug registers */
#define DEMCR (*((volatile uint32_t *)0xE000EDFC))
#define DWT_CTRL (*(volatile uint32_t *)0xe0001000)
#define CYCCNTENA (1<<0)
#define DWT_CYCCNT ((volatile uint32_t *)0xE0001004)
#define CPU_CYCLES *DWT_CYCCNT
#define CLK_SPEED 168000000 // EXAMPLE for CortexM4, EDIT as needed
#define STOPWATCH_START { m_nStart = *((volatile unsigned int *)0xE0001004);}
#define STOPWATCH_STOP { m_nStop = *((volatile unsigned int *)0xE0001004);}
static inline void stopwatch_reset(void)
{
    /* Enable DWT */
    DEMCR |= DEMCR_TRCENA;
    *DWT_CYCCNT = 0;
    /* Enable CPU cycle counter */
    DWT_CTRL |= CYCCNTENA;
}

static inline uint32_t stopwatch_getticks()
{
    return CPU_CYCLES;
}

static inline void stopwatch_delay(uint32_t ticks)
{
    uint32_t end_ticks = ticks + stopwatch_getticks();
    while (1)
    {
        if (stopwatch_getticks() >= end_ticks)
            break;
    }
}

uint32_t CalcNanosecondsFromStopwatch(uint32_t nStart, uint32_t nStop)
{
    uint32_t nDiffTicks;
    uint32_t nSystemCoreTicksPerMicrosec;

    // Convert (clk speed per sec) to (clk speed per microsec)
    nSystemCoreTicksPerMicrosec = CLK_SPEED / 1000000;

    // Elapsed ticks
    nDiffTicks = nStop - nStart;

    // Elapsed nanosec = 1000 * (ticks-elapsed / clock-ticks in a microsec)
    return 1000 * nDiffTicks / nSystemCoreTicksPerMicrosec;
}

void main(void)
{
    int timeDiff = 0;
    stopwatch_reset();

    // =============================================
    // Example: use a delay, and measure how long it took
    STOPWATCH_START;
    stopwatch_delay(168000); // 168k ticks is 1ms for 168MHz core
    STOPWATCH_STOP;
    timeDiff = CalcNanosecondsFromStopwatch(m_nStart, m_nStop);
    printf("My delay measured to be %d nanoseconds\n", timeDiff);

    // =============================================
    // Example: measure function duration in nanosec
    STOPWATCH_START;
    // run_my_function() => do something here
    STOPWATCH_STOP;
    timeDiff = CalcNanosecondsFromStopwatch(m_nStart, m_nStop);
    printf("My function took %d nanoseconds\n", timeDiff);
}
The first specification I found for the STM32F2 assumes a clock frequency of 120 MHz, which is about 8 ns per clock cycle. You would need about three single-cycle instructions between successive write or read/write operations. In C, a++; will probably do (if a is located on the stack).
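If you would rather not rely on the optimizer keeping an a++; alive, a few NOPs do the same job. A sketch assuming a CMSIS-based project, where __NOP() is available:

#include "stm32f2xx.h" /* assumed CMSIS device header */

static inline void delay_approx_25ns(void)
{
    __NOP(); /* ~8.3 ns per cycle at 120 MHz; three NOPs give ~25 ns */
    __NOP();
    __NOP();
}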
You should look into the FSMC peripheral available in your chip. While the configuration might be complicated, especially if you're not dropping in a memory part that it was designed for, you might find that your parallel interfaced device maps pretty well to one of the memory interface modes.
These sorts of external memory controllers must have a bunch of configurable timing options to support the range of different memory chips out there so you'll be able to guarantee the timings required by your datasheet.
The nice benefit of being able to do this is that your LCD will then look like any old memory-mapped peripheral, abstracting away the lower-level interfacing details.
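A rough sketch of what that configuration could look like, assuming the standard stm32f2xx.h register definitions; the timing values here are placeholders that would have to be derived from the ST7036 datasheet and your actual HCLK:

#include "stm32f2xx.h"

void fsmc_lcd_init(void)
{
    RCC->AHB3ENR |= RCC_AHB3ENR_FSMCEN;  /* clock the FSMC */

    /* NOR/SRAM bank 1: SRAM-like, 8-bit, writes enabled */
    FSMC_Bank1->BTCR[0] = FSMC_BCR1_MBKEN | FSMC_BCR1_WREN;

    /* address setup / data phase in HCLK cycles:
       3 cycles is ~25 ns at 120 MHz, above the 20 ns requirement */
    FSMC_Bank1->BTCR[1] = (3 << 0)   /* ADDSET */
                        | (3 << 8);  /* DATAST */
}

Reads and writes to the bank's base address (0x60000000 for NOR/SRAM bank 1) then reach the LCD with those timings enforced in hardware.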
I am using MSP430F5418 with IAR EW 5.10.
I want to use Timer B in up mode.
I want to use two interrupts.
TimerB0(1 ms) and TimerB1(1 second).
My Configuration is
TBCTL = MC__UP + TBSSEL__ACLK + TBCLR;
TB0CCTL0 = CCIE;
TB0CCR0 = 32;
TB0CCTL1 = CCIE;
TB0CCR1 = 32768;
On the ISR I just toggled two pins.
But only the pin for TB0CCR0 is toggling.
My Pin Configurations are correct.
Can anyone tell me why?
I suppose your problem is the timer period.
MC__UP, Up mode: the timer counts up to TBxCL0.
So your timer register TBxR will reset to 0 whenever it reaches TBxCL0, which here is the value of TB0CCR0 (32).
So it can never reach the value of 32768, and the TB0CCR1 interrupt never fires.
You could swap TB0CCR0 and TB0CCR1, so your period becomes 1 second.
And to get your 1 ms interrupt, you then need to increment TB0CCR1 each time:
INTERRUPT ISR_1MS()
{
    TB0CCR1 = (TB0CCR1 + 32) & 0x7FFF; // advance the compare point, modulo the period
}
But normally you don't need a second compare interrupt for the one-second interval: you could simply count your 1 ms interrupt 1000 times.
INTERRUPT ISR_1MS()
{
    ms_count++;
    if (ms_count >= 1000)
    {
        ms_count = 0;
        // Do your once-per-second work
    }
}
And if you need more, different intervals, you could change to another model: keep a system clock time and check everything against that time.
volatile unsigned int absolute_time = 0;

INTERRUPT ISR_1MS()
{
    absolute_time++;
}

unsigned int systime_now(void)
{
    unsigned int result;
    di(); // a 16-bit read is not atomic here, so block interrupts around it
    result = absolute_time;
    ei();
    return result;
}

uint8_t systime_reached(unsigned int timeAt)
{
    uint8_t result;
    /* unsigned wrap-around arithmetic: true once 'timeAt' has been
       passed, within a 0x1000-tick window */
    result = (systime_now() - timeAt) < 0x1000;
    return result;
}
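A usage sketch for that model (the 500 ms task is just an illustration):

unsigned int next_blink;

void main_loop(void)
{
    next_blink = systime_now();
    for (;;)
    {
        if (systime_reached(next_blink))
        {
            next_blink += 500; /* re-arm 500 ms later */
            /* toggle an LED, etc. */
        }
        /* other work */
    }
}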