I want to configure Timer1 of a PIC24F16KA102 as a counter. The clock source must be the internal 8 MHz clock. I configured the T1CON register and set the TON bit high to start the timer. Timer1 is set up to overflow every 100 us, and in a while loop I increment the variable count. I don't understand why Timer1 isn't working; I observed that it does not increment. Why?
#include <xc.h>
#include "config.h"
int count = 0;
void main(void) {
    TRISB = 0;
    T1CON = 0;          //Timer1 stopped, internal clock source, prescaler 1:1
    _TON = 1;
    TMR1 = 65135;       //overflow of Timer1 every 100 us (400 counts)
    while (1) {
        if (TMR1 == 65535) {
            count++;    // increase every 100 us
            TMR1 = 65135;
        }
    }
}
Try setting the Timer 1 period register (PR1) and using an interrupt rather than trying to catch and reload TMR1 on its final count. You're trying to catch TMR1 on EXACTLY 65535, and that will almost never work because once TMR1 hits 65535, it's just going to overflow and begin counting from 0 again.
EDIT: Of course, this assumes it counts at all. I don't know what the behavior of a timer is when you leave the period register at 0. It may simply count to its maximum of 65535 and then reset to 0, or it may never count at all, continuously reloading TMRx because it matches PRx at 0.
PRx is meant to define the period you want for a given timer, in this case 100uS. PR1 = 400. Once TMR1 = PR1, the timer will reset itself automatically and raise an interrupt to alert you that the timer has elapsed.
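(A quick sanity check on that figure, assuming the 8 MHz FRC as the clock source: on the PIC24F the instruction clock FCY is FOSC/2 = 4 MHz, so one Timer1 tick at a 1:1 prescaler is 0.25 us, and 400 ticks = 100 us, which is where PR1 = 400 comes from.)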
volatile unsigned int count = 0; //Vars that change in an ISR should be volatile
PR1 = 400; //Set Period for Timer1 (100us)
T1CON = 0x8000; //Enable Timer1
IEC0bits.T1IE = 1; //Enable Timer1 Interrupt
IPC0bits.T1IP = 0b011; //Set Timer1 interrupt priority
Pair this with an ISR function to increment count whenever the timer elapses:
void __attribute__ ((interrupt,no_auto_psv)) _T1Interrupt (void)
{
    count++;
    IFS0bits.T1IF = 0; //Make sure to clear the interrupt flag
}
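For completeness, here is a minimal sketch of a main() that goes with the snippets above. It assumes an XC16 project where config.h selects the 8 MHz FRC, and that the _T1Interrupt ISR above is present in the same file:

int main(void)
{
    TMR1 = 0;                   // start counting from zero
    PR1  = 400;                 // 400 ticks of FCY (4 MHz) -> 100 us period
    IPC0bits.T1IP = 3;          // Timer1 interrupt priority, as above
    IFS0bits.T1IF = 0;          // clear any pending flag first
    IEC0bits.T1IE = 1;          // enable the Timer1 interrupt
    T1CON = 0x8000;             // TON = 1, internal clock, 1:1 prescaler

    while (1)
    {
        // count advances in the background every 100 us
    }
    return 0;
}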
You could also try something like this without any interrupts:
void main(void){
    unsigned int count = 0;
    TMR1 = 0;
    T1CON = 0x8000; //TON = 1
    while(1){
        if (TMR1 >= 400){
            count++;
            TMR1=0;
        }
    }
}
However I would recommend using the PR register and an ISR. This is what it's meant to do.
EDIT: I would also recommend reading the PIC24F Family Reference Manual section on timers.
I have to write C code so that the RGB LED on the board "breathes". My code is blinking, not breathing. My teacher said that varying brightness is achieved by varying the duty cycle, so in that case I can't use PWM. Please help me understand this code.
#include <stdint.h>
#include <stdlib.h>
#define SYSCTL_RCGC2_R (*((volatile unsigned long *)0x400FE108))
#define SYSCTL_RCGC2_GPIOF 0x00000020 //port F clock gating control
#define GPIO_PORTF_DATA_R (*((volatile unsigned long *)0x400253FC))
#define GPIO_PORTF_DIR_R (*((volatile unsigned long *)0x40025400))
#define GPIO_PORTF_DEN_R (*((volatile unsigned long *)0x4002551C))
void delay (double sec);
int cond;
int main(void){
SYSCTL_RCGC2_R = SYSCTL_RCGC2_GPIOF;
GPIO_PORTF_DIR_R=0x0E;
GPIO_PORTF_DEN_R=0x0E;
cond=0;
while(1){
GPIO_PORTF_DATA_R = 0x02;
delay(12.5);
GPIO_PORTF_DATA_R = 0x00;
delay(0);
GPIO_PORTF_DATA_R = 0x02;
delay(2.5);
GPIO_PORTF_DATA_R = 0x00;
delay(10);
GPIO_PORTF_DATA_R = 0x02;
delay(5);
GPIO_PORTF_DATA_R = 0x00;
delay(7.5);
GPIO_PORTF_DATA_R = 0x02;
delay(7.5);
GPIO_PORTF_DATA_R = 0x00;
delay(5);
GPIO_PORTF_DATA_R = 0x02;
delay(12.5);
GPIO_PORTF_DATA_R = 0x00;
delay(0);
GPIO_PORTF_DATA_R = 0x02;
delay(7.5);
GPIO_PORTF_DATA_R = 0x00;
delay(5);
GPIO_PORTF_DATA_R = 0x02;
delay(5);
GPIO_PORTF_DATA_R = 0x00;
delay(7.5);
}
return 0;
}
void delay(double sec){
int c=1, d=1;
for(c=1;c<=sec;c++)
for(d=1;d<= 4000000;d++){}
}
There are two ways you can drive LEDs: either with constant current through some general-purpose I/O, or with a repeated duty cycle from PWM. PWM means pulse-width modulation, and it works with pulses that are too fast for the human eye to notice, anywhere from roughly 100 Hz up to 10 kHz or so.
The main advantage of PWM is that you can easily control current, which in the case of an RGB LED means the color intensity of the 3 individual LEDs. Most smaller LEDs are rated at 20 mA, so that's usually the maximum current you are aiming for, corresponding to 100% duty cycle.
The correct way to achieve this is to use PWM.
But what your current code does is "bit bang" a simulated PWM by toggling GPIO pins. That's very crude and inefficient. Normally microcontrollers have a timer and/or PWM hardware peripheral built in, where you just provide a duty cycle and the hardware takes care of everything from there. In this case you would set up 3 PWM hardware channels, which should ideally be clocked at the same time.
LEDs are diodes with different forward voltages depending on chemistry, so you very likely have different forward voltages for each of the 3 colors. You have to check the datasheet of the RGB LED and look for the luminous intensity expressed in candela, in this case very likely millicandela (mcd). Let's assume that your green LED has 300 mcd but the red and blue have 100 mcd. They are somewhat linear, or you can probably get away with assuming they are. So a crude rule in this case is to give the green LED roughly one third of the current of the others, in order to get an even mix of colors. Once you have compensated for that, you can give your 3 PWM channels an RGB code and hopefully get the corresponding color.
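As a rough illustration of that compensation (the 300 mcd / 100 mcd figures are just the assumed example values from above, and rgb_to_duty is a hypothetical helper, not part of your code):

#include <stdint.h>

/* Hypothetical helper: convert an RGB value into per-channel PWM duty cycles,
   assuming the green die is ~3x as bright per unit current as red and blue. */
void rgb_to_duty(uint8_t r, uint8_t g, uint8_t b,
                 uint8_t *duty_r, uint8_t *duty_g, uint8_t *duty_b)
{
    *duty_r = r;        /* 0..255 maps directly to 0..100% duty */
    *duty_g = g / 3;    /* compensate for the brighter green LED */
    *duty_b = b;
}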
As a side note, the delay function in your code is completely broken in many ways. The loop iterator for such busy-delays must be volatile or any half-decent compiler will simply remove the delay when optimizations are enabled. And there is no reason to use floating point either.
If you are doing it with your delay function, and your delay resolution is in seconds as the code suggests, of course it will "blink": the PWM frequency needs to be faster than human visual perception, say about 50 Hz. Then, to get a smooth variation, you might divide that period up into say 20 levels, requiring a millisecond delay.
In any case your delay() function defeats itself by taking a floating-point number of seconds but comparing it with an integer loop counter - it will only ever work in whole seconds.
So given a function delayms( unsigned millisec ) (which I discuss later) then:
#define BREATHE_UPDATE_MS 100
#define BREATHE_MINIMUM 0
#define PWM_PERIOD_MS 20
unsigned tick = 0 ;
unsigned duty_cycle = 0 ;
unsigned cycle_start_tick= 0 ;
unsigned breath_update_tick = 0 ;
int breathe_dir = 1 ;
for(;;)
{
    // If in PWM "mark"...
    if( tick - cycle_start_tick < duty_cycle )
    {
        // LED on
        GPIO_PORTF_DATA_R |= 0x02 ;
    }
    // else PWM "space"
    else
    {
        // LED off
        GPIO_PORTF_DATA_R &= ~0x02 ;
    }

    // Update tick counter
    tick++ ;

    // If PWM cycle complete, restart
    if( tick - cycle_start_tick >= PWM_PERIOD_MS )
    {
        cycle_start_tick = tick ;
    }

    // If time to update duty-cycle...
    if( tick - breath_update_tick > BREATHE_UPDATE_MS )
    {
        breath_update_tick = tick ;
        duty_cycle += breathe_dir ;
        if( duty_cycle >= PWM_PERIOD_MS )
        {
            // Breathe in
            breathe_dir = -1 ;
        }
        else if( duty_cycle == BREATHE_MINIMUM )
        {
            // Breathe out
            breathe_dir = 1 ;
        }
    }

    delayms( 1 ) ;
}
Change BREATHE_UPDATE_MS to breathe faster, change BREATHE_MINIMUM to "shallow breathe" - i.e. not dim to off.
If your delay function truly results in a delay resolution in seconds then approximately and rather crudely:
void delayms( unsigned millisec )
{
    for( int c = 0; c < millisec; c++ )
    {
        for( volatile int d = 0; d < 4000; d++ ) {}
    }
}
However that suggests to me a rather low core clock rate, so you may need to adjust that. Note the use of volatile to prevent the removal of the empty loop by the optimiser. The problem with this delay is that you will need to calibrate it to the clock speed of your target and its timing is likely to differ in any case depending on what compiler you use and what compiler options you use. It is generally a poor solution.
In practice using a "busy-loop" delay for this is ill-advised and crude and it would be better to use the Cortex-M SYSTICK:
volatile uint32_t tick = 0 ;
void SysTick_Handler(void)
{
tick++ ;
}
... removing the tick and tick++ from the original code. Then you don't need a delay in the loop above, because all the timing is pegged to the value of tick. However, should you want a delay for other reasons:
void delayms( uint32_t millisec )
{
    uint32_t start = tick ;
    while( tick - start < millisec ) ;
}
Then you would initialise the SYSTICK at start-up thus:
int main (void)
{
    SysTick_Config(SystemCoreClock / 1000) ;
    ...
}
This assumes that you are using the CMSIS, but your code suggests that you are not doing that (or even using a vendor supplied register header). You will in that case need to get down and dirty with the SYSTICK and NVIC registers if you (or your tutor) insists on that. The source for SysTick_Config() is as follows:
__STATIC_INLINE uint32_t SysTick_Config(uint32_t ticks)
{
if ((ticks - 1UL) > SysTick_LOAD_RELOAD_Msk)
{
return (1UL); /* Reload value impossible */
}
SysTick->LOAD = (uint32_t)(ticks - 1UL); /* set reload register */
NVIC_SetPriority (SysTick_IRQn, (1UL << __NVIC_PRIO_BITS) - 1UL); /* set Priority for Systick Interrupt */
SysTick->VAL = 0UL; /* Load the SysTick Counter Value */
SysTick->CTRL = SysTick_CTRL_CLKSOURCE_Msk |
SysTick_CTRL_TICKINT_Msk |
SysTick_CTRL_ENABLE_Msk; /* Enable SysTick IRQ and SysTick Timer */
return (0UL); /* Function successful */
}
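If you cannot (or may not) use CMSIS at all, the same initialisation can be done through the raw Cortex-M SysTick registers, in the same #define style as your port registers. This is only a sketch: the three addresses are the architecturally fixed SysTick register addresses, the macro and function names are made up here, and the 16 MHz figure is an assumed core clock, so substitute your real clock rate:

#define ST_CTRL_R    (*((volatile unsigned long *)0xE000E010))
#define ST_RELOAD_R  (*((volatile unsigned long *)0xE000E014))
#define ST_CURRENT_R (*((volatile unsigned long *)0xE000E018))

volatile unsigned long tick = 0 ;

void SysTick_Handler(void)                      /* must be in the vector table       */
{
    tick++ ;
}

void systick_init_1ms(void)
{
    ST_CTRL_R    = 0 ;                          /* disable SysTick during setup      */
    ST_RELOAD_R  = (16000000UL / 1000) - 1 ;    /* 1 ms reload at an assumed 16 MHz  */
    ST_CURRENT_R = 0 ;                          /* clear the current counter value   */
    ST_CTRL_R    = 0x07 ;                       /* CLKSOURCE | TICKINT | ENABLE      */
}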
I have an ATtiny85 running from its internal clock source at 8 MHz.
I am trying to implement a microsecond timer based on the hardware timer Timer0.
My logic is this:
Since the clock frequency is 8 MHz and the prescaler is off, one clock cycle takes about 0.1 us (1/8000000).
By default the timer overflows and raises an interrupt after counting through 0...255, which takes much longer than 0.1 us and is inconvenient for measuring 1 us.
To solve this, I thought of changing the initial value of the timer from 0 to 245. Then, to reach the interrupt, the timer has to go through about 10 clock cycles, which takes about 1 us.
I load this code, but the ATtiny's LED visibly toggles only about every 5 seconds, although the code specifies 1 second (1000000 us).
Code:
#include <avr/io.h>
#undef F_CPU
#define F_CPU 8000000UL
#include <avr/interrupt.h>
// Timer0 init
void timer0_Init() {
cli();
//SREG &= ~(1 << 7);
// Enable interrupt for timer0 overflow
TIMSK |= (1 << 1);
// Enable timer0, no prescaler - CS02..CS00 = 001
TCCR0B = 0;
TCCR0B |= (1 << 0);
// Preload timer0 counter
TCNT0 = 245;
sei();
//SREG |= (1 << 7);
}
// timer0 overflow interrupt
// 1us interval logic:
// MCU frequency = 8 MHz (8000000 Hz), no prescaler
// 1 tick = 1/8000000 = 100ns = 0.1us, counter up++ after 1 tick (0.1us)
// 1us timer = 10 ticks => 245..255
static unsigned long microsecondsTimer;
ISR(TIMER0_OVF_vect) {
microsecondsTimer++;
TCNT0 = 245;
}
// Millis
/*unsigned long timerMillis() {
return microsecondsTimer / 1000;
}*/
void ledBlink() {
static unsigned long blinkTimer;
static int ledState;
// 10000us = 0.01s
// 1000000us = 1s
if(microsecondsTimer - blinkTimer >= 1000000) {
if(!ledState) {
PORTB |= (1 << 3); // HIGH
} else {
PORTB &= ~(1 << 3); // LOW
}
ledState = !ledState;
blinkTimer = microsecondsTimer;
}
}
int main(void)
{
// Set LED pin to OUTPUT mode
DDRB |= (1 << 3);
timer0_Init();
while (1)
{
ledBlink();
}
}
Attiny85 Datasheet
What could be the mistake? I have not yet learned how to work with fuses, so I initially set the fuses for 8 MHz through the Arduino IDE, and after that I loaded the main code (without changing the fuses) through AVRDUDE and Atmel Studio.
And another question: should I check for the maximum value when updating my microsecond counter? I know that in Arduino the micros and millis counters are reset when they reach their maximum value. For example, if I do not clear the microsecondsTimer variable and it exceeds the range of an unsigned long, will it crash?
As pointed out by #ReAI, your ISR does not have enough time to run. Your ISR will take more than 1 microsecond to execute and return, so you always are missing interrupts.
There are other problems here too. For example, your microsecondsTimer variable is accessed in both the ISR and the foreground and is a long. long variables are 4 bytes wide and so are not updated atomically. It is possible, for example, that your foreground could start reading the value for microsecondsTimer and then in the middle of the read, the ISR could update some of the unread bytes, and then when the foreground starts again it will end up with a mangled value. Also, you should avoid messing with the count register since updating it can miss ticks unless you are very careful.
So how could you implement a working uSec timer? Firstly you'd like to call the ISR as infrequently as possible, so pick the largest prescaler that still gives you the resolution you want, and only interrupt on overflow. In the case of the ATTINY85 Timer0, you can pick the /8 prescaler, which gets you one timer tick per microsecond with an 8 MHz system clock. Now your ISR only runs once every 256 microseconds, and when it runs, it need only increment a "microseconds * 256" counter on each call.
Now, to read the current microseconds in the foreground, you can get the number of microseconds mod 256 by directly reading the count register, then read the "microseconds * 256" counter, multiply it by 256, add the count register value, and you'll have the full count. Note that you will need to take special precautions to make sure your reads are atomic. You can do this either by carefully turning off the interrupts, quickly reading the values, and then turning the interrupts back on (save all the math for when interrupts are back on), or by looping on the read values until you get two full reads in a row that are the same (which proves they did not update while you were reading them).
Note that you can check out the source code to Arduino timer ISR for some insights, but note that theirs is more complicated because it can handle a wide range of tick speeds whereas you are able to keep things simple by specifically picking a 1us period.
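A sketch of that approach (assuming the fuses really do give an 8 MHz core clock; micros_init and micros_now are made-up names for illustration, and the pending-overflow corner case described above is glossed over here):

#include <avr/io.h>
#include <avr/interrupt.h>
#include <util/atomic.h>

static volatile unsigned long timer0_overflows;   /* each overflow = 256 us */

ISR(TIMER0_OVF_vect)
{
    timer0_overflows++;                           /* runs only once every 256 us */
}

static void micros_init(void)
{
    TCCR0A = 0;                                   /* normal mode, count 0..255    */
    TCCR0B = (1 << CS01);                         /* /8 prescaler: 1 tick = 1 us  */
    TIMSK |= (1 << TOIE0);                        /* overflow interrupt enable    */
    sei();
}

static unsigned long micros_now(void)
{
    unsigned long ovf;
    unsigned char cnt;
    ATOMIC_BLOCK(ATOMIC_RESTORESTATE)             /* read both values atomically  */
    {
        ovf = timer0_overflows;
        cnt = TCNT0;
    }
    return (ovf << 8) + cnt;                      /* overflows * 256 + ticks      */
}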
Why didn't you use a prescaler?
Your code needs a really big delay interval (1 second is a huge time relative to the CPU speed), so it is not wise to interrupt the microcontroller every 1 us. It would be much better to slow the timer clock down and take an interrupt, for example, every 1 ms.
Calculation
The microcontroller clock speed is 8 MHz, so if we choose a prescaler of 64, the timer clock will be 8 MHz / 64 = 125 kHz, which means each tick (timer clock) takes 1 / 125 kHz = 8 us.
So if we want an interrupt every 1 ms, we need 125 ticks.
Modified code
Try this code; it should be clearer to understand:
#undef F_CPU
#define F_CPU 8000000UL
#include <avr/io.h>
#include <avr/interrupt.h>
volatile int millSec;
void timer0_Init();
void toggleLed();
int main(void)
{
// Set LED pin to OUTPUT mode
DDRB |= (1 << 3);
timer0_Init();
millSec = 0; // init the millisecond counter
sei(); // set Global Interrupt Enable
while (1)
{
if(millSec >= 1000){
// this block of code will run every 1 sec
millSec =0; // start count for the new sec
toggleLed(); // just toggle the led state
}
// Do other backGround jobs
}
}
//#####Helper functions###########
void timer0_Init() {
// Preload timer0 counter
TCNT0 = 130; //255-125=130
// Enable interrupt for timer0 overflow
TIMSK = (1 << 1);
// set prescaler to 64 and start the timer
TCCR0B = (1<<CS00)|(1<<CS01);
}
void toggleLed(){
PORTB ^= (1 << 3); // toggle led output
}
ISR(TIMER0_OVF_vect) {
// this interrupt will happen every 1 ms
millSec++;
// Reload timer0 counter
TCNT0 = 130;
}
Sorry, I am late, but I have some suggestions. If you run Timer0 with prescaler 1, the timer counts up every 125 ns, so it is not possible to reach 1 us without a small divergence. But if you use prescaler 8 you reach exactly 1 us. I don't actually have your hardware, but give this a try:
#ifndef F_CPU
#define F_CPU 8000000UL
#else
#error "F_CPU already defined"
#endif
#include <avr/io.h>
#include <avr/interrupt.h>
volatile unsigned int microsecondsTimer;
// Interrupt for Timer0 Compare Match A
ISR(TIMER0_COMPA_vect)
{
microsecondsTimer++;
}
// Timer0 init
void timer0_Init()
{
// Timer0:
// - Mode: CTC
// - Prescaler: /8
TCCR0A = (1<<WGM01);
TCCR0B = (1<<CS01);
OCR0A = 1;
TIMSK = (1<<OCIE0A);
sei();
}
void ledBlink() {
static unsigned int blinkTimer;
if(microsecondsTimer >= 1000)
{
microsecondsTimer = 0;
blinkTimer++;
}
if(blinkTimer >= 1000)
{
PORTB ^= (1<<PINB3);
blinkTimer = 0;
}
}
int main(void)
{
// Set LED pin to OUTPUT mode
DDRB |= (1 << PINB3);
timer0_Init();
while (1)
{
ledBlink();
}
}
If you are using the internal clock of the ATtiny, it may be divided by 8 (the CKDIV8 fuse). To disable the clock division you have to write the prescaler within 4 clock cycles (an atomic operation):
int main(void)
{
// Reset clock prescaling
CLKPR = (1<<CLKPCE);   // enable the prescaler change
CLKPR = 0x00;
// ...
Please try this solution and give feedback on whether it is working. Maybe you can verify it with an oscilloscope...
Notice that operations on an unsigned long need more than 1 clock cycle on an 8-bit microcontroller; maybe it would be better to use unsigned int or unsigned char. The main loop also should not contain lots of instructions, otherwise error correction for the microsecond timer has to be implemented.
Afternoon all
I'm looking for some assistance please with something that has been confusing me whilst trying to learn timer interrupts
You'd be best treating me as a novice. I have no specific goal here other than learning something that I think would be useful feather to add to my cap!
I have written the below sketch as a stab at a rigid framework for executing different fcns at different rates. I've done something similar using millis() and whilst that worked I found it inelegant that a) there was no obvious way to check for task overruns and backing-up the execution rate and b) the processor is bunged up by checking millis() every program cycle.*
Essentially what I think should be a 1ms timer interrupt on Timer2 (16MHz/64 prescaler /250 compare register =1000hz) is coming out around 0.5ms. I've been confused for hours on this but I'm prepared to accept it could be something fundamental/basic!
What's also throwing a spanner in the works is that using serial comms to try and debug the faster task rates seems to slow things down considerably, so I'm inferring the problem by counting up 1ms tasks to call 10,100 and 1000ms tasks and debugging at the slower level. I suppose chewing through a few characters at 9600baud probably is quite slow.**
I've pasted the code below. Any pointers highly appreciated. Be as harsh as you like :)
cheers
Al
*Whilst not what I'm confused about - any comments on my logic here also welcome
** Although I don't get how Serial.println manages to slow the program down. It's driven from interrupts so it should surely just drop the comms and perform the next ISR - effectively a task overrun. Any comments here also welcome
//NOTES
//https://www.robotshop.com/letsmakerobots/arduino-101-timers-and-interrupts
//https://sites.google.com/site/qeewiki/books/avr-guide/timers-on-the-atmega328
//http://ww1.microchip.com/downloads/en/DeviceDoc/Atmel-42735-8-bit-AVR-Microcontroller-ATmega328-328P_Datasheet.pdf
//
//INITIALISE EVERYTHING
const int ledPin = 13;
volatile int nStepTask1ms = 0; // init 0 - to be used for counting number of 1ms tasks run and hence calling 10ms task
volatile int nStepTask10ms = 0;
volatile int nStepTask100ms = 0;
volatile int nStepTask1000ms = 0;
volatile int LEDFlashState = 0; // init 0 - variable to flip when LED on
volatile int millisNew = 0; // to store up to date time
volatile int millisOld = 0; // to store prev time
volatile int millisDelta = 0; // to store deltas
//
void setup()
{
Serial.begin(9600); //set up serial comms back to PC
pinMode(ledPin,OUTPUT); //to flash the embedded LED
noInterrupts(); //turn off interrupts while we set the registers
//set up TIMER first
TCCR2A = 0; //sets TCCR1A byte to zero, bits to be later individually mod'd
TCCR2B = 0; //sets TCCR1B byte to zero, bits to be later individually mod'd
TCNT2 = 0; //ensures counter value starting from zero
Serial.println("Timer1 vars reset");
TCCR2B |= (1<<WGM12); // bitwise or between itself and WGM12. TCCR2B = TCCR2B | 00001000. Sets WGM12 high. (CTC mode so WGM12=1, WGM 13,11,10 all 0) https://stackoverflow.com/questions/141525/what-are-bitwise-shift-bit-shift-operators-and-how-do-they-work
Serial.println("Mode 4 CTC set");
Serial.println("TCCR2B=");
Serial.println(TCCR2B,BIN);
TCCR2B |= (1<<CS11); // sets CS11 high
TCCR2B |= (1<<CS10); // sets CS10 high (i.e. this and above give /64 prescaler)
Serial.println("Prescaler set to 64");
OCR2A = 250; //compare match register for timer2
Serial.println("Compare Register set");
Serial.println("OCR2A=");
Serial.println(OCR2A);
TIMSK2 |= (1 << OCIE2A); //enables interrupts - https://playground2014.wordpress.com/arduino/basics-timer-interrupts/
Serial.println("Interrupt Mask Register Set");
Serial.println("TIMSK2=");
Serial.println(TIMSK2);
interrupts(); //enable interrupts again - not sure if this is required given OCIE1A being set above?
}
//set up ISR for Timer2 - timer structure called every interrupt (1ms) that subsequently calls the 1, 10, 100 and 1000 ms task fcns
ISR(TIMER2_COMPA_vect)
{
TASK_1ms();
if (nStepTask1ms>9)
{
TASK_10ms();
if (nStepTask10ms>9)
{
TASK_100ms();
if (nStepTask100ms>9)
{
TASK_1000ms();
}
}
}
}
void TASK_1ms()
{
// 1ms tasks here
nStepTask1ms++;
}
void TASK_10ms()
{
//10ms tasks here
nStepTask1ms=0;
nStepTask10ms++;
}
void TASK_100ms()
{
//100ms tasks here
nStepTask10ms=0;
nStepTask100ms++;
//Serial.println(nStepTask100ms);
}
void TASK_1000ms()
{
//1000ms tasks here
nStepTask100ms=0;
//do something
changeLEDFlashState();
//check timing tick of this task
millisNew=millis();
millisDelta=millisNew-millisOld;
Serial.println(millisDelta);
millisOld=millisNew;
nStepTask1000ms++;
}
void changeLEDFlashState()
{
if(LEDFlashState==0)
{
digitalWrite(ledPin,HIGH);
LEDFlashState=1;
//Serial.println("LED Turned On");
}
else
{
digitalWrite(ledPin,LOW);
LEDFlashState=0;
//Serial.println("LED Turned Off");
}
}
void loop()
{
// empty
}
You have two lines here:
TCCR2B |= (1<<CS11); // sets CS11 high
TCCR2B |= (1<<CS10); // sets CS10 high (i.e. this and above give /64 prescaler)
These two lines set the lower three bits of TCCR2B to 011, which is a /32 prescaler.
Note that for Timer1 and Timer2, the prescaler settings are different than Timer0.
For Timer0, the settings above would give you a /64 prescaler.
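In other words, to actually get /64 on Timer2 (assuming an ATmega328-based board, where Timer2's clock-select table differs from Timer0/Timer1), you would set CS22 alone, i.e. CS22:0 = 100:

TCCR2B |= (1<<CS22);   // sets CS22 high -> clk/64 prescaler on Timer2

rather than setting CS21 and CS20, which selects /32 on this timer.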
I'm trying to create embedded C code to control a DC motor with the PIC32MX460F512L microcontroller. I've configured the system clock at 80 MHz and the peripheral clock at 10 MHz. I am using Timer1 for pulsing the PWM with a given duty cycle, and Timer2 for measuring the motor run time. I have a header file (includes.h) that contains system configuration information, e.g. the clock. I've created most of the functions, but some are a bit challenging, for example initializing the LEDs and the functions for forward movement, backward movement and stop. I want the DC motor to run forward for 4 sec at 70% duty cycle, then stop for 1 sec, then reverse for 3 sec at 50% duty cycle, then stop for 1 sec, then go forward again for 3 sec at 40% duty cycle, stop for 1 sec, and finally go forward for 5 sec at 20% duty cycle. Any suggestions for the forward, stop, and reverse functions?
#include <stdio.h>
#include <stdlib.h>
#include <includes.h>
void main()
{
// Setting up PIC modules such as Timers, IOs OCs,Interrupts, ...
InitializeIO();
InitializeLEDs();
InitializeTimers();
while(1) {
WaitOnBtn1();
Forward(4.0,70);
Stop(1.0);
Backward(3.0,50);
Stop(2);
Forward(3.0,40);
Stop(1.0);
Backward(2.0,20);
LEDsOFF();
}
return;
}
void InitializeIO(){
TRISAbits.TRISA6 = 1;
TRISAbits.TRISA7 = 1;
TRISGbits.TRISG12 = 0;
TRISGbits.TRISB13 = 0;
LATGbits.LATB12 = 0;
LATGbits.LATB13 = 0;
return;
}
void InitializeLEDs(){
//code to initialize LEDS
}
void InitializeTimers(){
// Initialize Timer1
T1CON = 0x0000; // Set Timer1 Control to zeros
T1CONbits.TCKPS=3; // prescale by 256
T1CONbits.ON = 1; // Turn on Timer
PR1= 0xFFFF; // Period of Timer1 to be full
TMR1 = 0; // Initialize Timer1 to zero
// Initialize Timer2
T2CON = 0;
T2CONbits.TCKPS = 7; // prescale by 256
T2CONbits.T32 = 1; // use 32 bits timer
T2CONbits.ON = 1;
PR2 = 0xFFFFFFFF; // Period is set for 32 bits
TMR2 = 0;
}
void WaitOnBtn1(){
// wait on Btn1 indefinitely
while(PORTAbits.RA6 == 0);
// Turn On LED1 indicating it is Btn1 is Pushed
LATBbits.LATB10 = 1;
return;
}
void Forward(float Sec, int D){
int RunTime = (int)(Sec*39000); // convert the total time to number of Tics
TMR2 = 0;
//LEDs
LATGbits.LATG12 = 1; // forward Direction
LATBbits.LATB12 = 0;
LATBbits.LATB13 = 0;
LATBbits.LATB11 = 1;
// Keep on firing the PWM as long as Run time is not elapsed
while (TMR2 < RunTime){
PWM(D);
}
return;
}
void PWM(int D){
TMR1 = 0;
int Period = 400;
while (TMR1< Period) {
if (TMR1 < Period*D/100){
LATGbits.LATG13 = 1;
}
else{
LATGbits.LATG13 = 0;
}
}
}
Functions, not methods, to be precise.
So what exactly is the question?
What I can say from a quick look at the source code:
LED initialisation should be done the same way you did it in the InitializeIO() function. Simply set the proper TRISx bits to 0 to configure the LED pins as outputs.
For the PWM and motor control functions you should take some time and try to understand how the built-in PWM peripheral works. It is part of the OC (Output Compare) module and it is very easy to use. Please take a look at the following link http://ww1.microchip.com/downloads/en/DeviceDoc/61111E.pdf
and this one for the minimal implementation using builtin peripheral libraries https://electronics.stackexchange.com/questions/69232/pic32-pwm-minimal-example
Basically you need to set up OC registers to "make" OC module acts like PWM. You need to allocate one of the timers to work with OC module (it will be used for base PWM frequency) and that's it.
All you need after that is to set the PWM duty cycle value by writing the OCxRS register (the timer's PRx register sets the PWM period); you don't need to flip bits yourself like in your PWM routine.
To stop it, simply disable the timer.
To run it again, restart the timer.
To change direction (it depends on your motor driver), I guess you just need to toggle the direction pin.
I hope it helps...
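A minimal sketch of what that can look like (hardware OC1 fed by Timer2; note this dedicates Timer2 to the PWM time base, so the run-time measurement would have to move elsewhere, and the prescaler, PR2 value and the assumed 10 MHz peripheral clock are illustrative only):

void PWMInit(void)
{
    // Timer2 provides the PWM time base
    T2CON = 0;                   // stop Timer2, prescaler 1:1
    PR2   = 399;                 // 400 ticks of a 10 MHz PBCLK -> 25 kHz PWM
    TMR2  = 0;

    // OC1 in PWM mode (fault pin disabled), clocked from Timer2
    OC1CON = 0;
    OC1CONbits.OCM    = 6;       // PWM mode on OC1, fault pin disabled
    OC1CONbits.OCTSEL = 0;       // use Timer2 as the time base
    OC1R  = 0;                   // initial duty cycle
    OC1RS = 0;

    T2CONbits.ON  = 1;           // start the time base
    OC1CONbits.ON = 1;           // start the output compare module
}

void PWMSetDuty(int D)           // D = duty cycle in percent
{
    OC1RS = ((PR2 + 1) * D) / 100;   // takes effect at the start of the next period
}

You would then call PWMSetDuty(70) for 70% and let Timer2 and OC1 generate the waveform, instead of bit-banging LATGbits.LATG13 in a loop.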
I am using MSP430F5418 with IAR EW 5.10.
I want to use Timer B in up mode.
I want to use two interrupts.
TimerB0(1 ms) and TimerB1(1 second).
My Configuration is
TBCTL = MC__UP + TBSSEL__ACLK + TBCLR;
TB0CCTL0 = CCIE;
TB0CCR0 = 32;
TB0CCTL1 = CCIE;
TB0CCR1 = 32768;
On the ISR I just toggled two pins.
But only the pin for TB0CCR0 is toggling.
My Pin Configurations are correct.
Can anyone tell me why??
I suppose your problem is the timer period.
MC__UP Up mode: Timer counts up to TBxCL0
So your timer TBxR will reset to 0 when it reaches TBxCL0 which seems to be the value TB0CCR0.
So it can never reach the value of 32768.
You could switch TB0CCR0 with TB0CCR1, so your period will be 1 second.
And to get your 1ms interrupt you need to increment your TB0CCR1 each time.
INTERRUPT ISR_1MS()
{
TB0CCR1 = (TB0CCR1 + 32) & 0x7FFF;
}
But normally you don't need a second timer channel for a one-second interval.
You could simply count your 1 ms interval 1000 times:
INTERRUPT ISR_1MS()
{
ms_count++;
if (ms_count >= 1000)
{
ms_count=0;
// Do your second stuff
}
}
And if you need more and different intervals, you could change to another model: a system clock time, where you check only against this time.
volatile unsigned int absolute_time=0;
INTERRUPT ISR_1MS()
{
absolute_time++;
}
unsigned int systime_now(void)
{
unsigned int result;
di();
result = absolute_time;
ei();
return result;
}
uint8_t systime_reached(unsigned int timeAt)
{
uint8_t result;
result = (systime_now() - timeAt ) < 0x1000;
return result;
}
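For example, a 500 ms periodic task using that model might look like this (toggle_led() is just a placeholder for whatever pin toggling you use):

void blink_task(void)
{
    static unsigned int next_toggle;

    if (systime_reached(next_toggle))
    {
        toggle_led();                        /* placeholder for your pin code */
        next_toggle = systime_now() + 500;   /* schedule the next toggle      */
    }
}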