I am building a simple Timer/Counter application that generates a delay using normal mode on Atmel's ATmega48PA, using Timer1 to toggle an LED at a constant interval. What happens is that, when using the interrupt, the LED toggles for a while, then the toggling stops and the LED stays always ON!
I believe it has something to do with sei() or enabling the global interrupt bit in SREG, as I have seen similar behavior with the same microcontroller before when using interrupts.
Here is a code snippet to go with my question, although at first glance the code looks perfectly normal and ought to work correctly!
#include <avr/io.h>
#include <atmel_start.h>
#include <util/delay.h>
#include <math.h>
#include <clock_config.h>
#include "avr/iom48pa.h"
#define LOAD_VALUE 49911UL
#define SET_BIT( REG, BIT ) REG |= ( 1 << BIT )
#define CLR_BIT( REG, BIT ) REG &= ~( 1 << BIT )
#define TOG_BIT( REG, BIT ) REG ^= ( 1 << BIT )
void Timer16_Init( void );
void Timer16_DelayMS( unsigned short delayMS );
unsigned short delayMS = 50;
ISR( TIMER1_OVF_vect ){
Disable_global_interrupt();
TOG_BIT( PORTC, 2 );
TOG_BIT( PORTC, 3 );
TCNT1 = ( ( 4194304 - delayMS ) * 1000 ) / 64;
Enable_global_interrupt();
}
int main( void ){
/* Initializes MCU, drivers and middle ware */
atmel_start_init();
/* configure pin 2 and pin 3 in PORTC as output */
SET_BIT( DDRC, 3 );
SET_BIT( DDRC, 2 );
Enable_global_interrupt();
Timer16_Init();
TCNT1 = ( ( 4194304 - delayMS ) * 1000 ) / 64;
while( 1 ){
}
}
void Timer16_Init( void ){
SET_BIT( TCCR1B, CS10 );
SET_BIT( TCCR1B, CS12 );
SET_BIT( TIMSK1, TOIE1 );
}
I just want to know: what in the world is happening here?!
Well, at first look there is no obvious problem in your code, so let's check the possibilities:
First, you have done a calculation that overflows and stored the result in a 16-bit register:
TCNT1 = ( ( 4194304 - delayMS ) * 1000 ) / 64;
The intermediate product ( 4194304 - 50 ) * 1000 is about 4.19 billion, which overflows a 32-bit signed long, and even the final result after dividing by 64 (about 65.5 million) does not fit into the 16-bit TCNT1. So an unpredictable value ends up in the register. I recommend using values that actually fit, or adding explicit (unsigned long) casts and checking the range, to prevent the overflow.
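For illustration only (keeping the question's formula, which itself probably needs rethinking), the cast-and-check version of that calculation could look like this; with delayMS = 50 the check fails, which is exactly the problem:
unsigned long reload = ( ( 4194304UL - delayMS ) * 1000UL ) / 64UL;

if( reload <= 0xFFFFUL ){
    TCNT1 = (unsigned short)reload;   /* only safe if the value really fits in 16 bits */
}else{
    /* the formula needs rethinking - this value cannot be represented in TCNT1 */
}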
Second, you have not put these lines in the correct order:
Enable_global_interrupt();
Timer16_Init();
TCNT1 = ( ( 4194304 - delayMS ) * 1000 ) / 64;
You enabled interrupts, then initialized (and started) the timer, and only then loaded the timer value. This is incorrect: once the timer is running it can raise an interrupt before its value has been set. The order should be:
TCNT1 = ( ( 4194304 - delayMS ) * 1000 ) / 64;
Enable_global_interrupt();
Timer16_Init();
Third, you have used the timer in normal mode, entering the overflow interrupt and reloading the timer value inside the interrupt routine. I highly recommend using the compare (CTC) mode instead, since it does not require reloading the timer value in the interrupt routine.
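For illustration only (this is not the poster's code, and the OCR1A value below is a placeholder that must be computed from F_CPU, the prescaler and the desired period), a CTC setup for Timer1 could look like this, reusing the macros from the question:
void Timer16_InitCTC( void ){
    TCCR1B |= ( 1 << WGM12 );                 /* CTC mode, TOP = OCR1A            */
    OCR1A   = 15624;                          /* placeholder TOP value            */
    SET_BIT( TIMSK1, OCIE1A );                /* enable compare match A interrupt */
    TCCR1B |= ( 1 << CS12 ) | ( 1 << CS10 );  /* prescaler 1024, timer starts     */
}

ISR( TIMER1_COMPA_vect ){
    TOG_BIT( PORTC, 2 );                      /* hardware clears TCNT1 at the match, */
    TOG_BIT( PORTC, 3 );                      /* so no reload is needed in the ISR   */
}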
Related
I have to write C code so that the RGB LED on the board breathes. My code is blinking, not breathing. My teacher said that varying brightness is achieved by varying the duty cycle, so in that case I can't use PWM. Please help me understand this code.
#include <stdint.h>
#include <stdlib.h>
#define SYSCTL_RCGC2_R (*((volatile unsigned long *)0x400FE108))
#define SYSCTL_RCGC2_GPIOF 0x00000020 //port F clock gating control
#define GPIO_PORTF_DATA_R (*((volatile unsigned long *)0x400253FC))
#define GPIO_PORTF_DIR_R (*((volatile unsigned long *)0x40025400))
#define GPIO_PORTF_DEN_R (*((volatile unsigned long *)0x4002551C))
void delay (double sec);
int cond;
int main(void){
SYSCTL_RCGC2_R = SYSCTL_RCGC2_GPIOF;
GPIO_PORTF_DIR_R=0x0E;
GPIO_PORTF_DEN_R=0x0E;
cond=0;
while(1){
GPIO_PORTF_DATA_R = 0x02;
delay(12.5);
GPIO_PORTF_DATA_R = 0x00;
delay(0);
GPIO_PORTF_DATA_R = 0x02;
delay(2.5);
GPIO_PORTF_DATA_R = 0x00;
delay(10);
GPIO_PORTF_DATA_R = 0x02;
delay(5);
GPIO_PORTF_DATA_R = 0x00;
delay(7.5);
GPIO_PORTF_DATA_R = 0x02;
delay(7.5);
GPIO_PORTF_DATA_R = 0x00;
delay(5);
GPIO_PORTF_DATA_R = 0x02;
delay(12.5);
GPIO_PORTF_DATA_R = 0x00;
delay(0);
GPIO_PORTF_DATA_R = 0x02;
delay(7.5);
GPIO_PORTF_DATA_R = 0x00;
delay(5);
GPIO_PORTF_DATA_R = 0x02;
delay(5);
GPIO_PORTF_DATA_R = 0x00;
delay(7.5);
}
return 0;
}
void delay(double sec){
int c=1, d=1;
for(c=1;c<=sec;c++)
for(d=1;d<= 4000000;d++){}
}
There are two ways you can drive LEDs: either with constant current through a general-purpose I/O pin, or with a repeating duty cycle from PWM. PWM means pulse-width modulation; the pulses are too fast for the human eye to notice, anywhere from roughly 100 Hz up to 10 kHz or so.
The main advantage of PWM is that you can easily control the average current, which in the case of an RGB LED means the intensity of the 3 individual LEDs. Most smaller LEDs are rated at 20 mA, so that is usually the maximum current you aim for, corresponding to 100% duty cycle.
The correct way to achieve this is to use PWM.
But what your current code does is "bit-bang" a simulated PWM by toggling GPIO pins. That is very crude and inefficient. Normally microcontrollers have a timer and/or PWM hardware peripheral built in, where you just provide a duty cycle and the hardware takes care of everything from there. In this case you would set up 3 PWM hardware channels, which should ideally be clocked at the same time.
LEDs are diodes with different forward voltages depending on chemistry, so you very likely have different forward voltages for each of the 3 colors. Check the datasheet of the RGB LED for the luminous intensity, expressed in candela - in this case very likely millicandela, mcd. Let's assume your green LED has 300 mcd but the red and blue have 100 mcd. They are somewhat linear, or you can probably get away with assuming they are. So a crude compensation in this case is to give the green LED 3 times less current than the others, in order to get an even mix of colors. Once you have compensated for that, you can give your 3 PWM channels an RGB code and hopefully get the corresponding color.
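As a rough sketch of that compensation (pwm_set_duty() and the channel names are hypothetical placeholders for whatever PWM driver you end up with, and the factor of 3 is just the example ratio above):
#include <stdint.h>

enum pwm_channel { CH_RED, CH_GREEN, CH_BLUE };
void pwm_set_duty(enum pwm_channel ch, uint8_t duty);   /* provided by your PWM driver */

/* duty values 0..255 per channel */
void rgb_set(uint8_t r, uint8_t g, uint8_t b)
{
    pwm_set_duty(CH_RED,   r);
    pwm_set_duty(CH_GREEN, g / 3u);   /* green assumed ~3x as bright per mA here */
    pwm_set_duty(CH_BLUE,  b);
}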
As a side note, the delay function in your code is completely broken in many ways. The loop iterator for such busy-delays must be volatile or any half-decent compiler will simply remove the delay when optimizations are enabled. And there is no reason to use floating point either.
If you are doing it with your delay function, and your delay resolution is in seconds as the code suggests, of course it will "blink" - the frequency needs to be faster than human visual perception, say about 50 Hz; then, to get a smooth variation, you might divide each period into say 20 levels, requiring a millisecond delay.
In any case your delay() function defeats itself by taking a floating-point number of seconds but comparing it with an integer loop counter - it will only ever work in whole seconds.
So given a function delayms( unsigned millisec ) (which I discuss later) then:
#define BREATHE_UPDATE_MS 100
#define BREATHE_MINIMUM 0
#define PWM_PERIOD_MS 20
unsigned tick = 0 ;
unsigned duty_cycle = 0 ;
unsigned cycle_start_tick= 0 ;
unsigned breath_update_tick = 0 ;
int breathe_dir = 1 ;
for(;;)
{
    // If in PWM "mark"...
    if( tick - cycle_start_tick < duty_cycle )
    {
        // LED on
        GPIO_PORTF_DATA_R |= 0x02 ;
    }
    // else PWM "space"
    else
    {
        // LED off
        GPIO_PORTF_DATA_R &= ~0x02 ;
    }

    // Update tick counter
    tick++ ;

    // If PWM cycle complete, restart
    if( tick - cycle_start_tick >= PWM_PERIOD_MS )
    {
        cycle_start_tick = tick ;
    }

    // If time to update duty-cycle...
    if( tick - breath_update_tick > BREATHE_UPDATE_MS )
    {
        breath_update_tick = tick ;
        duty_cycle += breathe_dir ;
        if( duty_cycle >= PWM_PERIOD_MS )
        {
            // Breathe in
            breathe_dir = -1 ;
        }
        else if( duty_cycle == BREATHE_MINIMUM )
        {
            // Breathe out
            breathe_dir = 1 ;
        }
    }

    delayms( 1 ) ;
}
Change BREATHE_UPDATE_MS to breathe faster, change BREATHE_MINIMUM to "shallow breathe" - i.e. not dim to off.
If your delay function truly results in a delay resolution in seconds then approximately and rather crudely:
void delayms( unsigned millisec )
{
    for( int c = 0; c < millisec; c++ )
    {
        for( volatile int d = 0; d < 4000; d++ ) {}
    }
}
However that suggests to me a rather low core clock rate, so you may need to adjust that. Note the use of volatile to prevent the removal of the empty loop by the optimiser. The problem with this delay is that you will need to calibrate it to the clock speed of your target and its timing is likely to differ in any case depending on what compiler you use and what compiler options you use. It is generally a poor solution.
In practice using a "busy-loop" delay for this is ill-advised and crude and it would be better to use the Cortex-M SYSTICK:
volatile uint32_t tick = 0 ;
void SysTick_Handler(void)
{
    tick++ ;
}
... removing the tick and tick++ from the original code. Then you don't need a delay in the loop above because all the timing is pegged to the value of tick. However, should you want a delay for other reasons then:
void delayms( uint32_t millisec )
{
    uint32_t start = tick ;
    while( tick - start < millisec ) ;
}
Then you would initialise the SYSTICK at start-up thus:
int main (void)
{
    SysTick_Config(SystemCoreClock / 1000) ;
    ...
}
This assumes that you are using the CMSIS, but your code suggests that you are not doing that (or even using a vendor supplied register header). You will in that case need to get down and dirty with the SYSTICK and NVIC registers if you (or your tutor) insists on that. The source for SysTick_Config() is as follows:
__STATIC_INLINE uint32_t SysTick_Config(uint32_t ticks)
{
    if ((ticks - 1UL) > SysTick_LOAD_RELOAD_Msk)
    {
        return (1UL);                                /* Reload value impossible */
    }

    SysTick->LOAD = (uint32_t)(ticks - 1UL);         /* set reload register */
    NVIC_SetPriority (SysTick_IRQn, (1UL << __NVIC_PRIO_BITS) - 1UL); /* set Priority for Systick Interrupt */
    SysTick->VAL = 0UL;                              /* Load the SysTick Counter Value */
    SysTick->CTRL = SysTick_CTRL_CLKSOURCE_Msk |
                    SysTick_CTRL_TICKINT_Msk |
                    SysTick_CTRL_ENABLE_Msk;         /* Enable SysTick IRQ and SysTick Timer */
    return (0UL);                                    /* Function successful */
}
My microcontroller is an ATtiny85. Actually, I have a few questions.
I simply turn on the LED 8 seconds later with the code below.
1) Should I turn the interrupts off and on while reading the counter value? I've seen something like this in the wiring.c file, in the millis function.
2) How can I safely set the counter variable to 0 whenever I want? Do I have to turn the interrupts off and on here? Should I set the registers TCCR0A, TCCR0B and TCNT0 to zero? What should a safe reset function look like?
Actually, my whole goal is to have a safe counter in the main function that can count 8 seconds whenever I want and start from zero whenever I want.
My basic code is below:
#define F_CPU 1000000UL
#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdlib.h>
#include <stdio.h>
volatile unsigned int counter = 0;
ISR(TIM0_COMPA_vect){
//interrupt commands for TIMER 0 here
counter++;
}
void timerprogram_init()
{
// TIMER 0 for interrupt frequency 1000 Hz:
cli(); // stop interrupts
TCCR0A = 0; // set entire TCCR0A register to 0
TCCR0B = 0; // same for TCCR0B
TCNT0 = 0; // initialize counter value to 0
// set compare match register for 1000 Hz increments
OCR0A = 124; // = 1000000 / (8 * 1000) - 1 (must be <256)
// turn on CTC mode
TCCR0A |= (1 << WGM01);
// Set CS02, CS01 and CS00 bits for 8 prescaler
TCCR0B |= (0 << CS02) | (1 << CS01) | (0 << CS00);
// enable timer compare interrupt
TIMSK0 |= (1 << OCIE0A);
sei(); // allow interrupts
}
int main(void)
{
/* Replace with your application code */
timerprogram_init();
DDRA|=(1<<7);
PORTA &=~ (1<<7);
while (1)
{
if(counter>8000)
PORTA |= (1<<7);
}
}
First, there is no need to poll the counter value in the super loop; the variable only changes when the interrupt fires, so the check can be done in the interrupt itself and, if necessary, only a single flag handed to the main loop.
Secondly, a safe way to reset the counter is to first clear the prescaler bits in TCCR0B (which effectively switches the timer/counter off), then set TCNT0 to zero to reset the timer; if necessary, you can then safely let the timer/counter count again by restoring the prescaler value.
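For illustration, a sketch of that sequence as a function (the function name is mine; counter, TCCR0B and TCNT0 are the ones from the question, and the question's existing includes already provide cli()/sei()):
void counter_reset(void)
{
    unsigned char saved_tccr0b = TCCR0B;  /* remember the prescaler bits          */

    TCCR0B = 0;                           /* no clock source -> timer is stopped  */
    TCNT0  = 0;                           /* clear the hardware count             */

    cli();                                /* 'counter' is shared with the ISR,    */
    counter = 0;                          /* so clear it with interrupts off      */
    sei();

    TCCR0B = saved_tccr0b;                /* restore the prescaler, counting resumes */
}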
Good luck.
There is Attiny85, with an internal clock source at 8 MHz.
I am trying to implement a microsecond timer based on the hardware timer timer0.
What is my logic:
Since the clock frequency is 8 MHz and the prescaler is off, the time of one clock cycle will be about 0.1us (1/8000000).
Initially the timer overflows and raises an interrupt after counting through 0...255, which takes far longer than 0.1 us and is inconvenient for counting 1 us.
To solve this, I thought of changing the initial value of the timer from 0 to 245. Then 10 clock cycles are needed to reach the overflow interrupt, which takes about 1 us.
I load this code, but the ATtiny's LED visibly toggles only about every 5 seconds, although the code specifies 1 second (1000000 us).
Code:
#include <avr/io.h>
#undef F_CPU
#define F_CPU 8000000UL
#include <avr/interrupt.h>
// Timer0 init
void timer0_Init() {
cli();
//SREG &= ~(1 << 7);
// Enable interrupt for timer0 overflow
TIMSK |= (1 << 1);
// Enabled timer0 (not prescaler) - CS02..CS00 = 001
TCCR0B = 0;
TCCR0B |= (1 << 0);
// Clear timer0 counter
TCNT0 = 245;
sei();
//SREG |= (1 << 7);
}
// timer0 overflow interrupt
// 1us interval logic:
// MCU frequency = 8mHz (8000000Hz), not prescaler
// 1 tick = 1/8000000 = 100ns = 0.1us, counter up++ after 1 tick (0.1us)
// 1us timer = 10 tick's => 245..255
static unsigned long microsecondsTimer;
ISR(TIMER0_OVF_vect) {
microsecondsTimer++;
TCNT0 = 245;
}
// Millis
/*unsigned long timerMillis() {
return microsecondsTimer / 1000;
}*/
void ledBlink() {
static unsigned long blinkTimer;
static int ledState;
// 10000us = 0.01s
// 1000000us = 1s
if(microsecondsTimer - blinkTimer >= 1000000) {
if(!ledState) {
PORTB |= (1 << 3); // HIGH
} else {
PORTB &= ~(1 << 3); // LOW
}
ledState = !ledState;
blinkTimer = microsecondsTimer;
}
}
int main(void)
{
// Set LED pin to OUTPUT mode
DDRB |= (1 << 3);
timer0_Init();
while (1)
{
ledBlink();
}
}
Attiny85 Datasheet
What could be the mistake? I have not yet learned how to work with fuses, so I initially loaded the fuses at 8 MHz through the Arduino IDE, and after that I already downloaded the main code (without changing the fuses) through AVRDUDE and Atmel Studio.
And another question: should I check for the maximum value when updating my microsecond counter? I know that in Arduino the micros and millis counters roll over when they reach their maximum value. For example, if I do not clear the microsecondsTimer variable and it exceeds the range of an unsigned long, will it crash?
As pointed out by @ReAI, your ISR does not have enough time to run: it takes more than 1 microsecond to execute and return, so you are constantly missing interrupts.
There are other problems here too. For example, your microsecondsTimer variable is accessed in both the ISR and the foreground and is a long. long variables are 4 bytes wide and are not updated atomically, so the foreground could start reading microsecondsTimer, the ISR could update some of the not-yet-read bytes in the middle of that read, and the foreground would end up with a mangled value. Also, you should avoid messing with the count register, since updating it can miss ticks unless you are very careful.
So how could you implement a working microsecond timer? Firstly, you want the ISR to run as infrequently as possible, so pick the largest prescaler that still gives the resolution you want, and only interrupt on overflow. In the case of the ATtiny85 Timer0 you can pick the /8 prescaler, which gives one timer tick per microsecond with an 8 MHz system clock. Now your ISR only runs once every 256 microseconds, and when it runs it only needs to increment an overflow counter, where each count represents 256 microseconds.
To read the current microseconds in the foreground, you can get the number of microseconds mod 256 by directly reading the count register, then read the overflow counter, multiply it by 256 and add the count register value, and you'll have the full count. Note that you will need to take special precautions to make sure your reads are atomic. You can do this either by carefully turning off interrupts, quickly reading the values, and then turning interrupts back on (saving all the math for when interrupts are back on), or by looping on the read values until you get two full reads in a row that are the same (which means they did not change while you were reading them).
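For illustration, a minimal sketch of that scheme (my own names; it assumes Timer0 with the /8 prescaler as above, so TCNT0 counts microseconds and each overflow is 256 us):
#include <avr/io.h>
#include <avr/interrupt.h>

volatile unsigned long timer0_overflows;   /* each overflow = 256 us */

ISR(TIMER0_OVF_vect)
{
    timer0_overflows++;
}

unsigned long micros_now(void)
{
    unsigned long ovf;
    unsigned char cnt;

    /* Caveat: an overflow that becomes pending between the two reads can still
       skew the result by up to 256 us; Arduino's micros() also checks the TOV0
       flag to handle that case. */
    cli();                     /* take a consistent snapshot of both values */
    ovf = timer0_overflows;
    cnt = TCNT0;
    sei();

    /* completed 256 us blocks plus microseconds since the last overflow */
    return (ovf << 8) + cnt;
}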
You can check out the source code of the Arduino timer ISR for some insights, but note that theirs is more complicated because it has to handle a wide range of tick rates, whereas you can keep things simple by specifically picking a 1 us tick.
Why didn't you use a prescaler?!
Your code needs a really big delay interval (1 second is a huge time compared to the CPU speed), so it is not wise to interrupt the microcontroller every 1 us. It would be much better to slow the timer clock down and interrupt, for example, every 1 ms.
Calculation
The microcontroller clock speed is 8 MHz, so if we choose a prescaler of 64 the timer clock will be 8 MHz / 64 = 125 kHz, which means each tick (timer clock) takes 1 / 125 kHz = 8 us.
So if we want an interrupt every 1 ms, we need 125 ticks.
Modified code
Try this code, it should be clearer to understand:
#undef F_CPU
#define F_CPU 8000000UL
#include <avr/io.h>
#include <avr/interrupt.h>
volatile int millSec;
void timer0_Init();
void toggleLed();
int main(void)
{
    // Set LED pin to OUTPUT mode
    DDRB |= (1 << 3);
    timer0_Init();
    millSec = 0;  // init the millsecond
    sei();        // set Global Interrupt Enable
    while (1)
    {
        if(millSec >= 1000){
            // this block of code will run every 1 sec
            millSec = 0;   // start count for the new sec
            toggleLed();   // just toggle the led state
        }
        // Do other backGround jobs
    }
}

//#####Helper functions###########
void timer0_Init() {
    // Clear timer0 counter
    TCNT0 = 130; //255-125=130
    // Enable interrupt for timer0 overflow
    TIMSK = (1 << 1);
    // set prescaler to 64 and start the timer
    TCCR0B = (1<<CS00)|(1<<CS01);
}

void toggleLed(){
    PORTB ^= (1 << 3); // toggle led output
}

ISR(TIMER0_OVF_vect) {
    // this interrupt will happen every 1 ms
    millSec++;
    // Clear timer0 counter
    TCNT0 = 130;
}
Sorry, I am late, but I have some suggestions. If you clock Timer0 with prescaler 1, the timer counts up every 125 ns, and it is not possible to reach 1 us without a small divergence. But if you use prescaler 8 you reach exactly 1 us. I do not actually have your hardware, but give this a try:
#ifndef F_CPU
#define F_CPU 8000000UL
#else
#error "F_CPU already defined"
#endif
#include <avr/io.h>
#include <avr/interrupt.h>
volatile unsigned int microsecondsTimer;
// Interrupt for Timer0 Compare Match A
ISR(TIMER0_COMPA_vect)
{
    microsecondsTimer++;
}

// Timer0 init
void timer0_Init()
{
    // Timer0:
    // - Mode: CTC
    // - Prescaler: /8
    TCCR0A = (1<<WGM01);
    TCCR0B = (1<<CS01);
    OCR0A = 1;
    TIMSK = (1<<OCIE0A);
    sei();
}

void ledBlink() {
    static unsigned int blinkTimer;

    if(microsecondsTimer >= 1000)
    {
        microsecondsTimer = 0;
        blinkTimer++;
    }
    if(blinkTimer >= 1000)
    {
        PORTB ^= (1<<PINB3);
        blinkTimer = 0;
    }
}

int main(void)
{
    // Set LED pin to OUTPUT mode
    DDRB |= (1 << PINB3);
    timer0_Init();
    while (1)
    {
        ledBlink();
    }
}
If you are using the internal clock of the ATtiny it may be divided by 8. To disable the clock division you have to reconfigure the prescaler within 4 clock cycles (an atomic operation):
int main(void)
{
    // Reset clock prescaling: set the CLKPCE (change enable) bit first,
    // then write the new prescaler value within 4 clock cycles
    CLKPR = (1<<CLKPCE);
    CLKPR = 0x00;
    // ...
Please try this solution and give feedback on whether it is working. Maybe you can verify it with an oscilloscope...
Notice that operations on an unsigned long need more than 1 clock cycle on an 8-bit microcontroller; maybe it would be better to use unsigned int or unsigned char. The main loop also should not contain lots of instructions, otherwise error correction of the microsecond timer has to be implemented.
I am learning FreeRTOS on a STM32F103C8T6 (on a Blue-Pill board).
I am trying to use queues and tasks.
#include <FreeRTOS.h>
#include <libopencm3/stm32/gpio.h>
#include <libopencm3/stm32/rcc.h>
#include <queue.h>
#include <task.h>
static QueueHandle_t queue;
static void
task_receive(void *args __attribute__((unused)))
{
bool nothing;
while (1)
{
if (xQueueReceive(queue, &nothing, 10) == pdPASS)
gpio_set(GPIOC, GPIO13); // Turn off
else
taskYIELD(); // Yield so that other tasks can run
}
}
static void
task_send(void *args __attribute__((unused)))
{
bool nothing = false;
while (1)
{
gpio_clear(GPIOC, GPIO13); // Turn on
vTaskDelay(pdMS_TO_TICKS(100));
xQueueSendToBack(queue, &nothing, portMAX_DELAY);
vTaskDelay(pdMS_TO_TICKS(1000));
}
}
int
main(void)
{
rcc_clock_setup_in_hse_8mhz_out_72mhz();
// Blue-Pill led
rcc_periph_clock_enable(RCC_GPIOC);
gpio_set_mode(
GPIOC,
GPIO_MODE_OUTPUT_2_MHZ,
GPIO_CNF_OUTPUT_PUSHPULL,
GPIO13);
gpio_set(GPIOC, GPIO13); // Turn off (polarity of the led is inversed!)
queue = xQueueCreate(32, sizeof(bool));
if (queue == 0)
{
while (1)
{
gpio_toggle(GPIOC, GPIO13);
for (uint32_t i = 0; i < 80000; ++i)
__asm__("nop");
};
}
xTaskCreate(task_receive, "RECEIVE", 200, NULL, configMAX_PRIORITIES-1, NULL);
xTaskCreate(task_send, "SEND", 200, NULL, configMAX_PRIORITIES-2, NULL);
vTaskStartScheduler();
while(1);
return 0;
}
Expected behavior:
main
Configures the clocks
Configures GPIO for Blue Pill LED
Turns off the led
Creates a queue
Checks the queue was correctly created: if not blink the LED fast and forever.
Schedule two tasks
Runs the scheduler
task_send (loops indefinitely)
Turn on the LED
Wait 100 ms
Push a message in the queue (content does not matter here)
Wait 1 sec
task_receive (loops indefinitely)
Check if a message is in the queue
Yes: turn off the led
No: yield
I expect the led to be turned on for 100 ms and then turned off for 900 ms.
Real behavior: The led is always on, the execution of the program seems to be blocking at xQueueSendToBack.
Why is the call blocking?
FreeRTOSConfig.h
#define configUSE_PREEMPTION 1
#define configUSE_IDLE_HOOK 0
#define configUSE_TICK_HOOK 0
#define configCPU_CLOCK_HZ ( ( unsigned long ) 72000000 )
#define configSYSTICK_CLOCK_HZ ( configCPU_CLOCK_HZ / 8 ) /* fix for vTaskDelay() */
#define configTICK_RATE_HZ ( ( TickType_t ) 1000 )
#define configMAX_PRIORITIES ( 5 )
#define configMINIMAL_STACK_SIZE ( ( unsigned short ) 128 )
#define configTOTAL_HEAP_SIZE ( ( size_t ) ( 17 * 1024 ) )
#define configMAX_TASK_NAME_LEN ( 16 )
#define configUSE_TRACE_FACILITY 0
#define configUSE_16_BIT_TICKS 0
#define configIDLE_SHOULD_YIELD 1
#define configUSE_MUTEXES 1
/* Co-routine definitions. */
#define configUSE_CO_ROUTINES 0
#define configMAX_CO_ROUTINE_PRIORITIES ( 2 )
/* Set the following definitions to 1 to include the API function, or zero
to exclude the API function. */
#define INCLUDE_vTaskPrioritySet 1
#define INCLUDE_uxTaskPriorityGet 1
#define INCLUDE_vTaskDelete 1
#define INCLUDE_vTaskCleanUpResources 0
#define INCLUDE_vTaskSuspend 1
#define INCLUDE_vTaskDelayUntil 1
#define INCLUDE_vTaskDelay 1
/* This is the raw value as per the Cortex-M3 NVIC. Values can be 255
(lowest) to 0 (1?) (highest). */
#define configKERNEL_INTERRUPT_PRIORITY 255
/* !!!! configMAX_SYSCALL_INTERRUPT_PRIORITY must not be set to zero !!!!
See http://www.FreeRTOS.org/RTOS-Cortex-M3-M4.html. */
#define configMAX_SYSCALL_INTERRUPT_PRIORITY 191 /* equivalent to 0xb0, or priority 11. */
/* This is the value being used as per the ST library which permits 16
priority values, 0 to 15. This must correspond to the
configKERNEL_INTERRUPT_PRIORITY setting. Here 15 corresponds to the lowest
NVIC value of 255. */
#define configLIBRARY_KERNEL_INTERRUPT_PRIORITY 15
Your task_receive priority is higher than the task_send priority. taskYIELD will run the same calling task over and over again if there are no higher priority tasks.
To achieve what you want, try changing the task_receive in the following manner.
static void
task_receive(void *args __attribute__((unused)))
{
    bool nothing;

    while (1)
    {
        if (xQueueReceive(queue, &nothing, portMAX_DELAY) == pdPASS)
            gpio_set(GPIOC, GPIO13); // Turn off
    }
}
For more information on taskYIELD please refer the following.
https://www.freertos.org/a00020.html#taskYIELD
The problem was solved by updating the compiler to the latest version.
Kubuntu 18.04 ships with arm-none-eabi-gcc (15:6.3.1+svn253039-1build1) 6.3.1 20170620, with this compiler the code does not work.
memcpy seems to be the problematic function call in the code, it is called by FreeRTOS when adding an element to the queue.
If I use the Version 8-2018-q4-major Linux 64-bit compiler then the code executes fine. It can be downloaded here: https://developer.arm.com/open-source/gnu-toolchain/gnu-rm/downloads
I also have a problem with DRDY. I need to include DRDY. The pins for DRDY are RD2 and RD5. They are both inputs.
Here is the information for DRDY.
DRDY Pin
DRDY is an open-drain output (in SPI mode) or bidirectional pin (in UART mode) with an internal 20 k – 50 k pullup
resistor.
Most communications failures are the result of failure to properly observe the DRDY timing.
Serial communications pacing is controlled by this pin. Use of DRDY is critical to successful communications with the
QT1481. In either UART or SPI mode, the host is permitted to perform a data transfer only when DRDY has returned
high. Additionally, in UART mode, the QT1481 delays responses to the host if DRDY is being held low by the host.
After each byte transfer, DRDY goes low after a short delay and remains low until the QT1481 is ready for another
transfer. A short delay occurs before DRDY is driven low because the QT1481 may otherwise be busy and requires
a finite time to respond.
DRDY may go low for a microsecond only. During the period from the end of one transfer until DRDY goes low and
back high again, the host should not perform another transfer. Therefore, before each byte transmission the host
should first check that DRDY is high again.
If the host wants to perform a byte transfer with the QT1481 it should behave as follows:
1. Wait at least 100 µs after the previous transfer (time S5 in Figure 3-2 on page 23: DRDY is guaranteed to go
low before this 100 µs expires).
2. Wait until DRDY is high (it may already be high).
3. Perform the next transfer with the QT1481.
In most cases it takes up to 3 ms for DRDY to return high again. However, this time is longer with some commands
or if the STS_DEBUG setup is enabled, as follows:
0x01 (Setups load): <20 ms
0x02 (Low Level Cal and Offset): <20 ms
Add 15 ms to the above times if the STS_DEBUG setup is enabled.
Other DRDY specifications:
Min time DRDY is low: 1 µs
Max time DRDY is low after reset: 100 ms
The timing diagram is this:
How can I implement that?
The code I have written with my friend is written here:
#include <xc.h>
#include "PIC.h"
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
//#include <pic18f45k80.h>
#define MSB 1
#define LSB 0
// SPI PIN CONFIGURATION
#define SCK_TRIS TRISCbits.TRISC3 = 0 ;
#define SDO_TRIS TRISCbits.TRISC5 = 0 ;
#define SDI_TRIS TRISCbits.TRISC4 = 1 ;
#define QTA_SS_TRIS TRISDbits.TRISD4 = 0 ;
#define QTB_SS_TRIS TRISEbits.TRISE2 = 0 ;
#define QTA_SS_LAT_LOW LATDbits.LATD4 = 0 ;
#define QTA_SS_LAT_HIGH LATDbits.LATD4 = 1 ;
#define QTB_SS_LAT_LOW LATEbits.LATE2 = 0 ;
#define QTB_SS_LAT_HIGH LATEbits.LATE2 = 1 ;
#define QTA_DRDY_TRIS TRISDbits.TRISD5 = 1 ;
#define QTB_DRDY_TRIS TRISDbits.TRISD2 = 1 ;
#define QTA_DRDY_LAT_LOW LATDbits.LATD5 = 0 ;
#define QTA_DRDY_LAT_HIGH LATDbits.LAT52 = 1 ;
#define QTB_DRDY_LAT_LOW LATDbits.LAT25 = 0 ;
#define QTB_DRDY_LAT_HIGH LATDbits.LATD2 = 1 ;
#define QTB_DRDY PORTDbits.RD2 ;
#define QTA_DRDY PORTDbits.RD5 ;
// FREQUENCY SELECT
#define _XTAL_FREQ 16000000
// PIN SETUP
void PIN_MANAGER_Initialize(void)
{
/**
LATx registers
*/
LATE = 0x00;
LATD = 0x00;
LATA = 0x00;
LATB = 0b00010000;
LATC = 0x00;
/**
TRISx registers
*/
TRISE = 0x00;
TRISA = 0x08;
TRISB = 0x01;
TRISC = 0b00010000;
TRISD = 0xEF;
PORTC = 0b00010010 ;
/**
ANSELx registers
*/
ANCON0 = 0x00;
ANCON1 = 0x00;
/**
WPUx registers
*/
WPUB = 0x00;
INTCON2bits.nRBPU = 1;
}
// SPI
void SPI_Initialize(void)
{
// SMP Middle; CKE Idle to Active;
SSPSTAT = 0b00000000;
// SSPEN enabled; WCOL no_collision; CKP Idle:High, Active:Low; SSPM FOSC/4; SSPOV no_overflow;
SSPCON1 = 0b00111010;
// SSPADD 0;
SSPADD = 0x00;
ADCON0 = 0 ;
ADCON1 = 0x0F ; //Makes all I/O digital
SCK_TRIS ;
SDO_TRIS ;
SDI_TRIS ;
QTA_SS_TRIS ;
QTB_SS_TRIS ;
QTA_DRDY_TRIS ;
QTB_DRDY_TRIS ;
}
signed char WriteSPI( unsigned char data_out )
{
unsigned char TempVar;
TempVar = SSPBUF; // Clears BF
PIR1bits.SSPIF = 0; // Clear interrupt flag
SSPCON1bits.WCOL = 0; //Clear any previous write collision
SSPBUF = data_out; // write byte to SSPBUF register
if ( SSPCON1 & 0x80 ) // test if write collision occurred
return ( -1 ); // if WCOL bit is set return negative #
else
while( !PIR1bits.SSPIF ); // wait until bus cycle complete
return ( 0 ); // if WCOL bit is not set return non-negative#
}
unsigned char ReadSPI( void )
{
unsigned char TempVar;
TempVar = SSPBUF; // Clear BF
PIR1bits.SSPIF = 0; // Clear interrupt flag
SSPBUF = 0x00; // initiate bus cycle
while(!PIR1bits.SSPIF); // wait until cycle complete
return ( SSPBUF ); // return with byte read
}
unsigned char DataRdySPI( void )
{
if ( SSPSTATbits.BF )
return ( +1 ); // data in SSPBUF register
else
return ( 0 ); // no data in SSPBUF register
}
// SOFTWARE EUART
void out_char(char character, char bit_order){
uint8_t i = 0;
RSOUT = 1 ; // MSB
__delay_ms(1);
RSOUT = 0 ; // START
__delay_us(100);
for (i = 8; i>0; --i){
if (bit_order){ // Bit order determines how you will put the bits, from left to right (MSB) or right to left (LSB)
RSOUT = (character & 0x80) ? 1:0; // in MSB you compare the left-most bit doing an AND with 0x80, and put 1 if true, 0 elsewhere.
character <<= 1; // Shift the character to the left, discrading the bit just sent
} else {
RSOUT = (character & 0x01); // in LSB you compare the right-most bit doing an AND with 0x01, and put 1 if true, 0 else.
character >>= 1; // Shift the character to the right, discrading the bit just sent
}
__delay_us(100);
}
RSOUT = 1 ; // STOP
}
void out_str(char * string, uint8_t len, char bit_order){
uint8_t i = 0;
for (i = 0; i< len; i++){
out_char(string[i], bit_order);
}
}
void SYSTEM_Initialize(void)
{
PIN_MANAGER_Initialize() ;
SPI_Initialize() ;
}
void main(void)
{
SYSTEM_Initialize() ;
while (1)
{
QTB_SS_LAT_LOW ; // Transmit data
char temp ;
WriteSPI(0x0F) ; // Send a byte
while(!DataRdySPI()) ; // wait for a data to arrive
temp = ReadSPI(); // Read a byte from the
QTB_SS_LAT_HIGH ; // Stop transmitting data
__delay_us(100) ;
}
}
No. Do not just write a bunch of code, then see what it does. That kind of shotgun (or, if you prefer, spaghetti-to-the-wall) approach is a waste of effort.
First, drop all those macros. Instead, write comments that describe the purpose of each chunk of code, like the first three assignments in your SPI_Initialize() function.
Next, convert your specification to pseudocode. The format does not matter much, just use something that lets you keep your mind focused on what the purpose is, rather than on the details on how to do it.
The datasheet says that with SPI, there are three outputs from the PIC (^SS, SCK, MOSI on the QT1481), and two inputs (^DRDY and MISO on the QT1481). I'll use those names for the data lines, and for their respective I/O pin names on the PIC.
The setup phase on the PIC should be simple:
Make ^DRDY an input
Make ^SS an output, set it HIGH
Make SCK an output, set it LOW
Make MOSI an output, set it LOW
Make MISO an input
Set up SPI using SCK, MOSI, MISO
Each transfer is a bidirectional one. Whenever you send data, you also receive data. The zero command is reserved for receiving multiple data, says the datasheet. So, you only need a function that sends a byte, and at the same time receives a byte:
Function SPITransfer(command):
    Make sure at least 0.1ms has passed since the previous transfer.
    Do:
        Nothing
    While (^DRDY is LOW)
    Set ^SS LOW
    response = Transfer(command)
    Set ^SS HIGH
    Return response
End Function
As I understand it, for PICs with properly initialized hardware SPI, the response = Transfer(command) line is, in C:
SSPBUF = command;
while (!DataRdySPI())
;
response = SSPBUF;
You can also bit-bang it, in which case it is (in pseudocode):
response = 0
For bit = 7 down to 0, inclusive:
    If (command & 128):
        Set MOSI high
    Else:
        Set MOSI low
    End If
    Set SCK low
    Sleep for a half period
    command = command / 2
    response = response * 2
    If MISO high:
        response = response + 1
    End If
    Set SCK high
    Sleep for a half period
End For
but obviously the hardware SPI approach is better.
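As a sketch only, an SPITransfer() in C built from the pseudocode above might look like this; it reuses the question's pin assignments for the "B" device (DRDY on RD2, ^SS on LATE2) and the DataRdySPI() helper, and the 100 µs wait is the S5 spacing from the datasheet excerpt:
unsigned char SPITransfer(unsigned char command)
{
    unsigned char response;

    __delay_us(100);             /* 1. at least 100 us since the previous transfer */

    while (!PORTDbits.RD2)       /* 2. wait until DRDY has gone high again         */
        ;

    LATEbits.LATE2 = 0;          /* select the QT1481 (^SS low)                    */

    SSPBUF = command;            /* 3. full-duplex transfer: send one byte...      */
    while (!DataRdySPI())        /*    ...wait for the exchange to complete...     */
        ;
    response = SSPBUF;           /*    ...and pick up the byte clocked back        */

    LATEbits.LATE2 = 1;          /* deselect (^SS high)                            */

    return response;
}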
(When you get this working, you can use the hardware SPI without a wait loop, driven from a timer interrupt, making the communications essentially transparent to the "main operation" of the PIC microcontroller. That requires a slightly different approach, with command and response queues (of a few bytes), but will make it much easier for the PIC to do actual work other than just scanning the QT1481.)
After a reset, you essentially send 0x0F until you get 0xF0 back:
while (SPITransfer(0x0F) != 0xF0)
;
At this point, you have the steps you need to implement in C. OP also has the hardware (an oscilloscope) to verify their code works.