I have been developing control software in C. I have decided to spend some time developing general-purpose function blocks that I can reuse in programs I will write in the future (a kind of library). From the application point of view, my software is divided into two main parts. One part is the control part (PI controller, modulator, phase-locked loop, etc.) and the second is the logic part (the application's finite state machine, etc.). The logic part works with logic signals, which are processed by software implementations of logic gates and flip-flops.
My goal is to also implement a software version of an oscillator, i.e. a function block which produces oscillations between 0 and 1 at its output with a prescribed period and duty cycle. In principle I can have several such oscillators in one program, so I have decided to implement the oscillator in the following manner. The oscillation generation is done by a function, and each instance of the oscillator is represented by an instance of a previously defined structure. The main items in the structure are the oscillation period, the duty cycle and the oscillator execution period, i.e. the execution period of the RTOS task in which the instance of the oscillator is placed. Based on the oscillation period and the task execution period, the oscillator function produces the oscillations:
void OSC(uint32_t output, Osc_t *p){
    // p->T is the task execution period in ms
    // p->period is the desired oscillation period in ms
    // p->counter is the internal state of the oscillator
    p->delay_count = (uint16_t)((p->period)/(2*p->T));
    if(++(p->counter) >= p->delay_count){
        p->counter = 0;
        NegLogicSignal(output);
    }
}
Below is the structure which contains the state of an individual oscillator instance:
typedef struct{
    float period;          // period of the oscillations (ms)
    float T;               // function execution period (ms)
    uint16_t counter;      // current count of execution periods (internal state)
    uint16_t delay_count;  // number of execution periods corresponding to half a period
}Osc_t;
and the oscillator function is used as follows:
Debug_LED_Oscillator_state.T = 1; // T = 1 ms
Debug_LED_Oscillator_state.period = 500; // period 0.5 s = 500 ms
OSC(LDbgLedOsc, &Debug_LED_Oscillator_state);
My problem is that I had to place one of my oscillators in the fastest task (the task with a 1 ms execution period), because I wasn't able to achieve the 500 ms period (250 ms half period) in the other tasks: their execution periods are unsuitable for 500 ms (20 ms and 100 ms, i.e. 250/20 = 12.5 and 250/100 = 2.5, neither of which is an integer). The problem is that the rest of my application logic lives in another task.
The intended use is to produce a logic signal for an LED blinking pattern. The trouble is that I had to move the oscillator to a logically different task only because I cannot achieve the desired timing accuracy in the logically appropriate task (because of the non-integer quotient half_period/execution_period). I am therefore thinking about a different implementation which would let me place the oscillator in the logically appropriate task and still achieve the desired timing accuracy.
I have thought of the following solution. I will define a set of global variables (uint16_t Timer_1ms, uint16_t Timer_5ms, etc.). These global variables will be incremented in the highest-priority task (in my case Task_1ms). I will then redefine the OSC function:
void OSC(uint32_t output, Osc_t *p){
    float act_time;

    taskENTER_CRITICAL();
    switch(p->timer_type){
    case TMR_1MS:
        act_time = Timer_1ms;
        break;
    case TMR_5MS:
        act_time = Timer_5ms;
        break;
    }
    taskEXIT_CRITICAL();

    if(p->init){
        SetLogicSignal(output);
        switch(p->timer_type){
        case TMR_1MS:
            p->delta = ((p->period)/(2*1*p->T));
            break;
        case TMR_5MS:
            p->delta = ((p->period)/(2*5*p->T));
            break;
        }
        p->stop_time = (act_time + p->delta);
        p->init = FALSE;
    }

    if(act_time >= (p->stop_time)){
        NegLogicSignal(output);
        p->stop_time = (act_time + p->delta);
    }
}
and the oscillator structure is:
// oscillator state
typedef struct{
    float period;        // period of the oscillations (ms)
    float T;             // function execution period (ms)
    float delta;         // half period in timer counts
    float stop_time;     // timer count at which to negate the output
    BOOL init;           // force initialization of the timer
    timer_e timer_type;  // which global timer variable to use
}Osc_t;
I have tried this solution and it does not seem to work. Does anybody have an idea why? Thanks in advance for any suggestions.
RTOS threads are most useful for tasks with soft deadlines. A soft deadline means that if execution is delayed, the system will not fail. In your case the oscillator has a hard deadline: for example, if the duty cycle of a PWM signal is not precise and stable, a motor will not keep the desired constant angular velocity. In this case you have to use one of the available Timer/Counter peripherals of the MCU.
You can set a timer to interrupt the CPU at a given interval and toggle the output in the ISR (Interrupt Service Routine). With this approach the burden of time-keeping is moved from the CPU to the timer peripheral, allowing both a precise oscillating output and smooth execution of the other tasks.
You do not make clear which RTOS or MCU/CPU you are using, but all of them have timer/counter peripherals. If they are not supported directly by the RTOS, look in the CPU vendor's datasheet.
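As a rough illustration of the idea (the timer setup itself is vendor-specific and not shown here; timer_isr is just a placeholder name, while NegLogicSignal and LDbgLedOsc are taken from your own code):

/* Configure a hardware timer (vendor-specific, not shown) to interrupt
   every 250 ms, i.e. every half of the desired 500 ms period. */
void timer_isr(void)
{
    /* clear the timer's interrupt flag here (vendor-specific) */
    NegLogicSignal(LDbgLedOsc);   /* toggle the logic signal: 500 ms period, 50% duty */
}

This keeps the blinking logic out of your RTOS tasks entirely, so the task periods no longer constrain the achievable oscillation periods.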
I'm curious about the delay time mentioned in the title. I toggled an I/O pin when I wrote data into UART->DR, and the delay varies from about 3 microseconds to tens of microseconds.
int main(void)
{
    /* initial code generated by STM32CubeMX */
    HAL_Init();
    SystemClock_Config();
    MX_GPIO_Init();
    MX_USART1_UART_Init();

    while (1)
    {
        HAL_Delay(50);
        if (USART_GetFlagStatus(&huart1, USART_SR_TXE) == SET)
        {
            USART_SendData(&huart1, 'F');
        }
    }
}

void USART_SendData(UART_HandleTypeDef *huart, uint16_t Data)
{
    assert_param(IS_USART_ALL_PERIPH(huart->Instance));
    assert_param(IS_USART_DATA(Data));
    GPIOB->BSRR = GPIO_PIN_1;                                 // tick an I/O pin for debugging
    GPIOB->BSRR = (uint32_t)GPIO_PIN_1 << 16u;                // reset the bit
    huart->Instance->DR = (uint8_t)(Data & (uint8_t)0x00FF);  // send data (write DR)
}
I'm not sure whether the timing jitter is related to the baud rate of 9600 (104 microseconds/bit).
Shouldn't the data be sent immediately when the DR register is written?
And why isn't the delay time always the same (or close to it)?
Shouldn't the data be sent immediately when the DR register is written?
Not necessarily.
You are only showing us high-level language source code.
Have you looked at the actual instruction trace to determine the instruction time between these operations?
How do you ensure that no interrupt is serviced between these operations?
And why isn't the delay time always the same (or close to it)?
Apparently that depends on the design of the UART.
You report that the baud rate is 9600, and (as expected) the intervals for each bit appear to be slightly longer than 100 microseconds.
The fact that the observed latency is less than one bit interval is significant.
The typical UART uses a clock (aka the baud rate generator) that is 16 times faster than the configured baud rate.
This faster-than-necessary clock is needed to oversample the receive signal, which can arrive at any time; it's asynchronous communication, after all.
For the transmit clock, the baud rate generator is divided down to the nominal baud rate.
So for transmission, that clock quantizes in time when each bit (of the frame) will start (and end) its transmission.
Since the write to the UART TxD data register is performed by the CPU, and that operation is not synchronized with the transmit clock, you should therefore expect a random delay of up to one bit interval before the start bit of the frame appears on the wire.
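A quick sanity check of that explanation against your numbers (just arithmetic, nothing measured):

one bit interval = 1 / 9600 baud ≈ 104.2 microseconds
expected start-bit latency = anywhere between 0 and ~104 microseconds

which is consistent with the jitter you observed, from a few microseconds up to tens of microseconds.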
I would like to create a PWM signal, and I want the frequency to be close to 38 kHz. My theoretical calculation for the period is 26.3 microseconds, so I chose 26 microseconds, and I can observe my signal.
But I don't understand how my code works properly :)
(My clock frequency is 1 MHz, so one clock cycle is 1 microsecond.)
if((P1IN & BIT3)!=BIT3) {   // if button is pressed
    for(i=0;i<692;i++){     // pwm signal's duration is 9ms
        P2OUT^=0x01;        // switch from 1 to 0 or vice versa
        __delay_cycles(4);
    }
    P2OUT=0x00;
}
My calculation is:
i < 692, i++, P2OUT ^= 0x01; // 3 cycles total
__delay_cycles(4); // 4 cycles total
so 4 + 3 = 7. But I'm confused, because I think it should be 13, not 7 (half of the 26-microsecond period at 1 cycle per microsecond).
(here is my signal)
https://e2e.ti.com/cfs-file/__key/communityserver-discussions-components-files/166/f0fd36b0_2D00_bebd_2D00_4a31_2D00_b564_2D00_98962cf4749e-_2800_1_2900_.jpg
You cannot calculate cycles based on C or C++ code; you need to check the assembly file(s) generated during compilation. Depending on your compiler (which you did not mention), you can pass compiler switches to leave the generated assembly file(s) in place so you can inspect the generated instructions. Basically, the for loop also contains a jump instruction which may take 2-3 cycles, and you did not account for that.
I also recommend that you check the number of cycles of each instruction in the microcontroller datasheet.
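For example (an assumption, since you did not say which toolchain you use), with GCC for the MSP430 you can keep or inspect the generated assembly like this:

msp430-elf-gcc -S main.c               # writes main.s instead of an object file
msp430-elf-gcc -c -save-temps main.c   # keeps main.i and main.s next to main.o
msp430-elf-objdump -d main.o           # disassembles the compiled object

IAR and the TI compiler have equivalent listing options; check their documentation.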
The posted code (per your calculations) toggles the output every ~7 cycles and does this 692 times, i.e. 346 full periods of the waveform; however, each individual ON pulse lasts only ~7 cycles. Suggestion:
if((P1IN & BIT3)!=BIT3)
{   // if button is pressed
    // start pwm signal
    P2OUT = 0x01;
    for(int i=0; i< (9*1000); i++)  // may need to be adjusted
    {   // so pwm signal's duration is 9ms
        _delay( 1 );
    }
    // stop pwm signal
    P2OUT = 0x00;
    // wait for button to be released
    while( (P1IN & BIT3) != BIT3 ){ ; }
}
I'm not familiar with your microcontroller's PWM details. However, most PWM peripherals have an initialization that sets how fast the PWM timer counts, its start/termination count, whether it repeats, whether the output is a square wave or a step-up or step-down signal, and the percentage of ON vs. OFF time.
The posted code, however, indicates the PWM is implemented on a regular GPIO bit.
The posted code also indicates the intended PWM ON percentage is 50 percent. Is that what you want?
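For illustration, a minimal hardware-PWM sketch for an MSP430G2553 (an assumption, since the exact part isn't stated). It uses Timer_A in up mode with output mode 7 (reset/set) on TA0.1, which on that part is pin P1.2 rather than P2.0, so the pin choice is illustrative only; the 1 MHz SMCLK matches the clock mentioned in the question:

#include <msp430.h>

int main(void)
{
    WDTCTL = WDTPW | WDTHOLD;   // stop the watchdog
    P1DIR |= BIT2;              // P1.2 as output
    P1SEL |= BIT2;              // route the TA0.1 output to P1.2
    TA0CCR0 = 26 - 1;           // period: 26 cycles at 1 MHz ~= 38.5 kHz
    TA0CCR1 = 13;               // 13 cycles ON -> ~50% duty cycle
    TA0CCTL1 = OUTMOD_7;        // reset/set output mode
    TA0CTL = TASSEL_2 | MC_1;   // SMCLK, up mode
    while (1) { }               // the PWM now runs entirely in hardware
}

With this approach the CPU is free; a button press would only have to start and stop the timer rather than bit-bang the waveform.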
I'm trying to generate a note, for example Do; Do's frequency is 523 Hz.
I wrote some code, but it did not work.
SysTick is 8 MHz.
void note1(void){ // Note Do
for (int i = 0; i < 523; i++){
GPIOE->ODR = 0x4000;
delay_ms(1);
GPIOE->ODR = 0x0000;
delay_ms(1);
}
}
How can we solve this problem?
The board is an EasyMx Pro v7.
I'm calling the function like this:
void button_handler(void)
{
note1();
// Clear pending bit depending on which one is pending
if (EXTI->PR & (1 << 0)){
EXTI->PR = (1 << 0);
}
else if (EXTI->PR & (1 << 1)){
EXTI->PR = (1 << 1);
}
}
My reasoning: 523 times sending 1 and 0, with delay_ms(1) = 1 ms for each level;
1000 ms = 1 s.
On an STM32 (which I can see you have) there are timers which can be configured as PWM outputs.
So use a timer: set the period and prescaler values according to the frequency you need, and set the duty cycle on the channel to 50%.
If you need a 523 Hz PWM output, set your timer PWM to 523 Hz using the prescaler and period values:
timer_overflow_frequency = timer_input_clock /
                           (prescaler_value + 1) /
                           (period_value + 1);
Then, for your output channel, set the compare value to half of the timer period value.
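For example (assuming an 8 MHz timer input clock, since you mention an 8 MHz SysTick; check the actual clock feeding the timer on your part):

prescaler_value = 7    -> 8 MHz / (7 + 1)    = 1 MHz timer tick
period_value    = 1911 -> 1 MHz / (1911 + 1) ≈ 523.0 Hz
compare value   = 956  -> roughly 50% duty cycle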
For the Standard Peripheral Library, this tutorial can be used:
https://stm32f4-discovery.net/2014/05/stm32f4-stm32f429-discovery-pwm-tutorial/
Link from unwind for the Cube/HAL approach: https://electronics.stackexchange.com/questions/179546/getting-pwm-to-work-on-stm32f4-using-sts-hal-libraries
You appear to have a fundamental misunderstanding. In your code note1(), the value 523 affects only the duration of the note, not its frequency. With 1 ms high, 1 ms low repeated 523 times you will generate a tone of approximately 500 Hz for approximately 1.05 seconds. I say "approximately" because there will be some small overhead in the loop other than the time delays.
A time delay resolution of 1 ms is insufficient to generate an accurate tone in that manner. To do it the way you have, each delay would need to be 1/2f seconds, so for 523 Hz approximately 956 microseconds. The loop iteration count would need to be f*t, so for, say, 0.25 seconds, 131 iterations.
However, if button_handler() is, as it appears to be, an interrupt handler, you really should not be spending on the order of a second inside an interrupt handler!
In any event, this is an extraordinarily laborious, CPU-intensive and inaccurate method of generating a specific frequency. The STM32 on your board is well endowed with hardware timers with direct GPIO output that will generate the frequency you need accurately with zero software overhead. Even if none of the timers maps to a suitable GPIO output that you need to use, you can still get one to generate an interrupt every 1/2f seconds (i.e. at 2f) and toggle the pin in the interrupt handler. Either way, that leaves the processor free to do useful work while the tone is being output.
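For illustration, a minimal sketch of that second (interrupt) approach, assuming an STM32F4-class part with TIM2 clocked at 8 MHz and the output on GPIOE pin 14 (the pin your own code writes with 0x4000). The register names are the usual CMSIS ones; check the device header and clock tree for your actual part:

#include "stm32f4xx.h"                       // device header (adjust for your part)

void tone_523Hz_start(void)
{
    RCC->APB1ENR |= RCC_APB1ENR_TIM2EN;      // enable the TIM2 clock
    TIM2->PSC  = 8 - 1;                      // 8 MHz / 8 = 1 MHz timer tick
    TIM2->ARR  = 956 - 1;                    // 1 MHz / 956 ~= 1046 Hz = 2 * 523 Hz
    TIM2->DIER |= TIM_DIER_UIE;              // interrupt on update (overflow)
    TIM2->CR1  |= TIM_CR1_CEN;               // start the timer
    NVIC_EnableIRQ(TIM2_IRQn);
}

void TIM2_IRQHandler(void)
{
    if (TIM2->SR & TIM_SR_UIF) {
        TIM2->SR &= ~TIM_SR_UIF;             // clear the update flag
        GPIOE->ODR ^= 0x4000;                // toggle PE14 -> ~523 Hz square wave
    }
}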
I am having a hard time understanding some code I found for using a timer and interrupts on an ARM board I have. The timer basically toggles an LED on and off at every interrupt to make it flash.
void main(void) {
/* Pin direction */
led_init();
/* timer setup */
/* CTRL */
#define COUNT_MODE 1 /* Use rising edge of primary source */
#define PRIME_SRC 0xf /* Peripheral clock with 128 prescale (for 24 MHz = 187500 Hz)*/
#define SEC_SRC 0 /* Don't need this */
#define ONCE 0 /* Keep counting */
#define LEN 1 /* Count until compare then reload with value in LOAD */
#define DIR 0 /* Count up */
#define CO_INIT 0 /* Other counters cannot force a re-initialization of this counter */
#define OUT_MODE 0 /* OFLAG is asserted while counter is active */
*TMR_ENBL = 0; /* TMRS reset to enabled */
*TMR0_SCTRL = 0;
*TMR0_CSCTRL = 0x0040;
*TMR0_LOAD = 0; /* Reload to zero */
*TMR0_COMP_UP = 18750; /* Trigger a reload at the end */
*TMR0_CMPLD1 = 18750; /* Compare one triggered reload level, 10 Hz maybe? */
*TMR0_CNTR = 0; /* Reset count register */
*TMR0_CTRL = (COUNT_MODE<<13) |
(PRIME_SRC<<9) |
(SEC_SRC<<7) |
(ONCE<<6) |
(LEN<<5) |
(DIR<<4) |
(CO_INIT<<3) |
(OUT_MODE);
*TMR_ENBL = 0xf; /* Enable all the timers --- why not? */
led_on();
enable_irq(TMR);
while(1) {
/* Sit here and let the interrupts do the work */
continue;
};
}
Right now the LED flashes at a rate I haven't been able to determine; I'd like it to flash once per second. However, I do not understand the whole compare-and-reload mechanism.
Could somebody explain this code in more detail?
As timers are a vendor- and part-specific feature (not a part of the ARM architecture), I can only give general guidance unless you mention which CPU or microcontroller you are dealing with.
Timers have several features:
A size, for instance 16 bits, which means they can count up or down to/from 65535.
A clock input, given as a clock frequency (perhaps from the CPU clock or an external crystal), and a prescaler which divides that frequency down by some factor (or by 1).
An interrupt on overflow - when the timer wraps back to 0, there is usually an option to trigger an interrupt.
A compare interrupt - when the timer meets a set value it will issue an interrupt.
In your case, I can see that you are using the compare feature of your timer. By determining your timer clock input, and calculating new values for the prescalers and compare register, you should be able to achieve a 1 Hz rate.
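As a concrete check using the numbers in your code (taking the comments at face value): the timer clock is 24 MHz / 128 = 187500 Hz, and the compare value is 18750, which gives 187500 / 18750 = 10 compare interrupts per second; toggling the LED on every interrupt therefore blinks it at 5 Hz. One flash per second needs a toggle every 0.5 s, i.e. 2 interrupts per second, which corresponds to a compare value of 187500 / 2 = 93750 (check that this fits your compare register width, or adjust the prescaler).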
Before trying to understand the code you found, please first understand how a timer peripheral unit works, and then how you can configure its registers to get the desired output.
How does a timer peripheral unit work?
It is a hardware module embedded in the microcontroller alongside the CPU and the other peripherals. All peripheral modules inside the microcontroller are synchronized to a common clock source. With reference to the code, the timer peripheral clock is 24 MHz, which is then prescaled by 128, so the timer runs at 187500 Hz. This frequency depends on the clock configuration and the oscillator.
The timer unit has a counter register which can count up to its bit size, generally 8, 16 or 32 bits. Once you enable counting, this counter counts up or down on the rising edge, the falling edge, or both edges of the clock. You can choose whether to count up (from 0 towards 255, for 8 bits) or down (from 255 towards 0), and on which clock edge to count.
Now, at 187500 Hz, one cycle = 5.333 us. If you count once per cycle (on the rising or falling edge) and, for example, the counter value reaches 100 (counting up), the total time elapsed is 5.333 * 100 = 533.3 us. You then set a compare value for the counter to define the period, which depends on your flash rate. This compare value is compared against the counter value by the timer's comparator, and once they match the timer raises an interrupt (if you have enabled interrupt generation on compare match), in whose handler you can toggle your LED.
I hope this makes clear how a timer works.
In your sample code, the timer is configured to produce a compare match event at a rate of 10 Hz, so the compare value is 187500 / 10 = 18750. For a 1-second rate you would need 187500 / 1 = 187500, provided that value fits in the compare register (for a 16-bit timer it does not, so you would have to increase the prescaler or divide the interrupt rate down in software, as sketched below).
You also have the timer control register TMR0_CTRL, where you configure whether to count up or down, whether to count on falling/rising/both edges, whether to count only once or continuously, and whether to count up to the compare value and then reload or keep counting to the limit. Refer to the microcontroller manual for the details of each bit field.
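A minimal sketch of the software-division route, keeping the 10 Hz compare configuration from the question (the ISR name, the flag-clearing detail and led_off() are assumptions; hook the handler up the same way the existing TMR interrupt is hooked up):

static volatile unsigned int tick_count = 0;   /* counts 10 Hz compare interrupts */
static volatile int led_is_on = 1;             /* led_on() was already called in main */

void tmr0_isr(void)                            /* placeholder name for the TMR0 handler */
{
    /* clear TMR0's compare flag here (register and bit are part-specific) */

    if (++tick_count >= 5) {                   /* 5 ticks at 10 Hz = 0.5 s per toggle = 1 Hz blink */
        tick_count = 0;
        led_is_on = !led_is_on;
        if (led_is_on)
            led_on();
        else
            led_off();                         /* assumed counterpart of led_on() */
    }
}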
I'm using C with the BoostC compiler. I'm worried about how accurate my code is. The configuration below ticks at more or less 1 Hz (tested with an LED, by eye). It uses an external 32 kHz watch crystal for Timer1 on the 16F74.
I'm hoping someone can tell me...
Does my code below need any adjustments? What would be the easiest way of measuring the accuracy to the nearest CPU clock period? Will I need to get dirty with assembly to reliably ensure the accuracy of the 1 Hz signal?
I'm hoping the time taken to execute the timer handler (and the others) doesn't even come into the picture, since the timer is always counting. As long as the handlers never take longer than 1/32 kHz seconds to execute, will the 1 Hz signal essentially have the accuracy of the 32 kHz crystal?
Thanks
#define T1H_DEFAULT 0x80
#define T1L_DEFAULT 0

volatile char T1H = T1H_DEFAULT;
volatile char T1L = T1L_DEFAULT;

void main(void){
    // setup
    tmr1h = T1H;
    tmr1l = T1L;
    t1con = 0b00001111; // -- -- T1CKPS1 T1CKPS0 T1OSCEN NOT_T1SYNC TMR1CS TMR1ON
    // ...
    // do nothing repeatedly while no interrupt
    while(1){}
}

void interrupt(void) {
    // Handle Timer1
    if (test_bit(pir1, TMR1IF) && test_bit(pie1, TMR1IE)){
        // reset timer's 2x8 bit value
        tmr1h = T1H;
        tmr1l = T1L;
        // do things triggered by this time tick
        // reset T1 interrupt flag
        clear_bit(pir1, TMR1IF);
    } else {
        // ... handle other interrupts
    }
}
I can see some improvements...
Your timer re-initialization inside the interrupt isn't accurate.
When you set the timer counter in the interrupt...
tmr1h = T1H;
tmr1l = T1L;
...you overwrite the current value, which isn't good for accuracy.
Just use:
tmr1h = T1H; // tmr1h must still be 0 at this point!
Or, even better, just set the 7th bit of the tmr1h register.
The compiler must compile this statement to a single asm instruction, like...
bsf tmr1h, 7
...to avoid losing data in the tmr1 register, because if it is done with more than one instruction, the hardware can increment the counter value in the middle of the read-modify-write sequence.
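In BoostC that would be something like the following (a sketch; check the generated assembly listing to confirm it really compiles to a single bsf):

set_bit(tmr1h, 7);   // intended to compile to: bsf tmr1h, 7

set_bit is the counterpart of the test_bit/clear_bit macros already used in the question.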