I have a timer which, at each interrupt, first stops itself and then sets its own next delay:
HAL_TIM_Base_Stop_IT(&htim);
// do stuff
__HAL_TIM_SET_PRESCALER(&htim, new_p);
__HAL_TIM_SET_AUTORELOAD(&htim, new_a);
__HAL_TIM_CLEAR_FLAG(&htim, TIM_FLAG_UPDATE);
HAL_TIM_Base_Start_IT(&htim);
Everything works well. But if I stop the timer before it overflows and interrupts, and then load it with new values, it still finishes its previous delay. For example, if I stop it one second into a 15-second delay and load it with values for a 1-second delay, then after I start it again it takes ~13 seconds to interrupt instead of 1 second.
A new prescaler value written to the PSC register is loaded into the active prescaler register only at the next update event. If you don't want to wait for the next counter overflow, generate a software update event by setting the UG bit in the TIMx_EGR register.
A new auto-reload value is loaded into the active auto-reload register immediately, unless the "auto-reload preload" option is enabled via the ARPE bit in the TIMx_CR1 register. In that case the active auto-reload register is also loaded only on an update event.
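For example, building on the code from the question, one way to force the new values to take effect immediately is a software update event. This is a sketch assuming a HAL-based project; HAL_TIM_GenerateEvent simply sets the UG bit for you:
HAL_TIM_Base_Stop_IT(&htim);
// do stuff
__HAL_TIM_SET_PRESCALER(&htim, new_p);
__HAL_TIM_SET_AUTORELOAD(&htim, new_a);
HAL_TIM_GenerateEvent(&htim, TIM_EVENTSOURCE_UPDATE);   // UG bit: loads PSC/ARR into the active registers and reinitializes the counter
__HAL_TIM_CLEAR_FLAG(&htim, TIM_FLAG_UPDATE);           // UG also sets UIF, so clear it to avoid an immediate spurious interrupt
HAL_TIM_Base_Start_IT(&htim);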
I'm struggling to set up an STM32-F429ZI MCU (Nucleo-144 board) to generate the following PWM pattern:
First channel with a variable frequency and 50% duty cycle - at least I've got this working.
Second channel giving a pulse with every 20th pulse of channel 1.
This is not primarily a coding problem but rather a problem of understanding the timer settings, I guess (working with STM32CubeIDE). I would bet there is a really simple solution...
I think there is no way to do this with only one timer?
I'd be glad for any suggestion...
I already tried using TIM2 as the clock source for TIM3 and TIM4, but couldn't manage to make them synchronize.
I'm not sure whether it could be done with two-timer synchronization (the pitfall is the variable frequency of the first timer). But you can use a single timer plus DMA to rewrite the configuration of the second channel:
Create an in-memory array of 20 values, where one value enables the second channel and the other 19 disable it.
Set up the DMA to trigger on the timer update event and to move data from the array to the CCMRx (or CCRx) register. Of course, the DMA should be in circular mode.
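As a rough sketch of that idea, assuming a CubeMX/HAL project where the DMA stream for TIM channel 2 is configured in circular mode. This variant uses the channel's own compare DMA request, which HAL_TIM_PWM_Start_DMA sets up out of the box, rather than the update request; htim2 and the pulse width are placeholders:
// 20-entry pattern for CCR2: one non-zero entry gives a pulse, the other 19 keep the output low.
// With the DMA stream in circular mode the pattern repeats, so channel 2 pulses once per 20 periods of channel 1.
static uint32_t ccr2_pattern[20];                      // zero-initialized: channel 2 stays low by default

ccr2_pattern[0] = (htim2.Init.Period + 1u) / 2u;       // single "on" period, roughly 50% wide (placeholder)

HAL_TIM_PWM_Start(&htim2, TIM_CHANNEL_1);              // channel 1: ordinary 50% duty PWM
HAL_TIM_PWM_Start_DMA(&htim2, TIM_CHANNEL_2,           // channel 2: CCR2 reloaded by DMA once per period
                      ccr2_pattern, 20);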
Has anyone heard of a hardware timer which can count in steps other than one per timer tick?
Normally a timer of a µC counts up or down by one, but I have a challenge where I need to add, e.g., 500 on each timer tick.
There are multiple options for your question. Depending on your microcontroller and timer you could:
Use the timer's interrupt to manually increase a variable by a set amount, 500 in your case (see the sketch below).
Change the timer prescaler so that instead of ticking 500 times in the expected period, the timer only triggers once during that period.
I personally don't know of a timer with a variable increment, but that doesn't mean it doesn't exist. Creating such a timer in VHDL or Verilog may be an option.
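As a minimal sketch of the first option, assuming a generic timer interrupt; timer_isr, the flag-clearing step and STEP are illustrative names, not tied to any particular vendor HAL:
#include <stdint.h>

#define STEP 500u                       /* amount added per hardware timer tick */

static volatile uint32_t soft_counter;  /* software counter that "counts by 500" */

void timer_isr(void)                    /* hook this to the timer's overflow/update interrupt */
{
    soft_counter += STEP;
    /* clear the timer's interrupt flag here, as required by your MCU */
}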
I came across this code by Ganssle regarding switch debouncing. The code seems pretty efficient, and the few questions I have may be very obvious, but I would appreciate clarification.
Why does he check 10 msec for button press and 100 msec for button release? Can't he just check 10 msec for both press and release?
Is polling this function every 5 msec from main the most efficient way to execute it, or should I check for an interrupt on the pin, and when there is an interrupt, change the pin to a GPIO input, go into the polling routine, and after we deduce the value, switch the pin back to interrupt mode?
#define CHECK_MSEC 5 // Read hardware every 5 msec
#define PRESS_MSEC 10 // Stable time before registering pressed
#define RELEASE_MSEC 100 // Stable time before registering released
// This function reads the key state from the hardware.
extern bool_t RawKeyPressed();
// This holds the debounced state of the key.
bool_t DebouncedKeyPress = false;
// Service routine called every CHECK_MSEC to
// debounce both edges
void DebounceSwitch1(bool_t *Key_changed, bool_t *Key_pressed)
{
    static uint8_t Count = RELEASE_MSEC / CHECK_MSEC;
    bool_t RawState;

    *Key_changed = false;
    *Key_pressed = DebouncedKeyPress;
    RawState = RawKeyPressed();

    if (RawState == DebouncedKeyPress) {
        // Set the timer which will allow a change from the current state.
        if (DebouncedKeyPress) Count = RELEASE_MSEC / CHECK_MSEC;
        else                   Count = PRESS_MSEC / CHECK_MSEC;
    } else {
        // Key has changed - wait for new state to become stable.
        if (--Count == 0) {
            // Timer expired - accept the change.
            DebouncedKeyPress = RawState;
            *Key_changed = true;
            *Key_pressed = DebouncedKeyPress;
            // And reset the timer.
            if (DebouncedKeyPress) Count = RELEASE_MSEC / CHECK_MSEC;
            else                   Count = PRESS_MSEC / CHECK_MSEC;
        }
    }
}
Why does he check 10 msec for button press and 100 msec for button release?
As the blog post says, "Respond instantly to user input." and "A 100ms delay is quite noticeable".
So, the main reason seems to be to emphasize that the make-debounce should be kept short so that the make is registered "immediately" by human sense, and that the break debounce is less time sensitive.
This is also supported by a paragraph near the end of the post: "As I described in the April issue, most switches seem to exhibit bounce rates under 10ms. Coupled with my observation that a 50ms response seems instantaneous, it's reasonable to pick a debounce period in the 20 to 50ms range."
In other words, the code in the example is much more important than the example values; the proper values depend on the switches used, and you're supposed to decide those yourself, based on the particulars of your specific use case.
Can't he just check 10 msec for press and release?
Sure, why not? As he wrote, it should work, even though he wrote (as quoted above) that he prefers a bit longer debounce periods (20 to 50 ms).
Is polling this function every 5 msec from main the most efficient way to execute it
No. As the author wrote, "All of these algorithms assume a timer or other periodic call that invokes the debouncer." In other words, this is just one way to implement software debouncing, and the shown examples are based on a regular timer interrupt, that's all.
Also, there is nothing magical about the 5 ms; as the author says, "For quick response and relatively low computational overhead I prefer a tick rate of a handful of milliseconds. One to five milliseconds is ideal."
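For illustration, here is a minimal sketch of one way to drive the routine from the main loop, assuming a hypothetical volatile tick flag set by a 5 ms timer interrupt; the flag name and main_loop are not from the article:
// Hypothetical 5 ms tick flag, set from a timer ISR elsewhere in the project.
extern volatile bool_t tick_5ms;

void main_loop(void)
{
    bool_t changed, pressed;

    for (;;) {
        if (tick_5ms) {
            tick_5ms = false;
            DebounceSwitch1(&changed, &pressed);   // runs once every CHECK_MSEC (5 ms)
            if (changed && pressed) {
                // handle a new key press here
            }
        }
        // other background work goes here
    }
}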
or should I check for an interrupt on the pin, and when there is an interrupt, change the pin to a GPIO input, go into the polling routine, and after we deduce the value, switch the pin back to interrupt mode?
If you implement that in code, you'll find that it is rather nasty to have an interrupt that blocks the normal running of the code for 10 - 50 ms at a time. It is okay if checking the input pin state is the only thing being done, but if the hardware does anything else, like update a display or flicker some blinkenlights, your debouncing routine in the interrupt handler will cause noticeable jitter/stutter. In other words, what you suggest is not a practical implementation.
The periodic-timer-interrupt-based software debouncing routines (shown in the original blog post and elsewhere) take only a very short amount of time, just a couple of dozen cycles or so, and do not block other code for any significant amount of time. This is simple and practical.
You can combine a periodic timer interrupt and an input pin (state change) interrupt, but since the overhead of many of the timer-interrupt-only -based software debounces is tiny, it typically is not worth the effort trying to combine the two -- the code gets very, very complicated, and complicated code (especially on an embedded device) tends to be hard/expensive to maintain.
The only case I can think of (but I'm only a hobbyist, not an EE by any means!) is if you wanted to minimize power use for e.g. battery powered operation, and used the input pin interrupt to bring the device to partial or full power mode from sleep, or similar.
(Actually, if you also have a millisecond or sub-millisecond counter (not necessarily based on an interrupt, but possibly a cycle counter or similar), you can use the input pin interrupt and the cycle counter to update the input state on the first change, then desensitize it for a specific duration afterwards, by storing the cycle counter value at the state change. You do need to handle counter overflow, though, to avoid the situation where a long ago event seems to have happened just a short time ago, due to counter overflowing.)
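A rough sketch of that last idea, with millis(), the pin-change ISR hook and DESENSITIZE_MS as hypothetical names; unsigned subtraction is used so that the elapsed-time comparison survives counter wraparound:
#include <stdint.h>
#include <stdbool.h>

#define DESENSITIZE_MS 20u             /* ignore further edges for this long (placeholder) */

extern uint32_t millis(void);          /* hypothetical free-running millisecond counter */

static volatile bool     key_state;    /* debounced state, updated on the first edge */
static volatile uint32_t last_edge_ms;

void pin_change_isr(bool raw_level)    /* called from the pin-change interrupt */
{
    uint32_t now = millis();

    /* Unsigned subtraction wraps, so the elapsed-time test stays valid across overflow. */
    if ((uint32_t)(now - last_edge_ms) >= DESENSITIZE_MS && raw_level != key_state) {
        key_state    = raw_level;      /* accept the change immediately */
        last_edge_ms = now;            /* and ignore the bounce that follows */
    }
}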
I found Lundin's answer quite informative, and decided to edit my answer to show my own suggestion for software debouncing. This might be especially interesting if you have very limited RAM, but lots of buttons multiplexed, and you want to be able to respond to key presses and releases with minimum delay.
Do note that I do not wish to imply this is "best" in any sense of the word; I only want to show you one approach I haven't seen used often, but which might have some useful properties in some use cases. Here, too, the number of scan cycles (milliseconds) during which input changes are ignored (10 for make/off-to-ON, 10 for break/on-to-OFF) are just example values; use an oscilloscope or trial and error to find the best values for your use case. If this is an approach you find more suitable to your use case than the other myriad alternatives, that is.
The idea is simple: use a single byte per button to record the state, with the least significant bit describing the state, and the seven other bits being the desensitivity (debounce duration) counter. Whenever a state change occurs, the next change is only considered a number of scan cycles later.
This has the benefit of responding to changes immediately. It also allows different make-debounce and break-debounce durations (during which the pin state is not checked).
The downside is that if your switches/inputs have any glitches (misreadings outside the debounce duration), they show up as clear make/break events.
First, you define the number of scans the inputs are desensitized after a break, and after a make. These range from 0 to 127, inclusive. The exact values you use depend entirely on your use case; these are just placeholders.
#define ON_ATLEAST 10 /* 0 to 127, inclusive */
#define OFF_ATLEAST 10 /* 0 to 127, inclusive */
For each button, you have one byte of state, variable state below; initialized to 0. Let's say (PORT & BIT) is the expression you use to test that particular input pin, evaluating to true (nonzero) for ON, and false (zero) for OFF. During each scan (in your timer interrupt), you do
if (state > 1)
    state -= 2;
else if ( (!(PORT & BIT)) != (!state) ) {
    if (state)
        state = OFF_ATLEAST*2 + 0;
    else
        state = ON_ATLEAST*2 + 1;
}
At any point, you can test the button state using (state & 1). It will be 0 for OFF, and 1 for ON. Furthermore, if (state > 1), then this button was recently turned ON (if (state & 1) is 1) or OFF (if (state & 1) is 0), and is therefore not yet sensitive to changes in the input pin state.
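For completeness, here is one possible way to scale this to several buttons; NUM_BUTTONS, read_raw_button() and the scan_buttons() wrapper are illustrative names introduced here, not part of the description above:
#define NUM_BUTTONS 8

extern int read_raw_button(unsigned char i);   /* nonzero = pin reads ON, like (PORT & BIT) */

static unsigned char state[NUM_BUTTONS];       /* one byte per button, all initialized to 0 */

void scan_buttons(void)                        /* call once per scan cycle (timer interrupt) */
{
    unsigned char i;

    for (i = 0; i < NUM_BUTTONS; i++) {
        if (state[i] > 1)
            state[i] -= 2;                     /* still desensitized: just count down */
        else if ((!read_raw_button(i)) != (!state[i])) {
            if (state[i])
                state[i] = OFF_ATLEAST*2 + 0;  /* went OFF: LSB 0, desensitize for OFF_ATLEAST scans */
            else
                state[i] = ON_ATLEAST*2 + 1;   /* went ON: LSB 1, desensitize for ON_ATLEAST scans */
        }
    }
}

/* Elsewhere: (state[i] & 1) gives the debounced state of button i. */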
In addition to the accepted answer, if you just wish to poll a switch from somewhere every n ms, there is no need for all of the obfuscation and complexity from that article. Simply do this:
static bool prev=false;
...
/*** execute every n ms ***/
bool btn_pressed = (PORT & button_mask) != 0;
bool reliable = btn_pressed==prev;
prev = btn_pressed;
if (!reliable)
{
    btn_pressed = false; // btn_pressed is not yet reliable, treat as not pressed
}
// <-- here btn_pressed contains the state of the switch, do something with it
This is the simplest way to de-bounce a switch. For mission-critical applications, you can use the very same code but add a simple median filter for the 3 or 5 last samples.
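For example, a minimal sketch of such a filter on top of the snippet above, taking the majority (median) of the last three raw samples; the s0/s1/s2 names are introduced here and not part of the original code:
static bool s0 = false, s1 = false, s2 = false; // last three raw samples
static bool prev = false;
...
/*** execute every n ms ***/
s2 = s1;
s1 = s0;
s0 = (PORT & button_mask) != 0;
bool filtered = (s0 && s1) || (s0 && s2) || (s1 && s2); // median (majority vote) of the last 3 samples
bool reliable = filtered == prev;
prev = filtered;
bool btn_pressed = reliable ? filtered : false; // not yet reliable, treat as not pressed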
As noted in the article, the electro-mechanical bounce of switches is most often less than 10ms. You can easily measure the bouncing with an oscilloscope, by connecting the switch between any DC supply and ground (in series with a current-limiting resistor, preferably).
I'm using C on a PIC18F to produce tones such that each of them plays at a certain time interval. I used PWM to produce a tone, but I don't know how to create the intervals. Here is my attempt.
#pragma code //
void main (void)
{
    int i = 0;

    // set internal oscillator to 1MHz
    //OSCCON = 0b10110110;    // IRCFx = 101
    //OSCTUNEbits.PLLEN = 0;  // PLL disabled

    TRISDbits.TRISD7 = 0;
    T2CON = 0b00000101;       // timer2 on
    PR2 = 240;
    CCPR1L = 0x78;
    CCP1CON = 0b01001100;
    LATDbits.LATD7 = 0;
    Delay1KTCYx(1000);
    while (1);
}
When I'm doing embedded programming, I find it extremely useful to add comments explaining exactly what I'm intending when I'm setting configuration registers. That way I don't have to go back to the data sheets to figure out what 0b01001010 does when I'm trying to grok the code the next time I have to change it. (Just be sure to keep the comments in sync with the changes.)
From what I can decode, it looks like you've got the PWM registers set up, but no way to change the frequency at your desired intervals. There are a few ways to do it, here are 2 ideas:
You could read a timer on startup, add the desired interval to get a target time, and poll the timer in the while loop. When the timer hits the target, set a new PWM duty cycle, and add the next interval to your target time (see the sketch after these two ideas). This will work fine, until you need to start doing other things in the background loop.
You could set timer0's count to 0xFFFF-interval, and set it to interrupt on rollover. In the ISR, set the new PWM duty cycle, and reset timer0 count to the next interval.
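A rough sketch of the first idea, assuming a hypothetical read_timer() that returns the free-running 16-bit timer value, a set_new_tone() placeholder for the PWM update, and 16-bit unsigned int arithmetic as on typical PIC18 compilers:
#define TONE_INTERVAL_TICKS 10000u          /* placeholder: desired interval in timer ticks */

unsigned int start = read_timer();

while (1)
{
    /* Elapsed-time test; unsigned subtraction handles the 16-bit timer wrapping around. */
    if ((unsigned int)(read_timer() - start) >= TONE_INTERVAL_TICKS)
    {
        set_new_tone();                     /* placeholder: write the new PR2/CCPR1L values here */
        start += TONE_INTERVAL_TICKS;       /* schedule the next update */
    }
}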
One common way of controlling timing in embedded processes looks like this:
volatile int flag = 0;   // volatile: shared between the ISR and the main loop

void main()
{
    setup_interrupt();   // schedule interrupt for desired time
    while (1)
    {
        if (flag)
        {
            update_something();
            flag = 0;
        }
    }
}
Where does flag get set? In the interrupt handler:
void InterruptHandler()
{
    flag = 1;
    acknowledge_interrupt_reg = 0;
}
You've got all the pieces in your example, you just need to put them together in the right places. In your case, update_something() would update the PWM. The logic would look like: "If it's on, turn it off; else turn it on. Update the tone (duty cycle) if desired"
There should be no need for additional delays or pauses in the main while loop. The goal is that it just runs over and over again, waiting for something to do. If the program needs to do something else at a different rate, you can add another flag, which is triggered completely independently, and the timing of the two tasks won't interfere with each other.
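As one possible shape of update_something() for stepping through different notes; the note table values below are placeholders, and the real PR2 values have to be computed from the oscillator frequency and Timer2 prescaler:
// Hypothetical note table: one PR2 value per tone (placeholders, not calculated values).
static const unsigned char note_pr2[] = { 240, 214, 190, 180 };
static unsigned char note_index = 0;

void update_something(void)
{
    PR2 = note_pr2[note_index];           // new PWM period -> new tone frequency
    CCPR1L = note_pr2[note_index] >> 1;   // keep roughly 50% duty cycle, as in the question
    note_index++;
    if (note_index >= sizeof note_pr2)    // wrap around to the first note
        note_index = 0;
}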
EDIT:
I'm now confused about what you are trying to accomplish. Do you want a series of pulses of the same tone (on-off-on-off)? Or do you want a series of different notes without pauses (do-re-me-fa-...)? I had been assuming the latter, but now I'm not sure.
After seeing your updated code, I'm not sure exactly how your system is configured, so I'm just going to ask some questions I hope are helpful.
Is the PWM part working? Do you get the initial tone? I'm assuming yes.
Does your hardware have some sort of clock pulse hooked up to the RA4/T0CKI pin? If not, you need T0 to be clock mode, not counter mode.
Is the interrupt being called? You are setting INT0IE, which enables the external interrupt, not the timer interrupt.
What interval do you want between tone updates? Right now you are getting 0xFFFF / (clock_freq / 8). You need to set the TMR0H/L registers if you want something different.
What's on LATD7? and why are you clearing it? Are you saying it enables PWM output?
Why the call to Delay()? The timer itself should be providing the needed delay. There seems to be a disconnect about how to do timing. I'll expand my other answer
Where are you trying to change the PWM frequency? Don't you need to write to PR2? You will need to give it a different value for every different tone.
"Halting build on first failure as requested."
This just says you have some syntax error; without seeing the actual error report, we can't really help.
In Windows you can use the Beep function in kernel32:
[DllImport("kernel32.dll")]
private static extern bool Beep(int frequency, int duration);