Change timer period while running application STM32F4 [C]

I want to change my timer period while the program is running.
I take different measurements that require different timer periods.
After initialization:
TIM_TimeBaseInitStructure.TIM_Period = period - 1;
TIM_TimeBaseInitStructure.TIM_Prescaler = 8399+1;
TIM_TimeBaseInitStructure.TIM_ClockDivision = TIM_CKD_DIV1;
TIM_TimeBaseInitStructure.TIM_CounterMode = TIM_CounterMode_Up;
TIM_TimeBaseInit(TIM3, &TIM_TimeBaseInitStructure);
In the main function I set: period = 10000;
Then I receive a new value via UART and try to set another one:
arr3[0] = received_str[11];
arr3[1] = received_str[12];
arr3[2] = received_str[13];
arr3[3] = received_str[14];
arr3[4] = received_str[15];
arr3[5] = '\0';
per = atoi(arr3);
period = per;
But the timer period doesn't change. How can I do this?

This is the problem with the HAL libraries: people who use them often have no clue what is going on underneath.
What is the timer period?
It is the combination of the PSC (prescaler) and ARR (auto-reload register). The period is calculated as (ARR + 1) * (PSC + 1) / TimerClockFreq. For example, with an 84 MHz timer clock, PSC = 8399 and ARR = 9999 give (9999 + 1) * (8399 + 1) / 84 MHz = 1 s.
When you change the period while the timer is running, you need to make sure it is done at a safe moment to prevent glitches. The safest moment is when the update (UG) event happens.
You have two ways of achieving this:
The update interrupt. In the interrupt routine, if the ARR or PSC needs to change, update the register there. Bear in mind that the change may only take effect in the next cycle if the registers are shadowed.
The timer's DMA burst mode. It is more complicated to configure, but the hardware takes care of the register update on the selected event. The change is immediate and register shadowing does not affect it. For more details, read the reference manual chapter on the timer DMA burst mode.
If you want to use the more advanced hardware features, forget about the HAL and program the bare registers directly so you have full control.
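For the interrupt option, a minimal bare-register sketch (not the asker's code) might look like the following. It assumes TIM3 as in the question, CMSIS register definitions, that the TIM3 update interrupt is enabled in DIER and the NVIC, and a hypothetical new_period variable written by the UART receive code; adapt the names and checks to your project.

#include "stm32f4xx.h"

volatile uint32_t new_period = 0;        /* hypothetical: set by the UART handler      */

void TIM3_IRQHandler(void)
{
    if (TIM3->SR & TIM_SR_UIF)           /* update event fired?                        */
    {
        TIM3->SR = ~TIM_SR_UIF;          /* clear the update flag                      */
        if (new_period != 0)
        {
            TIM3->ARR = new_period - 1;  /* with ARPE enabled this is latched into the
                                            shadow register at the next update event   */
            new_period = 0;
        }
    }
}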

At run time we can change the timer period by updating the auto-reload register.
I have done this in practice:
TIM5->ARR = Value; // This is for TIM5

Related

Issue with timer Input Capture mode in STM32

I would like to trigger a timer with an external signal which arrives every 1 ms. Then the timer has to count up to 90 µs at every rising edge of the external signal. The question is: can I do that using a general-purpose timer configured for Input Capture? I don't understand which callback to use for this purpose.
I'm using the HAL library and the TIM2 peripheral on an STM32F446 microcontroller.
This is how I configured my timer peripheral:
void TIMER2_Config(void)
{
    TIM_IC_InitTypeDef timer2IC_Config;

    htimer2.Instance = TIM2;
    htimer2.Init.CounterMode = TIM_COUNTERMODE_UP;
    htimer2.Init.Period = 89;    //Fck=50MHz, Timer period = 90us
    htimer2.Init.Prescaler = 49;
    if (HAL_TIM_IC_Init(&htimer2) != HAL_OK)
        Error_handler();

    timer2IC_Config.ICPolarity = TIM_ICPOLARITY_RISING;
    timer2IC_Config.ICPrescaler = TIM_ICPSC_DIV1;
    if (HAL_TIM_IC_ConfigChannel(&htimer2, &timer2IC_Config, TIM_CHANNEL_1) != HAL_OK)
        Error_handler();
}
What you are asking for is well within the capabilities of this peripheral, but remember that the HAL library does not expose the full feature set of the chip. Sometimes you have to access the registers directly (the LL library is another way to do this).
To have the external signal start the timer you need to use trigger mode, not input capture. Input capture means recording the value of a timer that is already running. You need to set the field TIMx_CCMRx_CCxS to 0b11 (3) to make the input a trigger, then set the field TIMx_SMCR_TS to select the channel you are using, and TIMx_SMCR_SMS to 0b110 (6) to select start-on-trigger mode.
Next, set up the prescaler and auto-reload register to count the 90-microsecond delay you want, and set TIMx_CR1_OPM to 1 to stop the counter from wrapping when it reaches the limit.
Next set TIMx_CR2_MMS to 0b010 to output a trigger on the update event.
Finally, you can set the ADCx_CR2_EXTSEL bits to 0b00110 to trigger the ADC on the TIM2_TRGO trigger output.
This is all a bit complicated, but the reference manual is very thorough; read the whole timer chapter and check every field in the register description section. I would recommend not mixing the HAL library with direct register access here, as it will probably interfere with what you are trying to do.
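To make those steps concrete, here is a rough bare-register sketch for TIM2, channel 1 (TI1), on an STM32F446 clocked at 50 MHz. It only illustrates the sequence described above (CC1S, TS, SMS, OPM, MMS); double-check every bit value against the reference manual before relying on it.

#include "stm32f4xx.h"

void TIM2_TriggeredOneShot_Config(void)
{
    RCC->APB1ENR |= RCC_APB1ENR_TIM2EN;   /* enable the TIM2 clock                         */

    TIM2->PSC = 49;                       /* 50 MHz / (49 + 1) = 1 MHz counter tick        */
    TIM2->ARR = 89;                       /* 90 ticks -> 90 us                             */

    TIM2->CCMR1 |= TIM_CCMR1_CC1S;        /* CC1S = 0b11: channel 1 input used as trigger  */
    TIM2->SMCR   = (5U << 4)              /* TS  = 0b101: TI1FP1 is the trigger input      */
                 | (6U << 0);             /* SMS = 0b110: trigger mode, start on the edge  */

    TIM2->CR1 |= TIM_CR1_OPM;             /* one-pulse mode: stop at the update event      */
    TIM2->CR2 |= TIM_CR2_MMS_1;           /* MMS = 0b010: output TRGO on the update event  */
}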

ASF4 Microchip API timer driver reset function

I'm using the ASF4 hal_timer API on an ARM Cortex-M4, and I'm using the timer driver to time a data sequence.
Why does no reset function exist? I'm using the timer in TIMER_TASK_ONE_SHOT mode and want to reset it whenever I need to.
I thought a simple
timer_start(&TIMER_0);
timer_stop(&TIMER_0);
would do the trick, but it does not seem to work.
Is it necessary to re-initialize the timer for each timing event?
I'm probably missing something obvious. Am I approaching this problem incorrectly, and is that the reason a timer_reset() method doesn't exist?
I have no experience with this API, but looking at the documentation it is apparent that a single timer can have multiple tasks with different periods, so resetting TIMER_0 makes little semantic sense; rather, you need to reset the individual timer task attached to the timer, of which there may be more than one.
From the documentation (which is poor and contains errors), and the source code, which is more reliable:
timer_task_instance.time_label = TIMER_0.time ;
where the timer_task_instance is the struct timer_task instance you want to reset. This sets the start time to the current time.
Probably best to wrap that in a function:
// Restart the current interval; returns the interval.
uint32_t timer_restart( struct timer_descriptor* desc, struct timer_task* tsk )
{
    tsk->time_label = desc->time ;
    return tsk->interval ;
}
Then:
timer_restart( &TIMER_0, &timer_task_instance ) ;
Assuming you're using the (edited) example from the ASF4 Reference Manual:
/* TIMER_0 example */
static struct timer_task TIMER_0_task;

static void TIMER_0_task_cb(const struct timer_task *const timer_task)
{
    // task you want to delay using non-existent reset function.
}

void TIMER_0_example(void)
{
    TIMER_0_task.interval = 100;
    TIMER_0_task.cb = TIMER_0_task_cb;
    TIMER_0_task.mode = TIMER_TASK_ONE_SHOT;
    timer_add_task(&TIMER_0, &TIMER_0_task);
    timer_start(&TIMER_0);
}
Instead of resetting, which isn't supported by the API, you could use:
timer_remove_task(&TIMER_0, &TIMER_0_task);
timer_add_task(&TIMER_0, &TIMER_0_task);
which will effectively restart the delay associated with TIMER_0_task.
Under the hood, timer tasks are maintained as an ordered list, in order of when each task will expire, and using the functions provided by the API maintains the list order.
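If you prefer this second approach, a small wrapper (hypothetical name, using only the timer_remove_task/timer_add_task calls shown above) keeps the call sites tidy:

/* Restart a one-shot task by detaching and re-attaching it to its timer. */
static void timer_task_restart(struct timer_descriptor *desc, struct timer_task *task)
{
    timer_remove_task(desc, task);   /* drop the task from the timer's list               */
    timer_add_task(desc, task);      /* re-add it, so its interval starts counting again  */
}

/* Usage with the example objects above: */
timer_task_restart(&TIMER_0, &TIMER_0_task);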

How do I implement a timer which turns a signal on and off (PWG) every few seconds on an ATmega324A microcontroller?

I'm programming an ATmega324A microcontroller and I'm trying to implement a timer (in this case Timer1) which is supposed to make a second LED connected to my board blink.
I also need to know how to identify the pin the LED is attached to.
I've found the data sheet:
http://ww1.microchip.com/downloads/en/DeviceDoc/ATmega164A_PA-324A_PA-644A_PA-1284_P_Data-Sheet-40002070A.pdf
but the details are too technical for me to understand, and I don't know where to start looking or, most importantly, how to get to the result, which is the code itself.
Also, what does the ISR function do?
Below is the current init_timer function for Timer0. Is it possible for me to enable both timers at the same time?
static void init_timer(void)
{
    // Configure Timer0 for CTC mode, 64x prescaler for 1 ms interval
    TCCR0A = _BV(WGM01);
    TCCR0B = _BV(CS01) | _BV(CS00);
    OCR0A = 124;
    TIMSK0 = _BV(OCIE0A);
}
int main(void)
{
    MCUSR = 0;
    wdt_disable();
    init_pins();    // Reset all pins to default state
    init_timer();   // Initialize 1 msec timer interrupt
    configure_as_output(LOAD_ON);
    configure_as_output(LED1);
    configure_as_output(LED2);
    sei();
    .
    .
    .
}
ISR(TIMER0_COMPA_vect)
{
    static uint16_t ms_count = 0;
    ms_count++;                           // milliseconds counter
    if (ms_count == TMP107_POLL_PERIOD)
    {
        tmp107_command();                 // send command to temperature sensor
        toggle(LED1);                     // blink status led
        ms_count = 0;
    }
}
First of all: Stack Overflow is a site for asking questions about source code; it is not a service that delivers solutions. Please take the tour, it will help you get satisfactory answers.
But never mind, since you're new:
For example, you can implement a timer for a pulse width generator in these steps:
Learn to read data sheets. Nobody can relieve you of this burden.
Learn how to use output pins.
Write some tests to make sure you understand output pins.
Select a timer to measure the clock cycles. Apparently you did that already.
Learn to use this timer. Some timers can generate PWM (pulse width modulated) signals in hardware. However, the output pin is likely to be in a fixed location and the range of possible periods may not meet your requirements.
Write some tests to make sure you understand timers and interrupts.
If the required pulse period is too long for the timer, you can add an extra variable to scale down, for example.
Implement the rest of it.
Also, what does the ISR function do?
This function is called "magically" by the hardware when the conditions for the interrupt are met. In the case shown, tmp107_command() and toggle(LED1) are called only once every TMP107_POLL_PERIOD interrupts (i.e. milliseconds).
Is it possible for me to enable both timers at the same time?
Sure.
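For example, here is a minimal sketch of a second, independent timer that blinks LED2 about once a second. It assumes an 8 MHz F_CPU and reuses the toggle()/LED2 helpers from your code; adjust OCR1A for your actual clock.

#include <avr/io.h>
#include <avr/interrupt.h>

static void init_timer1(void)
{
    TCCR1A = 0;                                   /* CTC mode (WGM13:0 = 0100)          */
    TCCR1B = _BV(WGM12) | _BV(CS12) | _BV(CS10);  /* CTC, /1024 prescaler               */
    OCR1A  = 7812;                                /* 8 MHz / 1024 = 7812.5 ticks ~ 1 s  */
    TIMSK1 = _BV(OCIE1A);                         /* enable compare match A interrupt   */
}

ISR(TIMER1_COMPA_vect)
{
    toggle(LED2);                                 /* blink the second LED               */
}

Call init_timer1() next to init_timer() in main(); the two timers run independently.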

Realtime sine tone generation with Core Audio

I want to create a realtime sine generator using Apple's Core Audio framework. I want to do it low level so I can learn and understand the fundamentals.
I know that using PortAudio or JACK would probably be easier, and I will use them at some point, but I would like to get this working first so I can be confident that I understand the fundamentals.
I have literally been searching on this topic for days now, but no one seems to have ever created a realtime wave generator using Core Audio that aims for low latency while using C rather than Swift or Objective-C.
For this I'm using a project I set up a while ago. It was originally designed to be a game, so after the application starts up it enters a run loop. I thought this would fit perfectly, as I can use the main loop to copy samples into the audio buffer and handle rendering and input as well.
So far I get sound. Sometimes it works for a while and then starts to glitch, sometimes it glitches right away.
This is my code. I tried to simplify it and only present the important parts.
I have multiple questions; they are located in the bottom section of this post.
The application's main run loop. This is where it all starts after the window is created and buffers and memory are initialized:
while (OSXIsGameRunning())
{
    OSXProcessPendingMessages(&GameData);
    [GlobalGLContext makeCurrentContext];

    CGRect WindowFrame = [window frame];
    CGRect ContentViewFrame = [[window contentView] frame];
    CGPoint MouseLocationInScreen = [NSEvent mouseLocation];
    BOOL MouseInWindowFlag = NSPointInRect(MouseLocationInScreen, WindowFrame);
    CGPoint MouseLocationInView = {};
    if (MouseInWindowFlag)
    {
        NSRect RectInWindow = [window convertRectFromScreen:NSMakeRect(MouseLocationInScreen.x, MouseLocationInScreen.y, 1, 1)];
        NSPoint PointInWindow = RectInWindow.origin;
        MouseLocationInView = [[window contentView] convertPoint:PointInWindow fromView:nil];
    }
    u32 MouseButtonMask = [NSEvent pressedMouseButtons];

    OSXProcessFrameAndRunGameLogic(&GameData, ContentViewFrame,
                                   MouseInWindowFlag, MouseLocationInView,
                                   MouseButtonMask);

#if ENGINE_USE_VSYNC
    [GlobalGLContext flushBuffer];
#else
    glFlush();
#endif
}
By using VSYNC I can throttle the loop down to 60 FPS. The timing is not super tight, but it is quite steady. I also have some code to throttle it manually using Mach timing, which is even more imprecise; I left it out for readability.
Not using VSYNC, or using Mach timing to get 60 iterations a second, also makes the audio glitch.
Timing log:
CyclesElapsed: 8154360866, TimeElapsed: 0.016624, FPS: 60.155666
CyclesElapsed: 8174382119, TimeElapsed: 0.020021, FPS: 49.946926
CyclesElapsed: 8189041370, TimeElapsed: 0.014659, FPS: 68.216309
CyclesElapsed: 8204363633, TimeElapsed: 0.015322, FPS: 65.264511
CyclesElapsed: 8221230959, TimeElapsed: 0.016867, FPS: 59.286217
CyclesElapsed: 8237971921, TimeElapsed: 0.016741, FPS: 59.733719
CyclesElapsed: 8254861722, TimeElapsed: 0.016890, FPS: 59.207333
CyclesElapsed: 8271667520, TimeElapsed: 0.016806, FPS: 59.503273
CyclesElapsed: 8292434135, TimeElapsed: 0.020767, FPS: 48.154209
What is important here is the function OSXProcessFrameAndRunGameLogic. It is called 60 times a second and it is passed a struct containing basic information like a buffer for rendering, keyboard state and a sound buffer which looks like this:
typedef struct osx_sound_output
{
game_sound_output_buffer SoundBuffer;
u32 SoundBufferSize;
s16* CoreAudioBuffer;
s16* ReadCursor;
s16* WriteCursor;
AudioStreamBasicDescription AudioDescriptor;
AudioUnit AudioUnit;
} osx_sound_output;
Where game_sound_output_buffer is:
typedef struct game_sound_output_buffer
{
real32 tSine;
int SamplesPerSecond;
int SampleCount;
int16 *Samples;
} game_sound_output_buffer;
These get set up before the application enters its run loop.
The size for the SoundBuffer itself is SamplesPerSecond * sizeof(uint16) * 2 where SamplesPerSecond = 48000.
So inside OSXProcessFrameAndRunGameLogic is the sound generation:
void OSXProcessFrameAndRunGameLogic(osx_game_data *GameData, CGRect WindowFrame,
                                    b32 MouseInWindowFlag, CGPoint MouseLocation,
                                    int MouseButtonMask)
{
    GameData->SoundOutput.SoundBuffer.SampleCount = GameData->SoundOutput.SoundBuffer.SamplesPerSecond / GameData->TargetFramesPerSecond;

    // Oszi 1
    OutputTestSineWave(GameData, &GameData->SoundOutput.SoundBuffer, GameData->SynthesizerState.ToneHz);

    int16* CurrentSample = GameData->SoundOutput.SoundBuffer.Samples;
    for (int i = 0; i < GameData->SoundOutput.SoundBuffer.SampleCount; ++i)
    {
        *GameData->SoundOutput.WriteCursor++ = *CurrentSample++;
        *GameData->SoundOutput.WriteCursor++ = *CurrentSample++;

        if ((char*)GameData->SoundOutput.WriteCursor >= ((char*)GameData->SoundOutput.CoreAudioBuffer + GameData->SoundOutput.SoundBufferSize))
        {
            //printf("Write cursor wrapped!\n");
            GameData->SoundOutput.WriteCursor = GameData->SoundOutput.CoreAudioBuffer;
        }
    }
}
Where OutputTestSineWave is the part where the buffer is actually filled with data:
void OutputTestSineWave(osx_game_data *GameData, game_sound_output_buffer *SoundBuffer, int ToneHz)
{
    int16 ToneVolume = 3000;
    int WavePeriod = SoundBuffer->SamplesPerSecond/ToneHz;

    int16 *SampleOut = SoundBuffer->Samples;
    for(int SampleIndex = 0;
        SampleIndex < SoundBuffer->SampleCount;
        ++SampleIndex)
    {
        real32 SineValue = sinf(SoundBuffer->tSine);
        int16 SampleValue = (int16)(SineValue * ToneVolume);
        *SampleOut++ = SampleValue;
        *SampleOut++ = SampleValue;

        SoundBuffer->tSine += Tau32*1.0f/(real32)WavePeriod;
        if(SoundBuffer->tSine > Tau32)
        {
            SoundBuffer->tSine -= Tau32;
        }
    }
}
When the buffers are created at startup, Core Audio is also initialized, which I do like this:
void OSXInitCoreAudio(osx_sound_output* SoundOutput)
{
AudioComponentDescription acd;
acd.componentType = kAudioUnitType_Output;
acd.componentSubType = kAudioUnitSubType_DefaultOutput;
acd.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponent outputComponent = AudioComponentFindNext(NULL, &acd);
AudioComponentInstanceNew(outputComponent, &SoundOutput->AudioUnit);
AudioUnitInitialize(SoundOutput->AudioUnit);
// uint16
//AudioStreamBasicDescription asbd;
SoundOutput->AudioDescriptor.mSampleRate = SoundOutput->SoundBuffer.SamplesPerSecond;
SoundOutput->AudioDescriptor.mFormatID = kAudioFormatLinearPCM;
SoundOutput->AudioDescriptor.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsNonInterleaved | kAudioFormatFlagIsPacked;
SoundOutput->AudioDescriptor.mFramesPerPacket = 1;
SoundOutput->AudioDescriptor.mChannelsPerFrame = 2; // Stereo
SoundOutput->AudioDescriptor.mBitsPerChannel = sizeof(int16) * 8;
SoundOutput->AudioDescriptor.mBytesPerFrame = sizeof(int16); // don't multiply by channel count with non-interleaved!
SoundOutput->AudioDescriptor.mBytesPerPacket = SoundOutput->AudioDescriptor.mFramesPerPacket * SoundOutput->AudioDescriptor.mBytesPerFrame;
AudioUnitSetProperty(SoundOutput->AudioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&SoundOutput->AudioDescriptor,
sizeof(SoundOutput->AudioDescriptor));
AURenderCallbackStruct cb;
cb.inputProc = OSXAudioUnitCallback;
cb.inputProcRefCon = SoundOutput;
AudioUnitSetProperty(SoundOutput->AudioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
0,
&cb,
sizeof(cb));
AudioOutputUnitStart(SoundOutput->AudioUnit);
}
The initialization code for Core Audio sets the render callback to OSXAudioUnitCallback:
OSStatus OSXAudioUnitCallback(void * inRefCon,
                              AudioUnitRenderActionFlags * ioActionFlags,
                              const AudioTimeStamp * inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList * ioData)
{
    #pragma unused(ioActionFlags)
    #pragma unused(inTimeStamp)
    #pragma unused(inBusNumber)

    //double currentPhase = *((double*)inRefCon);

    osx_sound_output* SoundOutput = ((osx_sound_output*)inRefCon);

    if (SoundOutput->ReadCursor == SoundOutput->WriteCursor)
    {
        SoundOutput->SoundBuffer.SampleCount = 0;
        //printf("AudioCallback: No Samples Yet!\n");
    }

    //printf("AudioCallback: SampleCount = %d\n", SoundOutput->SoundBuffer.SampleCount);

    int SampleCount = inNumberFrames;
    if (SoundOutput->SoundBuffer.SampleCount < inNumberFrames)
    {
        SampleCount = SoundOutput->SoundBuffer.SampleCount;
    }

    int16* outputBufferL = (int16 *)ioData->mBuffers[0].mData;
    int16* outputBufferR = (int16 *)ioData->mBuffers[1].mData;

    for (UInt32 i = 0; i < SampleCount; ++i)
    {
        outputBufferL[i] = *SoundOutput->ReadCursor++;
        outputBufferR[i] = *SoundOutput->ReadCursor++;

        if ((char*)SoundOutput->ReadCursor >= (char*)((char*)SoundOutput->CoreAudioBuffer + SoundOutput->SoundBufferSize))
        {
            //printf("Callback: Read cursor wrapped!\n");
            SoundOutput->ReadCursor = SoundOutput->CoreAudioBuffer;
        }
    }

    for (UInt32 i = SampleCount; i < inNumberFrames; ++i)
    {
        outputBufferL[i] = 0.0;
        outputBufferR[i] = 0.0;
    }

    return noErr;
}
This is mostly all there is to it. It is quite long, but I did not see a way to present all the needed information more compactly. I wanted to show everything because I am by no means a professional programmer. If there is something you feel is missing, please tell me.
My feeling tells me there is something wrong with the timing. I feel the function OSXProcessFrameAndRunGameLogic sometimes needs more time, so that the Core Audio callback is already pulling samples out of the buffer before it has been fully written by OutputTestSineWave.
There is actually more stuff going on in OSXProcessFrameAndRunGameLogic which I did not show here. I am "software rendering" very basic stuff into a framebuffer which is then displayed by OpenGL, and I also do keypress checks in there because, yeah, it's where the main functionality lives. In the future this is the place where I would like to handle the controls for multiple oscillators, filters and so on.
Anyway, even if I stop the rendering and input handling from being called every iteration, I still get audio glitches.
I tried pulling all the sound processing in OSXProcessFrameAndRunGameLogic into its own function void* RunSound(void *GameData) and changed it to:
pthread_t soundThread;
pthread_create(&soundThread, NULL, RunSound, GameData);
pthread_join(soundThread, NULL);
However, I got mixed results and wasn't even sure whether multithreading is supposed to be done like that. Creating and destroying threads 60 times a second didn't seem like the way to go.
I also had the idea of letting the sound processing happen on a completely different thread before the application actually enters the main loop: something like two simultaneously running while loops, where the first processes audio and the second handles UI and input.
Questions:
I get glitchy audio. Rendering and input seem to work correctly, but the audio sometimes glitches and sometimes doesn't. From the code I provided, can you see me doing something wrong?
Am I using the Core Audio technology in the wrong way to achieve realtime, low-latency signal generation?
Should I do sound processing in a separate thread, like I talked about above? How would threading in this context be done correctly? It would make sense to have a thread dedicated only to sound, am I right?
Am I right that the basic audio processing should not be done in the render callback of Core Audio? Is this function only for outputting the provided sound buffer?
And if sound processing should be done right there, how can I access information like the keyboard state from inside the callback?
Are there any resources you could point me to that I maybe missed?
This is the only place I know where I can get help with this project. I would really appreciate your help.
And if something is not clear to you please let me know.
Thank you :)
In general when dealing with low-latency audio you want to achieve the most deterministic behaviour possible.
This, for example, translates to:
Don't hold any locks on the audio thread (priority inversion)
No memory allocation on the audio thread (it often takes too much time)
No file/network IO on the audio thread (it often takes too much time)
Question 1:
There are indeed some problems with your code if you want to achieve continuous, realtime, glitch-free audio.
1. Two different clock domains.
You are providing audio data from (what I call) a different clock domain than the clock domain asking for data. Clock domain 1 in this case is defined by your TargetFramesPerSecond value, clock domain 2 is defined by Core Audio. However, due to how scheduling works, you have no guarantee that your thread finishes in time and on time. You try to target your rendering to n frames per second, but what happens when you don't make it time-wise? As far as I can see, you don't compensate for the deviation of a render cycle compared to the ideal timing.
The way threading works is that ultimately the OS scheduler decides when your thread is active. There are never guarantees, and this causes your render cycles to be not very precise (in terms of the precision you need for audio rendering).
2. There is no synchronisation between the render thread and the Core Audio render callback thread.
The thread where OSXAudioUnitCallback runs is not the same as the one where your OSXProcessFrameAndRunGameLogic and thus OutputTestSineWave run. You are providing data from your main thread, and data is being read from the Core Audio render thread. Normally you would use mutexes to protect your data, but in this case that's not possible because you would run into the problem of priority inversion.
A way of dealing with the race conditions is to use a buffer which uses atomic variables to store the usage and pointers of the buffer, and to let only one producer and one consumer use this buffer (a minimal sketch follows the links below).
Good examples of such buffers are:
https://github.com/michaeltyson/TPCircularBuffer
https://github.com/andrewrk/libsoundio/blob/master/src/ring_buffer.h
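To show the idea rather than the full libraries, here is a bare-bones single-producer/single-consumer ring of int16 samples indexed by C11 atomics; neither side ever takes a lock. It is only a sketch (hypothetical names, fixed power-of-two capacity), not a replacement for the libraries above.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define RING_CAPACITY 8192u                        /* must be a power of two             */

typedef struct
{
    int16_t          data[RING_CAPACITY];
    _Atomic uint32_t write_pos;                    /* only the producer advances this    */
    _Atomic uint32_t read_pos;                     /* only the consumer advances this    */
} spsc_ring;

/* Producer side (your game/render thread). Returns false if the ring is full. */
static bool ring_push(spsc_ring *r, int16_t sample)
{
    uint32_t w  = atomic_load_explicit(&r->write_pos, memory_order_relaxed);
    uint32_t rd = atomic_load_explicit(&r->read_pos,  memory_order_acquire);
    if (w - rd == RING_CAPACITY) return false;     /* full                                */
    r->data[w & (RING_CAPACITY - 1)] = sample;
    atomic_store_explicit(&r->write_pos, w + 1, memory_order_release);
    return true;
}

/* Consumer side (the Core Audio render callback). Returns false if the ring is empty. */
static bool ring_pop(spsc_ring *r, int16_t *out)
{
    uint32_t rd = atomic_load_explicit(&r->read_pos,  memory_order_relaxed);
    uint32_t w  = atomic_load_explicit(&r->write_pos, memory_order_acquire);
    if (w == rd) return false;                     /* empty                               */
    *out = r->data[rd & (RING_CAPACITY - 1)];
    atomic_store_explicit(&r->read_pos, rd + 1, memory_order_release);
    return true;
}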
3. There are a lot of calls in your audio render thread which prevent deterministic behaviour.
As you wrote, you are doing a lot more inside the same audio render thread. Chances are quite high that there will be stuff going on (under the hood) which prevents your thread from being on time. Generally, you should avoid calls which either take too much time or are not deterministic. With all the OpenGL/keypress/framebuffer rendering there is no way to be certain your thread will "arrive on time".
Below are some resources worth looking into.
Question 2:
AFAICT generally speaking, you are using the Core Audio technology correctly. The only problem I think you have is on the providing side.
Question 3:
Yes. Definitely! Although, there are multiple ways of doing this.
In your case you have a normal-priority thread doing the rendering and a high-performance, realtime thread on which the audio render callback is called. Looking at your code, I would suggest putting the generation of the sine wave inside the render callback function (or calling OutputTestSineWave from the render callback). That way the audio generation runs inside a reliable high-priority thread, there is no other rendering interfering with the timing precision, and there is no need for a ring buffer.
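For illustration, here is a rough sketch of generating the sine directly in the render callback, reusing the types from your question (int16, Tau32, osx_sound_output, game_sound_output_buffer) and a fixed tone for simplicity; a real version would read the frequency from a lock-free message queue as described further down.

#include <math.h>     /* sinf */

OSStatus SineRenderCallback(void *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp *inTimeStamp,
                            UInt32 inBusNumber,
                            UInt32 inNumberFrames,
                            AudioBufferList *ioData)
{
    osx_sound_output *SoundOutput = (osx_sound_output *)inRefCon;
    game_sound_output_buffer *Buf = &SoundOutput->SoundBuffer;

    const int   ToneHz    = 440;                               /* fixed for this sketch   */
    const float PhaseStep = Tau32 * (float)ToneHz / (float)Buf->SamplesPerSecond;

    float  Phase = Buf->tSine;
    int16 *Left  = (int16 *)ioData->mBuffers[0].mData;         /* non-interleaved stereo  */
    int16 *Right = (int16 *)ioData->mBuffers[1].mData;

    for (UInt32 i = 0; i < inNumberFrames; ++i)
    {
        int16 Sample = (int16)(sinf(Phase) * 3000);
        Left[i]  = Sample;
        Right[i] = Sample;
        Phase += PhaseStep;
        if (Phase > Tau32) Phase -= Tau32;
    }

    Buf->tSine = Phase;            /* keep the phase continuous across callbacks */
    return noErr;
}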
In other cases where you need to do "non-realtime" processing to get audio data ready (think of reading from a file, from the network, or even from another physical audio device), you cannot run this logic inside the Core Audio thread. A way to solve this is to start a separate, dedicated thread to do this processing. To pass the data to the realtime audio thread you would then use the ring buffer mentioned earlier.
It basically boils down to two simple goals: for the realtime thread it is necessary to have the audio data available at all times (for every render call); if this fails you will end up sending invalid (or better, zeroed) audio data.
The main goal for the secondary thread is to fill up the ring buffer as fast as possible and to keep it as full as possible. So, whenever there is room to put new audio data into the ring buffer, the thread should do just that.
The size of the ring buffer dictates how much tolerance there is for delay. It is a balance between certainty (bigger buffer) and latency (smaller buffer). For example, at 48 kHz a 4800-frame ring buffer tolerates up to 100 ms of scheduling jitter, but can also add up to 100 ms of latency.
BTW. I'm quite certain Core Audio has all the facilities to do all this for you.
Question 4:
There are multiple ways of achieving your goal, and rendering the audio inside the render callback from Core Audio is definitely one of them. The one thing you should keep in mind is that you have to make sure the function returns in time.
To change parameters that manipulate the audio rendering, you'll have to find a way of passing messages which enables the reader (the audio render function) to get messages without locking and waiting. The way I have done this is to create a second ring buffer which holds messages the audio renderer can consume. This can be as simple as a ring buffer which holds structs with data (or even pointers to data), as long as you stick to the rule of no locking.
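One way to shape such messages, as a sketch only (hypothetical names; the message ring follows the same single-producer/single-consumer, no-locking discipline as the sample ring above):

typedef enum { MSG_SET_TONE_HZ, MSG_SET_VOLUME } synth_msg_type;

typedef struct
{
    synth_msg_type type;
    float          value;
} synth_msg;

/* UI/game thread: push a synth_msg into the message ring when a control changes.  */
/* Render callback: pop and apply all pending messages before generating the next  */
/* block of samples - no locks, no waiting, no allocation.                         */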
Question 5:
I don't know what resources you are aware of but here are some must-reads:
http://atastypixel.com/blog/four-common-mistakes-in-audio-development/
http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing
https://developer.apple.com/library/archive/qa/qa1467/_index.html
Your basic problem is that you are trying to push audio from your game loop instead of letting the audio system pull it; i.e., instead of always having (or quickly being able to create *) enough audio samples ready to be pulled by the audio callback in the amount it requests. The "always" has to account for enough slop to cover timing jitter (being called late or early or too few times) in your game loop.
(* with no locks, semaphores, memory allocation or Objective C messages)

How to run an Erlang Process periodically with Precise Time (i.e. 10 ms)

I want to run a periodic Erlang process every 10 ms (based on wall clock time); the 10 ms should be as accurate as possible. What is the right way to implement it?
If you want a really reliable and accurate periodic process, you should rely on the actual elapsed time using erlang:monotonic_time/0,1. If you use the method in Stratus3D's answer you will eventually fall behind.
start_link(Period) when Period > 0, is_integer(Period) ->
    gen_server:start_link({local, ?SERVER}, ?MODULE, Period, []).

...

init(Period) ->
    StartT = erlang:monotonic_time(millisecond),
    self() ! tick,
    {ok, {StartT, Period}}.

...

handle_info(tick, {StartT, Period} = S) ->
    Next = Period - (erlang:monotonic_time(millisecond)-StartT) rem Period,
    _Timer = erlang:send_after(Next, self(), tick),
    do_task(),
    {noreply, S}.
You can test in the shell:
spawn(fun() ->
          P = 1000,
          StartT = erlang:monotonic_time(millisecond),
          self() ! tick,
          (fun F() ->
                   receive
                       tick ->
                           Next = P - (erlang:monotonic_time(millisecond)-StartT) rem P,
                           erlang:send_after(Next, self(), tick),
                           io:format("X~n", []),
                           F()
                   end
           end)()
      end).
If you really want to be as precise as possible, and you are sure your task will take less time than the interval you want it performed at, you could have one long-running process instead of spawning a process every 10 ms. Erlang could spawn a new process every 10 ms, but unless there is a reason you cannot reuse the same process, it's usually not worth the overhead (even though it's very little).
I would do something like this in an OTP gen_server:
-module(periodic_task).
... module exports
start_link() ->
    gen_server:start_link({local, ?SERVER}, ?MODULE, [], []).

... Rest of API and other OTP callbacks

init([]) ->
    Timer = erlang:send_after(0, self(), check),
    {ok, Timer}.

handle_info(check, OldTimer) ->
    erlang:cancel_timer(OldTimer),
    Timer = erlang:send_after(10, self(), check),
    do_task(), % A function that executes your task
    {noreply, Timer}.
Then start the gen_server like this:
periodic_task:start_link().
As long as the gen_server is running (if it crashes, so will the parent process, since they are linked) the function do_task/0 will be executed almost every 10 milliseconds. Note that this will not be perfectly accurate; there will be drift in the execution times. The actual interval will be 10 ms plus the time it takes to receive the timer message, cancel the old timer, and start the new one.
If you want to start a separate process every 10 ms, you could have do_task/0 spawn a process. Note that this adds additional overhead, but won't necessarily make the interval between spawns less accurate.
My example was taken from this answer: What's the best way to do something periodically in Erlang?
