Run while loop in parallel - libktx

I have a large collection (over 90,000 objects) and I would like to run a while loop on it in parallel. The source of my function is below:
val context = newSingleThreadAsyncContext()
return KtxAsync.async(context) {
    val fields = regularMazeService.generateFields(colsNo, rowsNo)
    val time = measureTimeMillis {
        withContext(newAsyncContext(10)) {
            while (availableFieldsWrappers.isNotEmpty()) {
                val wrapper = getFirstShuffled(availableFieldsWrappers.lastIndex)
                    .let { availableFieldsWrappers[it] }
                if (wrapper.neighborsIndexes.isEmpty()) {
                    availableFieldsWrappers.remove(wrapper)
                    continue
                }
                val nextFieldIndex = getFirstShuffled(wrapper.neighborsIndexes.lastIndex)
                    .let {
                        val fieldIndex = wrapper.neighborsIndexes[it]
                        wrapper.neighborsIndexes.removeAt(it)
                        fieldIndex
                    }
                if (visitedFieldsIndexes.contains(nextFieldIndex)) {
                    wrapper.neighborsIndexes.remove(nextFieldIndex)
                    fields[nextFieldIndex].neighborFieldsIndexes.remove(wrapper.index)
                    continue
                }
                val nextField = fields[nextFieldIndex]
                availableFieldsWrappers.add(FieldWrapper(nextField, nextFieldIndex))
                visitedFieldsIndexes.add(nextFieldIndex)
                wrapper.field.removeNeighborWall(nextFieldIndex)
                nextField.removeNeighborWall(wrapper.index)
            }
        }
    }
    Gdx.app.log("maze-time", "$time")
}
At the top of the class:
private val availableFieldsWrappers = Collections.synchronizedList(mutableListOf<FieldWrapper>())
private val visitedFieldsIndexes = Collections.synchronizedList(mutableListOf<Int>())
I tested it a few times; the results are below:
1 thread - 21213 ms
5 threads - 27894 ms
10 threads - 21494 ms
15 threads - 20986 ms
What am I doing wrong?

1. You are using Collections.synchronizedList from the Java standard library, which returns a list wrapper that relies on the blocking synchronized mechanism to ensure thread safety. This mechanism is not a good fit for coroutines, as it blocks other threads from accessing the collection until the operation is finished. You should generally use non-blocking concurrent collections when accessing data from multiple coroutines, or protect the shared data with a non-blocking mutex.
2. List.contains will become slower and slower (O(n)) as more and more elements are added. Instead of a list, you should use a set for visitedFieldsIndexes. Just make sure to either protect it with a mutex or use a concurrent variant. Similarly, removing values at random indices from availableFieldsWrappers is pretty costly - instead, you can shuffle the list once and use simple iteration.
3. You are not reusing the coroutine contexts. In general, you can create an asynchronous context once and reuse its instance instead of creating a new thread pool each time you need coroutines. You should invoke and assign the result of newAsyncContext(10) just once and reuse it throughout your application.
4. The code you have currently written does not leverage coroutines very well. Instead of thinking of a coroutine dispatcher as a thread pool where you can launch N big tasks in parallel (i.e. your while availableFieldsWrappers.isNotEmpty loop), you should think of it as an executor of hundreds or thousands of small tasks, and adjust your code accordingly. I think you could avoid the available/visited collections altogether by rewriting your code with e.g. Kotlin flows or multiple KtxAsync.async/KtxAsync.launch calls that each handle a smaller portion of the logic.
5. Unless some of the functions are suspending or use coroutines underneath, you're not really leveraging the multiple threads of an asynchronous context at all. withContext(newAsyncContext(10)) launches a single coroutine that handles the whole logic sequentially, leveraging only a single thread. See 4. for some ideas on how you can rewrite the code. Try collecting (or just printing) the thread hashes and names to see whether you are using all of the threads well.

Related

How to detect unreleased lock in multi-task C project using static analysis tools?

Is there any way, using static analysis tools (I'm using CodeSonar now), to detect unreleased lock problems (something like unreleased semaphores) in the following program? (See the comment block marked by arrows.)
The project is a multi-task system using Round-robin scheduling, where new_request() is an interrupt task that arrives randomly and send_buffer() is another periodic task.
In the real case, get_buffer() and send_buffer() are various kinds of wrappers, which contain many call layers until the actual lock/unlock happens. So I can't simply specify get_buffer() as the lock function in the settings of the static analysis tool.
int bufferSize = 0; // say max size is 5

// random task
void new_request()
{
    int bufferNo = get_buffer(); // wrapper
    if (bufferNo == -1)
    {
        return; // buffer is full
    }
    if (check_something() == OK)
    {
        add_to_sendlist(bufferNo); // for asynchronous process of send_buffer()
    }
    else // bad request
    {
        // ↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓
        // There should be clear_buffer placed here
        // but forgotten. Eventually the buffer will be
        // full and won't be cleared since 5th bad request comes.
        // ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑
        do_nothing();
        // clear_buffer(bufferNo);
    }
}

int get_buffer()
{
    if (bufferSize < 5)
    {
        bufferSize++;
        return bufferSize;
    }
    else
    {
        wait_until_empty(); // wait until someone is sent by send_buffer()
        return -1;
    }
}

// clear specified one in buffer
void clear_buffer(int bufferNo)
{
    delete(bufferNo);
    bufferSize--;
}

// period task
void send_buffer()
{
    int sent = send_1st_stuff_in_list();
    clear_buffer(sent);
}
yoyozi - Fair disclosure: I'm an engineer at GrammaTech who works on CodeSonar.
First some general things. The relevant parts of the manual for this are on the page: codesonar/doc/html/C_Module/LibraryModels/ConcurrencyModelsLocks.html. Especially the bottom of the page on Resolving Lock Operation Identification Problems.
Based on your comments, I think you have already read this, since you address setting the names in the configuration settings.
So then the question is how many different wrappers do you have? If it is only a few, then the settings in the configuration file are the way to go. If there are many, that gets tedious. And if there are very many it becomes practically impossible.
So knowing some estimate for how many wrapper sets you have would help.
Even with the wrappers accounted for, it may be that the deadlock and race detectors aren't quite what you need for your problem.
If I understand your issue correctly, you have a queue with limited space, and by accident malformed items don't get cleaned out of the queue, and so the queue gets full and that stalls all processing. While you may have multiple threads involved in this implementation, the issue itself would still be a problem in a basically serial setting.
The best way to work with an issue like this is to try and make a simpler example that displays the same core problem. If you can do this in a way that can be shared with GrammaTech, we can work with you on ways to adjust settings or maybe provide hints to the analysis so it can find this issue.
If you would like to talk about this in more detail, and with protection against public disclosure of your code, please contact us at support_at_grammatech_dot_com, where the at and dot should be replaced as needed to form a well-formed email address.

dart:io sync vs async file operations

There are a number of sync and async operations for files in dart:io:
file.deleteSync() and file.delete()
file.readAsStringSync() and file.readAsString()
file.writeAsBytesSync(bytes) and file.writeAsBytes(bytes)
and many, many more.
What are the considerations that I should keep in mind when choosing between the sync and async options? I seem to recall seeing somewhere that the sync option is faster if you have to wait for it to finish anyway (await file.delete() for example). But I can't remember where I saw that or if it is true.
Is there any difference between this method:
Future deleteFile(File file) async {
  await file.delete();
  print('deleted');
}
and this method:
Future deleteFile(File file) async {
  file.deleteSync();
  print('deleted');
}
Let me try to summarize an answer based on the comments to my question. Correct me where I'm wrong.
Running code in an async method doesn't make it run on another thread.
Dart is a single threaded system.
Code gets run on an event loop.
Performing long running synchronous tasks will block the system whether it is in an async method or not.
An isolate is a single thread.
If you want to run tasks on another thread then you need to run it on another isolate.
Starting another isolate is called spawning the isolate.
There are a few options for running tasks on another isolate including compute and IsolateChannel and writing your own isolate communication code.
For File IO, the synchronous versions are faster than the asynchronous versions.
For heavy File IO, prefer the asynchronous versions because they work on a separate thread.
For light File IO (like file.exists()?), using the synchronous version is an option since it is likely to be fast.
Further reading
Isolates and Event Loops
Single Thread Dart, What? — Part 1
Single Thread Dart, What? — Part 2
avoid_slow_async_io lint
The sync variants, unlike the async ones, stop the CPU from executing any event handlers (i.e. the event loop) until the operation is complete.
Using sync:
void main() {
  final file = File('...');
  Future(() => print('1')); // Adding to the event queue
  file.readAsBytesSync();
  print('2');
}
Output:
2
1
Using async:
void main() async {
  final file = File('...');
  Future(() => print('1')); // Adding to the event queue
  await file.readAsBytes();
  print('2');
}
Output:
1
2

Realtime sine tone generation with Core Audio

I want to create a realtime sine generator using Apple's Core Audio framework. I want to do it low level so I can learn and understand the fundamentals.
I know that using PortAudio or Jack would probably be easier and I will use them at some point, but I would like to get this to work first so I can be confident that I understand the fundamentals.
I literally searched for days on this topic, but no one seems to have ever created a realtime wave generator using Core Audio that aims for low latency while using C and not Swift or Objective-C.
For this I use a project I set up a while ago. It was first designed to be a game, so after the application starts up, it enters a run loop. I thought this would fit perfectly, as I can use the main loop to copy samples into the audio buffer and handle rendering and input as well.
So far I get sound. Sometimes it works for a while, then starts to glitch; sometimes it glitches right away.
This is my code. I tried to simplify it and only present the important parts.
I have multiple questions. They are located in the bottom section of this post.
Applications main run loop. This is where it all starts after the window is created and buffers and memory is initialized:
while (OSXIsGameRunning())
{
    OSXProcessPendingMessages(&GameData);
    [GlobalGLContext makeCurrentContext];

    CGRect WindowFrame = [window frame];
    CGRect ContentViewFrame = [[window contentView] frame];
    CGPoint MouseLocationInScreen = [NSEvent mouseLocation];
    BOOL MouseInWindowFlag = NSPointInRect(MouseLocationInScreen, WindowFrame);
    CGPoint MouseLocationInView = {};
    if (MouseInWindowFlag)
    {
        NSRect RectInWindow = [window convertRectFromScreen:NSMakeRect(MouseLocationInScreen.x, MouseLocationInScreen.y, 1, 1)];
        NSPoint PointInWindow = RectInWindow.origin;
        MouseLocationInView = [[window contentView] convertPoint:PointInWindow fromView:nil];
    }
    u32 MouseButtonMask = [NSEvent pressedMouseButtons];

    OSXProcessFrameAndRunGameLogic(&GameData, ContentViewFrame,
                                   MouseInWindowFlag, MouseLocationInView,
                                   MouseButtonMask);
#if ENGINE_USE_VSYNC
    [GlobalGLContext flushBuffer];
#else
    glFlush();
#endif
}
By using VSYNC I can throttle the loop down to 60 FPS. The timing is not super tight, but it is quite steady. I also have some code to throttle it manually using mach timing, which is even more imprecise. I left it out for readability.
Not using VSYNC, or using mach timing to get 60 iterations a second, also makes the audio glitch.
Timing log:
CyclesElapsed: 8154360866, TimeElapsed: 0.016624, FPS: 60.155666
CyclesElapsed: 8174382119, TimeElapsed: 0.020021, FPS: 49.946926
CyclesElapsed: 8189041370, TimeElapsed: 0.014659, FPS: 68.216309
CyclesElapsed: 8204363633, TimeElapsed: 0.015322, FPS: 65.264511
CyclesElapsed: 8221230959, TimeElapsed: 0.016867, FPS: 59.286217
CyclesElapsed: 8237971921, TimeElapsed: 0.016741, FPS: 59.733719
CyclesElapsed: 8254861722, TimeElapsed: 0.016890, FPS: 59.207333
CyclesElapsed: 8271667520, TimeElapsed: 0.016806, FPS: 59.503273
CyclesElapsed: 8292434135, TimeElapsed: 0.020767, FPS: 48.154209
What is important here is the function OSXProcessFrameAndRunGameLogic. It is called 60 times a second and it is passed a struct containing basic information like a buffer for rendering, keyboard state and a sound buffer which looks like this:
typedef struct osx_sound_output
{
    game_sound_output_buffer SoundBuffer;
    u32 SoundBufferSize;
    s16* CoreAudioBuffer;
    s16* ReadCursor;
    s16* WriteCursor;
    AudioStreamBasicDescription AudioDescriptor;
    AudioUnit AudioUnit;
} osx_sound_output;
Where game_sound_output_buffer is:
typedef struct game_sound_output_buffer
{
    real32 tSine;
    int SamplesPerSecond;
    int SampleCount;
    int16 *Samples;
} game_sound_output_buffer;
These get set up before the application enters its run loop.
The size for the SoundBuffer itself is SamplesPerSecond * sizeof(uint16) * 2 where SamplesPerSecond = 48000.
So inside OSXProcessFrameAndRunGameLogic is the sound generation:
void OSXProcessFrameAndRunGameLogic(osx_game_data *GameData, CGRect WindowFrame,
                                    b32 MouseInWindowFlag, CGPoint MouseLocation,
                                    int MouseButtonMask)
{
    GameData->SoundOutput.SoundBuffer.SampleCount = GameData->SoundOutput.SoundBuffer.SamplesPerSecond / GameData->TargetFramesPerSecond;

    // Oszi 1
    OutputTestSineWave(GameData, &GameData->SoundOutput.SoundBuffer, GameData->SynthesizerState.ToneHz);

    int16* CurrentSample = GameData->SoundOutput.SoundBuffer.Samples;
    for (int i = 0; i < GameData->SoundOutput.SoundBuffer.SampleCount; ++i)
    {
        *GameData->SoundOutput.WriteCursor++ = *CurrentSample++;
        *GameData->SoundOutput.WriteCursor++ = *CurrentSample++;

        if ((char*)GameData->SoundOutput.WriteCursor >= ((char*)GameData->SoundOutput.CoreAudioBuffer + GameData->SoundOutput.SoundBufferSize))
        {
            //printf("Write cursor wrapped!\n");
            GameData->SoundOutput.WriteCursor = GameData->SoundOutput.CoreAudioBuffer;
        }
    }
}
Where OutputTestSineWave is the part where the buffer is actually filled with data:
void OutputTestSineWave(osx_game_data *GameData, game_sound_output_buffer *SoundBuffer, int ToneHz)
{
    int16 ToneVolume = 3000;
    int WavePeriod = SoundBuffer->SamplesPerSecond/ToneHz;

    int16 *SampleOut = SoundBuffer->Samples;
    for(int SampleIndex = 0;
        SampleIndex < SoundBuffer->SampleCount;
        ++SampleIndex)
    {
        real32 SineValue = sinf(SoundBuffer->tSine);
        int16 SampleValue = (int16)(SineValue * ToneVolume);
        *SampleOut++ = SampleValue;
        *SampleOut++ = SampleValue;

        SoundBuffer->tSine += Tau32*1.0f/(real32)WavePeriod;
        if(SoundBuffer->tSine > Tau32)
        {
            SoundBuffer->tSine -= Tau32;
        }
    }
}
So when the buffers are created at startup, Core Audio is also initialized, which I do like this:
void OSXInitCoreAudio(osx_sound_output* SoundOutput)
{
    AudioComponentDescription acd;
    acd.componentType         = kAudioUnitType_Output;
    acd.componentSubType      = kAudioUnitSubType_DefaultOutput;
    acd.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponent outputComponent = AudioComponentFindNext(NULL, &acd);

    AudioComponentInstanceNew(outputComponent, &SoundOutput->AudioUnit);
    AudioUnitInitialize(SoundOutput->AudioUnit);

    // uint16
    //AudioStreamBasicDescription asbd;
    SoundOutput->AudioDescriptor.mSampleRate       = SoundOutput->SoundBuffer.SamplesPerSecond;
    SoundOutput->AudioDescriptor.mFormatID         = kAudioFormatLinearPCM;
    SoundOutput->AudioDescriptor.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsNonInterleaved | kAudioFormatFlagIsPacked;
    SoundOutput->AudioDescriptor.mFramesPerPacket  = 1;
    SoundOutput->AudioDescriptor.mChannelsPerFrame = 2; // Stereo
    SoundOutput->AudioDescriptor.mBitsPerChannel   = sizeof(int16) * 8;
    SoundOutput->AudioDescriptor.mBytesPerFrame    = sizeof(int16); // don't multiply by channel count with non-interleaved!
    SoundOutput->AudioDescriptor.mBytesPerPacket   = SoundOutput->AudioDescriptor.mFramesPerPacket * SoundOutput->AudioDescriptor.mBytesPerFrame;

    AudioUnitSetProperty(SoundOutput->AudioUnit,
                         kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input,
                         0,
                         &SoundOutput->AudioDescriptor,
                         sizeof(SoundOutput->AudioDescriptor));

    AURenderCallbackStruct cb;
    cb.inputProc       = OSXAudioUnitCallback;
    cb.inputProcRefCon = SoundOutput;

    AudioUnitSetProperty(SoundOutput->AudioUnit,
                         kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Global,
                         0,
                         &cb,
                         sizeof(cb));

    AudioOutputUnitStart(SoundOutput->AudioUnit);
}
The initialization code for Core Audio sets the render callback to OSXAudioUnitCallback:
OSStatus OSXAudioUnitCallback(void * inRefCon,
                              AudioUnitRenderActionFlags * ioActionFlags,
                              const AudioTimeStamp * inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList * ioData)
{
#pragma unused(ioActionFlags)
#pragma unused(inTimeStamp)
#pragma unused(inBusNumber)

    //double currentPhase = *((double*)inRefCon);
    osx_sound_output* SoundOutput = ((osx_sound_output*)inRefCon);

    if (SoundOutput->ReadCursor == SoundOutput->WriteCursor)
    {
        SoundOutput->SoundBuffer.SampleCount = 0;
        //printf("AudioCallback: No Samples Yet!\n");
    }

    //printf("AudioCallback: SampleCount = %d\n", SoundOutput->SoundBuffer.SampleCount);

    int SampleCount = inNumberFrames;
    if (SoundOutput->SoundBuffer.SampleCount < inNumberFrames)
    {
        SampleCount = SoundOutput->SoundBuffer.SampleCount;
    }

    int16* outputBufferL = (int16 *)ioData->mBuffers[0].mData;
    int16* outputBufferR = (int16 *)ioData->mBuffers[1].mData;

    for (UInt32 i = 0; i < SampleCount; ++i)
    {
        outputBufferL[i] = *SoundOutput->ReadCursor++;
        outputBufferR[i] = *SoundOutput->ReadCursor++;

        if ((char*)SoundOutput->ReadCursor >= (char*)((char*)SoundOutput->CoreAudioBuffer + SoundOutput->SoundBufferSize))
        {
            //printf("Callback: Read cursor wrapped!\n");
            SoundOutput->ReadCursor = SoundOutput->CoreAudioBuffer;
        }
    }

    for (UInt32 i = SampleCount; i < inNumberFrames; ++i)
    {
        outputBufferL[i] = 0;
        outputBufferR[i] = 0;
    }

    return noErr;
}
This is mostly all there is to it. It is quite long, but I did not see a way to present all the needed information in a more compact form. I wanted to show everything because I am by no means a professional programmer. If there is something you feel is missing, please tell me.
My feeling tells me that there is something wrong with the timing. I feel the function OSXProcessFrameAndRunGameLogic sometimes needs more time, so that the Core Audio callback is already pulling samples out of the buffer before it has been fully written by OutputTestSineWave.
There is actually more stuff going on in OSXProcessFrameAndRunGameLogic which I did not show here. I am "software rendering" very basic stuff into a framebuffer, which is then displayed by OpenGL, and I also do keypress checks in there because, yeah, it's the main function of functionality. In the future this is the place where I would like to handle the controls for multiple oscillators, filters and so on.
Anyway, even if I stop the rendering and input handling from being called every iteration, I still get audio glitches.
I tried pulling all the sound processing in OSXProcessFrameAndRunGameLogic into its own function void* RunSound(void *GameData) and changed it to:
pthread_t soundThread;
pthread_create(&soundThread, NULL, RunSound, GameData);
pthread_join(soundThread, NULL);
However, I got mixed results and was not even sure whether multithreading is done like that. Creating and destroying threads 60 times a second didn't seem like the way to go.
I also had the idea to let sound processing happen on a completely different thread before the application actually runs into the main loop. Something like two simultaneously running while loops where the first processes audio and the latter UI and input.
Questions:
I get glitchy audio. Rendering and input seem to work correctly but audio sometimes glitches, sometimes it doesn't. From the code I provided, can you maybe see me doing something wrong?
Am I using the Core Audio technology in a wrong way to achieve realtime, low-latency signal generation?
Should I do sound processing in a separate thread like I talked about above? How would threading in this context be done correctly? It would make sense to have a thread dedicated only to sound, am I right?
Am I right that the basic audio processing should not be done in the render callback of Core Audio? Is this function only for outputting the provided sound buffer?
And if sound processing should be done right here, how can I access information like the keyboard state from inside the callback?
Are there any resources you could point me to that I maybe missed?
This is the only place I know where I can get help with this project. I would really appreciate your help.
And if something is not clear to you please let me know.
Thank you :)
In general when dealing with low-latency audio you want to achieve the most deterministic behaviour possible.
This, for example, translates to:
Don't hold any locks on the audio thread (priority inversion)
No memory allocation on the audio thread (often takes too much time)
No file/network IO on the audio thread (often takes too much time)
Question 1:
There are indeed some problems with your code if you want to achieve continuous, realtime, non-glitching audio.
1. Two different clock domains.
You are providing audio data from (what I call) a different clock domain than the clock domain asking for the data. Clock domain 1 in this case is defined by your TargetFramesPerSecond value, clock domain 2 by Core Audio. However, due to how scheduling works, you have no guarantee that your thread finishes in time and on time. You try to target your rendering to n frames per second, but what happens when you don't make it time-wise? As far as I can see, you don't compensate for the deviation of a render cycle compared to the ideal timing.
The way threading works is that ultimately the OS scheduler decides when your thread is active. There are never guarantees, and this causes your render cycles to be not very precise (in terms of the precision you need for audio rendering).
2. There is no synchronisation between the render thread and the Core Audio render-callback thread.
The thread where OSXAudioUnitCallback runs is not the same one where your OSXProcessFrameAndRunGameLogic and thus OutputTestSineWave run. You are providing data from your main thread, and data is being read from the Core Audio render thread. Normally you would use some mutexes to protect your data, but in this case that's not possible, because you would run into the problem of priority inversion.
A way of dealing with the race conditions is to use a buffer which uses atomic variables to store the buffer's usage and positions, and to let only 1 producer and 1 consumer use this buffer.
Good examples of such buffers are:
https://github.com/michaeltyson/TPCircularBuffer
https://github.com/andrewrk/libsoundio/blob/master/src/ring_buffer.h
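For illustration, here is a minimal sketch of such a single-producer/single-consumer ring written against C11 <stdatomic.h>. It is my own illustration rather than code from the linked libraries, and the names and the power-of-two capacity are assumptions; the linked implementations are more battle-tested:

#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>

// Capacity in samples; a power of two keeps the monotonically increasing
// head/tail counters correct even across unsigned wraparound.
#define RING_CAPACITY 8192

typedef struct
{
    int16_t buffer[RING_CAPACITY];
    _Atomic size_t head;   // advanced only by the producer (game loop)
    _Atomic size_t tail;   // advanced only by the consumer (render callback)
} spsc_ring;

// Producer side: returns the number of samples actually written.
size_t ring_write(spsc_ring* r, const int16_t* src, size_t count)
{
    size_t head  = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t tail  = atomic_load_explicit(&r->tail, memory_order_acquire);
    size_t space = RING_CAPACITY - (head - tail);
    if (count > space) count = space;
    for (size_t i = 0; i < count; ++i)
        r->buffer[(head + i) % RING_CAPACITY] = src[i];
    atomic_store_explicit(&r->head, head + count, memory_order_release);
    return count;
}

// Consumer side (audio callback): returns the number of samples actually read.
size_t ring_read(spsc_ring* r, int16_t* dst, size_t count)
{
    size_t tail  = atomic_load_explicit(&r->tail, memory_order_relaxed);
    size_t head  = atomic_load_explicit(&r->head, memory_order_acquire);
    size_t avail = head - tail;
    if (count > avail) count = avail;
    for (size_t i = 0; i < count; ++i)
        dst[i] = r->buffer[(tail + i) % RING_CAPACITY];
    atomic_store_explicit(&r->tail, tail + count, memory_order_release);
    return count;
}

The release/acquire pairs ensure that a sample written before the head advances is visible to the consumer that observes the new head, without any locks.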
3. There are a lot of calls in your audio render thread which prevent deterministic behaviour.
As you wrote, you are doing a lot more inside the same audio render thread. Chances are quite high that there will be stuff going on (under the hood) which prevents your thread from being on time. Generally, you should avoid calls which either take too much time or are not deterministic. With all the OpenGL/keypress/framebuffer rendering there is no way to be certain your thread will "arrive on time".
Below are some resources worth looking into.
Question 2:
AFAICT generally speaking, you are using the Core Audio technology correctly. The only problem I think you have is on the providing side.
Question 3:
Yes, definitely! There are multiple ways of doing this, though.
In your case you have a normal-priority thread running to do the rendering and a high-performance, realtime thread on which the audio render callback is being called. Looking at your code I would suggest putting the generation of the sine wave inside the render callback function (or calling OutputTestSineWave from the render callback), as sketched below. This way you have the audio generation running inside a reliable high-priority thread, there is no other rendering interfering with the timing precision, and there is no need for a ringbuffer.
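For a rough idea of what that could look like, here is a sketch that generates the sine directly inside the render callback, assuming the same non-interleaved signed 16-bit stereo format set up in OSXInitCoreAudio. The sine_state struct, its field names and the fixed volume are hypothetical placeholders, not part of the original code:

#include <math.h>
#include <stdint.h>
#include <AudioUnit/AudioUnit.h>

typedef struct
{
    double Phase;        // current phase in radians; touched only by the callback
    double SampleRate;   // e.g. 48000.0
    double ToneHz;       // e.g. 440.0
} sine_state;

OSStatus SineRenderCallback(void* inRefCon,
                            AudioUnitRenderActionFlags* ioActionFlags,
                            const AudioTimeStamp* inTimeStamp,
                            UInt32 inBusNumber,
                            UInt32 inNumberFrames,
                            AudioBufferList* ioData)
{
#pragma unused(ioActionFlags)
#pragma unused(inTimeStamp)
#pragma unused(inBusNumber)
    sine_state* State = (sine_state*)inRefCon;
    double PhaseStep = 2.0 * M_PI * State->ToneHz / State->SampleRate;

    int16_t* Left  = (int16_t*)ioData->mBuffers[0].mData;
    int16_t* Right = (int16_t*)ioData->mBuffers[1].mData;
    for (UInt32 i = 0; i < inNumberFrames; ++i)
    {
        int16_t Sample = (int16_t)(sin(State->Phase) * 3000.0);
        Left[i]  = Sample;
        Right[i] = Sample;
        State->Phase += PhaseStep;
        if (State->Phase > 2.0 * M_PI)
            State->Phase -= 2.0 * M_PI;
    }
    return noErr;
}

Everything the callback touches lives in the refCon, and the loop only does arithmetic, so there are no locks, no allocation and no IO on the realtime thread.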
In other cases where you need to do "non-realtime" processing to get audio data ready (think of reading from a file, reading from a network or even from another physical audio device) you cannot run this logic inside the Core Audio thread. A way to solve this is to start a separate, dedicated thread to do this processing. To pass the data to the realtime audio thread you would then make use of the earlier-mentioned ringbuffer.
It basically boils down to two simple goals: for the realtime thread it is necessary to have the audio data available at all times (for all render calls); if this fails, you will end up sending invalid (or better: zeroed) audio data.
The main goal for the secondary thread is to fill up the ringbuffer as fast as possible and to keep it as full as possible. So, whenever there is room to put new audio data into the ringbuffer, the thread should do just that.
The size of the ringbuffer dictates how much tolerance there is for delay. It is a balance between certainty (bigger buffer) and latency (smaller buffer).
BTW. I'm quite certain Core Audio has all the facilities to do all this for you.
Question 4:
There are multiple ways of achieving your goal, and rendering the stuff inside the render callback from Core Audio is definitely one of them. The one thing you should keep in mind is that you have to make sure the function returns in time.
For changing parameters to manipulate the audio rendering, you'll have to find a way of passing messages which enables the reader (the audio renderer function) to get messages without locking and waiting. The way I have done this is to create a second ringbuffer which holds messages for the audio renderer to consume. This can be as simple as a ringbuffer which holds structs with data (or even pointers to data), as long as you stick to the rules of no locking. A sketch follows below.
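As an illustration, such a message ring can reuse the same single-producer/single-consumer scheme sketched above, only with a struct payload instead of raw samples. The param_msg type and its fields are hypothetical:

#include <stdatomic.h>

#define MSG_CAPACITY 64   // power of two; same wraparound argument as before

typedef struct { int toneHz; int volume; } param_msg;  // example payload

typedef struct
{
    param_msg msgs[MSG_CAPACITY];
    _Atomic unsigned head;   // advanced only by the UI/game thread
    _Atomic unsigned tail;   // advanced only by the audio callback
} msg_ring;

// UI thread: returns 0 if the ring is full (the message is dropped).
int msg_push(msg_ring* r, param_msg m)
{
    unsigned head = atomic_load_explicit(&r->head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head - tail == MSG_CAPACITY) return 0;
    r->msgs[head % MSG_CAPACITY] = m;
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return 1;
}

// Audio callback: pops one pending message without blocking. Call it in a
// loop at the top of the render callback to drain all parameter changes.
int msg_pop(msg_ring* r, param_msg* out)
{
    unsigned tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (head == tail) return 0;
    *out = r->msgs[tail % MSG_CAPACITY];
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return 1;
}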
Question 5:
I don't know what resources you are aware of but here are some must-reads:
http://atastypixel.com/blog/four-common-mistakes-in-audio-development/
http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing
https://developer.apple.com/library/archive/qa/qa1467/_index.html
Your basic problem is that you are trying to push audio from your game loop instead of letting the audio system pull it; i.e. instead of always having (or quickly being able to create *) enough audio samples ready to be pulled by the audio callback, in the amount it requests. The "always" has to account for enough slop to cover timing jitter (being called late or early or too few times) in your game loop.
(* with no locks, semaphores, memory allocation or Objective C messages)

Design of multi-threaded server in C

I am trying to implement a simple echo server with concurrency support on Linux.
The following approach is used:
Use pthread functions to create a pool of threads, maintained in a linked list. The pool is created on process start and destroyed on process termination.
The main thread accepts requests and uses a POSIX message queue to store the accepted socket file descriptors.
Threads in the pool loop to read from the message queue and handle the requests they get; when there is no request, they block.
The program seems to be working now.
The questions are:
Is it suitable to use a message queue in the middle, and is it efficient enough?
What is the general approach to accomplish a thread pool that needs to handle concurrent requests from multiple clients?
If it's not proper to make the threads in the pool loop and block to retrieve messages from the message queue, then how should requests be delivered to the threads?
This seems unnecessarily complicated to me. The usual approach for a multithreaded server is:
Create a listen-socket in the main thread
Accept the client connections in a thread
For each accepted client connection, create a new thread, which receives the corresponding file descriptor and does the work
The worker thread closes the client connection when it is fully handled
I do not see much benefit in prepopulating a thread-pool here.
If you really want a threadpool:
I would just use a linked list for accepted connections and a pthread_mutex to synchronize access to it:
The listener thread enqueues client fds at the tail of the list.
The workers dequeue them at the head.
If the queue is empty, the threads can wait on a condition variable (pthread_cond_wait) and be notified by the listener (pthread_cond_signal) when connections are available, as in the sketch below.
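A minimal sketch of that listener/worker queue, with hypothetical names and error handling omitted, could look like this:

#include <pthread.h>
#include <stdlib.h>

typedef struct node { int client_fd; struct node* next; } node;

static node*           queue_head = NULL;
static node*           queue_tail = NULL;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  queue_cond = PTHREAD_COND_INITIALIZER;

// Listener thread: enqueue an accepted connection and wake one worker.
void enqueue_client(int fd)
{
    node* n = malloc(sizeof *n);
    n->client_fd = fd;
    n->next = NULL;
    pthread_mutex_lock(&queue_lock);
    if (queue_tail) queue_tail->next = n; else queue_head = n;
    queue_tail = n;
    pthread_cond_signal(&queue_cond);
    pthread_mutex_unlock(&queue_lock);
}

// Worker threads: block until a connection is available, then dequeue it.
int dequeue_client(void)
{
    pthread_mutex_lock(&queue_lock);
    while (queue_head == NULL)            // loop guards against spurious wakeups
        pthread_cond_wait(&queue_cond, &queue_lock);
    node* n = queue_head;
    queue_head = n->next;
    if (queue_head == NULL) queue_tail = NULL;
    pthread_mutex_unlock(&queue_lock);
    int fd = n->client_fd;
    free(n);
    return fd;
}

Each worker then simply loops: dequeue a client fd, handle the connection, close the fd.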
Another alternative
Depending on the complexity of handling requests, it might be an option to make the server single-threaded, i.e. handle all connections in one thread. This eliminates context-switches altogether and can thus be very performant.
One drawback is that only one CPU core is used. To improve on that, a hybrid model can be used:
Create one worker-thread per core.
Each thread handles n connections simultaneously.
You would however have to implement mechanisms to distribute the work fairly amongst the workers.
In addition to using pthread_mutex, you will want to use pthread_cond_t (a pthread condition variable); this will allow you to put the threads in the thread pool to sleep while they are not actually doing work. Otherwise, they will waste compute cycles sitting in a loop checking for something in the work queue.
I would definitely consider using C++ instead of just pure C. The reason I suggest it is that in C++ you are able to use templates. Using a pure virtual base class (let's call it "vtask"), you can create templated derived classes that accept arguments and insert the arguments when the overloaded operator() is called, allowing for much, much more functionality in your tasks:
//============================================================================//
void* thread_pool::execute_thread()
{
    vtask* task = NULL;
    while(true)
    {
        //--------------------------------------------------------------------//
        // Try to pick a task
        m_task_lock.lock();
        //--------------------------------------------------------------------//
        // We need to put condition.wait() in a loop for two reasons:
        // 1. There can be spurious wake-ups (due to signal/EINTR)
        // 2. When the mutex is released for waiting, another thread can be
        //    woken up by a signal/broadcast and that thread can change the
        //    condition. So when the current thread wakes up, the condition
        //    may no longer actually be true!
        while ((m_pool_state != state::STOPPED) && (m_main_tasks.empty()))
        {
            // Wait until there is a task in the queue
            // Unlock the mutex while waiting, then lock it back when signaled
            m_task_cond.wait(m_task_lock.base_mutex_ptr());
        }
        // If the thread was woken to be notified of process shutdown, return
        if (m_pool_state == state::STOPPED)
        {
            //m_has_exited.
            m_task_lock.unlock();
            //----------------------------------------------------------------//
            if(mad::details::allocator_list_tl::get_allocator_list_if_exists() &&
               tids.find(CORETHREADSELF()) != tids.end())
                mad::details::allocator_list_tl::get_allocator_list()
                    ->Destroy(tids.find(CORETHREADSELF())->second, 1);
            //----------------------------------------------------------------//
            CORETHREADEXIT(NULL);
        }
        task = m_main_tasks.front();
        m_main_tasks.pop_front();
        //--------------------------------------------------------------------//
        //run(task);
        // Unlock
        m_task_lock.unlock();
        //--------------------------------------------------------------------//
        // execute the task
        run(task);
        m_task_count -= 1;
        m_join_lock.lock();
        m_join_cond.signal();
        m_join_lock.unlock();
        //--------------------------------------------------------------------//
    }
    return NULL;
}
//============================================================================//
int thread_pool::add_task(vtask* task)
{
#ifndef ENABLE_THREADING
    run(task);
    return 0;
#endif
    if(!is_alive_flag)
    {
        run(task);
        return 0;
    }
    // do outside of lock because it is thread-safe and needs to be updated as
    // soon as possible
    m_task_count += 1;
    m_task_lock.lock();
    // if the thread pool hasn't been initialized, initialize it
    if(m_pool_state == state::NONINIT)
        initialize_threadpool();
    // TODO: put a limit on how many tasks can be added at most
    m_main_tasks.push_back(task);
    // wake up one thread that is waiting for a task to be available
    m_task_cond.signal();
    m_task_lock.unlock();
    return 0;
}
//============================================================================//
void thread_pool::run(vtask*& task)
{
    (*task)();
    if(task->force_delete())
    {
        delete task;
        task = 0;
    } else {
        if(task->get() && !task->is_stored_elsewhere())
            save_task(task);
        else if(!task->is_stored_elsewhere())
        {
            delete task;
            task = 0;
        }
    }
}
In the above, each created thread runs execute_thread() until m_pool_state is set to state::STOPPED. You lock m_task_lock, and if the state is not STOPPED and the list is empty, you pass m_task_lock to your condition, which puts the thread to sleep and frees the lock. You create the tasks (not shown) and add the task (m_task_count is an atomic, by the way, which is why it is thread-safe). During add_task, the condition is signaled to wake up a thread, and the thread then proceeds past the m_task_cond.wait(m_task_lock.base_mutex_ptr()) call in execute_thread() once m_task_lock has been acquired and locked.
NOTE: this is a highly customized implementation that wraps most of the pthread functions/objects into C++ classes so copy-and-pasting will not work whatsoever... Sorry. And w.r.t. the thread_pool::run(), unless you are worrying about return values, the (*task)() line is all you need.
I hope this helps.
EDIT: the m_join_* references are for checking whether all the tasks have been completed. The main thread sits in a similar condition wait that checks whether all the tasks have been completed, as this is necessary in the applications I use this implementation in before proceeding.

Complex multi-threaded interface

First of all, it's not a splash screen that I want, just to be clear. OK, let's get to the description of the problem:
I have a form that fires N threads (I don't know how many; the user must choose). Each thread has an object, and at various moments the objects may fire an event to signal some change. There must be a form for each thread to "report" the messages that the events are sending.
My problem is: the threads create the forms perfectly, but then the forms disappear out of nowhere. They appear on the screen and vanish - poof, gone! How can I avoid that undesired "disposing"?
Your threads must either
use proper InvokeRequired + Invoke logic
or run their own MessagePump (Application.Run)
Which one did you (not) do?
If you create a form in a thread, the form will vanish when the thread is done. If you want the form to survive longer than that you need to either keep the thread alive, or create the form on the application's main thread. The latter would be preferable. Just make sure to hook up an event listener for each object in the corresponding form, and use Invoke or BeginInvoke as needed when updating the form.
A simple example:
First a worker:
class Worker
{
    public event EventHandler SomethingHappened;

    protected void OnSomethingHappened(EventArgs e)
    {
        var evnt = SomethingHappened;
        if (evnt != null)
        {
            evnt(this, e);
        }
    }

    public void Work()
    {
        // do lots of work, occasionally calling
        // OnSomethingHappened
    }
}
Then, in a form we have an event handler for the SomethingHappened event:
public void SomethingHappenedHandler(object sender, EventArgs e)
{
    if (this.InvokeRequired)
    {
        this.Invoke(new Action(() => SomethingHappenedHandler(sender, e)));
        return;
    }
    // update gui here
}
Then it's really just a matter of wiring it all together:
Worker w = new Worker();
ProgressForm f = new ProgressForm();
w.SomethingHappened += f.SomethingHappenedHandler;
f.Show();

Thread t = new Thread(w.Work);
t.Start();
Disclaimer: this sample is quickly tossed together and somewhat untested (sitting on the train, about to get off ;) ).
A Form must be hosted on a thread with a message loop. You can create a message loop by calling either Application.Run or Form.ShowDialog. However, unless you have a really good reason for doing so, I would avoid having more than one thread with a Windows message loop.
I would also avoid creating N threads. There are better ways to parallelize N operations other than creating one thread per operation. To name only two: 1) queue a work item in the ThreadPool or 2) use the Task Parallel Library via the Task class. The problem with creating N threads is that each thread consumes a certain amount of resources. More threads means more resources will be consumed and more context switching will occur. More is not always better in the world of multithreading.
