Get a GstBuffer/GstMemory/GstSample from an element in running pipeline - c

I need to randomly access image data in a running pipeline, something like gst_base_sink_get_last_sample(), but not for a sink element placed at the end of the pipeline. I need to inspect data passing through the middle of the pipeline while it is running, for example the input buffer to glupload. I also cannot add a tee to fork the stream and send a copy to a fakesink, since tee has the overhead of a data copy for each frame. Is there any method I can use to extract the current buffer/memory/sample from a PLAYING element in a GStreamer pipeline?

Thanks to @FlorianZwoch, I made it work using a pad probe.
Every time I need access to the data, I add a probe to the pad I want, like this:
GstPad* pad = gst_element_get_static_pad(v4l2convert, "src");
gst_pad_add_probe(pad, GST_PAD_PROBE_TYPE_BUFFER, probe_callback, NULL, NULL);
gst_object_unref(pad);
and then in the callback function I can access data using these lines of code:
GstPadProbeReturn Gstreamer::probe_callback(
    GstPad* pad, GstPadProbeInfo* info, gpointer user_data)
{
    Q_UNUSED(user_data);
    gst_println("callback called");

    GstBuffer* buffer = gst_pad_probe_info_get_buffer(info); // No need to unref later [see docs]
    GstCaps* caps = gst_pad_get_current_caps(pad);           // Must be unreffed when done
    if (!buffer || !caps)
    {
        g_printerr("Probe callback failed to fetch valid data.\n");
        goto label_return;
    }

    // Consume data here

label_return:
    if (caps)
        gst_caps_unref(caps);
    return GST_PAD_PROBE_REMOVE;
}
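For example, to read the raw bytes at the "Consume data here" point, the buffer can be mapped for reading with the standard gst_buffer_map()/gst_buffer_unmap() calls (a minimal sketch; what you do with the bytes and how you interpret them via the caps is up to you):

GstMapInfo map;
if (gst_buffer_map(buffer, &map, GST_MAP_READ))
{
    // map.data points to the raw frame bytes, map.size is their length;
    // the caps describe the format (width, height, pixel format, ...).
    g_print("Got buffer of %" G_GSIZE_FORMAT " bytes\n", map.size);
    gst_buffer_unmap(buffer, &map);
}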

Related

Reconnect RTSP stream in Gstreamer pipeline

I have a working GStreamer pipeline using RTSP input streams. To handle these RTSP input streams, the uridecodebin element is used.
My goal is to reconnect to the RTSP input streams when the internet connection is unstable.
When the internet connection is down for only a few seconds and then comes back up, the pipeline starts to receive frames again and everything is fine. When the internet connection is down for more than about 20 seconds, I get GST_MESSAGE_EOS. I tried to find some timeout variable in every element generated by uridecodebin, but I did not find one. Do you have any hint which element has this timeout variable and how to set it?
If it is not possible to set such a timeout variable, is there any way to block GST_MESSAGE_EOS? When I receive GST_MESSAGE_EOS on the bus, I try to remove uridecodebin from the pipeline and create a new one, but that does not work once GST_MESSAGE_EOS has been received (removing uridecodebin and creating a new one works fine during normal operation).
I found a way to block GST_MESSAGE_EOS.
Create the following function to drop GST_EVENT_EOS:
GstPadProbeReturn eos_probe_cb(GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
    if (GST_EVENT_TYPE(GST_PAD_PROBE_INFO_DATA(info)) == GST_EVENT_EOS)
    {
        return GST_PAD_PROBE_DROP;
    }
    return GST_PAD_PROBE_OK;
}
And then just add this function to some GstPad of your elements:
gst_pad_add_probe(src_pad, GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM, eos_probe_cb, (gpointer) user_data, NULL);

Gstreamer restart EOS element

I'd like to loop a file using GStreamer.
Gst.Element playbin = Gst.ElementFactory.make ("uridecodebin", null);
I do this by adding a probe to the playbin's src pad and listening for EOS events. Whenever one arrives, I repeat the stream by seeking back to the beginning.
Gst.Pad srcpad = playbin.get_request_pad("src_%u");
srcpad.add_probe(Gst.PadProbeType.EVENT_DOWNSTREAM, (pad, info) => {
    Gst.Event? event = info.get_event();
    if (event != null)
    {
        if (event.type == Gst.EventType.EOS
            || event.type == Gst.EventType.SEGMENT_DONE)
        {
            var element = pad.get_parent_element();
            element.seek(1.0, Gst.Format.TIME, Gst.SeekFlags.SEGMENT, Gst.SeekType.SET, 0, Gst.SeekType.NONE, 0);
            return Gst.PadProbeReturn.HANDLED;
        }
    }
    return Gst.PadProbeReturn.OK;
});
However, when I catch the EOS and seek back to the beginning, I get this error:
wavparse gstwavparse.c:2195:gst_wavparse_stream_data:<wavparse0> Error pushing on srcpad wavparse0:src, reason eos, is linked? = 1
How do I get my playbin element back out of the EOS state so that it can play from the place I seeked to?
I'd like to avoid listening to the pipeline bus because it's quite a complex application and the playbin is quite a few Bins deep.
Admittedly, my testing was performed with Python rather than C, but I see no reason why the logic should be different.
Set the player's pipeline state to Gst.State.NULL, wait a tenth of a second or so for the pipeline to reach that state, then simply play it again as if starting from scratch after loading it, normally by simply setting the pipeline state to Gst.State.PLAYING.
Note:
As far as I'm aware, only local files can be pre-rolled, i.e. do the seek and then play; if the file is not local you must play and then seek. Always wait for the pipeline to reach the desired state before the next operation.
To pre-roll a local file, set the pipeline state to paused, then perform the seek, and finally set it to playing, always waiting for the previous operation to finish before moving on to the next.
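Roughly the same idea in C, as a minimal sketch (assuming a pipeline element named pipeline and omitting error checking; gst_element_get_state() is used to wait for each state change to complete):

/* Restart from scratch: tear the pipeline down to NULL, then play again. */
gst_element_set_state(pipeline, GST_STATE_NULL);
gst_element_get_state(pipeline, NULL, NULL, GST_CLOCK_TIME_NONE); /* wait */
gst_element_set_state(pipeline, GST_STATE_PLAYING);

/* Pre-roll a local file instead: pause, seek to the start, then play. */
gst_element_set_state(pipeline, GST_STATE_PAUSED);
gst_element_get_state(pipeline, NULL, NULL, GST_CLOCK_TIME_NONE); /* wait */
gst_element_seek_simple(pipeline, GST_FORMAT_TIME,
                        GST_SEEK_FLAG_FLUSH | GST_SEEK_FLAG_KEY_UNIT, 0);
gst_element_set_state(pipeline, GST_STATE_PLAYING);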

Realtime sine tone generation with Core Audio

I want to create a realtime sine generator using Apple's Core Audio framework. I want to do it low level so I can learn and understand the fundamentals.
I know that using PortAudio or Jack would probably be easier, and I will use them at some point, but I would like to get this working first so I can be confident I understand the fundamentals.
I literally searched for days on this topic, but no one seems to have ever created a realtime wave generator using Core Audio that aims for low latency while using C rather than Swift or Objective-C.
For this I use a project I set up a while ago. It was first designed to be a game. So after the Application starts up, it will enter a run loop. I thought this would perfectly fit as I can use the main loop to copy samples into the audio buffer and process rendering and input handling as well.
So far I get sound. Sometimes it works for a while then starts to glitch, sometimes it glitches right away.
This is my code. I tried to simplify it and only present the important parts.
I got multiple questions. They are located in the bottom section of this post.
Applications main run loop. This is where it all starts after the window is created and buffers and memory is initialized:
while (OSXIsGameRunning())
{
OSXProcessPendingMessages(&GameData);
[GlobalGLContext makeCurrentContext];
CGRect WindowFrame = [window frame];
CGRect ContentViewFrame = [[window contentView] frame];
CGPoint MouseLocationInScreen = [NSEvent mouseLocation];
BOOL MouseInWindowFlag = NSPointInRect(MouseLocationInScreen, WindowFrame);
CGPoint MouseLocationInView = {};
if (MouseInWindowFlag)
{
NSRect RectInWindow = [window convertRectFromScreen:NSMakeRect(MouseLocationInScreen.x, MouseLocationInScreen.y, 1, 1)];
NSPoint PointInWindow = RectInWindow.origin;
MouseLocationInView= [[window contentView] convertPoint:PointInWindow fromView:nil];
}
u32 MouseButtonMask = [NSEvent pressedMouseButtons];
OSXProcessFrameAndRunGameLogic(&GameData, ContentViewFrame,
MouseInWindowFlag, MouseLocationInView,
MouseButtonMask);
#if ENGINE_USE_VSYNC
[GlobalGLContext flushBuffer];
#else
glFlush();
#endif
}
Through using VSYNC I can throttle the loop down to 60 FPS. The timing is not super tight but it is quite steady. I also have some code to throttle it manually using mach timing which is even more imprecise. I left it out for readability.
Not using VSYNC or using mach timing to get 60 iterations a second also makes the audio glitch.
Timing log:
CyclesElapsed: 8154360866, TimeElapsed: 0.016624, FPS: 60.155666
CyclesElapsed: 8174382119, TimeElapsed: 0.020021, FPS: 49.946926
CyclesElapsed: 8189041370, TimeElapsed: 0.014659, FPS: 68.216309
CyclesElapsed: 8204363633, TimeElapsed: 0.015322, FPS: 65.264511
CyclesElapsed: 8221230959, TimeElapsed: 0.016867, FPS: 59.286217
CyclesElapsed: 8237971921, TimeElapsed: 0.016741, FPS: 59.733719
CyclesElapsed: 8254861722, TimeElapsed: 0.016890, FPS: 59.207333
CyclesElapsed: 8271667520, TimeElapsed: 0.016806, FPS: 59.503273
CyclesElapsed: 8292434135, TimeElapsed: 0.020767, FPS: 48.154209
What is important here is the function OSXProcessFrameAndRunGameLogic. It is called 60 times a second and it is passed a struct containing basic information like a buffer for rendering, keyboard state and a sound buffer which looks like this:
typedef struct osx_sound_output
{
    game_sound_output_buffer SoundBuffer;
    u32 SoundBufferSize;
    s16* CoreAudioBuffer;
    s16* ReadCursor;
    s16* WriteCursor;
    AudioStreamBasicDescription AudioDescriptor;
    AudioUnit AudioUnit;
} osx_sound_output;
Where game_sound_output_buffer is:
typedef struct game_sound_output_buffer
{
    real32 tSine;
    int SamplesPerSecond;
    int SampleCount;
    int16 *Samples;
} game_sound_output_buffer;
These get set up before the application enters its run loop.
The size for the SoundBuffer itself is SamplesPerSecond * sizeof(uint16) * 2 where SamplesPerSecond = 48000.
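With SamplesPerSecond = 48000, that works out to 48000 × 2 bytes × 2 channels = 192,000 bytes, i.e. exactly one second of 16-bit stereo audio.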
So inside OSXProcessFrameAndRunGameLogic is the sound generation:
void OSXProcessFrameAndRunGameLogic(osx_game_data *GameData, CGRect WindowFrame,
                                    b32 MouseInWindowFlag, CGPoint MouseLocation,
                                    int MouseButtonMask)
{
    GameData->SoundOutput.SoundBuffer.SampleCount = GameData->SoundOutput.SoundBuffer.SamplesPerSecond / GameData->TargetFramesPerSecond;

    // Oszi 1
    OutputTestSineWave(GameData, &GameData->SoundOutput.SoundBuffer, GameData->SynthesizerState.ToneHz);

    int16* CurrentSample = GameData->SoundOutput.SoundBuffer.Samples;
    for (int i = 0; i < GameData->SoundOutput.SoundBuffer.SampleCount; ++i)
    {
        *GameData->SoundOutput.WriteCursor++ = *CurrentSample++;
        *GameData->SoundOutput.WriteCursor++ = *CurrentSample++;

        if ((char*)GameData->SoundOutput.WriteCursor >= ((char*)GameData->SoundOutput.CoreAudioBuffer + GameData->SoundOutput.SoundBufferSize))
        {
            //printf("Write cursor wrapped!\n");
            GameData->SoundOutput.WriteCursor = GameData->SoundOutput.CoreAudioBuffer;
        }
    }
}
Where OutputTestSineWave is the part where the buffer is actually filled with data:
void OutputTestSineWave(osx_game_data *GameData, game_sound_output_buffer *SoundBuffer, int ToneHz)
{
    int16 ToneVolume = 3000;
    int WavePeriod = SoundBuffer->SamplesPerSecond/ToneHz;

    int16 *SampleOut = SoundBuffer->Samples;
    for(int SampleIndex = 0;
        SampleIndex < SoundBuffer->SampleCount;
        ++SampleIndex)
    {
        real32 SineValue = sinf(SoundBuffer->tSine);
        int16 SampleValue = (int16)(SineValue * ToneVolume);
        *SampleOut++ = SampleValue;
        *SampleOut++ = SampleValue;

        SoundBuffer->tSine += Tau32*1.0f/(real32)WavePeriod;
        if(SoundBuffer->tSine > Tau32)
        {
            SoundBuffer->tSine -= Tau32;
        }
    }
}
When the buffers are created at startup, Core Audio is also initialized, which I do like this:
void OSXInitCoreAudio(osx_sound_output* SoundOutput)
{
AudioComponentDescription acd;
acd.componentType = kAudioUnitType_Output;
acd.componentSubType = kAudioUnitSubType_DefaultOutput;
acd.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponent outputComponent = AudioComponentFindNext(NULL, &acd);
AudioComponentInstanceNew(outputComponent, &SoundOutput->AudioUnit);
AudioUnitInitialize(SoundOutput->AudioUnit);
// uint16
//AudioStreamBasicDescription asbd;
SoundOutput->AudioDescriptor.mSampleRate = SoundOutput->SoundBuffer.SamplesPerSecond;
SoundOutput->AudioDescriptor.mFormatID = kAudioFormatLinearPCM;
SoundOutput->AudioDescriptor.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsNonInterleaved | kAudioFormatFlagIsPacked;
SoundOutput->AudioDescriptor.mFramesPerPacket = 1;
SoundOutput->AudioDescriptor.mChannelsPerFrame = 2; // Stereo
SoundOutput->AudioDescriptor.mBitsPerChannel = sizeof(int16) * 8;
SoundOutput->AudioDescriptor.mBytesPerFrame = sizeof(int16); // don't multiply by channel count with non-interleaved!
SoundOutput->AudioDescriptor.mBytesPerPacket = SoundOutput->AudioDescriptor.mFramesPerPacket * SoundOutput->AudioDescriptor.mBytesPerFrame;
AudioUnitSetProperty(SoundOutput->AudioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&SoundOutput->AudioDescriptor,
sizeof(SoundOutput->AudioDescriptor));
AURenderCallbackStruct cb;
cb.inputProc = OSXAudioUnitCallback;
cb.inputProcRefCon = SoundOutput;
AudioUnitSetProperty(SoundOutput->AudioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
0,
&cb,
sizeof(cb));
AudioOutputUnitStart(SoundOutput->AudioUnit);
}
The initialization code for Core Audio sets the render callback to OSXAudioUnitCallback:
OSStatus OSXAudioUnitCallback(void * inRefCon,
AudioUnitRenderActionFlags * ioActionFlags,
const AudioTimeStamp * inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList * ioData)
{
#pragma unused(ioActionFlags)
#pragma unused(inTimeStamp)
#pragma unused(inBusNumber)
//double currentPhase = *((double*)inRefCon);
osx_sound_output* SoundOutput = ((osx_sound_output*)inRefCon);
if (SoundOutput->ReadCursor == SoundOutput->WriteCursor)
{
SoundOutput->SoundBuffer.SampleCount = 0;
//printf("AudioCallback: No Samples Yet!\n");
}
//printf("AudioCallback: SampleCount = %d\n", SoundOutput->SoundBuffer.SampleCount);
int SampleCount = inNumberFrames;
if (SoundOutput->SoundBuffer.SampleCount < inNumberFrames)
{
SampleCount = SoundOutput->SoundBuffer.SampleCount;
}
int16* outputBufferL = (int16 *)ioData->mBuffers[0].mData;
int16* outputBufferR = (int16 *)ioData->mBuffers[1].mData;
for (UInt32 i = 0; i < SampleCount; ++i)
{
outputBufferL[i] = *SoundOutput->ReadCursor++;
outputBufferR[i] = *SoundOutput->ReadCursor++;
if ((char*)SoundOutput->ReadCursor >= (char*)((char*)SoundOutput->CoreAudioBuffer + SoundOutput->SoundBufferSize))
{
//printf("Callback: Read cursor wrapped!\n");
SoundOutput->ReadCursor = SoundOutput->CoreAudioBuffer;
}
}
for (UInt32 i = SampleCount; i < inNumberFrames; ++i)
{
outputBufferL[i] = 0.0;
outputBufferR[i] = 0.0;
}
return noErr;
}
This is mostly all there is to it. It is quite long, but I did not see a way to present all the needed information more compactly. I wanted to show everything because I am by no means a professional programmer. If you feel something is missing, please tell me.
My feeling tells me that there is something wrong with the timing. I feel that OSXProcessFrameAndRunGameLogic sometimes needs more time, so that the Core Audio callback is already pulling samples out of the buffer before it has been fully written by OutputTestSineWave.
There is actually more going on in OSXProcessFrameAndRunGameLogic which I did not show here. I am "software rendering" very basic stuff into a framebuffer which is then displayed by OpenGL, and I also do keypress checks in there, because it's the main function of the frame. In the future this is where I would like to handle the controls for multiple oscillators, filters and so on.
Anyway, even if I stop the rendering and input handling from being called every iteration, I still get audio glitches.
I tried pulling all the sound processing in OSXProcessFrameAndRunGameLogic into its own function void* RunSound(void *GameData) and changed it to:
pthread_t soundThread;
pthread_create(&soundThread, NULL, RunSound, GameData);
pthread_join(soundThread, NULL);
However, I got mixed results and was not even sure that multithreading is done like that; creating and destroying threads 60 times a second did not seem the way to go.
I also had the idea of letting sound processing happen on a completely different thread before the application enters the main loop, something like two simultaneously running while loops where the first processes audio and the second handles UI and input.
Questions:
I get glitchy audio. Rendering and input seem to work correctly but audio sometimes glitches, sometimes it doesn't. From the code I provided, can you maybe see me doing something wrong?
Am I using the Core Audio technology in the wrong way to achieve realtime, low-latency signal generation?
Should I do sound processing in a separate thread like I described above? How would threading be done correctly in this context? It would make sense to have a thread dedicated only to sound, am I right?
Am I right that the basic audio processing should not be done in the Core Audio render callback? Is this function only for outputting the provided sound buffer?
And if sound processing should be done right there, how can I access information like the keyboard state from inside the callback?
Are there any resources you could point me to that I maybe missed?
This is the only place I know where I can get help with this project. I would really appreciate your help.
And if something is not clear to you please let me know.
Thank you :)
In general when dealing with low-latency audio you want to achieve the most deterministic behaviour possible.
This, for example, translates to:
Don't hold any locks on the audio thread (priority inversion)
No memory allocation on the audio thread (takes often too much time)
No file/network IO on the audio thread (takes often too much time)
Question 1:
There are indeed some problems with your code for when you want to achieve continuous, realtime, non-glitching audio.
1. Two different clock domains.
You are providing audio data from (what I call) a different clock domain than the one asking for data. Clock domain 1 in this case is defined by your TargetFramesPerSecond value, clock domain 2 by Core Audio. However, due to how scheduling works, you have no guarantee that your thread finishes in time and on time. You try to target your rendering to n frames per second, but what happens when you don't make it time-wise? As far as I can see, you don't compensate for the deviation of a render cycle from the ideal timing.
The way threading works is that ultimately the OS scheduler decides when your thread is active. There are never guarantees, and this makes your render cycles not very precise (in terms of the precision you need for audio rendering).
2. There is no synchronisation between the render thread and the Core Audio rendercallback thread.
The thread where OSXAudioUnitCallback runs is not the one where OSXProcessFrameAndRunGameLogic, and thus OutputTestSineWave, run. You are providing data from your main thread, and data is being read from the Core Audio render thread. Normally you would use mutexes to protect your data, but in this case that is not possible because you would run into the problem of priority inversion.
A way of dealing with the race conditions is to use a buffer which uses atomic variables to store the usage and pointers of the buffer, and let only one producer and one consumer use this buffer.
Good examples of such buffers are:
https://github.com/michaeltyson/TPCircularBuffer
https://github.com/andrewrk/libsoundio/blob/master/src/ring_buffer.h
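For illustration only, the core of such a single-producer/single-consumer buffer might look like the sketch below, using C11 atomics and free-running indices with a power-of-two capacity (the linked libraries handle memory ordering, partial writes and byte-oriented data far more carefully):

#include <stdatomic.h>
#include <stdint.h>

#define RING_CAPACITY 8192  /* in samples, must be a power of two */

typedef struct
{
    int16_t     data[RING_CAPACITY];
    atomic_uint writeIndex;  /* only the producer (game loop) advances this, starts at 0 */
    atomic_uint readIndex;   /* only the consumer (audio callback) advances this, starts at 0 */
} RingBuffer;

/* Producer side: returns the number of samples actually written. */
static unsigned RingWrite(RingBuffer *rb, const int16_t *src, unsigned count)
{
    unsigned w = atomic_load_explicit(&rb->writeIndex, memory_order_relaxed);
    unsigned r = atomic_load_explicit(&rb->readIndex, memory_order_acquire);
    unsigned freeSpace = RING_CAPACITY - (w - r);
    if (count > freeSpace) count = freeSpace;
    for (unsigned i = 0; i < count; ++i)
        rb->data[(w + i) & (RING_CAPACITY - 1)] = src[i];
    atomic_store_explicit(&rb->writeIndex, w + count, memory_order_release);
    return count;
}

/* Consumer side: returns the number of samples actually read. */
static unsigned RingRead(RingBuffer *rb, int16_t *dst, unsigned count)
{
    unsigned r = atomic_load_explicit(&rb->readIndex, memory_order_relaxed);
    unsigned w = atomic_load_explicit(&rb->writeIndex, memory_order_acquire);
    unsigned avail = w - r;
    if (count > avail) count = avail;
    for (unsigned i = 0; i < count; ++i)
        dst[i] = rb->data[(r + i) & (RING_CAPACITY - 1)];
    atomic_store_explicit(&rb->readIndex, r + count, memory_order_release);
    return count;
}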
3. There are a lot of calls in the thread that renders your audio which prevent deterministic behaviour.
As you wrote, you are doing a lot more inside the same thread that renders the audio. Chances are quite high that there will be stuff going on (under the hood) which prevents your thread from being on time. Generally, you should avoid calls which either take too much time or are not deterministic. With all the OpenGL/keypress/framebuffer rendering there is no way to be certain your thread will "arrive on time".
Below are some resources worth looking into.
Question 2:
AFAICT generally speaking, you are using the Core Audio technology correctly. The only problem I think you have is on the providing side.
Question 3:
Yes. Definitely! Although, there are multiple ways of doing this.
In your case you have a normal-priority thread running to do the rendering and a high-performance, realtime thread on which the audio render callback is being called. Looking at your code I would suggest putting the generation of the sine wave inside the render callback function (or call OutputTestSineWave from the render callback). This way you have the audio generation running inside a reliable high prio thread, there is no other rendering interfering with the timing precision and there is no need for a ringbuffer.
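As a rough sketch of that suggestion (reusing the osx_sound_output struct and the int16/real32/Tau32 names from the question, and the non-interleaved int16 stereo stream format set up in OSXInitCoreAudio; the 440 Hz frequency is just an example):

OSStatus SineRenderCallback(void *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp *inTimeStamp,
                            UInt32 inBusNumber,
                            UInt32 inNumberFrames,
                            AudioBufferList *ioData)
{
#pragma unused(ioActionFlags)
#pragma unused(inTimeStamp)
#pragma unused(inBusNumber)
    osx_sound_output *SoundOutput = (osx_sound_output *)inRefCon;

    real32 SampleRate = (real32)SoundOutput->SoundBuffer.SamplesPerSecond;
    real32 PhaseStep  = Tau32 * 440.0f / SampleRate;   /* 440 Hz sine, for example */

    int16 *Left  = (int16 *)ioData->mBuffers[0].mData;
    int16 *Right = (int16 *)ioData->mBuffers[1].mData;

    for (UInt32 i = 0; i < inNumberFrames; ++i)
    {
        /* Generate the samples right here, on the realtime thread. */
        int16 Sample = (int16)(sinf(SoundOutput->SoundBuffer.tSine) * 3000);
        Left[i]  = Sample;
        Right[i] = Sample;

        SoundOutput->SoundBuffer.tSine += PhaseStep;
        if (SoundOutput->SoundBuffer.tSine > Tau32)
            SoundOutput->SoundBuffer.tSine -= Tau32;
    }
    return noErr;
}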
In other cases where you need to do "non-realtime" processing to get audiodata ready (think of reading from a file, reading from a network or even from another physical audio device) you cannot run this logic inside the Core Audio thread. A way to solve this is to start a separate, dedicated thread to do this processing. To pass the data to the realtime audio thread you would then make use of the earlier mentioned ringbuffer.
It basically boils down to two simple goals: for the realtime thread it is necessary to have the audio data available at all times (for all render calls); if this fails you will end up sending invalid (or better, zeroed) audio data.
The main goal for the secondary thread is to fill up the ringbuffer as fast as possible and keep it as full as possible. So, whenever there is room to put new audio data into the ringbuffer, the thread should do just that.
The size of the ringbuffer dictates how much tolerance there is for delay. It is a balance between certainty (bigger buffer) and latency (smaller buffer).
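To put a number on that: a ring buffer holding, say, 4096 frames at 48 kHz corresponds to roughly 85 ms of audio, so the producing thread can fall that far behind before the callback runs dry; halving the buffer halves both that safety margin and the worst-case added latency (the 4096 figure is only an example).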
BTW. I'm quite certain Core Audio has all the facilities to do all this for you.
Question 4:
There are multiple ways of achieving your goal, and rendering the audio inside the Core Audio render callback is definitely one of them. The one thing you should keep in mind is that you have to make sure the function returns in time.
For changing parameters to manipulate the audio rendering, you'll have to find a way of passing messages which enables the reader (the audio render function) to get messages without locking and waiting. The way I have done this is to create a second ringbuffer which holds messages the audio renderer can consume. This can be as simple as a ringbuffer holding structs with data (or even pointers to data), as long as you stick to the rule of no locking.
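For illustration, such a message could be nothing more than a small struct pushed through a second lock-free ring buffer of the same kind as above (the layout below is only an assumption for the sketch, not a fixed format):

typedef enum { MSG_SET_TONE_HZ, MSG_SET_VOLUME } MessageType;

typedef struct
{
    MessageType type;
    float       value;   /* new frequency in Hz, new volume, ... */
} AudioMessage;

/* Main thread: push {MSG_SET_TONE_HZ, 440.0f} into the message ring buffer
   when the user presses a key.
   Audio callback: drain all pending messages at the start of each render
   call and apply them before generating samples; never block if empty. */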
Question 5:
I don't know what resources you are aware of but here are some must-reads:
http://atastypixel.com/blog/four-common-mistakes-in-audio-development/
http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing
https://developer.apple.com/library/archive/qa/qa1467/_index.html
Your basic problem is that you are trying to push audio from your game loop instead of letting the audio system pull it; i.e. instead of always having (or quickly being able to create*) enough audio samples ready for the amount requested by the audio callback. The "always" has to account for enough slop to cover timing jitter (being called late or early or too few times) in your game loop.
(* with no locks, semaphores, memory allocation or Objective C messages)

Download File (using Thread class)

OK, I understand that this may be a very stupid question, but I have never done it before, so I am asking: how can I download a file (let's say, from the internet) using the Thread class?
What do you mean by "using the Thread class"? I guess you want to download a file in a thread so it does not block your UI or some other part of your program.
I'll assume that you're using C++ and the WinAPI.
First create a thread. This tutorial provides good information about WIN32 threads.
This thread will be responsible for downloading the file. To do this you simply connect to the web server on port 80 and send an HTTP GET request for the file you want. It could look similar to this (note the newline characters):
GET /path/to/your/file.jpg HTTP/1.1\r\n
Host: www.host.com\r\n
Connection: close\r\n
\r\n
\r\n
The server will then answer with an HTTP response containing the file, preceded by a header. Parse this header and read the contents.
More information on HTTP can be found here.
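A bare-bones sketch of such a thread procedure with Winsock might look like this (illustrative only: the host, path, and file handling are placeholders, all error handling is omitted, and a real implementation has to parse the response header to find where the body starts):

#include <winsock2.h>
#include <ws2tcpip.h>
#include <stdio.h>
#include <string.h>

DWORD WINAPI DownloadThreadProc(LPVOID param)
{
    (void)param;
    const char *host = "www.host.com";           /* placeholder */
    const char *path = "/path/to/your/file.jpg"; /* placeholder */

    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    struct addrinfo hints = {0}, *result = NULL;
    hints.ai_family   = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    getaddrinfo(host, "80", &hints, &result);

    SOCKET s = socket(result->ai_family, result->ai_socktype, result->ai_protocol);
    connect(s, result->ai_addr, (int)result->ai_addrlen);

    char request[512];
    sprintf(request, "GET %s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n",
            path, host);
    send(s, request, (int)strlen(request), 0);

    /* Read until the server closes the connection; the first recv()s contain
       the response header, everything after the blank line is the file body. */
    char buffer[4096];
    int received;
    while ((received = recv(s, buffer, sizeof(buffer), 0)) > 0)
    {
        /* parse the header / append the body to a local file here */
    }

    closesocket(s);
    freeaddrinfo(result);
    WSACleanup();
    return 0;
}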
I would suggest that you do not use threads for downloading files. It's better to use asynchronous constructs that are more targeted towards I/O, since they incur lower overhead than threads. I don't know what version of the .NET Framework you are working with, but in 4.5 something like this should work:
private static Task DownloadFileAsync(string uri, string localPath)
{
// Get the http request
HttpWebRequest webRequest = WebRequest.CreateHttp(uri);
// Get the http response asynchronously
return webRequest.GetResponseAsync()
.ContinueWith(task =>
{
// When the GetResponseAsync task is finished, we will come
// into this contiuation (which is an anonymous method).
// Check if the GetResponseAsync task failed.
if (task.IsFaulted)
{
Console.WriteLine(task.Exception);
return null;
}
// Get the web response.
WebResponse response = task.Result;
// Open a file stream for the local file.
FileStream localStream = File.OpenWrite(localPath);
// Copy the contents from the response stream to the
// local file stream asynchronously.
return response.GetResponseStream().CopyToAsync(localStream)
.ContinueWith(streamTask =>
{
// When the CopyToAsync task is finished, we come
// to this continuation (which is also an anonymous
// method).
// Flush and dispose the local file stream. There
// is a FlushAsync method that will flush
// asychronously, returning yet another task, but
// for the sake of brevity I use the synchronous
// method here.
localStream.Flush();
localStream.Dispose();
// Don't forget to check if the previous task
// failed or not.
// All Task exceptions must be observed.
if (streamTask.IsFaulted)
{
Console.WriteLine(streamTask.Exception);
}
});
// since we end up with a task returning a task we should
// call Unwrap to return a single task representing the
// entire operation
}).Unwrap();
}
You would want to elaborate a bit on the error handling. See the code comments for a more detailed explanation of how the code works.

Start and stop GLUT animation

I'm working on a project in C where I'm going to do some heavy physics calculations, and I want the ability to see the results when I'm finished. The way it works now is that I run GLUT on the main thread and use a separate thread (pthread) for input (from the terminal) and calculations. I currently use glutTimerFunc to do the animation, but the problem is that that function fires every given time interval no matter what. I can stop the animation by using an if statement in the animation function, stopping the variables from being updated, but this uses a lot of unnecessary resources (I think).
To fix this problem I was thinking that I could use an extra thread with a custom timer function that I could control myself (without glutMainLoop messing things up). Currently this is my test function to check whether this would work (meaning that the function itself is in no way finished). It runs in a separate thread created just before glutMainLoop:
void *threadAnimation() {
    while (1) {
        if (animationRun) {
            rotate = rotate+0.00001;
            if (rotate>360) {
                rotate = rotate-360;
            }
            glutSetWindow(window);
            glutPostRedisplay();
        }
    }
}
The specific problem I have is that the animation only runs for a couple of seconds and then stops. Does anybody know how I can fix this? I am planning to use timers and so on later, but what I'm looking for is a way to ensure that glutPostRedisplay will be sent to the right place. I thought glutSetWindow(window) was the solution, but apparently not. If I remove glutSetWindow(window) the animation still works, just not for as long, but it runs much faster (so maybe glutSetWindow(window) takes a lot of resources?).
btw the variable "window" is created like this:
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
glutInitWindowSize(854, 540);
glutInitWindowPosition(100, 100);
window = glutCreateWindow("Animation View");
init();
glutDisplayFunc(display);
glutReshapeFunc(reshape);
timerInt = pthread_create(&timerThread, NULL, &threadAnimation, NULL);
glutMainLoop();
I don't actually know if this is correct, but it compiles just fine. Any help is greatly appreciated!
Here's a little idea: create a class that will contain all dynamic settings:
class DynamicInfo {
public:
    int vertexCount;
    float *vertexes;
    ...
    DynamicInfo &operator=( const DynamicInfo &origin);
};
And then the main application will contain these:
DynamicInfo buffers[2];
int activeBuffer = 0;
In the animation thread, just draw (perhaps using a lock around the one shared variable):
DynamicInfo *current = buffers + activeBuffer; // Or rather use reference
In calculations:
// Store currently used buffer as current (for future manipulation)
DynamicInfo *current = buffers + activeBuffer;
// We finished calculations on activeBuffer[1] so we may propagate it to application
activeBuffer = (activeBuffer + 1)%2;
// Let actual data propagate to current buffer
(*current) = buffers[activeBuffer];
Again, only that one variable needs locking.
