Clutter timeline not emitting signals in time - C

The timeline functionality in Moblin Clutter is supposed to invoke a callback every given number of milliseconds, but it is emitting signals much faster (every 1 ms or so). Why does this happen?
ClutterTimeline * clutter_timeline_new(guint msecs);

You should not be using a timeline to get a notification (and execute code) that N milliseconds have elapsed. ClutterTimeline is an object that is tied to the redraw cycle of the UI: timelines are advanced every time Clutter redraws a frame, to let the application code know that it has to update its state.
If you just need to have your code called after an interval, use g_timeout_add() instead; this function is tied only to the main loop, not to the redraw cycle. There are other considerations to be aware of when using a timeout, so you should read the documentation:
http://developer.gnome.org/glib/stable/glib-The-Main-Event-Loop.html#g-timeout-add
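For illustration, here is a minimal sketch of a periodic callback driven by g_timeout_add() and the GLib main loop rather than the redraw cycle; the 500 ms interval and the on_tick name are placeholders, and in a Clutter application you would typically already be inside clutter_main(), so only the g_timeout_add() call would be needed:

#include <glib.h>

/* Placeholder callback: invoked roughly every 500 ms by the main loop.
 * Returning TRUE keeps the timeout source installed; returning FALSE
 * would remove it after this call. */
static gboolean
on_tick (gpointer user_data)
{
  g_print ("tick\n");
  return TRUE;
}

int
main (void)
{
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);

  g_timeout_add (500 /* msecs */, on_tick, NULL);
  g_main_loop_run (loop);

  g_main_loop_unref (loop);
  return 0;
}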
Strictly speaking, if you're using Moblin, you're probably using a very old version of Clutter, so there may be bugs as well; not that I know of any bug where the ClutterTimeline::new-frame signal is emitted every millisecond, mind you.

Related

Libev: how to schedule a callback to be called as soon as possible

I'm learning libev and I've stumbled upon this question. Assume that I want to process something as soon as possible but not now (i.e. not in the currently executing function). For example, I want to divide some big synchronous job into multiple pieces that get queued, so that other callbacks can fire in between. In other words, I want to schedule a callback with a timeout of 0.
So the first idea is to use ev_timer with a timeout of 0. The first question is: is that efficient? Is libev capable of transforming a 0-timeout timer into an efficient "call as soon as possible" job? I assume it is not.
I've been digging through libev's docs and I found other options as well:
it can artificially delay invoking the callback by using a prepare or idle watcher
So the idle watcher is probably not going to be good here because
Idle watchers trigger events when no other events of the same or higher priority are pending
Which is probably not what I want. Prepare watchers might work here. But why not a check watcher? Is there any crucial difference in the context I'm talking about?
The other option these docs suggest is:
or more sneakily, by reusing an existing (stopped) watcher and pushing it into the pending queue:
ev_set_cb (watcher, callback);
ev_feed_event (EV_A_ watcher, 0);
But that would require always having a stopped watcher around. Also, since I don't know a priori how many calls I want to schedule at the same time, I would have to keep multiple watchers, track them in some kind of list, and grow it when needed.
So am I on the right track? Are these all possibilities or am I missing something simple?
You may want to check out the ev_prepare watcher. That one is scheduled for execution as the last handler in the given event loop iteration. It can be used for "execute this task ASAP" implementations. You can create a dedicated watcher for each task you want to execute, or you can implement a queue with a single prepare watcher that is started once the queue contains at least one task (see the sketch below).
Alternatively, you can implement a similar mechanism using an ev_idle watcher, but in that case it will be executed only if the application isn't processing any 'higher priority' watcher handlers.
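Roughly, that single-prepare-watcher queue could look like the sketch below; the task type, schedule_task() helper, and drain_cb() are made-up names, and a real program would have other watchers keeping the loop alive:

#include <ev.h>
#include <stdio.h>
#include <stdlib.h>

/* Made-up task queue drained by one ev_prepare watcher. The watcher is
 * only active while tasks are pending, so it costs nothing otherwise. */
typedef void (*task_fn) (void *data);

typedef struct task {
  task_fn      fn;
  void        *data;
  struct task *next;
} task;

static task       *queue_head, *queue_tail;
static ev_prepare  drain_watcher;

static void
drain_cb (struct ev_loop *loop, ev_prepare *w, int revents)
{
  /* Run every queued task in FIFO order, then stop the watcher again. */
  while (queue_head) {
    task *t = queue_head;
    queue_head = t->next;
    if (!queue_head)
      queue_tail = NULL;
    t->fn (t->data);
    free (t);
  }
  ev_prepare_stop (loop, w);
}

static void
schedule_task (struct ev_loop *loop, task_fn fn, void *data)
{
  task *t = malloc (sizeof *t);
  t->fn = fn;
  t->data = data;
  t->next = NULL;
  if (queue_tail)
    queue_tail->next = t;
  else
    queue_head = t;
  queue_tail = t;

  /* Starting an already-started watcher is a no-op, so this is safe to
   * call repeatedly. */
  ev_prepare_start (loop, &drain_watcher);
}

static void
say_hello (void *data)
{
  printf ("task ran: %s\n", (const char *) data);
}

int
main (void)
{
  struct ev_loop *loop = ev_default_loop (0);

  ev_prepare_init (&drain_watcher, drain_cb);
  schedule_task (loop, say_hello, "hello");

  ev_run (loop, 0);  /* drains the queue on the first iteration, then exits */
  return 0;
}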

GTK3 sometimes ignores gtk_widget_queue_draw when repeated too quickly?

I have an application which does some simulation, and renders the result. Because the render can sometimes be very slow, I separated it into another thread, and the main thread calls gtk_widget_queue_draw once it's ready. If it's not finished drawing yet, the extra requests get discarded (since queue_draw only invalidates it, and it's impossible to be "more" invalid).
The result is that with large complicated systems, simulation maxes out a thread, and render maxes out another thread, and everything works.
I just ran into a different problem, and I don't see why it's happening: a sufficiently simple simulation and render (six 5-point lines) causes it to break.
The first few (I've seen everywhere from around 60 to 400) steps render fine (in a fraction of a second), until one step renders twice. After that, it ignores the queue_draw, until I do something like drag a window over the render window, after which it restarts (until it breaks again).
If I artificially slow down the requests (usleep(10000) is around enough), this does not happen.
This is a completely unacceptable solution, however, because the process of displaying is not allowed to interfere with the normal simulation thread (no delays, no mutexes, etc.). I need a solution that makes the render thread do "as well as possible", given that it is working with volatile data. It does not have to be perfectly accurate--I really don't care if a frame renders a little wrong (half of frame i, half of i+1 is fine), as long as it does render.
Why is this happening, and how do I fix it?
You have a race condition.
Since GTK 3.6, you need to call gtk_widget_queue_draw from your render thread like this:
g_idle_add((GSourceFunc)gtk_widget_queue_draw,(void*)window);
or like this:
gdk_threads_add_idle((GSourceFunc)gtk_widget_queue_draw,(void*)window);
where window is the GtkWidget* you want to draw.
Use gdk_threads_add_idle if you're not sure whether libraries or code used by your app still rely on the gdk_threads_enter and gdk_threads_leave functions (deprecated since 3.6).
See: https://developer.gnome.org/gdk3/stable/gdk3-Threads.html#gdk3-Threads.description
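Spelled out a little more, the same idea with an explicit idle callback might look like this sketch; queue_redraw_idle and notify_frame_ready are made-up names, and the worker thread never calls GTK directly, it only schedules the call on the main loop:

#include <gtk/gtk.h>

/* Made-up idle callback: runs on the GTK main thread, asks for a
 * redraw, and returns G_SOURCE_REMOVE so it only fires once per
 * scheduled frame. */
static gboolean
queue_redraw_idle (gpointer data)
{
  gtk_widget_queue_draw (GTK_WIDGET (data));
  return G_SOURCE_REMOVE;
}

/* Called from the simulation/render thread whenever a new frame is
 * ready: it only schedules the redraw, the GTK call itself happens
 * later on the main thread. */
static void
notify_frame_ready (GtkWidget *drawing_area)
{
  gdk_threads_add_idle (queue_redraw_idle, drawing_area);
}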
Before GTK 3.6, gdk_threads_enter and gdk_threads_leave had to be used to acquire the lock for GTK calls.
Did you lock the UI calls in your threads with gdk_threads_enter/gdk_threads_leave?
Maybe adding a code sample would help too...

libspotify playlist update latency

We're using libspotify to update playlists that we have generated against a single account; these playlists need to be kept up to date over time. We're using a fork of spotify-api-server to do this: https://github.com/tom-martin/spotify-api-server
After sending an update to a playlist's tracks using libspotify, we generally wait for the callback that we passed to sp_playlist_add_callbacks to be called before we report success to the user. Often this callback arrives within a suitable time frame, but increasingly we're getting unacceptable delays in receiving it: sometimes 30 seconds, sometimes even longer, sometimes minutes, sometimes hours. It seems that these delays are generally caused by libspotify pausing for a period and not calling any callbacks until it seemingly "unfreezes" and calls all the backed-up callbacks in quick succession.
Is it reasonable to use this callback as an indicator of a successful playlist update? Is there any obvious reason for these long delays?
Are you correctly handling the notify_main_thread function to keep libSpotify running?
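A minimal sketch of that driver loop (pthread-based; run_spotify_loop is a made-up name) might look like the following. The key point is that notify_main_thread only wakes the loop, and sp_session_process_events() is always called from this one thread, both when notified and when next_timeout expires:

#include <errno.h>
#include <pthread.h>
#include <time.h>
#include <libspotify/api.h>

/* notify_main_thread may be called from libspotify's internal threads,
 * so it only flags the driver loop and wakes it up. Register it in
 * sp_session_callbacks.notify_main_thread when creating the session. */
static pthread_mutex_t notify_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  notify_cond  = PTHREAD_COND_INITIALIZER;
static int             notify_pending;

static void
notify_main_thread (sp_session *session)
{
  pthread_mutex_lock (&notify_mutex);
  notify_pending = 1;
  pthread_cond_signal (&notify_cond);
  pthread_mutex_unlock (&notify_mutex);
}

/* Made-up driver: waits until libspotify asks for attention or until
 * next_timeout expires, then processes events on this thread only. */
static void
run_spotify_loop (sp_session *session)
{
  int next_timeout = 0;

  for (;;) {
    struct timespec deadline;
    clock_gettime (CLOCK_REALTIME, &deadline);
    deadline.tv_sec  += next_timeout / 1000;
    deadline.tv_nsec += (long) (next_timeout % 1000) * 1000000L;
    if (deadline.tv_nsec >= 1000000000L) {
      deadline.tv_sec  += 1;
      deadline.tv_nsec -= 1000000000L;
    }

    pthread_mutex_lock (&notify_mutex);
    while (!notify_pending &&
           pthread_cond_timedwait (&notify_cond, &notify_mutex,
                                   &deadline) != ETIMEDOUT)
      ;  /* spurious wakeup: keep waiting */
    notify_pending = 0;
    pthread_mutex_unlock (&notify_mutex);

    do
      sp_session_process_events (session, &next_timeout);
    while (next_timeout == 0);
  }
}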
Also, sometimes the playlist system gets backed up, goes down or otherwise takes a while to respond to requests. Our own clients keep their own cache of what the playlist tree should look like once pending transactions are successful to keep the UI snappy.

Background processing on UI thread? (Winforms)

Is there a (or, do you have your own) preferred way to do background processing in slices on the UI thread in Windows Forms? Like OnIdle() in MFC?
In native Windows programming you could roll your own message loop to do this, but Application.Run() doesn't give us access to the message loop.
The Application.Idle event gives us no way to trigger it repeatedly.
I guess you could call native PostMessage() with P/Invoke (since there's no managed version) to post yourself a private "WM_IDLE" message, and override WndProc() to catch it. I don't know how this would get along with Application.Run().
So far I've used a short Timer for this, but I'm afraid I may be losing cycles sleeping, especially since the actual Timer resolution is coarser than the nominal 1 ms minimum.
The best option I've seen is to use a modified version of the Managed DirectX Render Loop designed by Tom Miller. By adding a call to Thread.Sleep() inside the render loop, you can pull your CPU usage down dramatically.
This does require a P/Invoke call to track that the application is still idle, but as long as it's idle, you can make a "timer" that fires continuously during the idle phases, and use that to do your processing.
That being said, on modern systems, you almost always have extra cores. I would suggest just doing the processing on a true background thread.
I thought of my own possible answer, inspired by Reed's talk of multithreading. I may have a way to retrigger Application.Idle:
Create a hidden form, let's call it formRetrigger.
In Application.Idle, launch my Retrigger() method on a thread pool thread.
Retrigger() calls formRetrigger.InvokeOnClick() (or any of the other "Control.Invoke" methods). I expect this to push another message through the Application message queue, causing Idle to be triggered again.

Should I run everything in a background thread?

I am designing an application which has the potential to hang while waiting for data from servers (either database or internet). The problem is that I don't know how best to cope with the multitude of different places where things may take time.
I am happy to display a 'loading' dialog to the user while access is happening, but ideally I don't want it to flash up and disappear for short-running operations.
Microsoft Word appears to handle this quite nicely: if you click a button and the operation takes a long time, after a few seconds you get a 'working...' dialog. The operation is still synchronous and you can't interrupt it. However, if the same operation completes quickly, you obviously don't get the dialog.
I am happy(ish) to devise some generic background-worker-thread handler, and 99% of my data processing is already done in static atomic methods, but I would like to follow best practice on this one if I can.
If anyone has patterns, code, or suggestions, I welcome them all.
Cheers
I would definitely think asynchronously, using a pattern with two events. The first "event" is that you actually got your data from wherever/whenever you had to wait for it. The second event is a delay timer. If you get your data before this timer pops, all is well and good. If not, THEN you pop up your "I'm busy" dialog and allow the user to cancel the request. Usually cancel just means "ignore the response when it finally arrives".
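Sketched in C with GLib for consistency with the earlier threads (a WinForms version would combine a BackgroundWorker with a Timer); request_data, on_data_ready, and the 2-second threshold are all placeholders:

#include <glib.h>

static guint    busy_timer_id;   /* the "delay timer" event   */
static gboolean data_arrived;    /* the "data arrived" event  */

/* Placeholder timer callback: the data did not arrive before the delay
 * elapsed, so now (and only now) show the "I'm busy" dialog. */
static gboolean
busy_timeout_cb (gpointer user_data)
{
  if (!data_arrived)
    g_print ("showing busy dialog...\n");
  busy_timer_id = 0;
  return FALSE;  /* one-shot: remove the timeout source */
}

static void
request_data (void)
{
  data_arrived = FALSE;
  /* Start the delay timer; the dialog only appears if 2 s pass first. */
  busy_timer_id = g_timeout_add_seconds (2, busy_timeout_cb, NULL);
  /* ...kick off the asynchronous request here... */
}

static void
on_data_ready (void)
{
  data_arrived = TRUE;
  /* Data beat the timer: cancel it so the dialog never flashes up. */
  if (busy_timer_id != 0) {
    g_source_remove (busy_timer_id);
    busy_timer_id = 0;
  }
  /* ...hide the dialog if it was shown, then use the data... */
}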
Microsoft Word appears to handle this quite nicely: if you click a button and the operation takes a long time, after a few seconds you get a 'working...' dialog. The operation is still synchronous and you can't interrupt it. However, if the same operation completes quickly, you obviously don't get the dialog.
If this is the behavior you want...
You could handle this fairly easily by wrapping a class around BackgroundWorker. Just time the start of the DoWork event and the time to the first progress report. If a certain amount of time passes, you could show your dialog; otherwise, block (since it's a short process).
That being said, any time you're doing work that can be processed asynchronously, I'd recommend doing it that way. It's much nicer to never block your UI for a noticeable interval, even if it's short. This becomes much simpler in .NET 4 (or 3.5 with the Rx framework) by using the Task Parallel Library.
Ideally you should be running any IO or non-UI processing either in a background thread or asynchronously to avoid locking up the UI.
