In real-time games, there is always a game loop that runs every few milliseconds, updates the game with new data and repaints the entire screen.
Is this something that is seen in other types of applications, other than games? A 'constant-update-loop'?
For example, imagine an application like MSPaint. The user can draw lines on the screen with the mouse. The line that is being drawn is displayed on the screen as it is being drawn.
Imagine that this line is actually made of many smaller lines, each about 2 pixels long. It would make sense to store each of these small lines in a List.
But as I said, the line being drawn (the large line, made of lots of small lines) is displayed as it is being drawn. This means a repaint of the screen is necessary to display each new small line the moment it is added.
But - please correct me if I'm mistaken - it would be difficult to repaint only the specific part of the screen where the new small line was drawn. If so, a repaint of the entire screen would be necessary.
Thus it would make sense to use an 'update loop' to constantly repaint the entire screen, and constantly iterate over the list of lines and draw these lines over and over again - like in games.
Does this approach exist in non-game applications, and specifically in 'drawing' applications?
Thanks
Essentially, you do have a loop in all applications and games; how it is implemented depends on the system and the needs of your application/game. I will loosely base my response on the Windows system, if only because you mention MS Paint.
MS Paint will likely NOT contain a List of lines like you describe; instead, it will edit the bitmap image directly, coloring the required pixels immediately and then drawing the result. In this situation, drawing small portions of the image/application is as easy as telling "this part" of the image to draw itself "over there". So as you move the pencil tool around, the pixels turn black and get drawn.
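As a hedged illustration (I don't know how MS Paint is actually implemented), this is roughly what invalidating only the damaged part of a window looks like in Win32, which is why a full-screen repaint isn't needed:

    /* Sketch (Win32, C): repaint only the rectangle that changed.
     * The system then delivers WM_PAINT carrying just that region.
     * The rectangle size here is made up for illustration. */
    #include <windows.h>

    void mark_dirty(HWND hwnd, int x, int y)
    {
        /* Invalidate only a small box around the pixels that changed. */
        RECT dirty = { x - 2, y - 2, x + 2, y + 2 };
        InvalidateRect(hwnd, &dirty, FALSE);  /* FALSE: don't erase background */
    }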
MS Paint and most applications use a primary loop that WAITS for the next event; that is, the operating system (Windows) suspends the application until there are messages/events to process (such as Mouse Move, Button Press, Redraw, etc.).
For a game, things work a little differently. A game (typically) doesn't want the operating system to WAIT for the next message/event before processing continues. Instead, you poll the operating system to check whether there are messages to handle; if so, you handle them, and if not, you continue with the game by producing a single frame (perform an Update and Render/Draw of the scene), as sketched below.
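A hedged sketch (Win32, C) of the two loop styles: the application loop blocks in GetMessage(), while the game loop polls with PeekMessage() and renders whenever the queue is empty. update_game() and render_frame() are hypothetical placeholders:

    #include <windows.h>

    void update_game(void);   /* hypothetical: advance the simulation */
    void render_frame(void);  /* hypothetical: draw the next frame    */

    void application_loop(void)
    {
        MSG msg;
        /* Blocks until an event arrives; the process sleeps in between. */
        while (GetMessage(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }

    void game_loop(void)
    {
        MSG msg;
        for (;;) {
            /* Returns immediately whether or not a message is waiting. */
            while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
                if (msg.message == WM_QUIT)
                    return;
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            update_game();
            render_frame();
        }
    }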
MS Paint doesn't need to keep updating and drawing when the user is not interacting, and this is preferred for applications: constantly updating/drawing uses a lot of system resources, and if every application did this, you wouldn't have 30 things running at the same time like you probably do now.
Related
I have an application written in C using GTK (although the language is probably unimportant for this question).
This application has a fullscreen gtk_window with a single gtk_drawing_area. For the drawing area, I have registered a tick callback via gtk_widget_add_tick_callback which just calls gtk_widget_queue_draw every tick. Inside the drawing area draw callback, I change the color of the entire window at regular intervals (e.g., from black to white at 1Hz).
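For reference, a minimal hedged sketch (C/GTK3) of the setup described, with illustrative names:

    #include <gtk/gtk.h>

    static gboolean on_tick(GtkWidget *widget, GdkFrameClock *clock, gpointer data)
    {
        gtk_widget_queue_draw(widget);   /* queue a redraw every frame */
        return G_SOURCE_CONTINUE;
    }

    static gboolean on_draw(GtkWidget *widget, cairo_t *cr, gpointer data)
    {
        /* Alternate black/white once per second, based on monotonic time. */
        gint64 t = g_get_monotonic_time() / G_USEC_PER_SEC;
        double v = (t % 2) ? 1.0 : 0.0;
        cairo_set_source_rgb(cr, v, v, v);
        cairo_paint(cr);
        return FALSE;
    }

    /* During setup:
     *   g_signal_connect(area, "draw", G_CALLBACK(on_draw), NULL);
     *   gtk_widget_add_tick_callback(area, on_tick, NULL, NULL);
     */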
Say that in this call to the draw callback I want to change the window from black to white. I would like to know the precise time (down to the nearest ms) that the change is actually drawn on the screen (ideally in the same units as CLOCK_MONOTONIC). I don't think this is the same thing as the GdkFrameClock available in the tick callback, which, as I understand it, is about the time of the frame, not the time when the frame is actually displayed on the screen.
If I just measure the CLOCK_MONOTONIC time in the drawing callback, and then use a photodiode to measure via an attached A2D when the change actually occurs, the change on the display is understandably delayed by a number of refresh intervals (in my case, 3 screen refreshes).
Just as a summary: if I am in a GTK widget draw callback, is there any way to know the time when the display will actually be shown on the monitor in the units of CLOCK_MONOTONIC? Or alternatively, is there a way that I can block a separate thread until a specific redraw that I care about is actually displayed on the screen (a function I can write like wait_for_screen_flip())?
Update: Ideally, the same solution would work for any Linux compositor (X11 or Wayland), which is why I am hoping for a GTK/GDK solution, where the compositor is abstracted away.
Similar to Uli's answer about the Present extension and PresentCompleteNotify for X11, Wayland has a corresponding protocol called wp_presentation_feedback:
https://cgit.freedesktop.org/wayland/wayland-protocols/tree/stable/presentation-time/presentation-time.xml
This protocol allows the Wayland compositor to inform clients when their content was actually displayed (turned to light). It is independent of the actual buffer mechanism used (EGL/SHM/etc). To use it, you call wp_presentation_get_feedback before wl_surface_commit; when the commit has completed, a presented event will be sent to the client from the new wp_presentation_feedback object, or discarded if it was never shown.
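A hedged sketch (in C) of how a Wayland client might use this protocol, assuming the wp_presentation global has already been bound and the header generated by wayland-scanner is available; the presentation and surface object names are assumptions:

    #include <stdio.h>
    #include <stdint.h>
    #include "presentation-time-client-protocol.h"  /* generated by wayland-scanner */

    static void on_sync_output(void *data, struct wp_presentation_feedback *fb,
                               struct wl_output *output)
    {
        /* Which output the presentation was synchronized to; ignored here. */
    }

    static void on_presented(void *data, struct wp_presentation_feedback *fb,
                             uint32_t tv_sec_hi, uint32_t tv_sec_lo, uint32_t tv_nsec,
                             uint32_t refresh, uint32_t seq_hi, uint32_t seq_lo,
                             uint32_t flags)
    {
        /* The committed content "turned to light" at this timestamp. */
        uint64_t sec = ((uint64_t)tv_sec_hi << 32) | tv_sec_lo;
        printf("presented at %llu.%09u\n", (unsigned long long)sec, tv_nsec);
        wp_presentation_feedback_destroy(fb);
    }

    static void on_discarded(void *data, struct wp_presentation_feedback *fb)
    {
        /* This frame was never shown on screen. */
        wp_presentation_feedback_destroy(fb);
    }

    static const struct wp_presentation_feedback_listener feedback_listener = {
        .sync_output = on_sync_output,
        .presented   = on_presented,
        .discarded   = on_discarded,
    };

    /* Before each commit (presentation and surface are assumed to exist):
     *   struct wp_presentation_feedback *fb =
     *       wp_presentation_feedback(presentation, surface);
     *   wp_presentation_feedback_add_listener(fb, &feedback_listener, NULL);
     *   wl_surface_commit(surface);
     */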
Presentation feedback is currently implemented in Weston; it is not yet implemented in Mutter, and I don't believe it's implemented in KWin either. GTK+ plans to support it when it becomes available in Mutter, but I don't have any great insight as to how it would be exposed through the GTK+ API.
That being said, if you can get access to the Wayland display, it's possible that you could use the interface directly yourself.
I just came across https://developer.gnome.org/gdk3/stable/gdk3-GdkFrameTimings.html#gdk-frame-timings-get-presentation-time which seems to do just what you want and is part of Gdk. I do not know how to use it, nor have I seen an example of it, but https://developer.gnome.org/gdk3/stable/gdk3-GdkFrameTimings.html#gdk3-GdkFrameTimings.description says
The information in GdkFrameTimings is useful for precise synchronization of video with the event or audio streams, and for measuring quality metrics for the application’s display, such as latency and jitter.
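A hedged sketch (C/GTK3) of how this might be used from a tick callback, based on the documented GdkFrameTimings accessors; the presentation time is only filled in after a frame has actually been shown, so you inspect the timings of earlier frames:

    #include <gtk/gtk.h>

    static gboolean on_tick(GtkWidget *widget, GdkFrameClock *clock, gpointer data)
    {
        gint64 current = gdk_frame_clock_get_frame_counter(clock);
        gint64 history = gdk_frame_clock_get_history_start(clock);

        /* Walk earlier frames; a real app would remember what it already reported. */
        for (gint64 i = history; i < current; i++) {
            GdkFrameTimings *t = gdk_frame_clock_get_timings(clock, i);
            if (t && gdk_frame_timings_get_complete(t)) {
                /* Microseconds, in the same timebase as g_get_monotonic_time(). */
                gint64 pt = gdk_frame_timings_get_presentation_time(t);
                if (pt != 0)
                    g_print("frame %" G_GINT64_FORMAT " presented at %" G_GINT64_FORMAT "\n",
                            i, pt);
            }
        }
        return G_SOURCE_CONTINUE;
    }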
Take a look at https://cgit.freedesktop.org/xorg/proto/presentproto/tree/presentproto.txt. Specifically, you want PresentCompleteNotify events. Note that these can only tell you later when presentation actually happened, so (I think) you will not know ahead of time when this is (but you could perhaps guess based on recent notifies?).
Note that this:
- is a relatively new X11 extension, so it might not actually be supported everywhere
- depends on the driver used (and likely lots of other factors) for the quality of its data
- cannot be used from GTK, since it requires a different way of displaying to the screen (you draw to a Pixmap and then use PresentPixmap to make it visible and ask for a notify)
Also note that this extension provides lots of other things. You can for example say "please display at time ". Just read the protocol specification from start to end. :-)
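For completeness, a heavily hedged sketch (C, using libXpresent) of listening for these events; I have not verified the struct fields against every header version, so treat names like XPresentCompleteNotifyEvent, ust, and msc as assumptions to check against <X11/extensions/Xpresent.h>:

    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/Xpresent.h>

    void watch_presentation(Display *dpy, Window win)
    {
        int opcode, event_base, error_base;
        if (!XPresentQueryExtension(dpy, &opcode, &event_base, &error_base))
            return;  /* Present extension not supported */

        XPresentSelectInput(dpy, win, PresentCompleteNotifyMask);

        /* Events only arrive for content actually presented via PresentPixmap. */
        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == GenericEvent && ev.xgeneric.extension == opcode &&
                XGetEventData(dpy, &ev.xcookie)) {
                if (ev.xcookie.evtype == PresentCompleteNotify) {
                    XPresentCompleteNotifyEvent *ce =
                        (XPresentCompleteNotifyEvent *)ev.xcookie.data;
                    /* ust: presentation timestamp in microseconds. */
                    printf("presented: ust=%llu msc=%llu\n",
                           (unsigned long long)ce->ust,
                           (unsigned long long)ce->msc);
                }
                XFreeEventData(dpy, &ev.xcookie);
            }
        }
    }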
This is more of a conceptual/implementation question, rather than a specific language problem.
Does anybody have any insight into cursor movement recording?
It's very easy to get a cursor's current position, but how would you go about recording the path followed by the cursor?
(To the degree of detail where it could be plotted graphically without ambiguity as to the path taken)
I imagine you could record the cursor's current position repeatedly at small intervals, logging it all to make a chronological list of visited coordinates, but I'm not sure how frequent (or feasible) the recording should be; every 10 ms? I've not even encountered a method of sleeping for such short durations with the necessary precision! I'm also concerned about the performance of the sleeping and recording during intense CPU usage, when the user is using the mouse to interact with intensive software.
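For concreteness, a hedged sketch (Win32, C) of the naive polling approach described above; note that Sleep() granularity is limited by the system timer (often around 15.6 ms unless adjusted), which is exactly the precision concern raised:

    #include <stdio.h>
    #include <windows.h>

    void poll_cursor(void)
    {
        POINT pt;
        for (int i = 0; i < 1000; i++) {      /* sample for roughly 10 seconds */
            if (GetCursorPos(&pt))
                printf("%lu: (%ld, %ld)\n", GetTickCount(), pt.x, pt.y);
            Sleep(10);                        /* granularity caveat above */
        }
    }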
I'm not even entirely sure where the cursor is really moving. If I sweep my cursor across the screen, has the computer (somewhere internally) acknowledged that I crossed all those pixels, or has my mouse really told it "I was there, now I'm over here, now I'm there"?
I also seek a method of differentiating between fast and slow movement, but for now I can just observe the spacing between points on a graph of the visited coordinates.
Does anybody have any insight into this? Any potential pitfalls; are my concerns legitimate? Am I going about this the wrong way? (As you can see, I really need some guidance in the matter.)
Thanks!
It's far easier to log the mouse movements within the same application - just log something on every WM_MOUSEMOVE message. You will get a message periodically updating the mouse pointer location. You will not get a WM_MOUSEMOVE message for every pixel the mouse crosses; the reported position will jump depending on how fast you move the mouse and how busy the system is.
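A minimal hedged sketch (Win32, C) of this approach, logging each WM_MOUSEMOVE with its message timestamp:

    #include <stdio.h>
    #include <windows.h>
    #include <windowsx.h>   /* GET_X_LPARAM / GET_Y_LPARAM */

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        switch (msg) {
        case WM_MOUSEMOVE:
            /* Client-area coordinates; arrives per update, not per pixel. */
            printf("%ld: (%d, %d)\n", GetMessageTime(),
                   GET_X_LPARAM(lp), GET_Y_LPARAM(lp));
            return 0;
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }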
Logging mouse movements in some other application is going to be slightly more involved. If you have written both the logger and the application being logged, then you can handle WM_MOUSEMOVE in the application being logged, and send a corresponding message to your logger application. Your choice of IPC; a simple SendMessage() might be sufficient.
Logging mouse movements across the whole system is a totally different problem. You may have to hook in somewhere closer to the driver level.
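One system-wide option that stops short of the driver level (not named in the answer above, so treat it as my suggestion) is a low-level mouse hook; a hedged sketch:

    #include <stdio.h>
    #include <windows.h>

    static LRESULT CALLBACK MouseHook(int code, WPARAM wp, LPARAM lp)
    {
        if (code == HC_ACTION && wp == WM_MOUSEMOVE) {
            MSLLHOOKSTRUCT *info = (MSLLHOOKSTRUCT *)lp;
            /* Screen coordinates plus the event's timestamp in ms. */
            printf("%lu: (%ld, %ld)\n", info->time, info->pt.x, info->pt.y);
        }
        return CallNextHookEx(NULL, code, wp, lp);
    }

    int main(void)
    {
        MSG msg;
        HHOOK hook = SetWindowsHookEx(WH_MOUSE_LL, MouseHook,
                                      GetModuleHandle(NULL), 0);
        /* A message loop is required for the hook to be called. */
        while (GetMessage(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        UnhookWindowsHookEx(hook);
        return 0;
    }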
I just thought of another approach - the CBT (Computer-Based Training) hooks are designed to provide exactly this sort of information across applications. I've never used these, though, so you'll have to investigate further.
In my WPF app I have a control representing a pack of 20 cards (each about 150x80 px) that fan out in an arc, so they're all slightly overlapping in the centre of the arc. When the control is added there's an animation to fan them out.
After that, the fan/control can be moved around, and when the user hovers over a card it expands and then goes back to normal size when they move off it.
This all works fine, but it has a noticeable effect on performance - everything is very jerky, presumably because whenever anything else moves, all the overlapping content and transforms in the control are constantly recalculated/redrawn.
Any suggestions for how I can improve performance while still keeping individual cards in the fan responsive?
To find the source of the slowdown, you need to profile. Try to find out whether or not WPF is falling back to software rendering. After that, try running the app on a different computer with better hardware/graphics card. If it doesn't get any better, there might be errors in your app.
Using gdk_screen_get_monitor_geometry, I can get the total area in pixels and the relative position of each monitor, even when there are two or more used as a single screen.
However, I want to get the usable area (that is, excluding panels) of each monitor. The only thing I have found is _NET_WORKAREA, but that is one giant area stretching across all monitors. Depending on the resolution and arrangement, there may be panels inside this area.
How can I get the actual usable area of each monitor? Ideally, using only Gtk/Gdk, nothing X11-specific.
The following approach is a bit convoluted, but it is what I'd use. It should be robust even when there is complex interaction between the window manager and GTK+ when a window is mapped -- for example, when some of the panels are automatically hidden.
The basic idea is to create a transparent decorationless maximized window for each screen, obtain its geometry (size and position) when it gets mapped (for example, using a map-event callback), and immediately destroy them. That gets you the usable area within each screen. You can then use your existing gdk_screen_get_monitor_geometry() approach to determine how the usable area is split between monitors, if any.
In detail:
Use gdk_display_get_default() to get the default display, then gdk_display_get_n_screens() to find out how many screens it has.
Create a new window for each screen using gtk_window_new(), moving the windows to their respective screens using gtk_window_set_screen(). Undecorate the windows using gtk_window_set_decorated(,FALSE), maximize them using gtk_window_maximize(), and make them transparent using gtk_window_set_opacity(,0.0). Connect the map-event signal to a callback handler (using g_signal_connect()). Show the window using gtk_widget_show().
The signal handler needs to call gtk_window_get_position() and/or gtk_window_get_size() to get the position and/or size of the newly-mapped window, and then destroy the window using gtk_widget_destroy().
Note that in practice, you only need one window at a time, so I would personally use a simple loop. I suspect that, due to window manager oddities/bugs, it is much more robust to create a new window for each screen rather than move the same window between screens. It turns out to be easier, too, as you can use a single simple callback function to obtain the usable area for each screen.
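A hedged sketch (C/GTK) of the steps above; some of these calls (e.g. gdk_display_get_n_screens(), gtk_window_set_opacity()) are deprecated in newer GTK+ versions, and error handling is omitted:

    #include <gtk/gtk.h>

    static gboolean on_map_event(GtkWidget *widget, GdkEvent *event, gpointer data)
    {
        gint x, y, w, h;
        gtk_window_get_position(GTK_WINDOW(widget), &x, &y);
        gtk_window_get_size(GTK_WINDOW(widget), &w, &h);
        g_print("usable area: %dx%d at (%d,%d)\n", w, h, x, y);
        gtk_widget_destroy(widget);  /* done with this probe window */
        return TRUE;
    }

    static void probe_screens(void)
    {
        GdkDisplay *display = gdk_display_get_default();
        gint n = gdk_display_get_n_screens(display);

        for (gint i = 0; i < n; i++) {
            GtkWidget *win = gtk_window_new(GTK_WINDOW_TOPLEVEL);
            gtk_window_set_screen(GTK_WINDOW(win),
                                  gdk_display_get_screen(display, i));
            gtk_window_set_decorated(GTK_WINDOW(win), FALSE);
            gtk_window_maximize(GTK_WINDOW(win));
            gtk_window_set_opacity(GTK_WINDOW(win), 0.0);
            g_signal_connect(win, "map-event", G_CALLBACK(on_map_event), NULL);
            gtk_widget_show(win);
        }
    }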
Like I said, this is quite convoluted. On the other hand, a standard application should not care about the screen sizes; it should simply do what the user or window manager asks. Because of that, I would not be surprised if there are no better facilities to find out this information. Screen size may change at any point, for example if the user rotates their display, or changes the display resolution.
In the end I ended up using Xlib directly; various "tricks" like the one suggested above eventually failed in the long run, often with odd corner cases, and never followed the KISS principle.
The solution I used is in the X-Tile code base.
I've written a simple game-like app in WPF. The number of objects drawn is well within WPF capabilities - something like a few hundred ellipses and lines with simple fills. I have a DispatcherTimer to adjust the positions of the objects from time to time (1/60th of a second).
The code to compute the new positions can be quite intensive when there are lots of objects, and can fully load a processor. Whenever this occurs, WPF starts skipping frames, presumably trying to compensate for the "slowness" of my application.
What I would much rather happen is for all the frames to be drawn anyway, only slower. The dropped frames don't gain any speed, because the visual updates were pretty quick anyway.
Can I somehow force WPF to have my changes to the visuals be reflected on the screen regardless of whether WPF thinks it's a good idea?
Unfortunately I don't think there's anything you can do about this, although I will happily be corrected! WPF is designed to be an application creation framework, not a games library, so it prioritises application performance and "usability" over framerate. This actually works very well when producing applications, as it allows you to use quite rich animations and effects while maintaining perceived performance on lower-end systems.
The only thing I think you might be able to try is pushing your movement code's Dispatcher priority down slightly, to below Render (Loaded is the next one down), using something like:
this.Dispatcher.BeginInvoke(DispatcherPriority.Loaded, MoveMyStuff);
I don't have any kind of test harness to verify if that will help though.
This issue was fixed by using a Canvas with an OnRender override instead of creating and moving UIElements. This does mean that everything needs to be drawn by hand in OnRender, but it can now run at any FPS consistently, without skipping any frames.