The function xcb_copy_area by my understanding essentially copies a region from one xcb_drawable to another. I'm not sure then why it would also take a graphics context as a parameter, seeing as the source of the copy has presumably already been drawn or rendered. What is the use of this parameter in this case?
It's worth noting that my understanding of graphics contexts is not great, but there aren't many resources explaining them. I'm assuming this is an issue with my mental model of what's going on within xcb.
Relevant docs: https://www.x.org/releases/X11R7.6/doc/xproto/x11protocol.html#requests:CopyArea
The text description contains this (emphasis mine; the original emphasis and a link were lost):
If the dst-drawable is a window with a background other than None, these corresponding destination regions are tiled (with plane-mask of all ones and function Copy) with that background. Regardless of tiling and whether the destination is a window or a pixmap, if graphics-exposures in gc is True, then GraphicsExposure events for all corresponding destination regions are generated.
So, my understanding is: The GC is used to draw the background of the window and this is where most of its properties are used.
The doc says explicitly which GC components are used:
GC components: function, plane-mask, subwindow-mode, graphics-exposures, clip-x-origin, clip-y-origin, clip-mask
I guess that function and plane-mask specify how the source and target are "combined". So, CopyArea can not only copy, but also do all the other (weird) things that are possible with a GC.
subwindow-mode says what happens with subwindows. It is possible to clip them out or to draw over them.
graphics-exposures controls whether GraphicsExposure events are generated in response to the drawing (the events mentioned in the quote above).
clip-x-origin, clip-y-origin, and clip-mask clearly are about clipping the drawing.
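To make this concrete, here is a minimal sketch (my own illustration, not from the docs) of a copy where those GC components come into play; conn, src, dst, width and height are assumed placeholders for an open connection, a source pixmap, a destination window and the copy size:

#include <xcb/xcb.h>

/* Sketch: copy a region from src to dst, with the GC controlling how. */
void copy_with_gc(xcb_connection_t *conn, xcb_drawable_t src,
                  xcb_drawable_t dst, uint16_t width, uint16_t height)
{
    /* The GC supplies the components listed above: function = GXcopy for a
       plain copy, and graphics-exposures enabled so the copy generates
       GraphicsExposure/NoExposure events. */
    xcb_gcontext_t gc = xcb_generate_id(conn);
    uint32_t mask = XCB_GC_FUNCTION | XCB_GC_GRAPHICS_EXPOSURES;
    uint32_t values[2] = { XCB_GX_COPY, 1 };
    xcb_create_gc(conn, gc, dst, mask, values);

    /* The pixel data comes from src; the GC only decides how it is combined
       with dst, how it is clipped, and whether exposure events are sent. */
    xcb_copy_area(conn, src, dst, gc, 0, 0, 0, 0, width, height);
    xcb_free_gc(conn, gc);
    xcb_flush(conn);
}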
Related
Hi, I'm trying to render 3 full-screen windows on different monitors. So far I've successfully queried the existing monitors with EnumDisplayMonitors to get the parameters necessary to create 3 windows with the WS_POPUP style applied.
In one frame I do the following:
for (int i = 0; i < monitorsNum; i++)
{
    wglMakeCurrent(hdcs[i], sharedHrc);
    doRendering();
    SwapBuffers(hdcs[i]);
}
Many websites suggest the same approach; however, when I go from 1 monitor to 2 or more, the textures disappear:
What you see is the same scene rendered 3 times; the slightly different background clear colors show that at least I'm doing things partially correctly (the GL clear color shows up correctly, and it even works with 3 monitors of 3 different sizes). I tried to intercept all the GL calls with glGetError() without getting any error. Is there a specific step I missed, or is it maybe an issue with my laptop?
If it helps, the 3 windows are created with an existing framework, so at creation each window was given its own HGLRC, but then I just use one of them for all 3 windows (so 3 HGLRCs created and 1 used, if it matters).
There are many reasons why a texture may not display correctly when rendering some geometry.
But assuming your problem isn't related to any of those things (incorrect UVs, shader issues, texture creation, etc.), the issue is likely related to the fact that you are now managing multiple contexts.
To set up multiple windows you need to create a context for each window.
The wglMakeCurrent function allows you to switch the context for each window, rendering as you go.
https://learn.microsoft.com/en-us/windows/desktop/api/wingdi/nf-wingdi-wglmakecurrent
The wglMakeCurrent function makes a specified OpenGL rendering context the calling thread's current rendering context. All subsequent OpenGL calls made by the thread are drawn on the device identified by hdc. You can also use wglMakeCurrent to change the calling thread's current rendering context so it's no longer current.
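For concreteness, a hedged sketch of that per-window setup (hwnds is a hypothetical array of the three window handles; error handling is omitted):

#include <windows.h>
#include <GL/gl.h>

/* Create one DC and one rendering context per window. */
static void init_contexts(HWND hwnds[3], HDC hdcs[3], HGLRC hrcs[3])
{
    for (int i = 0; i < 3; i++)
    {
        hdcs[i] = GetDC(hwnds[i]);

        /* a compatible pixel format must be set before wglCreateContext */
        PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd) };
        pfd.nVersion = 1;
        pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 32;
        SetPixelFormat(hdcs[i], ChoosePixelFormat(hdcs[i], &pfd), &pfd);

        hrcs[i] = wglCreateContext(hdcs[i]);
    }
}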
An OpenGL context represents the default frame buffer (default place that your shader will output to when rendering) but it also stores all of the state associated with that instance of OpenGL.
Furthermore:
Each context has its own set of OpenGL Objects, which are independent of those from other contexts.
So this means that contexts do not have access to each other's resources unless sharing is explicitly set up.
Any object sharing must be made explicitly, either as the context is created or before a newly created context creates any objects. However, contexts do not have to share objects; they can remain completely separate from one another.
So one reason you may not be able to render the same texture(s) in each window is that the texture is not a shared resource. glClearColor works fine because it does not depend on any resources associated with a particular context.
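As a hedged sketch (reusing the hdcs/hrcs names from your question, which are my assumption about the framework's variables): since the framework already created one HGLRC per window, you could share the first context's objects with the others before any textures are created, for example with wglShareLists:

/* Call once at startup, before any textures/buffers exist in hrcs[1..]:
   hrcs[0] shares its object namespace with the other contexts. */
for (int i = 1; i < monitorsNum; i++)
{
    if (!wglShareLists(hrcs[0], hrcs[i]))
    {
        /* sharing failed; fall back to creating the textures in every context */
    }
}

Alternatively, keeping a single context current against each window's DC (as your loop already does) can also work, provided the textures were created in that same context and every window uses a compatible pixel format.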
I have an application written in C using GTK (although the language is probably unimportant for this question).
This application has a fullscreen GtkWindow with a single GtkDrawingArea. For the drawing area, I have registered a tick callback via gtk_widget_add_tick_callback which just calls gtk_widget_queue_draw every tick. Inside the drawing area's draw callback, I change the color of the entire window at regular intervals (e.g., from black to white at 1 Hz).
Say that in this call to the draw callback I want to change the window from black to white. I would like to know the precise time (down to the nearest ms) that the change is actually drawn on the screen (ideally in the same units as CLOCK_MONOTONIC). I don't think this is the same thing as the GdkFrameClock available in the tick callback, which, as I understand it, is about the time of the frame, not the time when the frame is actually displayed on the screen.
If I just measure the CLOCK_MONOTONIC time in the drawing callback, and then use a photodiode attached to an A2D to measure when the change actually appears, the change on the display is understandably delayed by a number of refresh intervals (in my case, 3 screen refreshes).
Just as a summary: if I am in a GTK widget draw callback, is there any way to know the time when the display will actually be shown on the monitor in the units of CLOCK_MONOTONIC? Or alternatively, is there a way that I can block a separate thread until a specific redraw that I care about is actually displayed on the screen (a function I can write like wait_for_screen_flip())?
Update: Ideally, the same solution would work for any Linux compositor (X11 or Wayland), which is why I am hoping for a GTK/GDK solution, where the compositor is abstracted away.
Similar to Uli's answer about the Present extension and PresentCompleteNotify for X11, Wayland has a protocol for this called wp_presentation_feedback:
https://cgit.freedesktop.org/wayland/wayland-protocols/tree/stable/presentation-time/presentation-time.xml
This protocol allows the Wayland compositor to inform clients when their content was actually displayed (turned to light). It is independent of the actual buffer mechanism used (EGL/SHM/etc). To use it, you call wp_presentation_get_feedback before wl_surface_commit; when the commit has completed, a presented event will be sent to the client from the new wp_presentation_feedback object, or discarded if it was never shown.
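For illustration, a rough sketch of how a client might use it (assuming a wp_presentation global has already been bound and surface is an existing wl_surface; the listener types come from the generated presentation-time-client-protocol.h header):

#include <stdio.h>
#include "presentation-time-client-protocol.h"

static void feedback_sync_output(void *data,
        struct wp_presentation_feedback *fb, struct wl_output *output)
{
    /* reports which output the presentation was synchronized to; ignored here */
}

static void feedback_presented(void *data,
        struct wp_presentation_feedback *fb,
        uint32_t tv_sec_hi, uint32_t tv_sec_lo, uint32_t tv_nsec,
        uint32_t refresh, uint32_t seq_hi, uint32_t seq_lo, uint32_t flags)
{
    /* the content "turned to light" at this timestamp */
    uint64_t sec = ((uint64_t)tv_sec_hi << 32) | tv_sec_lo;
    printf("presented at %llu.%09u\n", (unsigned long long)sec, tv_nsec);
    wp_presentation_feedback_destroy(fb);
}

static void feedback_discarded(void *data, struct wp_presentation_feedback *fb)
{
    /* the committed content was never shown */
    wp_presentation_feedback_destroy(fb);
}

static const struct wp_presentation_feedback_listener feedback_listener = {
    .sync_output = feedback_sync_output,
    .presented   = feedback_presented,
    .discarded   = feedback_discarded,
};

static void commit_with_feedback(struct wp_presentation *presentation,
                                 struct wl_surface *surface)
{
    /* request feedback before committing the frame */
    struct wp_presentation_feedback *fb =
        wp_presentation_get_feedback(presentation, surface);
    wp_presentation_feedback_add_listener(fb, &feedback_listener, NULL);
    wl_surface_commit(surface);
}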
Presentation feedback is currently implemented in Weston; it is not yet implemented in Mutter, and I don't believe it's implemented in KWin either. GTK+ plans to support it when it becomes available in Mutter, but I don't have any great insight as to how it would be exposed through the GTK+ API.
That being said, if you can get access to the Wayland display, it's possible that you could use the interface directly yourself.
I just came across https://developer.gnome.org/gdk3/stable/gdk3-GdkFrameTimings.html#gdk-frame-timings-get-presentation-time which seems to do just what you want and is part of GDK. I do not know how to use it, nor have I seen an example of it, but https://developer.gnome.org/gdk3/stable/gdk3-GdkFrameTimings.html#gdk3-GdkFrameTimings.description says
The information in GdkFrameTimings is useful for precise synchronization of video with the event or audio streams, and for measuring quality metrics for the application’s display, such as latency and jitter.
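Untested, but a sketch of how that might look from a tick callback registered with gtk_widget_add_tick_callback() (presentation times only become known some frames after the fact, so the callback walks the recent frame history):

#include <gtk/gtk.h>

static gboolean tick_cb(GtkWidget *widget, GdkFrameClock *clock, gpointer data)
{
    /* gdk_frame_timings_get_presentation_time() is only filled in once the
       compositor/X server reports back, so look at the history, not just
       the current frame */
    gint64 i;
    for (i = gdk_frame_clock_get_history_start(clock);
         i <= gdk_frame_clock_get_frame_counter(clock); i++)
    {
        GdkFrameTimings *t = gdk_frame_clock_get_timings(clock, i);
        if (t != NULL && gdk_frame_timings_get_complete(t))
        {
            gint64 pt = gdk_frame_timings_get_presentation_time(t);
            if (pt != 0)  /* 0 means "not known" */
                g_print("frame %" G_GINT64_FORMAT " presented at %" G_GINT64_FORMAT " us\n",
                        i, pt);
        }
    }
    gtk_widget_queue_draw(widget);
    return G_SOURCE_CONTINUE;
}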
Take a look at https://cgit.freedesktop.org/xorg/proto/presentproto/tree/presentproto.txt. Specifically, you want PresentCompleteNotify events. Note that these can only tell you later when presentation actually happened, so (I think) you will not know ahead of time when this is (but you could perhaps guess based on recent notifies?).
Note that this:
- is a relatively new X11 extension, so it might not actually be supported everywhere
- depends on the driver used (and likely lots of other factors) for the quality of its data
- cannot be used from GTK, since it requires a different way of displaying to the screen (you draw to a Pixmap and then use PresentPixmap to make it visible and ask for a notify)
Also note that this extension provides lots of other things. You can for example say "please display at time ". Just read the protocol specification from start to end. :-)
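To make that concrete, a rough (untested) sketch with xcb, assuming conn, window and pixmap already exist and that the Present extension was verified to be available:

#include <xcb/xcb.h>
#include <xcb/present.h>

static void present_and_wait(xcb_connection_t *conn,
                             xcb_window_t window, xcb_pixmap_t pixmap)
{
    /* ask for CompleteNotify events for this window */
    uint32_t eid = xcb_generate_id(conn);
    xcb_present_select_input(conn, eid, window,
                             XCB_PRESENT_EVENT_MASK_COMPLETE_NOTIFY);

    /* present the pixmap "as soon as possible" (target_msc = 0) */
    xcb_present_pixmap(conn, window, pixmap,
                       0,                   /* serial, echoed in CompleteNotify */
                       XCB_NONE, XCB_NONE,  /* valid/update regions: everything */
                       0, 0,                /* x_off, y_off */
                       XCB_NONE,            /* target_crtc */
                       XCB_NONE, XCB_NONE,  /* wait/idle fences */
                       XCB_PRESENT_OPTION_NONE,
                       0, 0, 0,             /* target_msc, divisor, remainder */
                       0, NULL);            /* no extra notifies */
    xcb_flush(conn);

    /* Present delivers its events as XGE generic events */
    const xcb_query_extension_reply_t *ext =
        xcb_get_extension_data(conn, &xcb_present_id);
    xcb_generic_event_t *ev = xcb_wait_for_event(conn);
    if ((ev->response_type & ~0x80) == XCB_GE_GENERIC &&
        ((xcb_ge_generic_event_t *)ev)->extension == ext->major_opcode &&
        ((xcb_ge_generic_event_t *)ev)->event_type == XCB_PRESENT_COMPLETE_NOTIFY)
    {
        xcb_present_complete_notify_event_t *cn =
            (xcb_present_complete_notify_event_t *)ev;
        /* cn->ust is the presentation time in microseconds,
           cn->msc the media stream (vblank) counter */
    }
}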
I am working on an application with the ARKit and SceneKit frameworks. In my application I have enabled surface detection (I followed the placing-objects sample provided by Apple). How can I find out whether the detected surface is no longer available? That is, initially I only allow the user to place the 3D object once a surface has been detected in the ARSession.
But if the user moves rapidly or points the camera elsewhere, the detected surface gets lost. In that case, if the user tries to place another object, I shouldn't allow it until he scans the floor again and the surface is detected correctly.
Is there any delegate available to let us know that a detected surface is no longer available?
There are delegate functions that you can use. The delegate is ARSCNViewDelegate.
It has a function, renderer(_:didRemove:for:), that fires when an ARAnchor has been removed. You can use this function to perform some operation when a surface gets removed.
ARSCNViewDelegate Link
There are two ways to “lose” a surface, so there’s more than one approach to dealing with such a problem.
As noted in the other answer, there’s an ARSCNViewDelegate method that ARKit calls when an anchor is removed from the AR session. However, ARKit doesn’t remove plane anchors during a running session — once it’s detected a plane, it assumes the plane is always there. So that method gets called only if:
You remove the plane anchor directly by passing it to session.remove(anchor:), or
You reset the session by running it again with the .removeExistingAnchors option.
I’m not sure the former is a good idea, but the latter is important to handle, so you probably want your delegate to handle it well.
You can also “lose” a surface by having it pass out of view — for example, ARKit detects a table, and then the user turns around so the camera isn’t pointed at or near the table anymore.
ARKit itself doesn’t offer you any help for dealing with this problem. It gives you all the info you need to do the math yourself, though. You get the plane anchor’s position, orientation, and size, so you can calculate its four corner points. And you get the camera’s projection matrix, so you can check for whether any point is in the viewing frustum.
Since you’re already using SceneKit, though, there are also ways to get SceneKit to do the math for you... Working backwards:
SceneKit gives you an isNode(_:insideFrustumOf:) test, so if you have a SCNNode whose bounding box matches the extent of your plane anchor, you can pass that along with the camera (view.pointOfView) to find out if the node is visible.
To get a node whose bounding box matches a plane anchor, implement the ARSCNViewDelegate didAdd and didUpdate callbacks to create/update an SCNPlane whose position and dimensions match the ARPlaneAnchor’s center and extent. (Don’t forget to flip the plane sideways, since SCNPlane is vertically oriented by default.)
If you don’t want that plane visible in the AR view, set its materials to be transparent.
The Gtk+ 3 migration guide shows how the GdkEventExpose.region field can be used to provide a fine-grained area for re-rendering widgets. We already do something like this in Inkscape to avoid rendering excessive amounts of complicated stuff on our drawing canvas.
However, the example in the guide shows how to do this for the old Gtk+ 2 expose_event handler.
How do I do the equivalent in a new Gtk+ 3 draw handler, which receives a "ready-clipped" Cairo context as a parameter, rather than a GdkEventExpose?
I guess one possibility is to use cairo_copy_clip_rectangle_list on the "ready-clipped" cairo context to obtain a list of rectangles that make up the region to draw. Does anyone have any experience of using this? Does it seem like a sensible approach?
Yes, you should use cairo_copy_clip_rectangle_list() on the cairo_t that you get in your widget's ::draw() signal handler. See this commit for an example:
http://git.gnome.org/browse/vte/commit/?id=21a064ac8b5925108b0ab9bd6516664c8cd3e268
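For what it's worth, a minimal sketch of a ::draw() handler using it (my own illustration, not taken from that commit):

#include <gtk/gtk.h>

static gboolean draw_cb(GtkWidget *widget, cairo_t *cr, gpointer user_data)
{
    cairo_rectangle_list_t *list = cairo_copy_clip_rectangle_list(cr);
    if (list->status == CAIRO_STATUS_SUCCESS)
    {
        for (int i = 0; i < list->num_rectangles; i++)
        {
            cairo_rectangle_t *r = &list->rectangles[i];
            /* re-render only the area r->x, r->y, r->width, r->height */
        }
    }
    else
    {
        /* the clip was not representable as a rectangle list;
           fall back to redrawing everything */
    }
    cairo_rectangle_list_destroy(list);
    return FALSE;
}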
Since I did not have much of a clue, I decided to check the source code. GDK emits a GDK_EXPOSE event on a window and creates the GdkEventExpose instance for this.
This event is then handled in gtk/gtkmain.c via gtk_widget_send_expose():
http://git.gnome.org/browse/gtk+/tree/gtk/gtkwidget.c?id=eecb9607a5c0ee38eadb446545beccd0922cb0b8#n6104
This function clips the cairo_t to GdkEventExpose.region, as you already learned in the docs.
This then calls _gtk_widget_draw_internal() which emits the actual draw signal:
http://git.gnome.org/browse/gtk+/tree/gtk/gtkwidget.c?id=eecb9607a5c0ee38eadb446545beccd0922cb0b8#n5726
As far as I can see, nothing here lets you access the clip region directly. In gtk_widget_send_expose() the GdkEvent is added as user data to the cairo context. However, this is not accessible, because all the involved functions and variables are static. So you'll have to use cairo_copy_clip_rectangle_list().
However, this sounds quite inefficient. First gdk_cairo_region() transforms the region into a number of calls to cairo_rectangle(), and then cairo transforms this from its internal representation into a cairo_rectangle_list_t (which may fail if the clip is, for some reason, not a region). If this turns out to be slow, it might make sense to have a shortcut for it added to GTK directly.
Using gdk_screen_get_monitor_geometry, I can get the total area in pixels and the relative position of each monitor, even when there are two or more used as a single screen.
However, I want to get the usable area (that is, excluding panels) of each monitor. The only thing I have found is _NET_WORKAREA, but that is one giant area stretching across all monitors. Depending on the resolution and arrangement, there may be panels inside this area.
How can I get the actual usable area of each monitor? Ideally, using only Gtk/Gdk, nothing X11-specific.
The following approach is a bit convoluted, but it is what I'd use. It should be robust even when there is complex interaction between the window manager and GTK+ when a window is mapped -- for example, when some of the panels are automatically hidden.
The basic idea is to create a transparent decorationless maximized window for each screen, obtain its geometry (size and position) when it gets mapped (for example, using a map-event callback), and immediately destroy them. That gets you the usable area within each screen. You can then use your existing gdk_screen_get_monitor_geometry() approach to determine how the usable area is split between monitors, if any.
In detail:
Use gdk_display_get_default() to get the default display, then gdk_display_get_n_screens() to find out how many screens it has.
Create a new window for each screen using gtk_window_new(), moving the windows to their respective screens using gtk_window_set_screen(). Undecorate the windows using gtk_window_set_decorated(,FALSE), maximize them using gtk_window_maximize(), and make them transparent using gtk_window_set_opacity(,0.0). Connect the map-event signal to a callback handler (using g_signal_connect()). Show the window using gtk_widget_show().
The signal handler needs to call gtk_window_get_position() and/or gtk_window_get_size() to get the position and/or size of the newly-mapped window, and then destroy the window using gtk_widget_destroy().
Note that in practice you only need one window at a time, so I would personally use a simple loop. I suspect that due to window manager oddities/bugs it is much more robust to create a new window for each screen than to move the same window between screens. It turns out to be easier, too, as you can use a single simple callback function to obtain the usable area for each screen, as in the sketch below.
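A condensed sketch of the above (using the GTK+ 2/3-era multi-screen API the steps refer to; several of these calls are deprecated in later GTK+ 3, and quit logic is omitted for brevity):

#include <gtk/gtk.h>

/* map-event fires once the WM has actually mapped the window, so the
   maximized geometry at that point is the usable area of the screen. */
static gboolean on_map_event(GtkWidget *win, GdkEvent *event, gpointer data)
{
    gint x, y, w, h;
    gtk_window_get_position(GTK_WINDOW(win), &x, &y);
    gtk_window_get_size(GTK_WINDOW(win), &w, &h);
    g_print("usable area: %dx%d at +%d+%d\n", w, h, x, y);
    gtk_widget_destroy(win);
    return TRUE;
}

int main(int argc, char *argv[])
{
    gtk_init(&argc, &argv);
    GdkDisplay *display = gdk_display_get_default();
    int n = gdk_display_get_n_screens(display); /* usually 1 nowadays */
    for (int i = 0; i < n; i++)
    {
        GtkWidget *win = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        gtk_window_set_screen(GTK_WINDOW(win),
                              gdk_display_get_screen(display, i));
        gtk_window_set_decorated(GTK_WINDOW(win), FALSE);
        gtk_window_maximize(GTK_WINDOW(win));
        gtk_window_set_opacity(GTK_WINDOW(win), 0.0);
        g_signal_connect(win, "map-event", G_CALLBACK(on_map_event), NULL);
        gtk_widget_show(win);
    }
    gtk_main();
    return 0;
}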
Like I said, this is quite convoluted. On the other hand, a standard application should not care about the screen sizes; it should simply do what the user or window manager asks. Because of that, I would not be surprised if there are no better facilities to find out this information. Screen size may change at any point, for example if the user rotates their display, or changes the display resolution.
In the end I ended up using Xlib directly; various "tricks" like the one suggested above eventually failed in the long run, often with odd corner cases, and never followed the KISS principle.
The solution I used is in the X-Tile code base.