Artifacts and problems with shadows of directional light & point light - reactjs

So I'm using React and React Three Fiber, as well as the drei library and cannon for physics.
I am making an apartment viewer you can walk around in as a personal project - so far everything works fine. To make it possible later on to load the apartment model from a database (and to make the creation process easier for multiple models), the transformations for the light switches, the activatable point lights and the apartment's collision boxes are copied from objects within the glTF file.
To prevent the collision boxes from rendering or otherwise affecting the rendering process, they are made invisible. (I also tried setting child.castShadow = false, with no effect.)
For some reason the shadows are corrupted: unwanted point-light shadows appear.
I also tried changing some properties of the original child: Object3D in the Apartment component (the only place where the boxes could affect the shadows), but the results didn't change.
Another thing is that there no longer seem to be any options to adjust shadows. Properties like shadowBias, shadowMapWidth etc. are deprecated; by hovering over them I get something like #deprecated — Use shadow.mapSize.width instead. I couldn't find a solution to that, partly because the Three Fiber documentation isn't that extensive, and just using the deprecated properties doesn't work either.

Related

React Three Fiber: How to switch between TrackballControls and OrbitControls?

I'm trying to create a viewer in react-three-fiber with react-three/drei where I can switch between OrbitControls and TrackballControls.
The problem is that when switching from TrackballControls to OrbitControls, the axis the camera rotates around changes, since TrackballControls naturally modify the up-vector as you move around.
I created a couple of minimal examples in codesandbox to explain my approach to solve this and to show where I'm stuck.
Base Case
This shows the initial attempt to switch between the different control types:
https://codesandbox.io/s/r3f-camera-controllers-base-neu07
Attempt #1
Obviously, this does not work as it is, so I tried resetting the up-vector to (0, 1, 0) and calling lookAt(). This seems to work initially, as the camera reorients itself correctly (this is how it should look). However, it does not rotate around the correct axis and instead moves in strange arcs. See here:
https://codesandbox.io/s/r3f-camera-controllers-set-up-vector-yps4k
Attempt #2
For this question it was suggested to create a new camera, which I also tried, but it ultimately led to the same result. This is my attempt at creating a new camera and copying some values over to it:
https://codesandbox.io/s/r3f-camera-controllers-reset-camera-3cih0
Any help appreciated. Thanks!
After a couple of days, I finally figured out a way to achieve what I want.
Instead of trying to remove the different controls, I just enable and disable them separately. I can then call the reset() function on both controls via refs when the control prop changes. To retain the camera position, I temporarily store it before resetting the controls.
You can find an example here.

wglMakeCurrent textures disappear

Hi, I'm trying to render 3 full-screen windows on different monitors. So far I've successfully queried the existing monitors with EnumDisplayMonitors to get the four parameters (position and size) needed to create 3 windows with the WS_POPUP style applied.
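For reference, that enumeration step might look roughly like the sketch below (the array, counter and window-class names are illustrative, not the poster's actual code):

#include <windows.h>

#define MAX_MONITORS 8
static RECT g_monitorRects[MAX_MONITORS];
static int  g_monitorsNum = 0;

/* Called once per monitor; records that monitor's virtual-screen rectangle. */
static BOOL CALLBACK MonitorEnumProc(HMONITOR hMonitor, HDC hdcMonitor,
                                     LPRECT rect, LPARAM lParam)
{
    if (g_monitorsNum < MAX_MONITORS)
        g_monitorRects[g_monitorsNum++] = *rect;
    return TRUE; /* keep enumerating */
}

void createFullscreenWindows(HINSTANCE hInstance)
{
    EnumDisplayMonitors(NULL, NULL, MonitorEnumProc, 0);
    for (int i = 0; i < g_monitorsNum; i++)
    {
        RECT r = g_monitorRects[i];
        /* One borderless window per monitor; "FullscreenClass" is assumed
         * to be a window class registered elsewhere. */
        CreateWindowEx(0, TEXT("FullscreenClass"), NULL, WS_POPUP | WS_VISIBLE,
                       r.left, r.top, r.right - r.left, r.bottom - r.top,
                       NULL, NULL, hInstance, NULL);
    }
}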
In one frame I do the following:
/* Each frame: bind the one shared rendering context to this monitor's
 * device context, draw the scene, then present it. */
for (int i = 0; i < monitorsNum; i++)
{
    wglMakeCurrent(hdcs[i], sharedHrc);
    doRendering();
    SwapBuffers(hdcs[i]);
}
Many websites suggest the same; however, when I go from 1 monitor to 2 or more, the textures disappear.
What you see is the same scene rendered 3 times; the slightly different background clear colors show that at least part of this works correctly (the GL clear color shows up correctly, and it even works with 3 monitors of 3 different sizes). I tried to intercept all the GL calls with glGetError() without getting any error. Is there a specific step I missed, or could it be an issue with my laptop?
If it helps, the 3 windows are created by an existing framework, so each window was given its own HGLRC at creation, but I then use just one of those contexts for all 3 windows (so 3 contexts created, 1 used, if it matters).
There are many reasons a texture may fail to display correctly when rendering geometry.
But assuming your problem isn't caused by the usual suspects such as incorrect UVs, shader issues, or texture creation, the issue is likely related to the fact that you are now managing multiple contexts.
To set up multiple windows you need to create a context for each window.
The wglMakeCurrent function allows you to switch the context for each window, rendering as you go.
https://learn.microsoft.com/en-us/windows/desktop/api/wingdi/nf-wingdi-wglmakecurrent
The wglMakeCurrent function makes a specified OpenGL rendering context the calling thread's current rendering context. All subsequent OpenGL calls made by the thread are drawn on the device identified by hdc. You can also use wglMakeCurrent to change the calling thread's current rendering context so it's no longer current.
An OpenGL context represents the default frame buffer (default place that your shader will output to when rendering) but it also stores all of the state associated with that instance of OpenGL.
Furthermore:
Each context has its own set of OpenGL Objects, which are independent of those from other contexts.
So this means that contexts do not have access to the same resources unless sharing is explicitly set up.
Any object sharing must be made explicitly, either as the context is created or before a newly created context creates any objects. However, contexts do not have to share objects; they can remain completely separate from one another.
So one reason you may not be able to render the same texture(s) in each window is that the texture is not a shared resource. glClearColor works fine because it does not depend on any resources associated with a particular context.
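As a sketch of the explicit route, sharing would be set up at context-creation time, before the new context creates any objects of its own (variable names follow the question; error handling trimmed):

HGLRC hrcs[8]; /* one context per window/monitor */
hrcs[0] = wglCreateContext(hdcs[0]);

for (int i = 1; i < monitorsNum; i++)
{
    hrcs[i] = wglCreateContext(hdcs[i]);
    /* Share textures, buffers, display lists, etc. from the first
     * context into this one. Must happen before hrcs[i] owns any
     * objects of its own. */
    if (!wglShareLists(hrcs[0], hrcs[i]))
    {
        /* Sharing can fail, e.g. across mismatched pixel formats. */
    }
}

/* The per-frame loop then makes each window's own context current: */
for (int i = 0; i < monitorsNum; i++)
{
    wglMakeCurrent(hdcs[i], hrcs[i]);
    doRendering();
    SwapBuffers(hdcs[i]);
}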

Do empty transforms take up much CPU power?

So, long story short: I'm developing a mobile game set in a city, and there are a lot of objects being brought in and out of play as the player moves about.
I have tried numerous methods for seamlessly loading in/out objects.
First I tried instantiating and destroying objects for loading/unloading. This caused noticeable spikes even when loading something as simple as a generic 3D box.
For my second attempt, I put the instantiate/destroy calls into their own coroutines; this made the spikes less severe, but they were still noticeable.
Thirdly, I decided to pre-instantiate all the objects I'd need and keep the ones not in play inactive (SetActive(false)). It turned out that setting active to true (even inside a coroutine) performed worse than instantiating the objects.
So I finally arrived at my last idea for loading. I preloaded all the needed objects, then manually went through each one's children, disabling every component that could use CPU: scripts, renderers, colliders, audio sources, particle systems, rigidbodies (set to isKinematic = true), etc., leaving only an object with child transforms. Now I can finally enable an object (re-enable all its components) and the game has no performance spikes.
However, this last solution has its own drawback: if I preload too many objects, the game's FPS is significantly impacted, even though nothing is enabled inside the objects besides their transforms.
So my question is: does having many (non-moving) transforms in the scene cause a significant hit to CPU usage? And if so, what is the best way to do continuous loading/unloading of game objects on mobile?
Use a component called LOD Group. You can then assign your low-poly models, and it will disable and enable the mesh renderers according to the distance from your camera.

Textured resizable buttons with Core Image filter and appearance proxy iOS

The app I'm writing involves buttons that have a slight noise filter texture, which can be any size. For a standard button I'd simply use resizableImageWithCapInsets: but due to the texture, this causes unusual artefacts to appear on the resulting button.
A solution I have in mind is to use the Core Image monochrome filter combined with the random noise filter to add the noise texture to a plain image. In theory this works, and in practice it has been shown to work (one example here), but these are all cases where the button size is known at the point of invoking the CI code.
What I'm looking to do, is use the appearance proxies, so across the app I can simply set the style of UIBarButtonItems for instance.
Is there a way I can apply these CI filters to the buttons through the appearance proxies or isn't this possible? Would something like a category on UIImage to add noise work? I'm not entirely sure at which point the appearance proxy would actually invoke that code.
Any help is appreciated
OK, so I finally solved it, and found out some stuff along the way.
It seems you can create a category on UIImage and use that in the appearance proxy. I created a category to add noise, and it seemed to partly work, but I couldn't get it looking how I wanted as it wasn't quite rendering properly. In the process of coding this, I discovered another method:
resizableImageWithCapInsets:resizingMode:
Because the texture I was dealing with was simply noise, it could be tiled. So rather than the image being stretched, the centre of the image is tiled, which gives me the appearance I needed :)

Gtk: get usable area of each monitor (excluding panels)

Using gdk_screen_get_monitor_geometry, I can get the total area in pixels and the relative position of each monitor, even when there are two or more used as a single screen.
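For reference, that call works along these lines (a minimal sketch):

GdkScreen *screen = gdk_screen_get_default();
gint n = gdk_screen_get_n_monitors(screen);

for (gint i = 0; i < n; i++)
{
    GdkRectangle geom; /* full monitor area, panels included */
    gdk_screen_get_monitor_geometry(screen, i, &geom);
    g_print("monitor %d: %dx%d at (%d,%d)\n",
            i, geom.width, geom.height, geom.x, geom.y);
}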
However, I want to get the usable area (that is, excluding panels) of each monitor. The only thing I have found is _NET_WORKAREA, but that is one giant area stretching across all monitors. Depending on the resolution and arrangement, there may be panels inside this area.
How can I get the actual usable area of each monitor? Ideally, using only Gtk/Gdk, nothing X11-specific.
The following approach is a bit convoluted, but it is what I'd use. It should be robust even when there is complex interaction between the window manager and GTK+ when a window is mapped -- for example, when some of the panels are automatically hidden.
The basic idea is to create a transparent decorationless maximized window for each screen, obtain its geometry (size and position) when it gets mapped (for example, using a map-event callback), and immediately destroy them. That gets you the usable area within each screen. You can then use your existing gdk_screen_get_monitor_geometry() approach to determine how the usable area is split between monitors, if any.
In detail:
Use gdk_display_get_default() to get the default display, then gdk_display_get_n_screens() to find out how many screens it has.
Create a new window for each screen using gtk_window_new(), moving the windows to their respective screens using gtk_window_set_screen(). Undecorate the windows using gtk_window_set_decorated(,FALSE), maximize them using gtk_window_maximize(), and make them transparent using gtk_window_set_opacity(,0.0). Connect the map-event signal to a callback handler (using g_signal_connect()). Show the window using gtk_widget_show().
The signal handler needs to call gtk_window_get_position() and/or gtk_window_get_size() to get the position and/or size of the newly-mapped window, and then destroy the window using gtk_widget_destroy().
Note that in practice you only need one window at a time, so I would personally use a simple loop. I suspect that, due to window manager oddities/bugs, it is much more robust to create a new window for each screen rather than move the same window between screens. It turns out to be easier, too, as you can use a single simple callback function to obtain the usable area for each screen (see the sketch below).
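A minimal sketch of the idea, using the GTK+ 2/3-era calls named above (error handling omitted; a real program would also quit the main loop once every screen has been probed):

#include <gtk/gtk.h>

/* Runs when the maximized window is actually mapped: its geometry is the
 * usable area of its screen. Read it, then destroy the probe window. */
static gboolean on_map_event(GtkWidget *window, GdkEvent *event, gpointer data)
{
    gint x, y, width, height;
    gtk_window_get_position(GTK_WINDOW(window), &x, &y);
    gtk_window_get_size(GTK_WINDOW(window), &width, &height);
    g_print("usable area: %dx%d at (%d,%d)\n", width, height, x, y);
    gtk_widget_destroy(window);
    return FALSE;
}

static void probe_screen(GdkScreen *screen)
{
    GtkWidget *win = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_window_set_screen(GTK_WINDOW(win), screen);
    gtk_window_set_decorated(GTK_WINDOW(win), FALSE);
    gtk_window_set_opacity(GTK_WINDOW(win), 0.0);
    gtk_window_maximize(GTK_WINDOW(win));
    g_signal_connect(win, "map-event", G_CALLBACK(on_map_event), NULL);
    gtk_widget_show(win);
}

int main(int argc, char *argv[])
{
    gtk_init(&argc, &argv);
    GdkDisplay *display = gdk_display_get_default();
    for (int i = 0; i < gdk_display_get_n_screens(display); i++)
        probe_screen(gdk_display_get_screen(display, i));
    gtk_main();
    return 0;
}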
Like I said, this is quite convoluted. On the other hand, a standard application should not care about the screen sizes; it should simply do what the user or window manager asks. Because of that, I would not be surprised if there are no better facilities to find out this information. Screen size may change at any point, for example if the user rotates their display, or changes the display resolution.
In the end I ended up using Xlib directly; various "tricks" like the one suggested above eventually failed in the long run, often with odd corner cases, and never followed the KISS principle.
The solution I used is in the X-Tile code base.
