Export NodePath as an absolute path in Godot

Trying to make an easy way to attach scene instances to other scene instances in the game editor. I found that you can export a NodePath and it works... except that it always grabs the path relative to the scene I've attached it to. I end up having to do some clever editing of the path to make it work from the node that actually calls the function to change the scene instance. It's becoming more work than I'd like, and I'm wondering if there is an easier way. My goal is to simply attach the scene instance from the editor and move on, without having to worry about which parent each instance is attached to. Is there a way to specify to the editor that I want an absolute path exported and saved, rather than a relative one?
something like:
export (NodePath, absolute) var button_1_passage_path

Handling relative NodePaths
I end up having to do some clever editing of the path to make it work from the node that actually calls the function to change the scene instance.
You don't need any clever editing. You just need to call it from the other Node.
Instead of doing this:
var node = get_node(other_node.node_path_variable)
Do this:
var node = other_node.get_node(other_node.node_path_variable)
Making NodePaths absolute
Is there a way to specify to the editor that I want an absolute path exported and saved, rather than a relative one?
Short answer: no.
The editor will always create relative NodePaths, and we cannot work around it.
To even attempt a workaround, we would need a tool script, because we want to set the absolute NodePath in the editor.
But a tool script runs in the editor scene tree, which looks different from the scene tree you get while running the game… So the absolute path of the node in the editor scene tree would not work in the game.
And there is no way to compute the absolute path it will have when you run the game, because that depends on where you will instantiate the scene that you are editing.
At best we can compute the absolute path if the scene you are editing is the current scene (not instantiated inside another scene), but then the NodePath would be fragile: as soon as you instantiate the scene inside another, the absolute NodePath would be wrong.
Relative NodePaths are the way.
Decoupling scenes
My goal is to simply attach the scene instance from the editor and move on, without having to worry about which parent each instance is attached to.
The general advice is to not reach outside the scene. In particular, do not reach outside with hard-coded NodePaths. If your scene depends on where you instantiate it, then your scene is not properly decoupled.
If you need your scene to communicate with nodes that are outside of it, your first option is to add custom signals, which you can connect from outside the scene. And for any information you cannot pass around with signals, you can have the parent call methods on the scene (see the sketch below).
In short:
Call down, signal up
See Node communication (the right way).
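For illustration, here is a minimal sketch of that pattern in GDScript 3 (the node, signal, and method names are invented):

# door.gd — script on the child scene's root node
extends Node2D

signal opened  # signal up: the scene announces what happened

func open():  # call down: the parent drives the scene through its method
    emit_signal("opened")

# level.gd — script on the parent that instantiates the scene
extends Node2D

func _ready():
    $Door.connect("opened", self, "_on_door_opened")
    $Door.open()

func _on_door_opened():
    print("the door opened")

The parent knows the child and calls open() on it; the child knows nothing about the parent and only emits a signal.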
And yes, you can expose NodePath properties, which you can set from outside the scene. That is, you would be telling the scene where the other nodes are.
Godot should update your NodePaths when you move the Nodes in the scene tree editor. If you have found situations where it does not, please report them. The only case I'm aware of where it does not work is when you cut or copy Nodes, and I'd guess this is what you have been doing. So try dragging them instead, or use the Reparent option from the context menu.
Anyway, using NodePaths only gets you to a loosely coupled scene. So here are a couple of other ways to fully decouple your scenes:
Using a signal bus: autoloads are available everywhere, so you can use them for communication. In the case of a signal bus, you create an autoload where you define signals, so those signals can be connected and emitted from any script in any scene. I have an explanation elsewhere.
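A minimal sketch of a signal bus in GDScript 3 (the autoload name SignalBus and the signal are invented):

# signal_bus.gd — register this script as an autoload named SignalBus
extends Node

signal player_died

# Then, from any script in any scene:
SignalBus.connect("player_died", self, "_on_player_died")  # listening side
SignalBus.emit_signal("player_died")                       # emitting side

Neither side needs a NodePath to the other; both only need to know the autoload.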
Using resource-based communication: wherever you preload a resource, or set it from the editor, you always get the same object. So you can use resources for communication. Give the same resource to multiple nodes, and in the resource define properties that they all can access, and signals that they all can connect to and emit. I have an example elsewhere.
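A minimal sketch of the resource idea in GDScript 3 (the file, class, and member names are invented):

# shared_state.gd — a custom Resource used as a shared channel
extends Resource

signal score_changed(new_score)

export var score = 0 setget set_score

func set_score(value):
    score = value
    emit_signal("score_changed", score)

Save an instance as a .tres file and hand that same file to several nodes through an export(Resource) property; they will all read the same score and can all connect to the same score_changed signal.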

I found a weird solution for this, actually!
get_path() will give the absolute path.
The code I used looked something like this:
export(NodePath) var nodepath
var absolute_nodepath

func _ready():
    # The exported path is stored relative to this node, so resolve it
    # to the target Node first, then ask for that Node's absolute path.
    absolute_nodepath = get_node(nodepath).get_path()
Hope it helps :D

Related

How to share pointer events

I want to create a list of switches (or custom controls that handle horizontal pointer movements).
That is easily done by putting those components in a Container using BoxLayout.y as LayoutManager.
But because the components (the horizontally movable Switch or custom components) take up a lot of room in the list, it is very difficult to scroll it. This is because all the pointer events are handled by the nested components and none get through to the surrounding Container - the one with the BoxLayout.y.
The natural thing I tried was to call the respective pointer...-methods of the parent Container - which turned out to be a bad idea: it led to a StackOverflowError.
What I would really like to do is handle the pointer events in both the child and the parent Components for a certain threshold distance, in order to determine whether the user wants to scroll horizontally or vertically.
I noticed that with BoxLayout.x-Containers nested in a BoxLayout.y-Container this works out of the box. But I haven't been able to grasp how to achieve that with a custom control - and it does not work with the CN1 Switch-Components either.
The question is: how to do this in a reasonable manner? Is it even possible? Or would that require gesture detection, which is not (yet) part of Codename One?
This is the default behavior of Codename One. Scrolling takes over, and there are biases based on the X/Y axis you use. All of that is built in. As I recall, you changed a lot of default behaviors in Codename One, so I suggest trying a clean project and seeing how it works, e.g. with something like this: https://www.codenameone.com/blog/button-lists.html

Opacity bug in VS 2015 using Helix Toolkit

I've been trying to create a method to change the opacity of an object when I select it in a combo box, so that I can see another object behind it. This happens prior to changing the camera position/direction to follow the 2nd object from the 1st object's position. I do it by cloning the object's default material with
this.DefaultMaterial = this.DefaultMaterial.Clone();
and then calling the
MaterialHelper.ChangeOpacity(DefaultMaterial, 0.1);
method, as written there.
The opacity seems to work properly for the most part, but some of the objects in the viewport can't be seen through my initial object. For instance, when I turn the camera to the particular object in question (the buggy one), instead of being able to see it behind my initial object, I see through my initial object and past the second one (the buggy one) as if the buggy one wasn't even there. I just see what's behind it.
I have no idea why this is happening.
Does anyone know what could be causing this? Or if maybe there is a different way of making something transparent rather than setting its Opacity?
I saw some people referencing a TRANSPARENCY property, but wasn't sure if that applies to a FileModelVisual3D object, which is what the initial object is.
The buggy object is a UIElement3D and the opaque one is a FileModelVisual3D; there are other objects based on the Point3DCollection class which also have the bug, as the UIElement3D does.
This is because of the render order and the depth buffer. You have to move your transparent objects to the end of the rendering order. It's not a bug; it's how rendering works.
Alternatively, switch to HelixToolkit.SharpDX and use its transparent rendering pass.
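For the WPF toolkit, a minimal sketch of that reordering (the viewport and model variables are invented for illustration):

// Visuals are drawn in the order they appear in Children, so re-adding
// the transparent visual makes it render after the opaque geometry.
viewport.Children.Remove(transparentModel);
viewport.Children.Add(transparentModel);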

CefSharp browser (WPF): explore the DOM, find elements and change values

Further to a post (CefSharp load a page with browser login).
I implemented the IRequestHandler interface and its different methods, particularly GetAuthCredentials, where I show a dialog, collect the user name and password, and pass them to the event handler.
Now I want to access the DOM, where I get several framesets with different frames, and I'm interested in one frame whose name attribute I know.
Inside this frame I need to get a list of elements of different types: input, select, etc.
In my app I have a button which I use to set values of the different elements, depending on whether they are present on the displayed page.
The problem is I don't see any way of getting the document, the frames collection, etc.
CefSharp doesn't expose the underlying DOM, and is unlikely to ever do so; see http://magpcss.org/ceforum/viewtopic.php?f=6&t=10652&p=19533#p16750
Your best bet is to use EvaluateScriptAsync in combination with JavaScript binding:
https://github.com/cefsharp/CefSharp/wiki/Frequently-asked-questions#2-how-do-you-call-a-javascript-method-that-return-a-result
https://github.com/cefsharp/CefSharp/wiki/Frequently-asked-questions#3-how-do-you-expose-a-net-class-to-javascript
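As a minimal sketch of the EvaluateScriptAsync route in C# (the frame and element names are invented, and the child frame must share the page's origin for the cross-frame access to work):

// Runs JavaScript in the loaded page; window.frames reaches a named
// child frame, and querySelector finds and updates an element in it.
var script =
    "window.frames['detailFrame'].document" +
    ".querySelector('input[name=\"amount\"]').value = '42';";
var response = await browser.EvaluateScriptAsync(script);
if (!response.Success)
{
    // response.Message describes the script error.
}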
If you absolutely must have DOM access and cannot invent your way to a solution, then CefGlue might be a better choice for you. (I should point out that the DOM can only be accessed in the render process, and as such calls need to be passed to the browser process through IPC, so it isn't a trivial task.)

WinForms equivalent to Android's fragments?

You know, like in the CCleaner app, where the main activity is on the left side of the app and the right-side area is a changeable fragment.
http://cache.filehippo.com/img/ex/3049__ccleaner1.png
How to do it? I imagine I could do it by putting all fragments in the same place and just changing their visibility to show one at a time, but that would make the code a mess.
I've done this in the past by using UserControls. It's worked nicely for things like choosing payment methods (cash, cc, cheque...).
There are a couple of options for changing the display, either have all the UserControls present on the form and hide or show them as required or have an empty placeholder panel and use a factory to construct the appropriate UserControl as needed. I've done both and it really depends on the complexity (and expected longevity and users) of the project as to which is appropriate.
Using a Model-View-Presenter pattern helped with managing all of this.
What you don't want to end up with is a massive switch statement that changes the visibility of dozens of manually positioned controls individually. I've seen it and that way lies madness.
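A minimal sketch of the placeholder-panel approach in C# (the control names are invented):

// placeholderPanel is an empty Panel docked where the "fragment" goes.
private void ShowFragment(UserControl fragment)
{
    placeholderPanel.Controls.Clear();
    fragment.Dock = DockStyle.Fill;
    placeholderPanel.Controls.Add(fragment);
}

// e.g. on a navigation click:
// ShowFragment(new CashPaymentControl());

Each "fragment" is an ordinary UserControl, so its layout and logic live in their own class instead of one giant form.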

Getting X11 mouse click & position when using XtAppMainLoop? (improving xload)

How can I register a callback to get mouse right button down and up events?
Background: the xload application that comes standard with many UNIX flavors is almost handy. It draws a graph of system load over time. It uses a simple StripChart widget that is flawed; it draws the grid wrong. I've fixed that, but now I want to be able to click on the strip chart graph and get the time corresponding to that load.
The source is here: http://learjeff.net/forums/xload
The main program sets things up and calls XtAppMainLoop. I want to know how to register (for the child StripChart panel) a handler to get mouse up/down events, and how to find the mouse location in the panel on the down events. I'm not an X11 programmer; I'm just hacking an existing program. The last time I did GUI programming was pre-X11 (SunWindows). I did once modify X11 to run over OSI TP4, but that didn't require much knowledge of widgets. Any and all help appreciated, including pointers to the right functions to look up. Thanks!
Meanwhile, feel free to use my modified xload app. You'll have to guess what the numbers on the label mean.
The program is written using X Toolkit Intrinsics and Xaw.
You want to add one or more actions to the widget in question (the actions field in the core of struct stripChartClassRec) and one or more translations (the tm_table field).
Actions are C functions called in response to abstract events that you, the widget writer, define (e.g. startDragging() or pageDown()). Translations map these abstract events to concrete events that Xt understands (e.g. <Btn1Down> or <Key>KP_PageUp).
Inside the action you have access to the XEvent struct that has triggered the action, and from there you can get mouse coordinates or whatever.
To see how actions and translations are set up, look at one of the existing Xaw widgets, e.g. Panner. You can download the Xaw source here.
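If you'd rather not modify the widget class itself, here is a minimal sketch of the same idea from application code, using XtAppAddActions and XtOverrideTranslations (the action name and handler are invented):

#include <stdio.h>
#include <X11/Intrinsic.h>

/* Action procedure: Xt calls this when one of the translations below fires. */
static void ReportClick(Widget w, XEvent *event, String *params,
                        Cardinal *num_params)
{
    if (event->type == ButtonPress || event->type == ButtonRelease)
        printf("right button %s at (%d, %d)\n",
               event->type == ButtonPress ? "down" : "up",
               event->xbutton.x, event->xbutton.y);
}

static XtActionsRec actions[] = {
    { "reportClick", ReportClick },
};

/* Call once after creating the StripChart widget. */
static void install_click_handler(XtAppContext app, Widget strip_chart)
{
    XtAppAddActions(app, actions, XtNumber(actions));
    XtOverrideTranslations(strip_chart,
        XtParseTranslationTable("<Btn3Down>: reportClick()\n"
                                "<Btn3Up>: reportClick()"));
}

Btn3 is the right mouse button, and event->xbutton gives the pointer position relative to the widget.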
