I have an object with 4 fields, each field holding an array with 600 datapoints.
I'd like to plot each array on a separate d3.js graph -- small multiples, I think it's called. I'm still a little shaky on the data-binding part and can't seem to go from a container selection to multiple svg appends.
I understand the following basic example, but I think I'm missing something about the nature of data binding in d3:
var circleData = [[10, "rgb(246, 239, 247)"], [15, "rgb(189,201,225)"],
                  [20, "rgb(103,169,207)"], [25, "rgb(28,144,153)"], [30, "rgb(1,108,89)"]];

// Select the div element
var selectExample = d3.select("#data_example2");

// We'll select all the circles from within the selected <div> element
selectExample.selectAll("circle")
    .data(circleData) // <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    .enter()
    .append("circle")
    .attr("cx", function(d) { return d[0] * 14; })
    .attr("cy", 50)
    .attr("r", function(d) { return d[0]; })
    .style("fill", function(d) { return d[1]; });
Basically, what I would do right now is just replace circleData in .data(circleData) with my own data object and keep appending axes and labels below it, expecting that a graph will pop up for each of the 4 fields (sub-arrays), i.e.:
svgSelection
    .data(my_multi-field_array_object)
    .enter()
    .append("g")
    ... // continue with the individual plot's code
This, unsurprisingly, does not work. What am I doing wrong?
Renesting the object of arrays into an array of objects did the trick.
Not sure why that isn't covered by the .data() function.
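For what it's worth, here is a minimal sketch of that renesting step and the nested data join that produces one chart per field. The field names, the #charts container, and the sizes are made up for illustration:
var dataObject = { a: [/* 600 values */], b: [], c: [], d: [] }; // hypothetical input object

// Renest the object of arrays into an array of objects, one entry per field
var series = Object.keys(dataObject).map(function(key) {
    return { name: key, values: dataObject[key] };
});

// Outer join: one svg per series -- this is what creates the small multiples
var charts = d3.select("#charts").selectAll("svg")
    .data(series)
    .enter()
    .append("svg")
    .attr("width", 200)
    .attr("height", 100);

// Nested join: each chart binds to its own d.values and draws its datapoints
charts.selectAll("circle")
    .data(function(d) { return d.values; })
    .enter()
    .append("circle")
    .attr("cx", function(d, i) { return i; })
    .attr("cy", 50)
    .attr("r", 2);
    // continue with scales, axes and labels per chart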
I have a dataset that consists of (x, y) pairs and a value v at each (x, y). The data needs to produce a figure that looks like the one below:
This was created by using a surface plot, changing the eye and up values, and then setting the aspectratio on the z-axis to 0.01:
layout = {{
    ...
    scene: {
        ...
        aspectmode: "manual",
        aspectratio: {x: 3, y: 1, z: 0.01},
        zaxis: {
            visible: false
        }
    }
}}
Notice that the x/y axes are still raised and awkwardly placed. I have two parts to my question:
Is there a better graph to show this data like this using Plotly? The end product needs to be the same, but the way I get there can change.
If this is the best way, how do I "lower" the x/y axes to make it look like a 2D plot?
The original reason I went the route of using a surface plot is because of MATLAB: when building a surface plot and rotating it to one of the planes (x/y/z), it essentially turns into a 2D figure.
After a good walk and looking at the documentation, using:
layout = {{
    ...
    scene: {
        ...
        xaxis: {
            ...
            tickangle: 0
        }
    }
}}
removed the '3D' effects. I also changed the aspectratio of z to 0.001.
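For reference, a sketch of how those settings might be combined in one layout object, written here for a plain Plotly.newPlot call rather than a React prop; the camera eye/up values and the "myDiv" / x / y / z names are placeholders, not the ones from the original figure:
var layout = {
    scene: {
        aspectmode: "manual",
        aspectratio: { x: 3, y: 1, z: 0.001 },  // near-zero z flattens the surface
        camera: {
            eye: { x: 0, y: 0, z: 2 },           // placeholder: look straight down
            up: { x: 0, y: 1, z: 0 }             // placeholder up vector
        },
        xaxis: { tickangle: 0 },                 // flat tick labels remove the "3D" look
        yaxis: { tickangle: 0 },
        zaxis: { visible: false }
    }
};

Plotly.newPlot("myDiv", [{ type: "surface", x: x, y: y, z: z }], layout);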
I'm having a problem with the texture coordinates of plane geometries being updated by ARKit: texture images get stretched, and I want to avoid that.
Right now I'm detecting horizontal and vertical walls and applying a texture to them. It's working like a charm...
But when the geometry gets updated because the detected wall/floor extends, the texture coordinates get stretched instead of being re-mapped, causing the texture to look stretched like the image below.
You can also see an un-edited video of the problem happening: https://www.youtube.com/watch?v=wfwYPwzND74
This is the piece of code where the geometry gets updated:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else {
        return
    }
    let planeGeometry = ARSCNPlaneGeometry(device: device)!
    planeGeometry.update(from: planeAnchor.geometry)
    // I suppose I need to do some texture re-mapping here.
    planeGeometry.materials = node.geometry!.materials
    node.geometry = planeGeometry
}
I have seen that you can define the texture coordinates by defining it as a source like this:
let textCords: [vector_float2] = [] // Array of texture coordinates
let uvData = Data(bytes: textCords, count: textCords.count * MemoryLayout<vector_float2>.size)
let textureSource = SCNGeometrySource(data: uvData,
                                      semantic: .texcoord,
                                      vectorCount: textCords.count,
                                      usesFloatComponents: true,
                                      componentsPerVector: 2,
                                      bytesPerComponent: MemoryLayout<Float>.size,
                                      dataOffset: 0,
                                      dataStride: MemoryLayout<vector_float2>.size)
But I have no idea how to fill the textCords array so that it maps correctly onto the updated planeGeometry.
Edit:
Re-defining the approach:
Thinking more deeply about the problem, I came up with the idea that I need to modify the texture's transform to fix the stretching, but that leaves me with two options:
Either keep the texture big enough to fill the entire geometry while keeping a 1:1 ratio to avoid stretching,
or keep the texture at its original size, with a 1:1 aspect ratio, and repeat it as many times as needed to fill the entire geometry.
With either approach I'm still lost on how to actually do it. What would you suggest?
In Leaflet and Mapbox I would like to get rid of the two gray bars above and below the map, as seen in the picture below. My #map DOM element takes the full screen, and the gray bars disappear when I zoom in (e.g., zoomLevel = 3). So the gray bars seem to be caused by the fact that at a given zoom level the tiles have a total height (in px) that is smaller than my screen.
I want to keep tiles of the same zoom level but make sure the height of the tiled area covers at least the full screen.
Here is my map setup code:
vm.map = L.map('map', {
    center: [35, 15],
    zoom: 2,
    maxZoom: 21,
    scrollWheelZoom: true,
    maxBounds: [
        [89.9, 160.9],
        [-89.9, -160.9]
    ],
    zoomControl: false,
    noWrap: true,
    zoomAnimation: true,
    markerZoomAnimation: true,
});
I am using Angular, and my screen dimensions are 1920 x 1080.
Sounds like you need to calculate the minimum zoom level at which the map only shows the area between 85°N and 85°S.
The getBoundsZoom() method of L.Map helps with this, e.g.:
var bounds = L.latLngBounds([[85, 180],[-85, -180]]);
var wantedZoom = map.getBoundsZoom(bounds, true);
var center = bounds.getCenter();
map.setView(center, wantedZoom);
Note that this is a generic solution, and works for any size of the map container.
You could also set the minZoom of the map to that value, and experiment with fractional zoom (see the zoomSnap option). If you want the user to be unable to drag outside the painted area, use the map's maxBounds option with something like [[85, Infinity],[-85, -Infinity]].
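A rough sketch combining those suggestions, assuming the map container is the full-screen #map element from the question (the zoomSnap value is just an example):
var bounds = L.latLngBounds([[85, 180], [-85, -180]]);

var map = L.map('map', {
    zoomSnap: 0.25,  // allow fractional zoom levels so the fit can be tighter
    maxBounds: L.latLngBounds([[85, Infinity], [-85, -Infinity]])  // no panning past 85°N/S
});

// Fit the 85°N-85°S band to the container, then forbid zooming out any further
var wantedZoom = map.getBoundsZoom(bounds, true);
map.setView(bounds.getCenter(), wantedZoom);
map.setMinZoom(wantedZoom);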
(And if you are wondering why 85°N and 85°S, do read https://en.wikipedia.org/wiki/Web_Mercator )
I'm struggling with the best way of changing the center point of a 3D object (Model3DGroup) in WPF.
I've exported a model from SketchUp and everything is fine, but the centers are off position, which causes me trouble when rotating the objects.
Now I need to make some rotations around each object's own center and have no idea how to do it...
Any suggestions would be appreciated.
Thanks
Using Jackson Pope's suggestion, I used the code below to get the center point of an object:
var bounds = this.My3DObject.Bounds;
var x = bounds.X + (bounds.SizeX / 2);
var y = bounds.Y + (bounds.SizeY / 2);
var z = bounds.Z + (bounds.SizeZ / 2);
var centerPoint = new Point3D(x, y, z);
Meanwhile I'll try to find a better solution and move all the points by the desired offset...
To rotate an object around its centre, you first need to translate it so that its centre is at the origin, then rotate it (and then potentially translate it back to its original position).
Find the minimum and maximum extent of the object and calculate its centre = min + (max - min)/2. Translate by (-centreX, -centreY, -centreZ), rotate, and then translate by (centreX, centreY, centreZ).
I'm trying to apply a material to my GeometryModel3D at runtime, like so:
var model3D = ShardModelVisual.Content as GeometryModel3D;
var materialGroup = model3D.Material as MaterialGroup;
BitmapImage image;
ResourceLoader.TryLoadImage("pack://application:,,,/AnzSurface;component/path file/img.png", out image, ".png");
var iceBrush = new ImageBrush(image);
var grp = new TransformGroup();
grp.Children.Add(new ScaleTransform(0.25, 0.65, 0.5, 0.5));
grp.Children.Add(new TranslateTransform(0.0, 0.0));
iceBrush.Transform = grp;
var iceMat = new DiffuseMaterial(iceBrush);
materialGroup.Children.Add(iceMat);
Which all works fine, and the material gets added.
What I don't understand is how I can map the user's click on the screen to the offsets that need to be applied to the TranslateTransform.
I.e. at the moment, x: -0.25 moves the material backwards along the X axis, but I have no idea how to get that type of coordinate from the user's mouse click...
When I do:
e.MouseDevice.GetPosition(ShardsViewPort3D);
that gives me normal X/Y coords of the mouse click...
Thanks for any help you can give!
It sounds like you want to slide the material around on your geometry when you click on it. Here's how:
Use hit testing to translate your X/Y coordinates from the mouse click into a RayMeshGeometry3DHitTestResult as described in my earlier answer. This will give you the MeshGeometry3D that was hit, the vertices of the triangle that was hit, and the relative position on that triangle.
Look up each vertex index (VertexIndex1, VertexIndex2, VertexIndex3) in MeshGeometry3D.TextureCoordinates to get the texture coordinates. This will give you three (u,v) pairs as Point objects. Multiply each of the (u,v) pairs by the corresponding weight from the hit test result (VertexWeight1, VertexWeight2, VertexWeight3) and add the pairs together, i.e.:
uMouse = u1 * VertexWeight1 + u2 * VertexWeight2 + u3 * VertexWeight3
vMouse = v1 * VertexWeight1 + v2 * VertexWeight2 + v3 * VertexWeight3
Now you have a point (uMouse, vMouse) that indicates where on your material your mouse was clicked.
If you want a particular point on your texture to move to exactly where the mouse was clicked, just subtract the (uMouse, vMouse) where the mouse was clicked from the (u,v) coordinate of the location in the material you want to appear under the mouse, and set this as your TranslateTransform. If you want to handle dragging, store the computed (uMouse,vMouse) where the drag started and the transform as of the drag start, then as dragging progresses compute the new transform as:
translate = (uMouse,vMouse) - (uMouseDragStart, vMouseDragStart) + origTranslate
In code you'll write this as Point additions. I spelled it out as (u,v) in this explanation because I thought it was easier to understand if I did so. In actuality the code to compute (uMouse, vMouse) will look more like this:
var uv1 = hit.MeshHit.TextureCoordinates[hit.VertexIndex1];
var uv2 = hit.MeshHit.TextureCoordinates[hit.VertexIndex2];
var uv3 = hit.MeshHit.TextureCoordinates[hit.VertexIndex3];
var uvMouse = new Vector(
    uv1.X * hit.VertexWeight1 + uv2.X * hit.VertexWeight2 + uv3.X * hit.VertexWeight3,
    uv1.Y * hit.VertexWeight1 + uv2.Y * hit.VertexWeight2 + uv3.Y * hit.VertexWeight3);
and the code to update the transform during a drag will look something like this:
...
var translate = translateAtDragStart + uvMouse - uvMouseAtDragStart;
... = new TranslateTransform(translate.X, translate.Y);
You'll have to adapt this to the exact situation.
Note that your HitTest callback may be called multiple times, starting at the closest mesh and moving back. It may even be called with 2D hits, for example if a 2D object is in front of your Viewport3D. So you'll want to check each hit to see if it is really what you want; for example, during dragging you want to keep checking the position on the mesh being dragged even if it is no longer foremost. Return HitTestResultBehavior.Stop from your callback once you have acted on the mesh being dragged.