Decals and React - reactjs

I’ve been stuck for days and can’t figure this out.
I’m trying to put a small graphic (a team badge, and the player’s number on the back) onto a basketball jersey. I can’t simply use a Texture, as anything I specify covers the whole jersey.
From this https://threejs.org/examples/#webgl_decals I believe a decal is the way to go. What I don’t understand is that I need to pass a Mesh into the constructor of DecalGeometry. What kind of mesh should I use? Not the mesh of the basketball jersey itself?
Are these the right steps?

1. Get the uploaded PNG from the browser.
2. Create a DecalGeometry using an arbitrary geometry (maybe a square?).
3. Specify some material (I want it to be transparent).
4. Create a new mesh using the above DecalGeometry and material (I can’t figure out what values to use for the decal projector…).
5. Load the PNG into a Texture.
6. Load the basketball jersey’s Mesh.
7. Merge the basketball jersey’s Mesh with the new Mesh containing the DecalGeometry.
Also, how do I control the size of the graphic inside the DecalGeometry? And since the graphic has color, will the basketball jersey’s color affect the graphic and make it look broken?
Thanks a lot!
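
For reference, the pattern used in that three.js example looks roughly like the sketch below. This is a minimal sketch, assuming the jersey mesh and scene are already loaded; `badge.png` stands in for the uploaded PNG, and the position/size values are placeholders.

```ts
import * as THREE from 'three';
import { DecalGeometry } from 'three/examples/jsm/geometries/DecalGeometry.js';

// Assumed to exist already: the loaded jersey mesh and the scene.
declare const jerseyMesh: THREE.Mesh;
declare const scene: THREE.Scene;

// 'badge.png' is a stand-in for the PNG uploaded from the browser.
const texture = new THREE.TextureLoader().load('badge.png');

const decalMaterial = new THREE.MeshPhongMaterial({
  map: texture,
  transparent: true,       // respect the PNG's alpha channel
  depthTest: true,
  depthWrite: false,       // draw on top of the jersey without z-fighting
  polygonOffset: true,
  polygonOffsetFactor: -4,
});

// The "decal projector": where on the jersey the graphic is stamped,
// how it is rotated, and how big it is. `size` controls the graphic's size.
const position = new THREE.Vector3(0, 1.2, 0.1); // a point on the jersey surface
const orientation = new THREE.Euler(0, 0, 0);    // rotation of the projector
const size = new THREE.Vector3(0.3, 0.3, 0.3);   // width / height / projection depth

// DecalGeometry takes the target mesh itself -- the jersey, not an
// arbitrary square -- and clips the projected box against its surface.
const decalGeometry = new DecalGeometry(jerseyMesh, position, orientation, size);
scene.add(new THREE.Mesh(decalGeometry, decalMaterial));
```

There is no need to merge geometries: the decal is a separate mesh that sits just above the jersey’s surface. Because the material is transparent, the jersey’s color shows through only where the PNG itself is transparent, so an opaque badge will not be tinted by the fabric underneath.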

Related

Cesium Draw and Measure tools for 2D maps?

I am using Cesium to display a strictly 2D map in the browser (wrapped in React using the Resium library).
I am interested in giving the user the option to draw lines/polygons, and to measure the distance between two points or the area within a polygon.
Basically, I want precisely this OpenLayers example, but in Cesium: https://openlayers.org/en/latest/examples/measure.html
How would one go about doing it?
Thank you
I was doing some research on Cesium drawing tool libraries.
It isn’t exactly the same as what you asked for, but you may be able to use it for your needs. It works for both 2D and 3D maps, and it gives you all the basic functionality: drawing circles, polygons, and lines, adding markers, calculating distances, etc.
I found cesium-draw, a library for Vue:
[cesium-draw](https://www.npmjs.com/package/cesium-draw)
If you can work with Angular, there is a lot of Cesium support there, so you could use
[angular-cesium](https://angular-cesium.com/)
If you have found anything else, please do share; I’d love to see it.
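
If you end up building the measure tool yourself rather than using a library, the distance half is mostly picking two positions and doing geodesic math. A minimal sketch in plain Cesium (no Resium wrapper), assuming a `viewer` has already been created elsewhere:

```ts
import * as Cesium from 'cesium';

// Assumed to exist already: the Cesium viewer created by your app / Resium.
declare const viewer: Cesium.Viewer;

const handler = new Cesium.ScreenSpaceEventHandler(viewer.scene.canvas);
const points: Cesium.Cartesian3[] = [];

// Collect two clicked points, then report the surface distance between them.
handler.setInputAction((movement: { position: Cesium.Cartesian2 }) => {
  // For a strictly 2D map, picking against the ellipsoid is usually enough.
  const cartesian = viewer.camera.pickEllipsoid(
    movement.position,
    viewer.scene.globe.ellipsoid
  );
  if (!Cesium.defined(cartesian)) return;

  points.push(cartesian);
  if (points.length === 2) {
    const start = Cesium.Cartographic.fromCartesian(points[0]);
    const end = Cesium.Cartographic.fromCartesian(points[1]);
    // EllipsoidGeodesic measures distance along the ellipsoid surface.
    const geodesic = new Cesium.EllipsoidGeodesic(start, end);
    console.log(`Distance: ${(geodesic.surfaceDistance / 1000).toFixed(2)} km`);
    points.length = 0; // reset for the next measurement
  }
}, Cesium.ScreenSpaceEventType.LEFT_CLICK);
```

Drawing the rubber-band line itself can be done with a polyline entity driven by a CallbackProperty; polygon area takes more work, which is exactly what libraries like cesium-draw package up for you.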

Apply 2 different textures to two sides of the same face of an object in Autodesk Maya

I've been trying to apply two separate textures to the inside and outside of a bowl-shaped object in Maya. I don't want to extrude that particular face and put a different texture on it; that would be the easy way.
But I'm working on a low-res model where I'm restricted to a certain number of faces, so I need an alternative. Someone gave me the idea of using Maya nodes, but I couldn't come up with a solution.
How can I do this?
You can achieve that using a samplerInfo node.
Just put in a switch or blendColors node and use the samplerInfo's flippedNormal output as the switcher between the two textures/shaders/whatever it is.

Is it possible to get a "SCNVector3" position of a World object using CoreML and ARKit?

I am working on an AR-based solution in which I am rendering some 3D models using SceneKit and ARKit. I have also integrated CoreML to identify objects and render corresponding 3D objects in the scene.
But right now I am just rendering it in the center of the screen as soon as I detect the object (only for the list of objects that I have). Is it possible to get the position of the real-world object so that I can show an overlay above the object?
That is, if I have a water bottle scanned, I should be able to get the position of the water bottle. It could be anywhere on the water bottle, but shouldn't go outside of it. Is this possible using SceneKit?
All parts of what you ask are theoretically possible, but a) for several parts, there’s no integrated API to do things for you, and b) you’re probably signing yourself up for a more difficult problem than you think.
What you presumably have with your Core ML integration is an image classifier, as that’s what most of the easy-to-find ML models do. Image classification answers one question: “what is this a picture of?”
What you’re looking for involves at least two additional questions:
1. “Given that this image has been classified as containing (some specific object), where in the 2D image is that object?”
2. “Given the position of a detected object in the 2D video image, where is it in the 3D space tracked by ARKit?”
Question 1 is pretty reasonable. There are models that do both classification and detection (location/bounds within an image) in the ML community. Probably the best known one is YOLO — here’s a blog post about using it with Core ML.
Question 2 is the “research team and five years” part. You’ll notice in the YOLO papers that it gives you only coarse bounding boxes for detected objects — that is, it’s working in 2D image space, not doing 3D scene reconstruction.
To really know the shape, or even the 3D bounding box of an object means integrating object detection with scene reconstruction. For example, if an object has some height in the 2D image, are you looking at a 3D object that’s tall with a small footprint, or one that’s long and low, receding into the distance? Such integration would require taking apart the inner workings of ARKit, which nobody outside Apple can do, or recreating an ARKit-alike from scratch.
There might be some assumptions you can make to get very rough estimates of 3D shape from a 2D bounding box, though. For example, if you do AR hit tests on the lower corners of a box and find that they’re on a horizontal plane, you can guess that the 2D height of the box is proportional to the 3D height of the object, and that its footprint on the plane is proportional to the box’s width. You’d have to do some research and testing to see if assumptions like that hold up, especially in whatever use cases your app covers.

OpenGL: How to drag image and move it to the line by using the mouse

I want to drag an image to a line using the mouse, and when the image gets close to the line, have it automatically move onto the line, like in some "floor planner" programs: you create a wall and drag a door to it, and when the door is close to the wall, the door automatically snaps onto the wall.
Can OpenGL do this?
If it can, can anyone tell me how? If it cannot, can anyone tell me how else I can do it?
Please show me an example.
OpenGL is a rendering API; its purpose is to generate rasterized images based on descriptions provided to it by an application.
It knows nothing about user input, and even less about the application's "domain objects" such as doors, walls, and so on. All it deals with is abstract coordinates and matrices that describe the transforms and projections to take those 3D coordinates into 2D for rasterization, as well as shading for surfaces and so on.
So, it's up to you to implement that, so that the coordinates you eventually pass to OpenGL end up being what you want them to be.
Snapping is typically a combination of measuring the distance to some guiding object, and then quantizing the input coordinates to correspond to the guide.
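
To make that concrete, here is a minimal, library-agnostic sketch (TypeScript, with hypothetical names) of snapping a dragged point to a wall segment: measure the distance to the closest point on the segment, and quantize to it when within a threshold.

```ts
interface Point { x: number; y: number; }

// Closest point on segment AB to point P: project P onto AB and clamp.
function closestPointOnSegment(p: Point, a: Point, b: Point): Point {
  const abx = b.x - a.x, aby = b.y - a.y;
  const lenSq = abx * abx + aby * aby;
  if (lenSq === 0) return a; // degenerate segment
  let t = ((p.x - a.x) * abx + (p.y - a.y) * aby) / lenSq;
  t = Math.max(0, Math.min(1, t));
  return { x: a.x + t * abx, y: a.y + t * aby };
}

// While dragging: if the cursor is within `threshold` of the wall,
// quantize the door's position to the closest point on the wall.
function snapToWall(dragged: Point, wallA: Point, wallB: Point, threshold = 10): Point {
  const candidate = closestPointOnSegment(dragged, wallA, wallB);
  const dist = Math.hypot(dragged.x - candidate.x, dragged.y - candidate.y);
  return dist <= threshold ? candidate : dragged;
}

// Example: a door dragged near a wall from (0,0) to (100,0) snaps onto it.
console.log(snapToWall({ x: 40, y: 6 }, { x: 0, y: 0 }, { x: 100, y: 0 }));
// -> { x: 40, y: 0 }
```

The snapped coordinates are then what you feed into your transforms and, ultimately, OpenGL.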

2D CAD application in WPF

I'm trying to write a CAD-like application in WPF (.NET 4.0) that needs to be able to display a lot of 2D points/lines. It will be used to display CAD plans of entire cities, with zoom, pan, rotate, and point snapping on mouseover.
Right now I use pure WPF. I read the objects from the CAD file, draw them into a StreamGeometry, use it as the geometry of a new Path, and add it to a Canvas with several transforms.
My problem is that this solution doesn't scale well enough. It works fine with small CAD files, but when I want to display something like half a city (with houses and land boundaries), it becomes extremely slow.
I also tried to convert my CAD file to an image, but:
- a resolution of 32000x32000 is sometimes not enough
- when zooming out, the lines become too thin.
In the end I need to be able to place this on a Canvas (2D/3D) as a background.
What are my best options here?
Thanks,
Niklas
WPF is not good for large 3D models; I'm afraid it is too slow. Your best bet is Direct3D or OpenGL.
However, even with the speed of Direct3D/OpenGL, you will still need to work out how to cull as many polygons/vertices as possible before rendering the scene if you are trying to show an entire city.
There is a large amount of information on this (generally under game development).
There are a few techniques, including frustum culling and near- and far-plane culling.
Also, since you probably have a static scene, you may be able to use binary space partitioning.
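
As a rough illustration of the culling idea (TypeScript here purely for illustration; the same logic applies in C#/WPF or a Direct3D/OpenGL renderer): keep a bounding box per primitive and draw only what overlaps the current viewport.

```ts
interface Rect { x: number; y: number; width: number; height: number; }

interface Primitive {
  bounds: Rect;   // precomputed axis-aligned bounding box
  draw(): void;   // whatever actually renders the line/polygon
}

// Axis-aligned rectangle overlap test -- the 2D analogue of frustum culling.
function intersects(a: Rect, b: Rect): boolean {
  return a.x < b.x + b.width && a.x + a.width > b.x &&
         a.y < b.y + b.height && a.y + a.height > b.y;
}

// Draw only the primitives whose bounds overlap the current viewport.
// With a spatial index (quadtree/BSP) over a static scene, this becomes
// a range query instead of a linear scan, which is what makes
// city-sized drawings feasible.
function render(primitives: Primitive[], viewport: Rect): void {
  for (const p of primitives) {
    if (intersects(p.bounds, viewport)) {
      p.draw();
    }
  }
}
```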
As I understand it, the subject is a 2D CAD system in WPF.
Great! I use it...
OpenGL and DirectX applications typically redraw in an infinite OnDraw loop, so the CPU is working all the time. WPF/Silverlight's retained 2D model is smarter.
Yes, the total number of elements (for example, primitives inherited from Shape) must not be too large. But how many?
I tested my own app (Silverlight); WPF will be a bit faster, I hope...
Here are my 2D CAD results: performance is still great, and each beam consists of multiple primitives.
Use a VirtualCanvas like this one from Chris Lovett.
