Exported Collada file from Blender is split up into pieces when imported into SceneKit - scenekit

I have a terrain object made with Blender.
When I export it using the COLLADA exporter and import the file into SceneKit, the terrain is split up into pieces (like 8 or 10).
This makes it fairly complicated to apply textures and move the object.
Any ideas why this is happening?

Related

Export Canvas from React Konva to SVG & PDF?

Currently, I can only use jpeg & png exports like:
stageRef.current?.getStage().toDataURL({ mimeType: 'image/jpeg', quality: 1 })
stageRef.current?.getStage().toDataURL({ mimeType: 'image/png', quality: 1 })
I want to export Canvas to svg as well as pdf like Figma does it.
I found out about Data URIs, which led me to MIME types. The docs say application/pdf & image/svg+xml should work, but when I pass those I still get a .png image.
Is there any way to export .svg & .pdf from the Canvas in Konva?
stage.toDataURL() uses the canvas.toDataURL() API under the hood, and most browsers support only the jpeg and png formats.
For SVG or PDF exports you have to write your own implementation.
For PDF exports you can use an external library to generate a PDF file from the image returned by stage.toDataURL(). As a demo, take a look at the Saving Konva stage to PDF demo.
There are no built-in methods for SVG export in the Konva library; you have to write your own implementation. If you use basic shapes such as Rect, Circle and Text without any fancy filters, writing such a conversion shouldn't be hard, because the SVG spec has similar tags.
toDataURL() exports a bitmap rather than a vector.
You can generate an SVG by using the canvas2svg package:
set your Layer's context to a canvas2svg (c2s) instance, render it, then reset the Layer's context to what it was previously, as shown here.
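As a rough illustration of the hand-rolled route, the sketch below maps plain shape descriptions to SVG tags. Note this is not a Konva API: the `shape` objects here are assumed stand-ins for what you would read off your real shapes (e.g. via their attrs), and only a few shape types and attributes are handled.

```javascript
// Minimal sketch: translate basic shape descriptions into SVG elements.
// Rect, Circle and Text map almost one-to-one onto SVG tags, which is
// why a hand-written converter is feasible for simple stages.
function shapeToSvg(shape) {
  switch (shape.type) {
    case 'Rect':
      return `<rect x="${shape.x}" y="${shape.y}" width="${shape.width}" height="${shape.height}" fill="${shape.fill}"/>`;
    case 'Circle':
      return `<circle cx="${shape.x}" cy="${shape.y}" r="${shape.radius}" fill="${shape.fill}"/>`;
    case 'Text':
      return `<text x="${shape.x}" y="${shape.y}" font-size="${shape.fontSize}" fill="${shape.fill}">${shape.text}</text>`;
    default:
      throw new Error(`No SVG mapping for shape type: ${shape.type}`);
  }
}

// Wrap the converted shapes in an <svg> root sized like the stage.
function stageToSvg(width, height, shapes) {
  const body = shapes.map(shapeToSvg).join('\n  ');
  return `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">\n  ${body}\n</svg>`;
}
```

For example, `stageToSvg(200, 100, [{ type: 'Rect', x: 10, y: 10, width: 50, height: 30, fill: 'red' }])` yields a standalone SVG string you could download or feed to a PDF library. Filters, gradients and complex strokes are exactly where this one-to-one mapping breaks down.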

(OFF Line MAP) how to download osm tiles to use OFF LINE MAP in react-native-maps

I want to implement an offline OSM map in my project, but I couldn't find any proper documentation on how to download tiles for offline usage. I am currently using the react-native-maps package to implement a custom tile overlay:
import { LocalTile } from 'react-native-maps';

<MapView
  region={this.state.region}
  onRegionChange={this.onRegionChange}
>
  <LocalTile
    /**
     * The path template of the locally stored tiles. The patterns {x} {y} {z} will be replaced at runtime.
     * For example, /storage/emulated/0/mytiles/{z}/{x}/{y}.png
     */
    pathTemplate={this.state.pathTemplate}
    /**
     * The size of the provided local tiles (usually 256 or 512).
     */
    tileSize={256}
  />
</MapView>
There is no straightforward approach for downloading offline raster tiles for OSM. Rendering these tiles is very resource intensive and depending on the zoom level the download and storage size of raster tiles will become huge. In other words: raster tiles should be avoided for offline use. You should consider using vector tiles instead.
Possible solutions:
Render your own raster tiles, for example by using Maperitive
Switch to a library supporting vector tiles and download them, for example from OpenMapTiles.com
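Whichever source the tiles come from, they are addressed by the standard z/x/y scheme that pathTemplate expects. The lat/lon to tile-index formula below is the standard "slippy map" math from the OSM wiki; the template string in the usage note is just a made-up example path.

```javascript
// Standard OSM slippy-map tile math: convert a lat/lon at a given zoom
// level into the {x}/{y} tile indices used by tile servers and by
// LocalTile's pathTemplate.
function latLonToTile(lat, lon, zoom) {
  const n = 2 ** zoom; // number of tiles per axis at this zoom
  const x = Math.floor(((lon + 180) / 360) * n);
  const latRad = (lat * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  );
  return { x, y };
}

// Expand a path template like the one passed to <LocalTile />.
function tilePath(template, zoom, lat, lon) {
  const { x, y } = latLonToTile(lat, lon, zoom);
  return template.replace('{z}', zoom).replace('{x}', x).replace('{y}', y);
}
```

For example, `tilePath('/storage/emulated/0/mytiles/{z}/{x}/{y}.png', 10, 51.5074, -0.1278)` resolves to tile 10/511/340 (central London), so a downloader only needs to iterate the x/y ranges covering your region at each zoom level and save each tile to the matching path.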

How to create own dataset for Mask RCNN?

I have two folders: one contains the images and the other contains bitmaps as annotations. How can I prepare them as a dataset for use in Mask RCNN?
For Mask RCNN you need to annotate the images directly so that each region is also labelled with a specific class. Use a tool such as the VGG Image Annotator for this purpose. Then edit the training .py file for your requirements and run it, pointing it at the directory of these images along with the annotations so that it can recognise which part of the image represents which class. Later you train on these using a pretrained weight file from the COCO dataset. This will generate your custom weight file, which is then used for testing.
Refer to these for more insight:
https://github.com/gabrielgarza/Mask_RCNN
https://github.com/d-smit/kangaroo-detection-mask-rcnn
https://github.com/thomasjvarghese799/Mask-RCNN-on-Rick-Morty
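For reference, the VGG Image Annotator exports a JSON file roughly shaped like the fragment below; the customized training script walks this structure to build a per-class mask for each polygon. The filename, coordinates and class name here are placeholders, and the exact keys can vary between VIA versions, so check your own export.

```
{
  "img_001.jpg": {
    "filename": "img_001.jpg",
    "regions": [
      {
        "shape_attributes": {
          "name": "polygon",
          "all_points_x": [120, 180, 160],
          "all_points_y": [90, 95, 150]
        },
        "region_attributes": { "class": "terrain" }
      }
    ]
  }
}
```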

In Aframe - Maya exported obj models have black textures

I'm a 3D artist and my exported models have black textures in A-Frame.
I don't know what I did wrong. I use Maya to export an obj model with an MTL file and a simple texture.
You should check out the official FAQ on working with models.
To get your model to work as expected, you need to use one of these formats:
.gltf
three.js JSON
I'd recommend .gltf, since there are multiple working exporters, like this one from the Khronos Group.
Furthermore, A-Frame's Don McCurdy has a wonderful set of loaders.
Sometimes you can get .obj files to work and render textures using the material component, but they can be unpredictable, as Don said in a comment to this answer.
I found two things. First, somehow in my MTL file the Kd (diffuse) value was set to 0.0 0.0 0.0 on export; I set it to 1.0 1.0 1.0.
Second, in addition to the directional light, adding an ambient light in A-Frame made my texture show up.
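For anyone hitting the same black-texture symptom, the relevant lines in the .mtl file look like the fragment below (the material name and texture filename are placeholders). Since Kd multiplies the diffuse texture, a value of 0 0 0 renders it black:

```
newmtl terrain_mat
Ka 1.0 1.0 1.0
# Kd 0.0 0.0 0.0 on export made the texture render black
Kd 1.0 1.0 1.0
map_Kd terrain_diffuse.png
```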

Is it possible to replace maya geometry cache with alembic cache?

I am researching ways to use caches more efficiently in Maya. As far as I know, Alembic cache is a good choice for wide use, so I am trying to use it as a replacement for the geometry cache (.mcx). But Alembic cache has the limitation that target names must match. Is there any workaround for using Alembic cache in place of the geometry cache?
I just answered this in a previously asked question.
To overcome the non-matching name target problem:
When you import the .abc file into Maya, it creates an alembic node in the Maya scene. If the object name doesn't match your scene's object name, you can connect it manually.
The way you do this is as follows.
The alembic node has a bunch of output plug arrays, like outPolyMesh, outNSurface etc., which contain the outputs. If your render object is a mesh, you will find the corresponding output plug inside the outPolyMesh array. In the Connection Editor, just connect the corresponding outPolyMesh[i] plug to the inMesh plug of your render model's shape node.
Hope that was useful.
