How to get the storage key from my 3D floorplan scene id? - archilogic

I uploaded my 2D floor plan and received the 3D floor plan email. I can get the scene ID from the 3D floor plan URL, but I can't use it in my A-Frame scene, since A-Frame needs the storage key to load the scene. I can paste the scene ID into the app creator to get the storage key data. How can I get the .data3d.buffer of my uploaded 3D model through the storage API rather than through the app creator?

We just released the scene API with the new 3dio.js version 1.0.1.
To get the baked model (the data3d.buffer file), including the furniture items, from an Archilogic scene into A-Frame you can do:
const sceneId = '5dc58829-ecd3-4b33-bdaf-f798b7edecd4'
const sceneEl = document.querySelector('a-scene')
io3d.scene.getAframeElements(sceneId)
  .then(element => {
    sceneEl.appendChild(element)
  })
Take a look at the documentation here: https://3d.io/docs/api/1/scene.html
To improve the A-Frame lighting for interior spaces you can add the io3d-lighting component to the A-Frame scene element.
<a-scene io3d-lighting>
</a-scene>

Related

Randomly set point on model's surface (Not sphere)

I am creating a React project with React Three Fiber to add a 3D model that the user can interact with.
Following the documentation, I imported my model (GLB format) and configured my camera.
Now I want to add random points on the surface of the model (like in this example => example ).
Unfortunately, I cannot find the method to apply in the React Three Fiber documentation.
Can anyone tell me how to do that?
Thank you for your time!
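One common route is three.js's own `MeshSurfaceSampler` (from `three/examples/jsm/math/MeshSurfaceSampler.js`), which works fine alongside React Three Fiber since R3F scenes are plain three.js objects. To show the underlying idea, here is a minimal, self-contained sketch of the same technique without the three.js dependency: pick a triangle with probability proportional to its area, then pick a uniform random barycentric point inside it. The `positions`/`indices` layout mirrors a three.js `BufferGeometry`, but the function names here are my own, not an existing API.

```javascript
// Sketch: uniform random point on an indexed triangle mesh.
// `positions` is a flat array [x0,y0,z0, x1,y1,z1, ...];
// `indices` groups vertex indices into triangles, three at a time.
function triangleArea(a, b, c) {
  const ab = [b[0] - a[0], b[1] - a[1], b[2] - a[2]];
  const ac = [c[0] - a[0], c[1] - a[1], c[2] - a[2]];
  // Half the magnitude of the cross product ab x ac
  const cx = ab[1] * ac[2] - ab[2] * ac[1];
  const cy = ab[2] * ac[0] - ab[0] * ac[2];
  const cz = ab[0] * ac[1] - ab[1] * ac[0];
  return 0.5 * Math.sqrt(cx * cx + cy * cy + cz * cz);
}

function samplePointOnMesh(positions, indices) {
  // Collect triangles with their areas so larger faces are picked more often
  const tris = [];
  let total = 0;
  for (let i = 0; i < indices.length; i += 3) {
    const v = [indices[i], indices[i + 1], indices[i + 2]].map(
      j => positions.slice(3 * j, 3 * j + 3));
    const area = triangleArea(v[0], v[1], v[2]);
    total += area;
    tris.push({ v, area });
  }
  // Pick a triangle with probability proportional to its area
  let r = Math.random() * total;
  let tri = tris[tris.length - 1];
  for (const t of tris) { if ((r -= t.area) <= 0) { tri = t; break; } }
  // Uniform barycentric coordinates inside the chosen triangle
  let u = Math.random(), w = Math.random();
  if (u + w > 1) { u = 1 - u; w = 1 - w; }
  const [a, b, c] = tri.v;
  return [0, 1, 2].map(k => a[k] + u * (b[k] - a[k]) + w * (c[k] - a[k]));
}
```

With a loaded GLB you would read `positions` and `indices` from the mesh's geometry attributes and render the sampled points as small spheres or a points cloud.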

Is it possible to add more than one IndoorManager tileset to an Azure Map?

I have a requirement to show more than one building indoor map on a single Azure Map. Does Azure Map's Indoor Module support adding more than one tileset to a map?
A single building's indoor map is created using the Azure Indoor Map Creator which results in a single tilesetId. This is added to an Azure Map using atlas.indoor.IndoorManager. I have two different tilesetIds that work when added to a map on their own, but I'm unable to have both visible and selectable individually.
This is the code I use but only the first tileset added is ever shown.
// building 1
const levelControl = new atlas.control.LevelControl({
  position: "top-right",
});
const indoorManager = new atlas.indoor.IndoorManager(map, {
  levelControl: levelControl,
  tilesetId: tilesetId
});
// building 2
const levelControl2 = new atlas.control.LevelControl({
  position: "top-right",
});
const indoorManager2 = new atlas.indoor.IndoorManager(map, {
  levelControl: levelControl2,
  tilesetId: tilesetId2,
});
The IndoorManager class itself only appears to support a single tileset, so my assumption is that I need one instance per tileset, each with its own level control. However, only the first indoor manager is ever rendered; I've swapped the tilesets around to confirm this.
The indoor manager currently does not support multiple tilesets. To get two buildings showing, you would need to append the conversionId of the second building to an existing dataset. You can then create a new tileset that contains both buildings.
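The workaround above can be sketched as two Azure Maps Creator REST calls. The hostname, `api-version` value, and the use of the `datasetId` parameter to append a conversion to an existing dataset are assumptions based on the v2 Creator API and should be checked against the current documentation:

```
# 1) Append the second building's conversion to the existing dataset
POST https://us.atlas.microsoft.com/datasets?api-version=2.0
     &conversionId={secondBuildingConversionId}
     &datasetId={existingDatasetId}
     &subscription-key={key}

# 2) Create a new tileset from the combined dataset
POST https://us.atlas.microsoft.com/tilesets?api-version=2.0
     &datasetId={existingDatasetId}
     &subscription-key={key}
```

The resulting single tilesetId (covering both buildings) is then passed to one IndoorManager instance.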
Thanks,
Brendan

How to use user image input from a react app in python face recognition model?

I have used the face-recognition library to compare two faces from given images, but I want to take the images from a React app I built. I am taking both images as user input in the React app. How do I use them in the model and then send both images to the database I created in Atlas if they are of the same person?
This is the code I am working on:
import face_recognition

# Load both images and compute one face encoding per image
known_image = face_recognition.load_image_file("img1.jpeg")
unknown_image = face_recognition.load_image_file("img2.jpeg")
known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# True if the unknown face matches the known one
results = face_recognition.compare_faces([known_encoding], unknown_encoding)

ARKit RealityKit WorldMap Persistence

So I have a RealityKit app in which I add various Entities.
For persistence I looked to Apple's SceneKit sample code, which I implemented, only to find Entities missing when the WorldMap loads.
I'm assuming you are able to save and load the worldMap from the ARView's session, but the problem is that this only persists the old-style ARAnchors, not the new Entity objects from the RealityKit features.
The workaround I used was to initialize my AnchorEntities using the constructor that takes an ARAnchor. From my hit test or raycast, I take the world transform and save it as an ARAnchor, then use that to initialize an AnchorEntity. I gave this ARAnchor a unique name, to be used later to re-map it to an entity when loading a persisted world map, since the map still only contains ARAnchors.
let arAnchor = ARAnchor(name: anchorId, transform: rayCast.worldTransform) // ARAnchor with unique name or ID
let anchorEntity = AnchorEntity(anchor: arAnchor)
That's what it looks like before adding the anchors to the scene for the first time. After you save your world map, close, and reload, I loop over the loaded (persisted) ARAnchors and associate each anchor with its respective Entity, matching on the name stored in the ARAnchor.
let anchorEntity = AnchorEntity(anchor: persistedArAnchor) // use the existing ARAnchor that persisted to construct an Entity
var someModelEntity: Entity = myEntityThatMatchesTheAnchorName // remake the entity that maps to the existing named ARAnchor
anchorEntity.addChild(someModelEntity)
arView.scene.addAnchor(anchorEntity)
It's indirect, but taking advantage of the association between AnchorEntity and ARAnchor was the first solution I could find, given the limitation that only ARAnchors, not Entities, are persisted in the worldMap.

AngularJS - How to add child objects during edit and then PUT

I'm getting started with AngularJS with Web API and EF on the back end. I have an edit form that is populated via a GET request which returns a Boat object. One of Boat's properties is an Images array.
If the user adds images to Boat then I need records to be inserted into the database for each new image when the boat is updated. (But obviously not for images that the boat already had prior to edit.)
The current Web API function is:
public async Task<IHttpActionResult> Put(int id, Boat boat)
It seems I could either:
a) Push any newly added images to $scope.Boat.Images prior to the PUT. Then in the Web API function, loop through the received Boat.Images, check if a database record exists for that image, if no record exists add the image record to the database. This seems a bit uneconomical because I'm looping through every existing image and checking if it actually exists in the db already.
or
b) Send a separate object "newImages" with the PUT. Then I guess the Web API function would be:
public async Task<IHttpActionResult> Put(int id, Boat boat, string[] newImages)
This would have the benefit of not having to check which images already exist vs new ones. ie. Everything in newImages gets added to the database. But, is it weird from an AngularJS point of view to separate the new images from the Boat.Images collection?
Would you do a) or b) or... c) some other way?
I'd go with a) as it fits the RESTful approach better.
Images could be an array of complex objects instead of an array of strings, so each image would have an Id property.
Then on the Web API side, you just add the images whose Id is not set:
var newImages = boat.Images.Where(x => x.Id == null);
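On the AngularJS side, option a) just means pushing the new images onto the boat's Images collection before the PUT, marked so the server can tell them apart. A minimal sketch, assuming a shape like `{ Id, Url }` for image objects (the property names are illustrative, not from the original):

```javascript
// Sketch for option a): newly added images get a null Id; the server
// treats null-Id entries as inserts and leaves existing images alone.
function addImages(boat, urls) {
  for (const url of urls) {
    boat.Images.push({ Id: null, Url: url }); // null Id marks "not yet in DB"
  }
  return boat;
}
```

After calling this, the whole `boat` object is sent with `$http.put('/api/boats/' + boat.Id, boat)` and the Web API filters on `Id == null` as in the answer above.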
