Azure Kinect Green Screen

I'm looking for sample code, or a getting-started guide, for doing a "green screen" effect with the latest Azure Kinect DK.
How should I proceed to build and display a color stream containing ONLY the body area?
Is it possible to avoid using the body tracking stream? It requires a more powerful computer (with an NVIDIA GPU).

We just published a new green screen code sample as part of our open source GitHub repo microsoft/Azure-Kinect-Sensor-SDK.
You can find more information in the green screen example.
If you have any questions about the code, you can open a GitHub issue.

I just received my Azure Kinect device and have not yet tried the things I am interested in, so this reply does not come from direct experience; however, the sample code and the SDKs indicate there may be a viable approach.
You can always try traditional color-recognition algorithms, but if your usage scenario allows it, you can use data from the depth camera to keep only pixels within a depth range, with everything out of range acting as the "green screen". You can then correlate pixels from the depth camera with the RGB image data to pick out the color data that falls within that range. The background does not have to be a real green screen; it just has to be outside the filtered depth range.
This approach only requires the Sensor SDK, not the Body Tracking SDK and its associated GPU requirements.
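As a rough sketch of that idea (plain JavaScript over raw buffers rather than the Sensor SDK's actual C API; it assumes the depth image has already been transformed into the color camera's geometry, e.g. with the SDK's k4a_transformation_depth_image_to_color_camera, and the buffer names are made up):

// depth: Uint16Array of per-pixel depth in millimetres, aligned to the color image
// color: Uint8Array of BGRA pixels from the color stream
// Everything outside [minMm, maxMm] is painted green, leaving only the foreground.
function maskByDepth(color, depth, minMm, maxMm) {
    for (let i = 0; i < depth.length; i++) {
        const d = depth[i];
        if (d === 0 || d < minMm || d > maxMm) { // 0 means no depth reading
            color[4 * i] = 0;       // B
            color[4 * i + 1] = 255; // G
            color[4 * i + 2] = 0;   // R
        }
    }
}

With a range of, say, 500-1500 mm, anything beyond 1.5 m becomes the "green screen" regardless of its real color.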

Related

How to make images resize as per client device size

I have a React app with many image references (<img src=... /> tags and CSS background:url(...)).
These images are hosted on Azure Storage.
To speed up my app's loading time on various devices (desktop and mobile), I need to resize these images before they hit the client, i.e., somewhere on the server.
So far, I can think of the following options:
Pick each image and produce multiple versions of it for various standard device sizes. Then pick up each <img src=... /> tag and, using JS, alter the image name so that the right size of image gets served. This will not work with CSS.
Use Azure CDN to automatically resize images. I was hoping that resizing would happen automatically, since the CDN endpoint receives the user-agent of the device. Does anyone know if this is true?
Serve images through an Azure function, resizing them on the fly (as suggested here)
Can someone suggest other options, or the pros/cons of the above?
Since you're using JavaScript, use the window object. In browsers, the window object reflects the resolution of the browser viewport, and you can set the height and width of your image to window.innerHeight and window.innerWidth. There are multiple other ways to do this, but this is the easiest and needs the fewest lines of code.
More info about the window object here: https://www.w3schools.com/js/js_window.asp
P.S. This is only a solution for desktop; for mobile you can use screen.width and screen.height. Those might not work on every desktop, but they do work on a macOS Big Sur device (I tried it, perhaps because Big Sur has a mobile-optimised interface that can even run iOS apps). The screen object might therefore be the better option, as it is most likely common across all your devices.
More info about the screen object here: https://www.tutorialrepublic.com/faq/how-to-detect-screen-resolution-with-javascript.php
On the off-chance that neither is common across all of your target devices, try writing a small detector that stores the device type in a variable and branches on it, for example:

var deviceType = /iPhone|iPad|iPod/.test(navigator.userAgent) ? 'iOS' : 'Windows'; // naive sniff
var img = document.querySelector('img');
if (deviceType === 'iOS') {
    img.width = screen.width;
    img.height = screen.height;
} else if (deviceType === 'Windows') {
    img.width = window.innerWidth;
    img.height = window.innerHeight;
}
This sketch is just there to show you the flow; you'll need to integrate it your own way.
The best part of this approach is that instead of pre-made copies of the image, it resizes the one image, which saves storage space. It also copes with unexpected display outputs, like a 49" Samsung Odyssey G9 monitor whose resolution is extremely far from anything you might have anticipated and pre-resized. And you don't have to create a separate file of image-resizing code, just the device-type detector (not necessary if the screen object works).
If you have any queries, please reply back.
Good luck!

Drone SDK supporting free-flight control

This is my first time getting into drones.
I am looking at DJI drones, which currently seem the most promising from a documentation and reviews point of view.
Basically, I would like to program a drone (or drones) to fly a certain pattern and take pictures when certain criteria are met. For example, I would like the drone to take off and fly around a small park, stopping to take a picture of each tree it encounters, automatically (auto-piloted / driven by some "AI").
I glanced through the DJI SDK documentation, and so far it SEEMS this is possible (via the FlightControl class), but I'm not sure.
Question:
Can my requirements be met with current drone SDK technologies?
Yes, the current SDK (4.11.1) will do everything you mentioned. You will need to do some location calculations, but that's about it.
The sample will do almost everything you want as-is, with minor changes.
With the DJI Mobile SDK you can use the Mission classes to automatically fly a given set of coordinates (waypoints) and do some actions once you arrive at a waypoint, e.g. take a picture.
However, the SDK has limitations:
It is unable to detect objects in the video stream, so you need to write your own object-detection code.
The way the drone flies to a waypoint is quite limited; e.g., the drone always points the camera in the direction of flight.
When using the DJI Mission classes, changing the route during execution is only possible with Timeline Missions, by adding timeline elements to the list.
As you already assumed in the comment: yes, the Mobile SDK is more advanced than the Windows SDK.

How would I add live video streaming code to the graphite dashboard?

I am running an apache2 Graphite host on an Orange Pi One, having written a service that translates and sends data from sensors on the GPIO to the carbon line receiver. My project is to incorporate all the I/O from the device into a dashboard.
There are loads of graphite dashboards, but I can't find one that has a simple video stream applet/plugin.
I have searched the graphite-web GitHub repo and can easily adapt dashboard.html, but I am not sure whether the entire file is a placeholder, or whether any additions would render properly after all the JavaScript has run and rendered the page. It seems I might need to reverse engineer the JavaScript, which is quite an effort for the simple task I want.
If I can figure out the video stream code for the CSI camera, then I can adapt it to modify the dashboard with all the other data I want to display.
So I am really looking for some guidance on getting started with dashboard code modification.
You can use the text panel to add HTML content to the dashboard.
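For example, if the camera is exposed as an MJPEG stream (e.g. via mjpg-streamer; the host, port, and path below are placeholders), the panel's HTML can be as simple as:

<img src="http://orangepi.local:8080/?action=stream" alt="CSI camera live stream" />

The browser keeps the connection open and renders the stream like an ordinary image, so no extra JavaScript is needed.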

Is it possible with Codename One to take a temporary photo?

Part of the process used in my app involves taking a photo (done with Capture.capturePhoto()). The photo is then resized to a small 200px square and finally sent to a server.
I am able to delete the resized image with FileSystemStorage.delete(); however, the initial photo taken with Capture.capturePhoto() cannot be deleted because the app is sandboxed (as described in this SO question).
This is annoying for the user because these photos pollute their gallery (the photos have no value to the user).
As deleting the initial photo is not possible, I was wondering if I could force the captured photo to be stored in cache so that the OS removes it automatically.
Maybe this question could be a solution for Android, but I would prefer to avoid going native.
Consequently, is it possible with Codename One to take a photo that is only temporary and gets deleted automatically?
Thanks a lot,
Cheers
We try to delete the file automatically, but since the OS takes the photo, some platforms just stick it in the gallery and there isn't much we can do. It's literally a matter of "this works on Android device A and fails on Android device B".
Apps like Snapchat don't use the device camera app; instead they use the low-level camera APIs, which are more complex and flaky. At this time we don't map these APIs in Codename One, so if you need lower-level control you will need to use native interfaces. This is a non-trivial API, though.

Mapping without Google Maps (on a stand-alone server)

I've been asked to create a stand-alone site/app that's not connected to the web (all on a local server).
One part of it is to have a map of a natural reserve with a bunch of links that will show footpaths, different animal habitat areas, visitor centres and such.
So there's a map (static picture) and when you click on it some overlay goes on top of it.
At least that's the way I see it now.
I've looked here: http://www.carto.net/williams/yosemite/ but it just looks mucho ugly.
Getting Maps Premium is not an option, as it's not that cheap. And the reason they don't want to use the free Maps/Earth API is that the internet connection there is still very slow (satellite internet only, and nobody knows when the fiber optic cable will be hooked up).
Looking for some recommendations as to how to proceed. Drawing paths/areas on the picture of the map by hand seems extremely inefficient and time consuming.
I'd need some way to use coordinates to automatically draw areas and lines over the map, and then somehow export that as a graphics file (or SVG) to be layered on top of the original map simply using Ajax.
Would ArcGIS Pro be the way to go, or should I start learning SVG? Do you know some good SVG books/tutorials (related to mapping)? Maybe there's some other way around it altogether...
They do have detailed maps of the area in ArcGIS (what format they are in I don't know yet).
Just looking for some ideas, any help will be appreciated. Thanks in advance.
Do you know GeoServer? It is more or less all-in-one, compatible with different types of datasets, and widely customisable.
Starting from "raw" SVG and writing the whole thing yourself would probably be prohibitively time consuming.
If you have very little data (say, fewer than 50 geometries) and it is fixed, you could also use OpenLayers without any backend server.
For the map itself you could use an OpenLayers.Layer.Image if your (overlay) map consists of a small raster image. For vector data, you can use an OpenLayers.Layer.Text or an OpenLayers.Layer.Vector together with the OpenLayers.Format.KML or OpenLayers.Format.GeoJSON formats.
You can click through the current release examples.
I admit that this is not an easy task for a beginner, but it's fun hacking the maps together.
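As a rough sketch of that setup (using the OpenLayers 2 API these class names come from; the image, extent, size, and KML file are all placeholders):

// A static picture of the reserve as the base layer, plus a KML overlay.
var extent = new OpenLayers.Bounds(0, 0, 2000, 1500);
var map = new OpenLayers.Map('map', { maxExtent: extent });
map.addLayer(new OpenLayers.Layer.Image(
    'Reserve map', 'reserve.png', extent, new OpenLayers.Size(1000, 750)));
map.addLayer(new OpenLayers.Layer.Vector('Footpaths', {
    strategies: [new OpenLayers.Strategy.Fixed()],
    protocol: new OpenLayers.Protocol.HTTP({
        url: 'footpaths.kml',
        format: new OpenLayers.Format.KML()
    })
}));
map.zoomToMaxExtent();

Clicks on the vector features can then trigger the overlays for footpaths, habitat areas, and so on.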
