Can you record a video with expo-camera while running TensorFlow? - tensorflow.js

There is a bug with react-native TensorFlow.js while expo-camera is recording: TensorFlow stops getting new image data to process or render to the screen.
I am using a TensorFlow pose model to track the user's pose. When I turn on recording with expo-camera, TensorFlow no longer receives an updated image tensor, so the screen looks stuck.
I think there is a problem with getting the correct WebGL texture from expo-camera while the camera is recording, but I am new to both TensorFlow and WebGL, so I haven't figured out the issue.
YouTube Video That Shows Bug
This repo gives a minimal reproduction: https://github.com/vpontis/expo-camera-tensor-flow-bug
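For reference, here is a minimal sketch of the kind of setup that hits this, assuming the cameraWithTensors wrapper from @tensorflow/tfjs-react-native and PoseNet (exact prop names can vary by library version, and the handle used to call recordAsync() on the underlying expo-camera is not shown). Once recording starts, the tensor returned from images.next() stops updating:

```tsx
// Minimal sketch of the camera + pose loop (assumed setup, not the repro repo verbatim).
import React from 'react';
import { Camera } from 'expo-camera';
import * as tf from '@tensorflow/tfjs';
import * as posenet from '@tensorflow-models/posenet';
import { cameraWithTensors } from '@tensorflow/tfjs-react-native';

const TensorCamera = cameraWithTensors(Camera);

export function PoseScreen({ net }: { net: posenet.PoseNet }) {
  // Pull tensors from the camera stream and run pose estimation on each frame.
  const handleCameraStream = (images: IterableIterator<tf.Tensor3D>) => {
    const loop = async () => {
      const imageTensor = images.next().value as tf.Tensor3D;
      if (imageTensor) {
        const pose = await net.estimateSinglePose(imageTensor);
        console.log(pose.score); // stops updating once recording starts
        tf.dispose(imageTensor);
      }
      requestAnimationFrame(loop);
    };
    loop();
  };

  return (
    <TensorCamera
      style={{ flex: 1 }}
      type={Camera.Constants.Type.front}
      cameraTextureHeight={1920}
      cameraTextureWidth={1080}
      resizeHeight={200}
      resizeWidth={152}
      resizeDepth={3}
      autorender={true}
      onReady={handleCameraStream}
    />
  );
}
```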

Related

Preloading of images

I am making a little card-matching game with React JS where you look for pairs of cards that are the same. The game fetches image URLs for random animal images from several APIs every time you restart it. The cards start face down, and you can flip them to see their face. The problem I run into is that, depending on which API the image comes from, the game can be bogged down by fetching the actual image when a card is flipped. What I would like is to have the game fetch all the images at the start and then just use the cached images during the game itself, but I am not sure how I would load the images, as they only get fetched through the image src attribute when a card is flipped. What would be the best approach to this, please?
I tried looking for solutions online, but I cannot seem to find a similar issue.
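One common approach, sketched below and not tied to any particular card component: after fetching the URLs, create an Image object for each one so the browser downloads and caches the files before any card is flipped. preloadImages is an illustrative helper name, not from the question.

```tsx
// Sketch: preload image URLs so flipping a card later hits the browser cache.
function preloadImages(urls: string[]): Promise<void[]> {
  return Promise.all(
    urls.map(
      (url) =>
        new Promise<void>((resolve, reject) => {
          const img = new Image();
          img.onload = () => resolve();
          img.onerror = () => reject(new Error(`Failed to load ${url}`));
          img.src = url; // triggers the fetch; the decoded image stays cached
        })
    )
  );
}

// Usage: call this when a new game starts, before rendering the cards.
// await preloadImages(cardImageUrls);
```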

Video streaming including application overlay components

I want to make a react-native app that can stream video from the mobile app to a connected browser user. On top of that, I want to overlay some application components so connected users see the video stream as well as some of the application UI.
For example, take the image below as a reference: video is streaming from a car showroom, and a few app components, such as an app menu and a car image, are shown as an overlay on the video.
I want to achieve the same functionality and am using the VideoSDK platform for the video streaming service.
So far I have created a react-native app and am able to stream video from the camera to the connected browser user.
Next, I want to add my app menu on top of the video as in the image, so I am thinking screen share combined with video sharing is the way to go.
The image above is the actual implementation using VideoSDK in the browser, but as you can see, the screen-share window opens in a totally different context, which is not the expected behaviour.
Can someone suggest how I can achieve video streaming with app components overlaid on top?
I have reviewed your requirement and I am glad to inform you that we have an application with the same requirements. For further discussion and demos, can we connect over mail, i.e. karan10010#gmail.com?
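For the layering part only, here is a rough sketch of overlaying app UI on top of the stream view with absolute positioning. StreamView is a placeholder for whichever VideoSDK component renders the stream, and this alone does not make the overlay visible to remote browser viewers; that still needs screen share or some form of compositing.

```tsx
// Sketch of the layering only: the stream view fills the screen and the app UI
// sits on top via absolute positioning. StreamView is a placeholder component.
import React from 'react';
import { StyleSheet, Text, View } from 'react-native';

export function StreamWithOverlay({ StreamView }: { StreamView: React.ComponentType }) {
  return (
    <View style={styles.container}>
      {/* Assumed to fill its parent with the video stream */}
      <StreamView />
      {/* App UI layered above the video; box-none lets touches pass through gaps */}
      <View style={styles.overlay} pointerEvents="box-none">
        <Text style={styles.menuItem}>App menu</Text>
      </View>
    </View>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1 },
  overlay: { ...StyleSheet.absoluteFillObject, justifyContent: 'flex-end', padding: 16 },
  menuItem: { color: 'white', fontSize: 18 },
});
```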

Kinesis :: bounding boxes on aws kinesis live video stream and play the video from front-end react.js app

I am working on an application that will detect faces in a live stream. I have a front-end application in react.js, and the backend is serverless AWS, which calls the Rekognition API to detect faces and return the coordinates of the bounding boxes of the detected faces. My video source is an AWS Kinesis video stream.
In the UI, ReactPlayer is used to play the video from the HLS URL. Now my question is: how can I draw bounding boxes on the detected faces/masks on the real-time stream and show the video in the UI?
My approach:
Return bounding boxes from a Lambda function and draw them using an HTML canvas, but due to latency this won't give the feel of real-time detection.
So how can I show the Kinesis real-time video stream with bounding boxes? Is there any other approach to follow? I have gone through the documentation, but it only covers static images.
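Here is one possible sketch of the canvas-overlay idea, assuming the boxes come back with normalized 0–1 coordinates (as Rekognition's BoundingBox Left/Top/Width/Height does) and that a boxesForTime lookup (an illustrative name) maps playback time to the boxes returned by the Lambda:

```tsx
// Sketch: a <canvas> absolutely positioned over ReactPlayer, redrawn every frame
// with the boxes nearest the current playback time.
import React, { useEffect, useRef } from 'react';
import ReactPlayer from 'react-player';

type Box = { left: number; top: number; width: number; height: number };

export function StreamWithBoxes({ hlsUrl, boxesForTime }: {
  hlsUrl: string;
  boxesForTime: (t: number) => Box[]; // illustrative lookup into the Lambda results
}) {
  const playerRef = useRef<ReactPlayer>(null);
  const canvasRef = useRef<HTMLCanvasElement>(null);

  useEffect(() => {
    let frame: number;
    const draw = () => {
      const canvas = canvasRef.current;
      const player = playerRef.current;
      if (canvas && player) {
        const ctx = canvas.getContext('2d')!;
        ctx.clearRect(0, 0, canvas.width, canvas.height);
        ctx.strokeStyle = 'lime';
        ctx.lineWidth = 2;
        const t = player.getCurrentTime() || 0;
        for (const b of boxesForTime(t)) {
          ctx.strokeRect(b.left * canvas.width, b.top * canvas.height,
                         b.width * canvas.width, b.height * canvas.height);
        }
      }
      frame = requestAnimationFrame(draw);
    };
    frame = requestAnimationFrame(draw);
    return () => cancelAnimationFrame(frame);
  }, [boxesForTime]);

  return (
    <div style={{ position: 'relative', width: 640, height: 360 }}>
      <ReactPlayer ref={playerRef} url={hlsUrl} playing width="100%" height="100%" />
      <canvas ref={canvasRef} width={640} height={360}
              style={{ position: 'absolute', top: 0, left: 0, pointerEvents: 'none' }} />
    </div>
  );
}
```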

How to annotate an image in react native?

I am new to react-native. I need to create a volume estimation app from food images. For volume estimation, I need a segmentation JSON file in the format of labelme annotations.
But when I use labelme, I have to manually draw polygons around the image.
Can I do something similar in react-native, so that when I upload an image I can draw a polygon on it and retrieve its coordinates? Then I could convert the coordinates into a labelme annotation JSON file.
Any suggestions?
Thank You
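A rough sketch of one way this could work with react-native-svg: tap to add polygon vertices over the image, then export them in a labelme-like JSON shape. The field names below approximate labelme's format and should be verified against a file exported by labelme itself; PolygonAnnotator is an illustrative name.

```tsx
// Sketch: tap on the image to add polygon vertices, then export a labelme-like JSON object.
import React, { useState } from 'react';
import { Button, ImageBackground, Pressable, View } from 'react-native';
import Svg, { Circle, Polygon } from 'react-native-svg';

type Point = [number, number];

export function PolygonAnnotator({ uri, width, height }: {
  uri: string; width: number; height: number;
}) {
  const [points, setPoints] = useState<Point[]>([]);

  // Approximation of labelme's polygon annotation format.
  const exportAnnotation = () => ({
    shapes: [{ label: 'food', points, shape_type: 'polygon' }],
    imagePath: uri,
    imageWidth: width,
    imageHeight: height,
  });

  return (
    <View>
      <Pressable
        onPress={(e) =>
          setPoints([...points, [e.nativeEvent.locationX, e.nativeEvent.locationY]])
        }
      >
        <ImageBackground source={{ uri }} style={{ width, height }}>
          <Svg width={width} height={height} pointerEvents="none">
            {points.length > 2 && (
              <Polygon
                points={points.map(([x, y]) => `${x},${y}`).join(' ')}
                fill="rgba(0,255,0,0.2)"
                stroke="lime"
              />
            )}
            {points.map(([x, y], i) => (
              <Circle key={i} cx={x} cy={y} r={4} fill="lime" />
            ))}
          </Svg>
        </ImageBackground>
      </Pressable>
      <Button title="Export labelme JSON"
              onPress={() => console.log(JSON.stringify(exportAnnotation()))} />
    </View>
  );
}
```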

React-Native Audio Waveform editor

I am planning to build an audio editor app with react-native. The functionality includes a textbox where the user can provide the URL of any audio file. Once the file is loaded in the UI, it will be played with a waveform UI. The user can select the start and end points of the audio by moving sliders on the waveform, and once they are fixed, the app will get the start time and end time of the selected waveform, which will then be sent to the backend to cut the audio (probably using the FFmpeg library).
I can't seem to find any react-native library that allows the user to interact with the waveform.
The UI can be somewhat similar to:
I don't believe that there is one that allows users to interact with the waveform out of the box.
You could use react-native-audiowaveform to show the waveform, and then capture the user's touches.
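A rough sketch of the touch-capture part, assuming the audio duration and the waveform view width are known; the waveform component itself (e.g. react-native-audiowaveform's view) is passed in as a child, and TrimSelector/useDragResponder are illustrative names:

```tsx
// Sketch only: two draggable handles over the waveform, mapped to start/end times.
// Assumes the waveform view starts at screen x = 0; otherwise subtract its pageX
// from gesture.moveX.
import React, { useRef, useState } from 'react';
import { PanResponder, Text, View } from 'react-native';

function useDragResponder(setX: (x: number) => void, width: number) {
  return useRef(
    PanResponder.create({
      onStartShouldSetPanResponder: () => true,
      onPanResponderMove: (_evt, gesture) =>
        setX(Math.max(0, Math.min(width, gesture.moveX))),
    })
  ).current;
}

export function TrimSelector({ durationSec, width, children }: {
  durationSec: number;        // total audio length, known once the file is loaded
  width: number;              // pixel width of the waveform view
  children: React.ReactNode;  // the waveform component rendered underneath
}) {
  const [startX, setStartX] = useState(0);
  const [endX, setEndX] = useState(width);
  const startResponder = useDragResponder(setStartX, width);
  const endResponder = useDragResponder(setEndX, width);

  const toTime = (x: number) => (x / width) * durationSec;

  return (
    <View style={{ width, height: 140 }}>
      {children}
      {/* Draggable start/end handles */}
      <View {...startResponder.panHandlers}
            style={{ position: 'absolute', left: startX - 10, top: 0, width: 20,
                     height: 120, backgroundColor: 'rgba(255,0,0,0.4)' }} />
      <View {...endResponder.panHandlers}
            style={{ position: 'absolute', left: endX - 10, top: 0, width: 20,
                     height: 120, backgroundColor: 'rgba(255,0,0,0.4)' }} />
      <Text>{`start: ${toTime(startX).toFixed(2)}s  end: ${toTime(endX).toFixed(2)}s`}</Text>
    </View>
  );
}
```

The start and end times captured this way are what would then be sent to the backend for the FFmpeg cut.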
