I have a fully working computer vision (CV) project written in Python, running on a Raspberry Pi. I would like to turn it into a web app (I've just started using React).
My idea is to have a video-stream source that connects to my Wi-Fi. This video stream would then be accessed through a web app (i.e. by typing the hardware's IP address). Once the user logs into the web app, there'll be a live feed of the camera and buttons/functions that allow the user to perform different CV tasks on the live stream (e.g. object detection, colour detection, etc.). How should I go about creating this? In particular:
On the hardware side, I was thinking of just having a Raspberry Pi with a camera, streaming the live feed via Wi-Fi. A couple of questions:
Where should the computer vision algorithms sit? I assume on the web-app side? Currently everything is done on the Raspberry Pi, but I feel I've reached its limit in terms of computational power.
Would the Raspberry Pi be completely dumb, i.e. just stream the camera feed?
What's the best way to stream the camera feed to the React app? WebSockets?
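To make the "dumb Pi" idea concrete, a minimal streamer might look something like the sketch below: the Pi just serves the raw feed as MJPEG over HTTP, and the heavy CV runs elsewhere. This is only a sketch, assuming Flask and OpenCV are installed on the Pi and the camera is available at index 0:

```python
# stream_server.py - minimal "dumb Pi" MJPEG streamer (sketch)
# Assumes: Flask and OpenCV installed (pip install flask opencv-python),
# and the Pi camera exposed as a V4L2 device at index 0.
import cv2
from flask import Flask, Response

app = Flask(__name__)
camera = cv2.VideoCapture(0)

def generate_frames():
    # Yield JPEG-encoded frames in multipart format, one per camera frame
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        _, buf = cv2.imencode(".jpg", frame)
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + buf.tobytes() + b"\r\n")

@app.route("/video")
def video():
    # multipart/x-mixed-replace lets the browser treat /video as a live image
    return Response(generate_frames(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

The React side could then display the feed with a plain `<img src="http://<pi-ip>:8000/video" />`; WebSockets become more attractive if per-frame metadata (e.g. detection results) needs to travel alongside the video.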
Use case: I have two gaming PCs running a VR game in two separate rooms, and a laptop in the living room connected to a projector. The VR headsets are wireless and will be in the same room as the projector, with two people playing multiplayer together and the spectator screens of each gaming PC displayed side by side on the projector for others to watch the action. The game is a rhythm game (Beat Saber), so low latency is extremely important. On the other hand, I can sacrifice video quality, because each desktop will only occupy a 960x540 portion of the 1080p projector screen. Also, audio is already taken care of and doesn't need to be transmitted to the laptop.
I have written a program with WPF and C# which displays two webpages side by side with a black bar at the top and bottom. The idea was to log into a low-latency screen-sharing webpage (for example parsec.app) and connect one of the PCs on each side. This actually works, but the problem is that after connecting the second computer, both streams become very laggy and drop to a low frame rate when viewing content with a lot of movement. I really need both video streams to be smooth and low latency (<150 ms), so using a third-party service for sharing/streaming the screens seems to be out of the question. I have to find a way to send the desktop streams directly over the LAN and then display them side by side with my program. I would like my own program to display the streams so that I have authority over the layout and can add thematic pictures to the unused space on the screen, and maybe even make it customizable with up to four streams simultaneously in the future.
The laptop and one of the gaming PCs are only connected to the LAN via Wi-Fi, and the other gaming PC is connected via Ethernet. I have done some research, and FFmpeg or NDI seem to be the lowest-latency ways to send video over a network, but I have no idea how to use them and don't have any experience programming network applications. I tried streaming my screen from one PC to the other with VLC using UDP but couldn't even get that working.
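For illustration, a low-latency FFmpeg send along these lines might look like the following sketch (wrapped in Python here; the laptop address and port are placeholders, `gdigrab` is FFmpeg's Windows desktop grabber, and FFmpeg is assumed to be on PATH on each gaming PC):

```python
# sender.py - launch FFmpeg as a low-latency desktop streamer (sketch)
# Assumes: ffmpeg on PATH; 192.168.1.50:5001 is a placeholder for the
# laptop's LAN address, with one distinct port per gaming PC.
import subprocess

FFMPEG_CMD = [
    "ffmpeg",
    "-f", "gdigrab",            # capture the Windows desktop
    "-framerate", "60",
    "-i", "desktop",
    "-vf", "scale=960:540",     # match the 960x540 target region
    "-c:v", "libx264",
    "-preset", "ultrafast",     # trade compression ratio for encode speed
    "-tune", "zerolatency",     # disable look-ahead buffering
    "-f", "mpegts",
    "udp://192.168.1.50:5001",
]

subprocess.run(FFMPEG_CMD)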
Here's a link to my Visual Studio project so far:
https://drive.google.com/file/d/1W7khWBvKZ1zMvreH9nyfAHPVQ6BDKN5Z/view?usp=share_link
Here's a video showing my program in action:
https://drive.google.com/file/d/1db3EHHV23mvdky36fcox9--crlrbZXK4/view?usp=share_link
Does anyone have any insights or ideas on how to solve my problem? Of course I'm willing to do some studying too if someone can point me to the resources where I can learn the required skills to make this work.
I would like to know how I could control the movement of a physical robot using a web interface. For example, I have created a web interface with four movement buttons (front, back, left, right) but do not know how to connect that interface to the physical robot and control its movements. I have experience in controlling a simulated Turtlebot (in Gazebo) with the interface locally on my laptop using ROSBRIDGE and SimpleHTTPServer. Would I have to use these as well to control a physical robot?
I'm running ROS2 Crystal, Ubuntu 18.04. Thank you!
Yes, the interface to control a physical robot would be the same as in simulation.
You will need to publish control commands to the /cmd_vel topic; a node on the robot then subscribes to that topic and converts the velocity commands into actual motor commands.
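To make the message shape concrete, a minimal rclpy publisher might look like the sketch below (written against current rclpy; on an older distro like Crystal the create_publisher signature may differ slightly). With rosbridge, your web buttons would publish these same Twist messages directly from the browser:

```python
# teleop_pub.py - minimal rclpy sketch publishing velocity commands to /cmd_vel
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class Teleop(Node):
    def __init__(self):
        super().__init__('web_teleop')
        self.pub = self.create_publisher(Twist, 'cmd_vel', 10)

    def move(self, linear_x: float, angular_z: float):
        # e.g. "front" button -> move(0.2, 0.0), "left" -> move(0.0, 0.5)
        msg = Twist()
        msg.linear.x = linear_x
        msg.angular.z = angular_z
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = Teleop()
    node.move(0.2, 0.0)  # drive forward once, as a demonstration
    rclpy.spin_once(node, timeout_sec=0.1)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```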
You can also look into using Robot Web Tools for the web interface.
Additionally, if you could provide more details about your setup, I could give more specific advice.
You can also use existing tools that let you quickly connect to your robot without having to build the communication infrastructure yourself. An example is the Freedom Robotics platform, which includes a variety of teleop tools for ROS.
You can find more information here (a post from their Head of Robotics) or try it out for free.
I used this for a few of my personal projects and it saved me from all the hassle of creating the web interface and the API communication with ROS.
Although I was looking carefully to select the proper drive when creating a new IoT device, I erroneously selected the micro SD card with my source code on it. Stupid me!
However, I still have the app on another running device. Is there a way to copy my app package from the Raspberry Pi back to my dev box?
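A minimal sketch of one way to pull it back over the network, assuming SSH is enabled on the Pi (the user, host, and paths below are placeholders, not the actual locations):

```python
# fetch_app.py - pull the app package back from the running Pi (sketch)
# Assumes: SSH enabled on the Pi; /home/pi/myapp is a placeholder for
# wherever the package actually lives on the device.
import subprocess

subprocess.run([
    "scp", "-r",
    "pi@192.168.1.42:/home/pi/myapp",  # placeholder user/host/path
    "./recovered-myapp",               # destination on the dev box
], check=True)
```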
I am working on creating a touch-pad device (custom hardware, but similar to an Android device) that acts as a touchscreen drawing pad, similar to the Wacom Bamboo drawing pads. However, the key feature of the device is that instead of connecting to the computer with wires or via Bluetooth, it connects to the local Wi-Fi network and searches for devices with a port open (currently 5000, for testing purposes). Currently, I have a client written in C that, when launched, opens a DatagramSocket on port 5000 and waits for a custom UDP packet containing normalized X, Y, and pressure values. Then, for testing purposes, I am feeding the normalized X and Y into SendInput. SendInput "works", but injecting input into the computer's current mouse is not what I want. Instead, I want the device to be considered a separate input device, so that programs like GIMP can detect it and assign custom functions based on the data (e.g. have GIMP use the pressure data).
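Conceptually, the receive/decode step of the client does something like the following (a Python sketch rather than the actual C client; the exact packet layout of three little-endian 32-bit floats is my simplification):

```python
# pad_listener.py - sketch of the UDP side: receive and decode pad packets
# Assumes: each datagram carries three little-endian 32-bit floats
# (normalized x, normalized y, pressure) - the real packet layout may differ.
import socket
import struct

PORT = 5000  # the port currently used for testing

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", PORT))

while True:
    data, addr = sock.recvfrom(64)
    if len(data) < 12:
        continue  # ignore malformed packets
    x, y, pressure = struct.unpack("<fff", data[:12])
    # A virtual HID driver (rather than SendInput) would consume these values
    print(f"from {addr}: x={x:.3f} y={y:.3f} p={pressure:.3f}")
```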
The problem is I don't know where to start to create a driver that does this. I have been looking extensively at the WinDDK, thinking that might be the key. The problem with the WinDDK is I cannot find any documentation on creating a HID driver using data that does not come from PS/2 or USB. This tutorial got me thinking about using IOCTLs, but I am not really sure how to make the data be treated as input.
As a side note, in the title I said TCP/UDP because I am willing, and for security reasons considering, to change from a UDP connection to TCP.
If someone can push me in the right direction or link me to some related documentation and samples, that would be awesome because right now I am lost. Thank you.
A customer (a photographer) asked me if it was possible to write some kind of software for cell phones, so he could physically connect the phone to his professional digital camera (Canon or Nikon) and transfer the pictures (or a subset of them) to the phone.
I am trying not to put constraints on the cellphone platform (Symbian, Windows Mobile, etc.) from the beginning, so I am leaving those constraints out on purpose.
Can anybody give me some hints?
You need a connection between the camera and the cellphone:
Some Windows Mobile devices have USB host functionality, so you can connect either a card reader or the camera itself via a USB cable and read the files from the device. I have never heard of a Symbian device that supports USB host, but there might be some.
If the camera supports either Bluetooth or IR, you could use those protocols to transfer the files, as most mobile phones support them.
Once you have a connection (and the protocol support on your platform), it is easy to write an application to transfer the files from the device to your cellphone. You can write this application in any supported language (Java for J2ME, Python for Symbian, .NET for Windows Mobile).
My digital camera saves photos to a memory card. I can simply take the memory card out of the camera and insert it into my Windows Mobile phone and view the photos on the phone.