Detecting a mobile device's flat movement in 2D space - is it possible?

I program mobile apps using Adobe AIR, but I guess this question applies to mobile development in general.
I would like to build something like a 2D drawing app driven by the movement of the mobile device itself in space. For instance, holding the phone flat, horizontally, on my palm and "drawing" a square in the air would draw a square on the screen. This would require the user to calibrate a start point and also decide whether the drawing plane is horizontal or vertical.
Could this be done with the accelerometer, gyroscope, or a combination of the two?
If not (since steady movement with no acceleration isn't detected, right?), can this be achieved by keeping the camera always open and detecting general movement in a 2D plane?
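For reference, recovering position from accelerometer data means integrating the readings twice, so even a tiny uncorrected bias or noise floor grows into a large position error within seconds. Here is a rough, illustrative sketch of that dead-reckoning step in plain C, with made-up numbers standing in for real sensor samples:

```c
/* Rough illustration of accelerometer dead reckoning along one axis.
 * Position is obtained by integrating acceleration twice; the constant
 * "bias" stands in for uncorrected sensor offset/noise and shows how
 * quickly the position estimate drifts even when the device is not moving.
 * The samples are made up; a real app would read the platform's
 * accelerometer API instead.
 */
#include <stdio.h>

int main(void) {
    const double dt = 0.01;    /* 100 Hz sample rate */
    const double bias = 0.02;  /* 0.02 m/s^2 of uncorrected sensor bias */
    double velocity = 0.0, position = 0.0;

    for (int i = 0; i < 500; ++i) {   /* 5 seconds of "stationary" data */
        double accel = bias;          /* true acceleration is zero */
        velocity += accel * dt;       /* first integration  -> velocity */
        position += velocity * dt;    /* second integration -> position */
    }
    /* After 5 s of this tiny bias the estimate is already ~0.25 m off. */
    printf("drift after 5 s: %.3f m\n", position);
    return 0;
}
```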
Thanks!

Related

Program to display two low latency desktop streams side by side over LAN

Use case: I have two gaming PCs running a VR game in two separate rooms and a laptop in the living room connected to a projector. The VR headsets are wireless and will be in the same room as the projector with two people playing multiplayer together and the spectator screens of each gaming PC displayed side by side on the projector for others to watch the action. The game is a rhythm game (Beat Saber) so low latency is extremely important. On the other hand I can sacrifice on video quality because each desktop will only display on a 960x540 portion of the 1080p projector screen. Also audio is already taken care of and doesn't need to be transmitted to the laptop.
I have written a program with WPF and C# which displays two webpages side by side with a black bar at the top and bottom. The idea was to log into a low-latency screen-sharing webpage (for example parsec.app) and connect one of the PCs on each side. This actually works, but the problem is that after connecting the second computer, both streams become very laggy and drop to a low frame rate when viewing content with a lot of movement. I really need both video streams to be smooth and low latency (<150 ms), so using a third-party service for sharing/streaming the screens seems to be out of the question. I have to find a way to send the desktop streams directly over the LAN and then display them side by side with my program. I would like my own program to display the streams so that I have authority over the layout and can add thematic pictures to the unused space on the screen, and maybe even make it customizable with up to four streams simultaneously in the future.
The laptop and one of the gaming PCs are only connected to the LAN via Wi-Fi; the other gaming PC is connected via Ethernet. I have done some research, and FFmpeg or NDI seem to be the lowest-latency ways to send video over a network, only I have no idea how to use them and don't have any experience programming network applications. I tried streaming my screen from one PC to the other with VLC using UDP but couldn't even get that working.
Here's a link to my visual studio project so far:
https://drive.google.com/file/d/1W7khWBvKZ1zMvreH9nyfAHPVQ6BDKN5Z/view?usp=share_link
Here's a video showing my program in action:
https://drive.google.com/file/d/1db3EHHV23mvdky36fcox9--crlrbZXK4/view?usp=share_link
Does anyone have any insights or ideas on how to solve my problem? Of course I'm willing to do some studying too if someone can point me to the resources where I can learn the required skills to make this work.

Does libdrm talk to kernel DRM/graphics card via ioctl()?

This may be a silly question, as I do not know much about this topic at all... It seems that user applications can talk directly to the GPU to render an image, for example using OpenGL, through Mesa and libdrm, where libdrm is a wrapper around various ioctl() calls, as illustrated in this graph. Does that mean that for every new frame of a 3D game, the game application needs to call ioctl() once (or maybe even twice if KMS needs to be reached)? That sounds like a lot of user/kernel space boundary crossings (thinking about a 120 fps game).
libdrm is a user-space wrapper for fine-grained access to the underlying KMS driver features, such as mode setting and checking whether the plane being used is an overlay plane or a primary plane. libdrm implementations generally differ across CPU/GPU/OS combinations, as the hardware drivers running in the kernel tend to support different sets of functionality beyond the standard ones. The standard way of working with libdrm is to open a DRM device node under /dev/ and perform libdrm function calls using the fd returned from open().
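To make that concrete, here is a minimal sketch of that open-the-node-then-use-the-fd pattern, assuming a typical Linux setup where the device node is /dev/dri/card0 and the libdrm development headers are installed; each drmMode* call is a thin wrapper over one or more ioctl()s:

```c
/* Minimal libdrm sketch: open the DRM device node and list its resources.
 * Build with: gcc drm_info.c $(pkg-config --cflags --libs libdrm)
 * Assumes /dev/dri/card0 exists and the process may open it.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void) {
    int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
    if (fd < 0) {
        perror("open /dev/dri/card0");
        return 1;
    }

    /* Each drmMode* call below is a thin wrapper around one or more ioctl()s. */
    drmModeRes *res = drmModeGetResources(fd);
    if (!res) {
        fprintf(stderr, "drmModeGetResources failed (not a KMS-capable node?)\n");
        close(fd);
        return 1;
    }

    printf("connectors: %d, CRTCs: %d, framebuffers: %d\n",
           res->count_connectors, res->count_crtcs, res->count_fbs);

    drmModeFreeResources(res);
    close(fd);
    return 0;
}
```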
More often than not, the display compositor software for a particular OS (X11, Wayland, a hardware composer) will need to be in control of the DRM device, which means unprivileged applications have no way of becoming the DRM master. Most of the libdrm mode-setting functionality does not work if the application trying to use it is not the DRM master. The recommended practice, instead of using libdrm directly, is to use a standard graphics API such as OpenGL or Vulkan to prepare and render frames in your application.
The number of ioctls required to interact with the kernel DRM module is most likely not the biggest bottleneck you will face when trying to render high-FPS applications. The preferred way to run a high-FPS application while cooperating with the display compositor of the target system is to:
- Use a double- or triple-buffered rendering setup, where the next frame is rendered into an off-screen buffer while the current frame is still being displayed (see the sketch after this list).
- Take advantage of hardware acceleration wherever possible, e.g. for scaling/resizing, image format conversions, and color-space conversions.
- Pre-compute and reuse shader elements.
- Reuse texture elements as much as possible instead of computing many textures for every frame rendered.
- Use vector/SIMD instructions (SSE2/3/4, AVX, NEON) wherever possible to take advantage of modern CPU pipelines.
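As a bare-bones illustration of the double-buffering point above, here is a self-contained sketch in plain C; present() is only a stub standing in for whatever the real presentation call would be on the target stack (a page flip, a swap-buffers call, etc.):

```c
/* Double-buffering sketch: render the next frame into the back buffer while
 * the front buffer is on screen, then swap. present() is only a stub here;
 * on a real system it would be drmModePageFlip(), eglSwapBuffers(), or the
 * equivalent call of whatever presentation API is in use.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define WIDTH  640
#define HEIGHT 480

static uint32_t buffers[2][WIDTH * HEIGHT]; /* two full frames, XRGB8888 */

static void render_frame(uint32_t *back, int frame) {
    /* Placeholder "rendering": fill the whole frame with one value. */
    memset(back, frame & 0xff, sizeof(uint32_t) * WIDTH * HEIGHT);
}

static void present(const uint32_t *front, int frame) {
    /* Stub: a real implementation would hand `front` to the display and
     * block on (or be called back at) the next vertical blank. */
    printf("presented frame %d (first pixel 0x%08x)\n", frame, (unsigned)front[0]);
}

int main(void) {
    int front = 0;
    for (int frame = 0; frame < 3; ++frame) {
        int back = 1 - front;               /* the buffer not on screen */
        render_frame(buffers[back], frame); /* draw off-screen */
        front = back;                       /* swap roles */
        present(buffers[front], frame);     /* show the finished frame */
    }
    return 0;
}
```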

How to locate the position of the Wiimote in space using infrared data from the sensor bar

I am developing a video game on an FPGA board running Linux. I want to use the Wiimote to control the game. I have a sensor bar with multiple infrared sources at both ends, and a Wiimote. I want to track the position of the Wiimote in space from the positions of the infrared sources read in from its IR camera. I have searched for positioning algorithms but couldn't find anything detailed. I am also wondering whether there happens to be a ready-to-use library with the positioning algorithm already implemented, so that I don't need to write it myself.
Currently I am driving the Wiimote with the libwiimote library. I can get at most four positions of IR sources from the IR camera. How can I compute the position of the Wiimote from this data? Thanks!
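For reference, the basic geometry (the same trick used in Johnny Chung Lee's Wiimote head-tracking demos) can be sketched in a few lines of C. The constants below are assumptions, roughly a 1024x768 IR camera coordinate space, a horizontal field of view of about 40 degrees, and about 20 cm between the sensor bar's LED clusters, so treat the numbers as illustrative rather than authoritative:

```c
/* Estimate distance from the sensor bar and horizontal offset of the Wiimote
 * from the two IR dot positions reported by its IR camera.
 * Assumed constants (illustrative only):
 *   - camera coordinate space 1024x768
 *   - horizontal field of view of roughly 40 degrees
 *   - roughly 0.20 m between the sensor bar's two LED clusters
 * Build with: gcc wiimote_pos.c -lm
 */
#include <math.h>
#include <stdio.h>

#define CAM_WIDTH_PX  1024.0
#define CAM_HFOV_RAD  (40.0 * 3.14159265358979323846 / 180.0) /* assumed FOV */
#define BAR_WIDTH_M   0.20                                    /* assumed LED spacing */

/* dot1_x, dot2_x: x coordinates (in camera pixels) of the two IR dots. */
static double wiimote_distance(double dot1_x, double dot2_x) {
    double sep_px = fabs(dot1_x - dot2_x);
    /* angle the sensor bar subtends as seen from the Wiimote */
    double sep_angle = (sep_px / CAM_WIDTH_PX) * CAM_HFOV_RAD;
    /* pinhole model: tan(angle / 2) = (BAR_WIDTH / 2) / distance */
    return (BAR_WIDTH_M / 2.0) / tan(sep_angle / 2.0);
}

/* Horizontal offset of the Wiimote relative to the bar's midpoint. */
static double wiimote_x_offset(double dot1_x, double dot2_x, double distance) {
    double mid_px = (dot1_x + dot2_x) / 2.0;
    double angle = ((mid_px - CAM_WIDTH_PX / 2.0) / CAM_WIDTH_PX) * CAM_HFOV_RAD;
    return distance * tan(angle);
}

int main(void) {
    /* Example dot positions such as libwiimote might report (made up). */
    double d1 = 400.0, d2 = 560.0;
    double dist = wiimote_distance(d1, d2);
    printf("distance ~ %.2f m, horizontal offset ~ %.2f m\n",
           dist, wiimote_x_offset(d1, d2, dist));
    return 0;
}
```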

Enable/Disable Wi-Fi communication on a mobile device when entering or exiting a fixed location like an industrial space

I have a client with an industrial space. In this space they have automated machines that move up/down, sideways, and forward/backward, and can raise completely by swinging the machine's bottom upward. To control these devices they have a large system box with relays that basically tell each switch when to activate. This control box is in the center of the industrial space. What they want is to store all the logic and system files in the "control center" box (located in the middle of the space) and then have the ability to operate individual machines, or several machines either all together or in sync, while personally moving around the space. After talking with them, they are keen on the idea of us developing a Wi-Fi based interface in the control center and then using a mobile device (phone or tablet) to communicate with it. The Wi-Fi and mobile app concept is easy enough to write; the issue is that they have two concerns: personal safety and security of the network.
For personal safety, they need the mobile device to be unable to connect to or operate the equipment if the user is NOT in the same room or space. Meaning, an operator cannot push a button on the tablet, then walk away or out of sight, and fail to notice that a machine is crushing an employee or that an employee is playing around and dangling from the equipment. They have to actually be within eyesight in order to watch for safety.
For security, they want to be able to connect to the control center through Wi-Fi and operate the system. They do not want other people with smart devices or laptops to be able to access it, hack it, see the addresses, and so on.
I was originally thinking of using GPS and establishing a geo-polygon (geofence), using latitudes and longitudes, to map out the perimeter of the industrial space. Then, when the user enters or exits the space, the GPS fix would trigger a push notification (like a Foursquare check-in) and enable/disable the Wi-Fi connection, letting the user know that Wi-Fi is on or off and that they can or cannot access the control center to operate the machines.
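A geofence check of that kind boils down to a point-in-polygon test on latitude/longitude pairs. Here is a small ray-casting sketch in C; the four-corner polygon and the device fix are made up, standing in for the real site perimeter and whatever the platform's location API reports:

```c
/* Point-in-polygon (ray casting) check for a latitude/longitude geofence.
 * The polygon below is a made-up rectangle standing in for the real
 * perimeter of the industrial space; a real app would get the current
 * fix from the platform's location API and toggle Wi-Fi accordingly.
 */
#include <stdbool.h>
#include <stdio.h>

typedef struct { double lat, lon; } GeoPoint;

/* Returns true if point p lies inside the polygon (ray-casting rule). */
static bool inside_geofence(GeoPoint p, const GeoPoint *poly, int n) {
    bool inside = false;
    for (int i = 0, j = n - 1; i < n; j = i++) {
        bool crosses = (poly[i].lat > p.lat) != (poly[j].lat > p.lat);
        if (crosses) {
            /* longitude where the edge crosses the device's latitude */
            double lon_at_lat = poly[j].lon + (p.lat - poly[j].lat) *
                                (poly[i].lon - poly[j].lon) /
                                (poly[i].lat - poly[j].lat);
            if (p.lon < lon_at_lat)
                inside = !inside;
        }
    }
    return inside;
}

int main(void) {
    /* Hypothetical warehouse perimeter (four corners). */
    const GeoPoint fence[] = {
        {40.7000, -74.0100}, {40.7000, -74.0080},
        {40.7015, -74.0080}, {40.7015, -74.0100},
    };
    GeoPoint device = {40.7007, -74.0090};  /* current GPS fix (made up) */

    if (inside_geofence(device, fence, 4))
        printf("inside the fence: enable the control-center Wi-Fi UI\n");
    else
        printf("outside the fence: disable it\n");
    return 0;
}
```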
The problem I found is that GPS can be unreliable, especially indoors, in cityscapes, or when there is a lot of frequency static or interference.
So, to finish my long story, I was wondering if anyone had any plausible ideas as to how I can toggle Wi-Fi access on or off based on the user's location as the user moves in or out of the fixed-location space (i.e. the warehouse)?
GPS does not provide 18 inches of accuracy.
Indoors, GPS probably does not work at all, or with errors of 30m or more.
Outdoors, 3-6 m is normal; in cities, errors of up to 30 m happen every day too (e.g. on a 20-minute drive home, passing houses about 18 m high on either side of the road).
Wi-Fi also works over larger distances, and you cannot determine the distance to the device from it.
What you probably need is something else: NFC (Near Field Communication).
Up to and including the current iPhone 5, NFC is not supported; maybe the next generation, or some Android phones, will have it.
At least you now know what does not work.
I have heard about wallpaper made to block Wi-Fi signals. Maybe you could line the rooms with this wallpaper and set up a separate network per room, so that only a person inside a room can control that room; then connect each room's Wi-Fi access point by cable to a central network-control PC, and through it you could also control that room remotely.

Pulling multiple live video streams into WPF

I'd like to create an app that pulls multiple live video feeds, supplied either by coax, HDMI, or some other standard, into WPF for manipulation (i.e. applying a few transforms or pixel shaders), which is then output to a monitor. What should I look at to get started with this app, and is there any hardware that would make things easier?
If you are pulling in standard broadcast via coax or over the air, a $100 ATSC HD TV tuner will do. I don't have any experience with HD capture cards (I think they run for about $1000), or more specifically, cards that take in a raw HD stream.
When you install a capture device (TV tuner, webcam, capture card) in Windows, it creates a DirectShow source filter wrapper for it. The kind of hardware you are targeting determines how you create the DirectShow graph. I have no reason to expect HD capture cards to behave differently from any other capture card or webcam (TV tuners are slightly different).
You can use my WPF MediaKit as a base. The web cam control may work out of the box or just require slight changes for an HD capture card. A TV tuner would require a lot more than just this.
