Capturing the screen as a video stream - C

I want to capture live video of my Windows 7 computer's screen and later have it streamed live to a client to view as a livestream. Somewhere I heard that I could use C to create a virtual device, have the computer render the screen to it, and just output it through a socket. But I've searched and can't find any way to do this.
So what I'm looking for is a tutorial, or a place to start, on capturing live screen output and transmitting it over the internet to a client that can view it live.
Any help is greatly appreciated :)
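
One place to start, without a virtual device: Windows lets you copy the desktop with plain GDI calls and push the frames through a socket yourself. Below is a minimal C# sketch of the idea; the address, port, JPEG-per-frame transport, and frame rate are all placeholder choices, and a real streamer would use a proper video codec such as H.264. The same GDI approach is available from C via BitBlt.

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Net.Sockets;
using System.Windows.Forms;

class ScreenStreamer
{
    static void Main()
    {
        // Placeholder address/port; a real app would also negotiate a handshake.
        using (var client = new TcpClient("192.168.0.10", 9000))
        using (var net = client.GetStream())
        {
            var bounds = Screen.PrimaryScreen.Bounds;
            using (var frame = new Bitmap(bounds.Width, bounds.Height))
            using (var g = Graphics.FromImage(frame))
            {
                while (true)
                {
                    // GDI screen grab: copy the whole desktop into the bitmap.
                    g.CopyFromScreen(0, 0, 0, 0, bounds.Size);

                    // Naive transport: JPEG-encode each frame and prefix its length.
                    // A production stream would use a real video codec instead.
                    using (var ms = new MemoryStream())
                    {
                        frame.Save(ms, ImageFormat.Jpeg);
                        var len = BitConverter.GetBytes((int)ms.Length);
                        net.Write(len, 0, 4);
                        ms.WriteTo(net);
                    }
                    System.Threading.Thread.Sleep(66); // ~15 fps
                }
            }
        }
    }
}
```

The client side does the reverse: read the 4-byte length, read that many bytes, and decode the JPEG into an image to display.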

Related

Program to display two low latency desktop streams side by side over LAN

Use case: I have two gaming PCs running a VR game in two separate rooms and a laptop in the living room connected to a projector. The VR headsets are wireless and will be in the same room as the projector, with two people playing multiplayer together and the spectator screens of each gaming PC displayed side by side on the projector for others to watch the action. The game is a rhythm game (Beat Saber), so low latency is extremely important. On the other hand, I can sacrifice video quality, because each desktop will only be displayed on a 960x540 portion of the 1080p projector screen. Audio is already taken care of and doesn't need to be transmitted to the laptop.
I have written a program with WPF and C# which displays two webpages side by side with a black bar at the top and bottom. The idea was to log into a low-latency screen-sharing webpage (for example parsec.app) and connect one of the PCs on each side. This actually works, but the problem is that after connecting the second computer, both streams become very laggy and drop to a low frame rate when viewing content with a lot of movement. I really need both video streams to be smooth and low-latency (<150 ms), so using a third-party service for sharing/streaming the screens seems to be out of the question. I have to find a way to send the desktop streams directly over the LAN and then display them side by side with my program. I would like my own program to display the streams so that I have authority over the layout and can add thematic pictures to the unused space on the screen, and maybe even make it customizable with up to four streams simultaneously in the future.
The laptop and one of the gaming PCs are only connected to the LAN via WiFi, and the other gaming PC is connected via Ethernet. I have done some research, and ffmpeg or NDI seem to be the lowest-latency ways to send video through a network; I just have no idea how to use them and don't have any experience programming network applications (a rough ffmpeg starting point is sketched below). I tried streaming my screen from one PC to the other with VLC using UDP but couldn't even get that working.
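
For reference, ffmpeg can do a low-latency desktop capture out of the box with its gdigrab input, encoding with x264's zerolatency tuning and sending MPEG-TS over UDP. Since the viewer is already a C# program, the sender could be launched like this; the receiver address, port, frame rate, and scale are assumptions to tune:

```csharp
using System.Diagnostics;

class SenderLauncher
{
    static void Main()
    {
        // Hypothetical receiver address; 960x540 matches the projector layout.
        // -preset ultrafast and -tune zerolatency trade quality for latency.
        var args = "-f gdigrab -framerate 30 -i desktop " +
                   "-vf scale=960:540 -c:v libx264 -preset ultrafast -tune zerolatency " +
                   "-f mpegts udp://192.168.1.50:5000";

        var psi = new ProcessStartInfo("ffmpeg", args)
        {
            UseShellExecute = false,
            CreateNoWindow = true
        };
        Process.Start(psi);
    }
}
```

Reception can be tested on the laptop with `ffplay -fflags nobuffer -flags low_delay udp://0.0.0.0:5000` before wiring decoded frames into the WPF layout.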
Here's a link to my visual studio project so far:
https://drive.google.com/file/d/1W7khWBvKZ1zMvreH9nyfAHPVQ6BDKN5Z/view?usp=share_link
Here's a video showing my program in action:
https://drive.google.com/file/d/1db3EHHV23mvdky36fcox9--crlrbZXK4/view?usp=share_link
Does anyone have any insights or ideas on how to solve my problem? Of course I'm willing to do some studying too if someone can point me to the resources where I can learn the required skills to make this work.

How do I capture the audio of a WPF window or cscore output in C#?

I made a music player in WPF using cscore. Now I want to add a feature to stream the output in real time (like a radio) to another instance of the music player over the internet. I can figure out how to stream the data later, but first I need to know how to get the bytes of the audio output. I'm asking for help because I'm lost: I've done some research and found nothing but ways to stream the desktop audio. That's not a solution, because I want to listen to the same music with some friends while hanging out on Discord, and if I stream the desktop audio, they will hear themselves in addition to the music. Any help will be welcome. Thanks in advance!
I have not used cscore; I mainly use NAudio, a similar library that facilitates getting audio to and from the sound card. So I will try to answer in a way that lets you find the equivalents in cscore.
In your player code you will be pulling data from the audio file. In NAudio this is done with an AudioFileReader; I think the cscore equivalent is called WaveFileReader. This reader translates the audio file into a stream of audio samples in the form of byte arrays, and those byte arrays are then used to feed the WASAPI output so the audio plays on the sound card.
The ideal place to tap in for your streaming system is in the middle of those two processes. Rather than just passing the audio samples to the sound card, take a copy of the byte array containing the samples; it is this data you will need to stream to your friends.
From here you will need to look at compressing the audio and at streaming protocols like RTP, all of which can be done in C#. The issue will be, as it always is in audio, having your data stream keep pace with the sound card: every time WasapiOut asks for more samples, you need to have them ready, otherwise the audio will be choppy.
I hope this points you in the right direction. Others with cscore experience may have code examples to assist you more directly.
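
To make the "copy the bytes in the middle" idea concrete, here is a minimal NAudio sketch (that is the library I know; cscore's pipeline is analogous). The SendToFriends callback and file name are placeholders for your own code:

```csharp
using System;
using NAudio.Wave;

// Sits between the file reader and the sound card, copying every buffer
// that flows through so it can also be queued for network streaming.
class StreamingTap : IWaveProvider
{
    private readonly IWaveProvider source;
    private readonly Action<byte[]> onSamples;

    public StreamingTap(IWaveProvider source, Action<byte[]> onSamples)
    {
        this.source = source;
        this.onSamples = onSamples;
    }

    public WaveFormat WaveFormat { get { return source.WaveFormat; } }

    public int Read(byte[] buffer, int offset, int count)
    {
        int read = source.Read(buffer, offset, count);
        if (read > 0)
        {
            var copy = new byte[read];
            Array.Copy(buffer, offset, copy, 0, read);
            onSamples(copy); // hand the samples to your streaming code
        }
        return read;
    }
}

class Player
{
    static void Main()
    {
        using (var reader = new AudioFileReader("song.mp3"))
        using (var output = new WasapiOut())
        {
            var tap = new StreamingTap(reader, SendToFriends);
            output.Init(tap);
            output.Play();
            Console.ReadLine(); // keep playing until Enter is pressed
        }
    }

    // Placeholder: compress and push these bytes over the network (e.g. RTP).
    static void SendToFriends(byte[] samples) { }
}
```

Because the tap sees exactly the bytes the sound card plays, the local playback and the network copy stay in sync; the network side then only has to keep up, as noted above.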

WebRTC local audio issue

I am trying to implement a WebRTC-based video chat room and I have encountered the following problem: when the call starts and my local stream is playing, I hear my own audio on my device before the other device even receives it. The devices are far enough apart that echo does not affect them. I don't know whether this happens because I did something wrong or whether it is always like that.
I appreciate any recommendations.

MediaElement doesn’t release RTSP stream and VLC Player rebuffers it before stabilizing. How to display RTSP streams correctly?

I am working on a WPF application that displays RTSP video streams. Currently, the application handles communication with two types of devices that use RTSP: cameras and archivers (pretty much DVRs). Over the lifetime of the app the streams may, and usually will, be closed multiple times, so we need to make sure they do not clutter memory and the network after we close them.
We had a go with MediaElement. We needed to install LAV Filters for it to display RTSP streams at all. We could see the video from the cameras, but the stream wasn't released until we stopped the video, invoked Close() on the MediaElement, and set its source to null. The video seemed to be released, but we still decided to check memory usage with a profiler. We simply created a loop in which we initialized a new MediaElement (local reference), played an RTSP stream, and closed it after establishing the connection. Over half an hour of running the test we witnessed a steady increase in memory consumption, losing 20 MB to all the MediaElements we created. The reason is still unknown to us (timers being bound to the dispatcher?), but after searching the internet we accepted that there is a problem with MediaElement itself.
We decided this is negligible for our use case (no one is going to create MediaElements with that frequency). Unfortunately, MediaElement was not releasing the archivers' streams when using the same approach: after we got rid of the MediaElement for an archiver stream, the archiver's server still reported the connection as open.
We analyzed the packets with Wireshark. Cameras and archivers use the same version of the RTSP protocol, but when we close the connection on the camera the RTCP and data packets stop coming, which is not the case with the archivers.
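
For reference: RTSP is a plain-text protocol, so whether the archiver honors session termination can be checked by issuing a TEARDOWN by hand and watching the capture. Below is a rough C# sketch; the host, port, URL, and session ID are placeholders to be taken from the Wireshark capture, and note that some servers only accept control requests on the original connection:

```csharp
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

class RtspTeardown
{
    static void Main()
    {
        // Placeholders: fill in from the Wireshark capture of the open session.
        string host = "192.168.1.20";
        int port = 554;
        string url = "rtsp://192.168.1.20/stream1";
        string session = "12345678";

        using (var client = new TcpClient(host, port))
        using (var stream = client.GetStream())
        {
            string request =
                "TEARDOWN " + url + " RTSP/1.0\r\n" +
                "CSeq: 1\r\n" +
                "Session: " + session + "\r\n\r\n";

            var bytes = Encoding.ASCII.GetBytes(request);
            stream.Write(bytes, 0, bytes.Length);

            // A compliant server answers "RTSP/1.0 200 OK" and stops sending.
            using (var reader = new StreamReader(stream, Encoding.ASCII))
                Console.WriteLine(reader.ReadLine());
        }
    }
}
```

If the archiver keeps sending RTP/RTCP even after a 200 OK to TEARDOWN, that points at the server rather than the client component.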
We decided to abandon ME altogether and switch to the VLC Player. When we hit stop on the VLC player the connections are nicely closed, but VLC has a bug causing rebuffering of streams at the beginning of any connection. It’s a known issue: https://trac.videolan.org/vlc/ticket/9087. Rebuffering is not consistent. Sometimes it happens twice, sometimes three times. We tried playing around with buffer settings for VLC (network-buffer, live-buffer… you name it) but nothing helped.
There are many questions we are looking for answers to:
Why is ME keeping the connection alive for archivers but not for cameras? Is the archiver not handling the RTSP TEARDOWN correctly?
Which component is responsible for keeping the connection open on the client side, and how could we work around it (terminate the stream)?
How do we prevent VLC from rebuffering the stream after the connection is established?
Did you have any success streaming multiple RTSP streams without performance/memory issues in your application? What components did you use?
Side note: we also played with MediaPlayerHQ, which behaves nicely unless we kill the process; if we do that, the stream remains open for a couple of minutes.
I will appreciate any hints and suggestions!
Check out https://net7mma.codeplex.com; it is excellent on memory and CPU. It has been tested with 1000 connected clients and the end users never experienced any additional delays. VLC achieves RTCP synchronization with my library, so buffering should happen only once, if at all. The library should also help you decode the video/audio using Media Foundation. It also supports getting media information from container files and will support their playback through the included RTSP server, as well as writing them to alternate container formats.
The code is completely managed, written in C#, and released under the Apache license.

Receive data from the audio jack on Windows Phone

I wasn't sure if this question belongs here, but I'm completely new to this topic, so hopefully you can help me out.
I have some kind of hardware which will be connected to my Windows Phone (WP7/WP8) through the audio jack. It sends some data in an audio format, and I would like to retrieve this data in code. How can I do this?
Any source code or samples would be appreciated.
I wanted to post an answer so I could elaborate on my comment.
The problem with what you're trying to do is that the jack on Windows Phones is an output jack; I haven't seen any that support microphone input. The jack is meant to generate a signal, never to receive one, so you would never be able to attach a device and send audio data into the phone through it. It is completely missing the hardware that would allow the phone to interpret what you send into it, and honestly you risk frying the jack or, worse, the phone itself.
Perhaps the best course of action would be to look at different ways to push that audio data into the phone. I can't really guide you further, because the right approach depends heavily on your specific needs.
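
For example, if the data can instead reach the phone's built-in microphone (or a handset whose headset jack does accept a TRRS microphone), Windows Phone exposes capture through the XNA Microphone class, and the hardware's audio-encoded signal could be decoded from the raw PCM buffers. A minimal sketch; the Decode method is a placeholder for whatever modulation scheme your hardware uses (e.g. FSK, as in audio-jack card readers):

```csharp
using System;
using Microsoft.Xna.Framework.Audio;

class AudioJackReceiver
{
    private Microphone mic;
    private byte[] buffer;

    public void StartListening()
    {
        mic = Microphone.Default;
        mic.BufferDuration = TimeSpan.FromMilliseconds(100);
        buffer = new byte[mic.GetSampleSizeInBytes(mic.BufferDuration)];
        mic.BufferReady += OnBufferReady;
        mic.Start();
        // Note: in a Silverlight app, FrameworkDispatcher.Update() must be
        // called regularly (e.g. from a DispatcherTimer) for XNA audio to work.
    }

    private void OnBufferReady(object sender, EventArgs e)
    {
        int read = mic.GetData(buffer);
        // "buffer" now holds 16-bit PCM samples at mic.SampleRate Hz.
        Decode(buffer, read);
    }

    // Placeholder: decoder for the hardware's audio-encoded protocol.
    private void Decode(byte[] pcm, int length) { }
}
```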
