Audio conference between WPF Applications

I have 2 WPF applications that communicate using a couple of duplex WCF services. I also need to enable audio communication between them. I've been looking for a solution for a while now, but couldn't find a good one.
What I've tried is audio streaming using Microsoft Expression Encoder on the "server" side (the one that feeds the audio), and playing it on the "client" using VLC .NET. It works, at least for streaming a song, but it's a big resource eater. The initial buffering also takes a long time, and so does stopping the stream.
What other options do I have? I want a clear, lightweight audio conversation between the apps, kinda like Skype. Is this possible? Thanks
EDIT: I found NAudio and it looks like a good audio library, and I managed to stream my microphone quite easily. However, I have a big problem - I can hear the voice clearly on the client, but it echoes indefinitely. Plus, there's this annoying background sound (could this be caused by the processor?) and after a while, a very high, loud sound is played on the receiving end. All I can do is stop the whole transmission. I have no idea what's causing these problems. I use the 'SpeexChatCodec' as in the NetworkChat example provided (sampling rate: 8000, 2 channels). Any suggestions? Thanks
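For reference, the capture side boils down to something like this sketch (simplified; the Speex encoding and the actual network send follow the NetworkChat sample and are left out, and SendToPeer is just a placeholder):

    using NAudio.Wave;

    // Capture the microphone as raw PCM; every DataAvailable callback hands over a
    // block that gets Speex-encoded and pushed to the other app.
    var waveIn = new WaveInEvent
    {
        WaveFormat = new WaveFormat(8000, 16, 2),   // 8000 Hz, 16-bit, 2 channels as above
        BufferMilliseconds = 50
    };
    waveIn.DataAvailable += (sender, e) =>
    {
        // e.Buffer / e.BytesRecorded hold the PCM block; encode and send it here.
        SendToPeer(e.Buffer, e.BytesRecorded);      // placeholder for the actual transport
    };
    waveIn.StartRecording();
    // ... later: waveIn.StopRecording();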

It would be a lot of work to write a library that would support that from scratch... if you can spend $150 on this I would suggest purchasing a library like iConf .NET Video Conferencing SDK from AvSpeed...

Related

Use DroneKit to build a Ground Control Station for Windows

On the DroneKit.io page, it mentions using DroneKit Python when creating Ground Control Stations for Windows. However, there appears to be no documentation for this.
Is it meant to simply simulate a COM port and act as a proxy for other Ground Control Stations, which just makes it easier to hijack the MAVLink?
Also, it mentions Python being used for low-latency processes. This seems to be oxymoronic. Is there a reason that it would be better than just using C/C++ for the purpose of hijacking the MAVLink?
Thanks!
DroneKit-Python can be used either to create a python-based ground station or it can be run on a companion computer. There is no practical difference between the two except how you set up the connection to the vehicle from the computer running the script. The different ways of starting MAVProxy for the different connections are covered in the Getting Started documentation.
The reason that there is no "specific" documentation on using DK-Python for a GCS is primarily "marketing". The far bigger market for GCS software is in tablets/phones that will use DK-Android or a future iOS port. DK-Python has been positioned solely for use on the air interface. Even though there is no "specific" documentation, all the existing documentation is in fact relevant.
Is it meant to simply simulate a COM port and act as a proxy for other Ground Control Stations, which just makes it easier to hijack the MAVLink?
No. See above.
Also, it mentions Python being used for low-latency processes. This seems to be oxymoronic. Is there a reason that it would be better than just using C/C++ for the purpose of hijacking the MAVLink?
It doesn't mention low-latency processes, so the answer is "invalid question".
You're probably misreading the text "that require a low-latency link". The point here is that if you have dronekit-python running on a companion computer and connected by a fast link you can do real time handling of incoming sensor data. This allows computer vision control of the UAV. However if you run DK-Python on a ground control station you will have a much slower link. You can still control movement of the UAV but the latency will be much higher.
Hope that helps!

MediaElement doesn’t release RTSP stream and VLC Player rebuffers it before stabilizing. How to display RTSP streams correctly?

I am working on a WPF application that displays RTSP video streams. Currently, the application handles communication with two types of devices that use RTSP: cameras and archivers (pretty much DVRs). Over the lifetime of the app the streams may, and usually will, be closed multiple times, so we need to make sure they are not cluttering the memory and network when we close them.
We had a go with the MediaElement. We needed to install LAV Filters for ME to display RTSP streams at all. We could see the video from the cameras, but the stream wasn't released until we stopped the video, invoked Close() on the MediaElement and set its source to null. The video seemed to be released, but we still decided to check memory usage with a profiler. We simply created a loop in which we initialized a new MediaElement (local reference), played an RTSP stream and closed it after establishing the connection. Over half an hour of running the test we witnessed a steady increase in memory consumption, and as a result we lost 20 MB of memory to all the MediaElements we created. The reason is still unknown to us (timers being bound to the dispatcher?), but after searching the Internet we accepted that there is a problem with MediaElement itself.
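A rough sketch of that kind of leak test (illustrative only; the RTSP address is a placeholder and the wait for MediaOpened is compressed into a comment):

    // Repeatedly open and close a MediaElement pointed at an RTSP source while a
    // profiler watches the process memory. The URI below is a placeholder.
    for (int i = 0; i < 1000; i++)
    {
        var me = new MediaElement { LoadedBehavior = MediaState.Manual };
        me.Source = new Uri("rtsp://192.0.2.1/stream");
        me.Play();

        // ... wait here until MediaOpened fires, so the connection is actually established ...

        me.Stop();
        me.Close();
        me.Source = null;   // same teardown that released the camera streams
    }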
We decided this is negligible for our use case (no one is going to create MediaElements with that frequency). Unfortunately, MediaElement was not releasing streams for archivers when using the same approach. After we got rid of the MediaElement for the archiver stream, the archiver's server still reported the connection as open.
We analyzed the packets with Wireshark. Cameras and archivers use the same version of the RTSP protocol, but when we close the connection on the camera the RTCP and data packets stop coming, which is not the case with the archivers.
We decided to abandon ME altogether and switch to the VLC Player. When we hit stop on the VLC player the connections are nicely closed, but VLC has a bug causing rebuffering of streams at the beginning of any connection. It’s a known issue: https://trac.videolan.org/vlc/ticket/9087. Rebuffering is not consistent. Sometimes it happens twice, sometimes three times. We tried playing around with buffer settings for VLC (network-buffer, live-buffer… you name it) but nothing helped.
There are many questions we are looking for answers to:
why is ME keeping the connection alive for archivers but not for cameras? Is the archiver not handling the RTSP termination packet correctly?
which component is responsible for keeping the connection open on the client side, and how could we possibly work around it (terminate the stream)?
how do we prevent VLC from rebuffering the stream after the connection is established?
Did you have any success with streaming multiple RTSP streams without performance/memory issues in your application? What components did you use?
Side note: we also played with MediaPlayerHQ, which behaves nicely unless we kill the process. If we do that, the stream remains open for a couple of minutes.
I would appreciate any hints and suggestions!
Check out https://net7mma.codeplex.com, it is excellent on memory and CPU. It has been tested with 1000 clients connected, and the end users never experienced any additional delays. VLC achieves RTCP synchronization with my library, so buffering should happen only once, if at all. The library should also help you decode the video/audio using Media Foundation. It also supports getting media information from container files, and will support their playback through the included RTSP server or writing them to alternate container formats.
The code is completely managed, written in C#, and released under the Apache license.

Photoshop CS5's use of Wintab driver

Wacom's drivers have always been atrociously bad, so I'm currently working on a hack.
The main problem I'm having is with calibration on a tablet PC. And before you say anything: no, just no. I've tried literally dozens of drivers, and of the few that work, none allows calibration of Wintab input. You can calibrate MS Ink, but that does nothing for apps like Photoshop that don't support the Ink API.
Having researched the issue a bit, the way I plan to hack it is by writing a wrapper for wintab32.dll which adjusts data packets as they're sent to applications, enabling calibration and perhaps tweaks to pressure sensitivity and whatever else I feel Wacom should have supported all along.
The calibration function is trivial, as is wrapping wintab32.dll and getting at the data that needs calibrating. As far as I can tell there are about half a dozen functions that request packet data, and I've inserted code in each of them to modify said data.
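For what it's worth, the calibration itself is nothing more than a linear remap of the packet coordinates against measured reference points; a sketch in C# (purely illustrative - the real adjustment happens inside the native wrapper, and the parameter names are made up):

    // Two-point linear calibration: map the coordinate the tablet reports at two
    // reference targets onto where those targets really are on screen.
    static int Calibrate(int raw, int rawAtRef1, int rawAtRef2, int trueAtRef1, int trueAtRef2)
    {
        double scale = (double)(trueAtRef2 - trueAtRef1) / (rawAtRef2 - rawAtRef1);
        return (int)(trueAtRef1 + (raw - rawAtRef1) * scale);
    }

    // Applied to each packet before handing it back to the application:
    // pkt.pkX = Calibrate(pkt.pkX, ...);  pkt.pkY = Calibrate(pkt.pkY, ...);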
It works, too, at least if I test it on some wintab sample projects.
Photoshop is different, though. I can confirm that it loads the wrapped DLL, opens a wintab context and uses the API to request packet data, which is then modified en route. But then Photoshop ignores the modifications and somehow gets at the original, uncalibrated data and uses that. I can find nothing in the Wintab documentation to suggest how this is even possible.
I'm pretty stumped. Any thoughts?
Could it be that Photoshop only requests packets from Wintab in order to clear the packet queue, and then does something else to actually read the state of the stylus? And if so, what could that be? Some secret, obscure way of polling the data using WTInfo? A hook into the data stream between Wintab and the underlying driver/serial port?
I'm not very sure, but maybe the input from the Ink API is also being written to the canvas. I mean, you are writing using two inputs now: WinTab and Ink. Got it?
If you could ignore the Ink input, that should show the right result.
P.S.: This is only a hunch.

C - how can I capture sound from the microphone? [duplicate]

Possible Duplicate:
How to get PCM data from microphone in C++ (os Windows)?
How can I capture sound from the microphone, and hear it on another computer live?
Thanks.
The simplest way is to use the waveIn functions provided by the Win32 API.
You can read Recording and Playing Sound with the Waveform Audio Interface for an overview, or just dive into the API documentation.
To record, you can use the waveIn functions in the Win32 API.
But before you send it, remember that the data you get in the byte buffer through the waveIn functions is in PCM format, and it will easily clog your network. You must first compress the PCM data into aLaw or uLaw format before tunneling it through the WinSock APIs. Otherwise it will surely not be a "live" feed, and it will take up a lot of bandwidth.
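As an illustration of how small that compression step is, here is a sketch using the G.711 mu-law helper from the NAudio library (C# rather than C, but the idea - one mu-law byte per 16-bit PCM sample - is the same):

    using System;
    using NAudio.Codecs;

    // pcm holds 16-bit little-endian samples as delivered by waveIn; the result is
    // one byte per sample, i.e. half the size (64 kbit/s for 8 kHz mono audio).
    static byte[] CompressToMuLaw(byte[] pcm, int bytesRecorded)
    {
        var encoded = new byte[bytesRecorded / 2];
        for (int i = 0; i < encoded.Length; i++)
        {
            short sample = BitConverter.ToInt16(pcm, i * 2);
            encoded[i] = MuLawEncoder.LinearToMuLawSample(sample);
        }
        return encoded;   // send this over the socket instead of the raw PCM
    }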
Another easy solution for audio I/O is PortAudio. Aside from being portable, it's very easy to use.
To get audio data over the network, as another answer pointed out, you should be aware that your data is huge. However, a good place to start is to try sending raw data. Once you can do that, then you can worry about compressing it -- you need to solve a complex problem one step at a time. Eventually, you'll probably want to use UDP for the raw audio packets.
A good library for sending audio, video, chat and other data is Google's libjingle, which implements the Google Talk protocol. It has solved many of the issues with UDP vs TCP, firewalls, etc. You may find it a bit hard to work with, though, as it's a lot of code and you'll need to work with XMPP, which you may not be familiar with. Also, it's C++, not C. It also requires some server mediation, although you can use Google's servers. If that doesn't work for you, you can do something home-grown, but you may find you need to do a fair bit of work to get it all right.
I am sure there are some libraries to help you. Try googling for things like "internet telephony library c" and "voip c library" (even though this is not, in the strictest sense, VoIP).

OS X/Linux audio playback with an event-based interface?

I'm working on a streaming audio player for Linux/OS X with a bizarre use case that has convinced me nothing that already exists will work. For the first portion, I just want to receive MP3 data and play it. I'm currently using libmad for decoding and libao for playback. My problem is with libao, and I'm not convinced it's my best option.
In particular, the ao_play function is blocking. It doesn't return until the entire buffer passed to it has been played. This doesn't give enough time to decode blocks between calls to ao_play, so the decoding has to be done either entirely ahead of time, or concurrently. Since this is intended to be streaming, I'm rejecting ahead-of-time decoding offhand. (It's conceivable I could send more than an hour's worth of audio data - I don't want to use that much memory.) This leaves concurrency. But while pthreads is standard across Linux and OS X, many of the surrounding libraries are not. I'm not really convinced I want to go to concurrency - so I'm reconsidering my choice of libao.
For my application, the best model I can think of for audio playback would be getting a file descriptor I could select on to get notified when it's ready for writes, then issue non-blocking writes to. (This is due to the rest of the details of the use case, which imply I really want a select loop anyway.)
Is there a library that works on both Linux and OS X that works this way?
Although it's much hated, PulseAudio basically works exactly like you describe (using the Asynchronous API, not the simple one).
Unless what you want to do involves low-latencies or advanced sound work, in which case you might want to look at the JACK Audio Connection Kit.
PortAudio is the one for you. It has a simple callback-driven API. It is cross-platform and low-latency. It is the best solution if you don't need any fancy features (3D, audio graphs, ...).
