This seems like a very simple task, yet I've had a hard time getting it to work. My goal is straightforward: send an mp4 file from my server to my client and start playing it while it is still buffering and downloading. That means I need to play a video.mp4 file while it is being written, and I need to display it on a platform I can control - like wxPython or WPF with IronPython. Naturally, no such platform will let me play a file that is open for writing.
I have tried implementing an HTTP server (although it's totally unnecessary for my case, since I am writing an application-based server-client app) that accepts Range requests. When I run the server and load the URL in Chrome, everything works perfectly - I can seek, and buffering is great - but when I load it from a WPF MediaElement it fails to play the video at some point (I can't really tell why, as there is no documentation, API reference, tutorials, etc. for this). I am really desperate.
I even thought about playing a video from a buffer and then just changing the buffer's content, but that option doesn't seem to exist.
I am really stuck at this and I would love to get some suggestions. Please note that I am not a professional in this so I would appreciate if you could explain this to me in simple terms.
Thanks!
Not possible. MP4 is not the correct container for your application. You must use something like HLS, DASH, or fragmented MP4.
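For what it's worth, if ffmpeg happens to be available on the server, repackaging into either form is a one-liner; the input/output file names below are just placeholders:

    # Fragmented MP4: moov/moof boxes are interleaved, so the file is playable while it is still growing
    ffmpeg -i input.mp4 -c copy -movflags frag_keyframe+empty_moov output_frag.mp4

    # HLS: short .ts segments plus an .m3u8 playlist that the player keeps refreshing
    ffmpeg -i input.mp4 -c copy -hls_time 4 -hls_list_size 0 playlist.m3u8

Either output can be handed to a player that understands progressive/segmented delivery, which is exactly what a plain monolithic MP4 can't give you.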
I asked this question in the Raspberry Pi section, so please forgive me for posting it here again. It's just that that section doesn't seem to be as active as this one. So, onto my question...
I have an idea and I'm working on it right now. I just wanted to see what the community's thoughts were on using a screensaver as digital signage.
Every tutorial I've read shows someone using Chromium in kiosk mode, and while that's fine and works well for some uses, it doesn't work for what I need. I have successfully set up a Chromium kiosk, and it was cool. But the signage I need to create now has to work without internet. I've thought about installing LAMP locally on the Pi and still using Chromium, and I still may have to if this idea doesn't pan out. All I need from the signage is a title message in the top center and a message body underneath it, with roughly a 300-400 character limit.
My idea is to write a screensaver module, in C, that will work with a screensaver such as xscreensaver. The module would need to be able to load messages from a directory on the Pi. Then, for my clients to update their signage text, I would write a simple client that sends commands as well as the text via SSH to the Pi.
I want to know what other people think about this. Is it a good idea? Bad idea? Should I "waste" my time doing something like this?
Thanks in advance.
I have already been using an rPi as digital signage for just over a year. I am using two different setups:
Version 1 uses Raspbian, loading the X desktop and the qiv image viewer to cycle through images stored on the Pi itself and synchronized with a remote server. The problem I found was power and SD stability: the power will fail sooner or later, the only question is when, and the SD card can become corrupt because of all the writing that Raspbian does all the time. It really doesn't need to write to the SD at all.
Version 2 uses a read-only filesystem and a command-line image tool. It uses the same process to show local images and sync with the server, but a power failure causes no ill effects.
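(The post doesn't say how the sync is done; for anyone copying this setup, a plain rsync over SSH from a cron job is one simple way to pull the current image set. With a read-only root you'd point the destination at a writable partition or tmpfs. The hostname and paths here are made up:)

    # e.g. run every few minutes from cron; --delete drops signs that were taken down
    rsync -av --delete signage@server.example.com:/srv/signage/images/ /home/pi/signage/images/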
I am not using a screensaver to display the images; that seemed redundant to me, and it's unnecessary to wait for the screensaver to kick in just to display them.
Some of the images are created using ImageMagick, which is nicely dynamic where needed.
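If you go the image route, ImageMagick can render the title/body layout you described straight to a PNG. A rough sketch (the resolution, fonts, point sizes and text are only illustrative):

    convert -size 1920x1080 xc:black \
        -gravity north -fill white -pointsize 96 -annotate +0+60 "Title Message" \
        \( -size 1600x -background black -fill white -pointsize 48 \
           caption:"Up to a few hundred characters of body text, wrapped automatically." \) \
        -gravity center -composite sign.png

The caption: operator handles word wrapping for the body text, so a 300-400 character message just reflows inside the box you give it.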
I'm trying to make a server that receives RTP/H264 video streams from Android clients and stores them to files.
Currently I'm using VLC on the server, which works well. However, I am worried that VLC is a heavyweight solution that may not scale well. As I'm not actually playing the video, only saving it to file, I thought there must be a more efficient solution.
Currently I'm planning on using Amazon EC2 instances, so the goal is to serve as many clients as possible per instance.
I'm flexible (willing to learn) on the language side, I'd like to choose the right language for the job.
So, does anyone know of a good, scalable way to store these streams to files?
Thanks in advance!
EDIT
FFmpeg or libav look promising. Looking into them now.
Basically you need a library that implements an RTP stack on the server side, so you can extract the payload and just append it to a file as it comes in. FFmpeg is a great choice: it has an RTP stack, and it can also generate containers (MP4, ...) for you if needed. VLC actually uses FFmpeg's libav libraries under the hood.
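As a rough sketch of the ffmpeg route: if you write out an SDP file describing each client's RTP session, a command along these lines should dump the H.264 payload straight into a file without re-encoding (file names are placeholders):

    # -c copy only remuxes the incoming H.264, so the CPU cost per client stays low.
    # Recent ffmpeg builds may also need: -protocol_whitelist file,udp,rtp
    ffmpeg -i client1.sdp -c copy client1.mp4

You'd launch one such process (or one libav demux/mux loop) per client, which is far lighter than a full VLC instance each.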
I'm running into problems on WP7 with MediaElement downloading a 128 kbps mp3 stream from a web service for a music player app that I'm working on. The file downloads correctly when the phone is on a WiFi connection, but the download sometimes stops when it's off WiFi. The problem is that I'm not getting any errors or exceptions when the download fails, and the MediaElement state is still "Playing". MediaElement runs right past the downloaded portion of the stream and acts like it is playing, but there is nothing to play since the download stopped. I can somewhat replicate this issue based on my location and by using 3G instead of WiFi, so I believe it is due to a poor connection. I don't believe any code needs to be shown in this instance, but I can try to post something if needed. I want to know if I have any control over this. Are there any other events I could use to detect when the download has failed? Is there another way I could download an mp3 stream that is more reliable and play it? Is there another player/component I should try?
Thanks in advance
You could always use MediaStreamSource to try to handle the download and implement streaming, to some extent. It is a more "painful" way of doing this since you will have to work with an extra media layer, but it pays off by improving playback stability.
Here is a starter example by Tim Heuer. Take a look specifically at how he takes advantage of a custom implementation of MediaStreamSource. Here is a more complex sample.
If streaming is not a requirement, you could download the file (and store it in the Isolated Storage) and then play from there.
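A minimal sketch of that second approach, assuming a WP7/Silverlight project (the class name, the hard-coded song.mp3 file name and the 4 KB buffer are just illustrative):

    using System;
    using System.IO;
    using System.IO.IsolatedStorage;
    using System.Net;
    using System.Windows.Controls;

    public static class OfflinePlayback
    {
        // Download the whole mp3 into isolated storage, then hand the stream to the MediaElement.
        // No streaming here: playback only starts once the download has finished.
        public static void DownloadAndPlay(Uri songUrl, MediaElement player)
        {
            var client = new WebClient();
            client.OpenReadCompleted += (s, e) =>
            {
                if (e.Error != null)
                    return;                          // surface this to the UI in real code

                var store = IsolatedStorageFile.GetUserStoreForApplication();
                using (var source = e.Result)
                using (var file = store.CreateFile("song.mp3"))
                {
                    var buffer = new byte[4096];
                    int read;
                    while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                        file.Write(buffer, 0, read);
                }

                // WebClient raises this callback on the UI thread, so touching the MediaElement is fine.
                var isoStream = new IsolatedStorageFileStream("song.mp3", FileMode.Open, FileAccess.Read, store);
                player.SetSource(isoStream);         // with AutoPlay (the default) this starts playback
            };
            client.OpenReadAsync(songUrl);
        }
    }

The trade-off is obvious: no playback until the file is complete, but a flaky 3G connection can no longer leave the player "playing" past the end of the buffered data.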
I know people have asked this before, but I see no answers, nor anyone even commenting on it.
So, I'm trying to get SHOUTcast streaming working on WP7. Has anyone done it? I know I have to use MediaStreamSource with my MediaElement, but how exactly can I skip the SHOUTcast header and just get the stream and use it in a MediaStreamSource? Is there any app that has done it? Does anyone actually have some working example code?
There is a really good SHOUTcast player called streamything (http://www.streamything.com/page/en/default.html). Unfortunately it is neither open source nor freeware, but it shows that there is definitely a way to do it.
You need to set up a mechanism that passes the stream of data to the application continuously. Here is a possible implementation. In order to receive the stream directly (so that the application isn't treated as a web browser), you have to call the URL with a semicolon at the end. For example: http://00.00.00.00:8000/;
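A hypothetical sketch of that request on WP7 (the server address is the placeholder from above). One caveat: older SHOUTcast servers answer with a non-standard "ICY 200 OK" status line, which HttpWebRequest may reject as a protocol violation; in that case you would have to drop down to sockets.

    using System;
    using System.IO;
    using System.Net;

    public class ShoutcastConnection
    {
        // The trailing ";" asks the SHOUTcast server for the raw audio stream
        // instead of its HTML status page.
        public void Connect()
        {
            var request = (HttpWebRequest)WebRequest.Create("http://00.00.00.00:8000/;");
            request.BeginGetResponse(ar =>
            {
                var response = (HttpWebResponse)request.EndGetResponse(ar);
                Stream audio = response.GetResponseStream();
                // From here, parse the MP3 frames (and any ICY metadata) out of "audio"
                // and hand them to your MediaStreamSource implementation.
            }, null);
        }
    }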
I want to make a project for my final year in college.
Someone suggested that I make a Remote Desktop application in C.
I know the basic socket functions for Windows in C, i.e. I know how to write an echo server in C. But I don't know what to do next. I searched on the internet but couldn't find anything informative.
Could someone suggest how I should approach this from here - any tutorial or any source?
I think this is do-able. For a college project, you don't need to have something as complex and as full-featured as VNC. Even demonstrating simple keyboard and mouse control and screen feedback would be enough, in my opinion, and that's well within reach.
If you're doing everything from scratch with Win32, you can capture the remote screen using the usual "print screen" example found all over the internet.
http://www.codeproject.com/KB/cpp/Screen_Capture__Win32_.aspx has it, for one. You can then compress the image with a third-party library, or just send it raw; this wouldn't be very efficient but it would still be a viable demonstration.
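For reference, a bare-bones GDI capture of the primary screen (roughly the approach such examples take) looks like this; error handling is omitted:

    #include <windows.h>

    /* Grab the primary screen into an HBITMAP. The caller must DeleteObject() the result. */
    HBITMAP CaptureScreen(void)
    {
        int width  = GetSystemMetrics(SM_CXSCREEN);
        int height = GetSystemMetrics(SM_CYSCREEN);

        HDC screenDC = GetDC(NULL);                      /* DC for the whole screen */
        HDC memDC    = CreateCompatibleDC(screenDC);     /* off-screen DC to copy into */
        HBITMAP bmp  = CreateCompatibleBitmap(screenDC, width, height);

        HGDIOBJ old = SelectObject(memDC, bmp);
        BitBlt(memDC, 0, 0, width, height, screenDC, 0, 0, SRCCOPY);
        SelectObject(memDC, old);

        DeleteDC(memDC);
        ReleaseDC(NULL, screenDC);
        return bmp;   /* next step: GetDIBits() to pull raw pixels for compression/sending */
    }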
Apart from capturing the screen data remotely and showing it in the local window, you'll need to listen for local window messages for mouse and keyboard events, send them to the remote host, and then play them back. http://msdn.microsoft.com/en-us/library/ms646310%28VS.85%29.aspx will probably do that for you.
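Injecting the received events on the host side is typically done with SendInput. A minimal sketch for replaying a single left click at absolute screen coordinates (the coordinates being whatever your protocol carried over):

    #include <windows.h>

    /* Replay a left click at absolute screen coordinates (x, y).
       SendInput expects absolute coordinates normalized to the 0..65535 range. */
    void ReplayLeftClick(int x, int y)
    {
        INPUT in[3];
        ZeroMemory(in, sizeof(in));

        in[0].type = INPUT_MOUSE;
        in[0].mi.dx = (x * 65535) / GetSystemMetrics(SM_CXSCREEN);
        in[0].mi.dy = (y * 65535) / GetSystemMetrics(SM_CYSCREEN);
        in[0].mi.dwFlags = MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE;

        in[1].type = INPUT_MOUSE;
        in[1].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;

        in[2].type = INPUT_MOUSE;
        in[2].mi.dwFlags = MOUSEEVENTF_LEFTUP;

        SendInput(3, in, sizeof(INPUT));
    }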
Check out tightvnc. TightVNC is a free remote control software package, and the source code is also available.
For sending the image of the screen I would probably use RTP. JRTPLIB is really handy for that.
And yes, as KevinDTimm says, an echo server is the very easiest part.
KevinDTimm may well be right; writing an RDP client would be a fairly significant undertaking. To give you some idea, the current spec, available at the top of this page, is 419 pages long and includes references to several additional documents for specific aspects of RDP like Audio Redirection and Clipboards.