I have tied a MediaElement control to a Slider control.
I am getting my stream from a binary field in a SQL Server database.
I am concerned that there may be some performance issues due to the following:
1. I am storing a byte array that is being retrieved from a web service
2. Any time I do anything with the MediaElement (e.g. reset position, stop playback, resume playback), I have to reset the source of the MediaElement
The code I am using to set the position is as follows:
private void ResetPlayerWithPosition(double milliseconds = 0)
{
    // _wmss is a WaveMediaStreamSource from WAVMss.dll
    // audio is of type byte[]
    this._wmss = new WaveMediaStreamSource(new MemoryStream(this.audio));

    // Attach a one-shot handler so repeated calls don't stack up MediaOpened handlers;
    // the position is applied once the new source has opened.
    RoutedEventHandler onOpened = null;
    onOpened = (s, e) =>
    {
        this.playbackController.MediaOpened -= onOpened;
        this.playbackController.Position = TimeSpan.FromMilliseconds(milliseconds);
    };
    this.playbackController.MediaOpened += onOpened;

    this.playbackController.SetSource(this._wmss);
}
My concern is that if the file gets large, performance will degrade because the code has to wait for the file to be loaded into the MediaElement's source before the position can be reset. If that is the case, does anyone have suggestions for making this more efficient?
Thanks in advance for any suggestions.
It's been 1 year 10 months 14 days since I first posted this, and not even a comment or follow-up question. In this time, iOS has made many advancements, Android has released some new flavors, HTML5 and jQuery have matured gracefully, and Microsoft has abandoned Silverlight for Metro.
I have abandoned this solution since support is sparse and on its way to nonexistence. In exchange for Silverlight, I have opted for an HTML5 + jQuery solution. This has enabled me to develop a lightweight, Web Method driven, AJAX enabled, browser + device + server independent, cross-platform client in reduced development time with more flexibility, sustainability, and maintainability.
Silverlight: Another highly innovative yet grossly inefficient technology laid to rest. RIP.
Related
I have an app which plays a few short sound clips. To play them I simply set the source to the new clip path, which is a WMA I encoded with Expression Encoder using WP7 settings. It's not even worth sharing the code -- there's an event handler. In it, I set the ME.Source property to a new Uri. It's set to AutoPlay, so that's it! Here:
private void PlaySound(ItemViewModel sound) {
    Model.CurrentSound = sound;
    CurrentSound.Source = new Uri(sound.Path, UriKind.Relative);
}

private void Sounds_SelectionChanged(object sender, SelectionChangedEventArgs e) {
    var list = (ListBox) sender;
    var item = (ItemViewModel) list.SelectedItem;
    SelectItem(item);
}
Also, I should point out that the sounds are all resources (build type = Resource). I need them to be, because the app needs to discover them dynamically. The paths are all like this: "sounds/foo/bar/sound.wma". Sometimes there is a space in the path; it is URL-encoded with %20 (this is how the resource manager returns the path, I didn't do that).
The problem is that many people, but not all, are saying that the sound auto-repeats. The sounds are very short, only a few seconds, so it's very annoying. I don't understand how this is happening; the MediaElement doesn't even have an auto-repeat feature.
Perhaps related, but some have also complained that every now and then the sound does not play, and they have to click it again. All I can think of is that there is something wrong with how the sounds are encoded, but they are WMA, and as I said, I encoded them using the 'playback in WP7' settings in Expression Encoder. If that were the case, how could it work most of the time but not always?
I'm at a loss and my app is getting some bad reviews because of this behavior. Help!
You say "there's an event handler", but you don't say for which event. It could be that event firing over and over, or not at all in some cases. Potentially your code has a logical error where you have failed to detach an existing handler before adding another; as usage progresses, you end up with a single event being handled by multiple handlers.
Edit
The SelectionChanged event is notorious for firing more frequently than we'd like. I suggest you add some debounce code that keeps a record of the last item selected and how long ago it was selected. If the next selected item is the same as the last one and it was selected, say, less than a second ago, then swallow the event without doing anything else.
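A minimal debounce sketch (the ItemViewModel, ListBox, and SelectItem names are taken from the question; the one-second window is arbitrary):
private ItemViewModel _lastItem;
private DateTime _lastSelectedAt;

private void Sounds_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    var list = (ListBox)sender;
    var item = list.SelectedItem as ItemViewModel;
    if (item == null)
        return;

    // Swallow a repeat selection of the same item within one second.
    var now = DateTime.UtcNow;
    if (item == _lastItem && (now - _lastSelectedAt) < TimeSpan.FromSeconds(1))
        return;

    _lastItem = item;
    _lastSelectedAt = now;
    SelectItem(item);
}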
It sounds like you might be trying to play sound effects, in which case you might be better off using the XNA SoundEffect mechanism.
e.g. http://www.japf.fr/2010/08/sound-effect-in-wp7-sl-application/
SoundEffect only works for WAV files (PCM), but I've used it in several apps and scripts, including embedded content files and downloaded files (e.g. translation and ironruby scripts).
The XNA class works well within SL and allows multiple sound effects to be played at the same time.
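A minimal sketch of that approach, assuming the XNA 4.0 SoundEffect API that WP7 Silverlight apps can reference (the class name and resource path are illustrative):
using System;
using System.Windows;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;

public static class SfxPlayer
{
    public static void Play(string relativePath)
    {
        // XNA needs its dispatcher pumped when used from a Silverlight app.
        FrameworkDispatcher.Update();

        // Load a PCM WAV that ships with the app and fire it off.
        var resource = Application.GetResourceStream(new Uri(relativePath, UriKind.Relative));
        var effect = SoundEffect.FromStream(resource.Stream);
        effect.Play();
    }
}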
The problem with repeating turned out to be needing to do this:
MediaPlayer.IsRepeating = false;
I think what happened was that the user would be in another app that sets this to true, and upon opening my app that value was still true! That has to be a bug; it's totally unexpected. If you look at other sound-playing apps like soundboard apps, there are users complaining in the reviews about the very same thing... "I wish it wouldn't repeat the sounds..."
I'm trying to make a guitar practice website, and a critical piece of functionality is to loop over very short mp3 files (a few seconds long) with absolutely zero gap in between. For example, it could be a four-measure chord progression, and I want to allow the user to loop over it seamlessly.
I tried using the HTML5 <audio> tag with the loop attribute. Google Chrome leaves a small gap between the loops, but it's big enough to be totally unacceptable for my purpose. I haven't tested the other browsers, but I suspect they won't work either.
A possible workaround is to use ffmpeg to stream repetitions of the same audio as an mp3. However, this costs a lot of bandwidth.
For myself I use Audacity to loop without gaps, but unfortunately Audacity doesn't have a web version.
So, do you have any ideas how I may loop over an mp3 in a browser with zero gap? I prefer non-Flash solutions, but if nothing else works I'll use Flash.
Edit:
Thank you for all your suggestions. Flash turns out to work decently. I've made a toy demo at http://vmlucid.lcm.hk/~netvope/audio/flash.html. To my surprise (I used to associate Flash only with resource hogs and browser crashes), Flash and ActionScript are rather well designed and easy to use. It took me only 3 hours on my first Flash project :)
Have a look at this page. Listening for a while using Google Chrome 7, I found Method 1 works decently, while Method 3 gives the best results, though it's a bit of a hack. Ultimately, all browsers work differently, especially since HTML5 isn't finalized yet. If possible, you should opt for a Flash version, which I would think would give you the best loop.
In Flash AS3 you can extract raw sound data with Sound.extract() and feed it to a dynamically generated Sound object exactly when it's needed (when its SampleDataEvent fires).
I am not sure how well this will work, but if you knew your loop lasted 800 milliseconds, you could have the browser call the play method every 800 ms. It still wouldn't be guaranteed to be perfect, though; I don't think the browser is natively capable of delivering reliable audio looping at this point.
setInterval(function () {
    document.getElementById("loop").play();
}, 800);
Rumor has it the best way to pull this off in the most gapless fashion is to use multiple audio tags and alternate between them.
Or check out this utility: http://www.compuphase.com/mp3/mp3loops.htm I used it successfully for my flash projects when music had to be looped without gaps, and 99% of the time it worked. It takes WAV as an input.
Basically it is a kind of front end for the LAME mp3 encoder that uses settings chosen to prevent the gaps from appearing. It won't work on very short sound effects (less than 0.5 seconds, I believe).
Afterward all you have to do is use:
var sound:Sound = new MySoundEffect();
sound.play(0, 1000);
and it will loop one thousand times.
I am trying to record a stream from a webcam using the Expression Encoder 4 SDK in WPF. I can capture the video and audio streams and record them to disk; however, they only record at a base resolution of 320x240, while the webcam is capable of capturing at 720p. How can I record at this resolution? Any help would be appreciated; I have been pulling my hair out trying to solve this all week.
I know this is a bit late, but all questions need answers.
These might be possible solutions:
Check to see if your camera has its own settings on the camera or comes with an installation disk.
For Expression Encoder 4, set the video profile quality to max.
Good luck. If you are still around, tell me how it goes.
To change the "size" you can use the following line:
LiveJob.OutputFormat.VideoProfile.Streams[0].Size = new Size(1280, 720);
Or whatever you want it to be.
Encoder also offers a settings page that you can use.
That's what I did, and then after setting the output size you can do this:
currentJob.OutputFormat.VideoProfile.Streams[0].Size = ((LiveSource)LiveDeviceSource).CropRect.Size;
One small limitation: you can't change the size while it's recording if you are publishing the source.
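Putting it together, a rough sketch of a live capture job using the Expression Encoder 4 Live SDK (the device selection, output path, and 1280x720 size are illustrative; double-check the member names against the SDK version you have installed):
using System.Drawing;
using System.Linq;
using Microsoft.Expression.Encoder.Devices;
using Microsoft.Expression.Encoder.Live;

// Pick the first video and audio capture devices found.
var videoDevice = EncoderDevices.FindDevices(EncoderDeviceType.Video).FirstOrDefault();
var audioDevice = EncoderDevices.FindDevices(EncoderDeviceType.Audio).FirstOrDefault();

var job = new LiveJob();
var source = job.AddDeviceSource(videoDevice, audioDevice);
job.ActivateSource(source);

// Set the output size before encoding starts; it can't be changed mid-recording.
job.OutputFormat.VideoProfile.Streams[0].Size = new Size(1280, 720);

// Archive to a local file and start capturing.
job.PublishFormats.Add(new FileArchivePublishFormat { OutputFileName = @"C:\capture\webcam.wmv" });
job.StartEncoding();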
The new webcam stuff in Silverlight 4 is darned cool. By exposing it as a brush, it allows scenarios that are way beyond anything that Flash has.
At the same time, accessing the webcam locally seems like it's only half the story. Nobody buys a webcam so they can take pictures of themselves and make funny faces out of them. They buy a webcam because they want other people to see the resulting video stream, i.e., they want to stream that video out to the Internet, à la Skype or any of the dozens of other video chat sites/applications. And so far, I haven't figured out how to do that with Silverlight.
It turns out that it's pretty simple to get a hold of the raw (Format32bppArgb formatted) bytestream, as demonstrated here.
But unless we want to transmit that raw bytestream to a server (which would chew up way too much bandwidth), we need to encode that in some fashion. And that's more complicated. MS has implemented several codecs in Silverlight, but so far as I can tell, they're all focused on decoding a video stream, not encoding it in the first place. And that's apart from the fact that I can't figure out how to get direct access to, say, the H.264 codec in the first place.
There are a ton of open-source codecs (for instance, in the ffmpeg project here), but they're all written in C, and they don't look easy to port to C#. Unless translating 10000+ lines of code that look like this is your idea of fun :-)
const int b_xy= h->mb2b_xy[left_xy[i]] + 3;
const int b8_xy= h->mb2b8_xy[left_xy[i]] + 1;
*(uint32_t*)h->mv_cache[list][cache_idx ]= *(uint32_t*)s->current_picture.motion_val[list][b_xy + h->b_stride*left_block[0+i*2]];
*(uint32_t*)h->mv_cache[list][cache_idx+8]= *(uint32_t*)s->current_picture.motion_val[list][b_xy + h->b_stride*left_block[1+i*2]];
h->ref_cache[list][cache_idx ]= s->current_picture.ref_index[list][b8_xy + h->b8_stride*(left_block[0+i*2]>>1)];
h->ref_cache[list][cache_idx+8]= s->current_picture.ref_index[list][b8_xy + h->b8_stride*(left_block[1+i*2]>>1)];
The mooncodecs folder within the Mono project (here) has several audio codecs in C# (ADPCM and Ogg Vorbis), and one video codec (Dirac), but they all seem to implement just the decode portion of their respective formats, as do the java implementations from which they were ported.
I found a C# codec for Ogg Theora (csTheora, http://www.wreckedgames.com/forum/index.php?topic=1053.0), but again, it's decode only, as is the jheora codec on which it's based.
Of course, it would presumably be easier to port a codec from Java than from C or C++, but the only java video codecs that I found were decode-only (such as jheora, or jirac).
So I'm kinda back at square one. It looks like our options for hooking up a webcam (or microphone) through Silverlight to the Internet are:
(1) Wait for Microsoft to provide some guidance on this;
(2) Spend the brain cycles porting one of the C or C++ codecs over to Silverlight-compatible C#;
(3) Send the raw, uncompressed bytestream up to a server (or perhaps compressed slightly with something like zlib), and then encode it server-side; or
(4) Wait for someone smarter than me to figure this out and provide a solution.
Does anybody else have any better guidance? Have I missed something that's just blindingly obvious to everyone else? (For instance, does Silverlight 4 somewhere have some classes I've missed that take care of this?)
I just received this response from Jason Clary on my blog:
Saw your post on Mike Taulty's blog about VideoSink/AudioSink in Silverlight 4 beta.
I thought I'd point out that VideoSink's OnSample gives you a single uncompressed 32bpp ARGB frame which can be copied straight into a WritableBitmap.
With that in hand, grab FJCore, a JPEG codec in C#, and modify it to not output the JFIF header. Then just write the frames out one after the other and you've got yourself a Motion JPEG codec. RFC 2435 explains how to stuff that into RTP packets for RTSP streaming.
Compressing PCM audio to ADPCM is fairly easy, as well, but I haven't found a ready-made implementation as yet. RFC3551 explains how to put either PCM or ADPCM into RTP packets.
It should also be reasonably easy to stuff MJPEG and PCM or ADPCM into an AVI file. MS has some decent docs on AVI's modified RIFF format and both MJPEG and ADPCM are widely supported codecs.
It's a start anyway.
Of course, once you've gone through all that trouble, the next Beta will probably come out with native support for compressing and streaming to WMS with the much better WMV codecs.
Thought I'd post it. It's the best suggestion I've seen so far.
I thought I'd let interested folks know the approach I actually took. I'm using CSpeex to encode the voice, but I wrote my own block-based video codec to encode the video. It divides each frame up into 16x16 blocks, determines which blocks have sufficiently changed to warrant transmitting, and then Jpeg-encodes the changed blocks using a heavily modified version of FJCore. (FJCore is generally well done, but it needed to be modified to not write the JFIF headers, and to speed up initialization of the various objects.) All of this is being passed up to a proprietary media server using a proprietary protocol roughly based on RTP.
With one stream up and four streams down at 144x176, I'm currently getting 5 frames per second, using a total of 474 Kbps (~82 Kbps / video stream + 32 Kbps / audio), and chewing up about 30% CPU on my dev box. The quality's not great, but it's acceptable for most video chat applications.
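To illustrate the block-comparison idea (this is not the author's actual code; the 16x16 block size comes from the description above, and the change threshold is arbitrary):
using System;

// Returns true if the 16x16 block at (blockX, blockY) differs enough from the
// previous frame to be worth JPEG-encoding and transmitting.
static bool BlockChanged(byte[] currentFrame, byte[] previousFrame,
                         int frameWidth, int blockX, int blockY, int threshold = 1000)
{
    int diff = 0;
    for (int row = 0; row < 16; row++)
    {
        // 4 bytes per pixel (Format32bppArgb), 16 pixels per block row.
        int offset = ((blockY * 16 + row) * frameWidth + blockX * 16) * 4;
        for (int i = 0; i < 16 * 4; i++)
            diff += Math.Abs(currentFrame[offset + i] - previousFrame[offset + i]);
    }
    return diff > threshold;
}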
Since I posted my original question, there have been several attempts to implement a solution. Probably the best is at the SocketCoder website here (and here).
However, because the SocketCoder motion JPEG-style video codec translates the entirety of every frame rather than just the blocks that have changed, my assumption is that CPU and bandwidth requirements are going to be prohibitive for most applications.
Unfortunately, my own solution is going to have to remain proprietary for the foreseeable future :-(.
Edit 7/3/10: I just got permission to share my modifications to the FJCore library. I've posted the project (without any sample code, unfortunately) here:
http://www.alanta.com/Alanta.Client.Media.Jpeg.zip
A (very rough) example of how to use it:
public void EncodeAsJpeg()
{
    byte[][,] raster = GetSubsampledRaster();
    var image = new Alanta.Client.Media.Jpeg.Image(colorModel, raster);
    EncodedStream = new MemoryStream();
    var encoder = new JpegFrameEncoder(image, MediaConstants.JpegQuality, EncodedStream);
    encoder.Encode();
}

public void DecodeFromJpeg()
{
    EncodedStream.Seek(0, SeekOrigin.Begin);
    var decoder = new JpegFrameDecoder(EncodedStream, height, width, MediaConstants.JpegQuality);
    var raster = decoder.Decode();
}
Most of my changes are around the two new classes JpegFrameEncoder (instead of JpegEncoder) and JpegFrameDecoder (instead of JpegDecoder). Basically, the JpegFrameEncoder writes the encoded frame without any JFIF headers, and the JpegFrameDecoder decodes the frame without expecting any JFIF headers to tell it what values to use (it assumes you'll share the values in some other, out-of-band manner). It also instantiates whatever objects it needs just once (as "static"), so that you can instantiate the JpegFrameEncoder and JpegFrameDecoder quickly, with minimal overhead. The pre-existing JpegEncoder and JpegDecoder classes should work pretty much the same as they always have, though I've only done a very little bit of testing to confirm that.
There are lots of things I'd like to improve about it (I don't like the static objects -- they should be instantiated and passed in separately), but it works well enough for our purposes at the moment. Hopefully it's helpful for someone else. I'll see if I can improve the code/documentation/sample code/etc. if I have time.
I'll add one other comment. I just heard today from a Microsoft contact that Microsoft is not planning to add any support for upstream audio and video encoding/streaming to Silverlight, so option #1 appears to be off the table, at least for right now. My guess is that figuring out support for this will be the community's responsibility, i.e., up to you and me.
Stop-Gap?
Would it be possible to use the Windows Media Encoder as a compression method for the raw video Silverlight provides? After capture to ISO Storage, encode w/ WME and send to the server via the WebClient. Two big issues are:
Requires a user to install the encoder
WME will no longer be supported
It seems like that might be a stop-gap solution until something better comes along. I haven't worked w/ WME before though so I don't know how feasible this would be. Thoughts?
Have you tried the new Expression 4 Encoders?
http://www.microsoft.com/expression/products/EncoderPro_Overview.aspx
I'm attempting to use a large number of short sound samples in a game I'm creating in Silverlight 2. The samples are less than 2 seconds long.
I would prefer to load all the audio samples onto the canvas during initialization. I have been adding the media elements to the canvas and using a generic list to manage them. So far, it appears to work.
When I play the sample the first time, it plays perfectly. If it has finished playing and I want to re-use the same element, it cuts off the first part of the sound. To play the sample again, I stop and play the media element.
Is there another method I should use to play the samples so that the audio is not clipped and good performance is obtained?
Also, it's probably a good idea to make sure that all of your audio samples are brought down to the client side initially. Depending on how you set it up, it's possible that the MediaElements are using their progressive download functionality to get the media files from the server. While there's nothing wrong with this per se (browser caching should be helping you out after the initial download), it does mean that you have to deal with the browser cache, and there are some potential issues there.
Possible steps to try:
Mark your audio files as "Content". This will get them balled up in the .xap.
Load your audio files into MemoryStreams (see Application.GetResourceStream method) and call MediaElement.SetSource().
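A rough sketch of that second step (the file path is illustrative, and the clip is assumed to be built as Content so it ships inside the .xap):
using System;
using System.Windows;
using System.Windows.Controls;

// Inside a page or user control with a Panel named LayoutRoot:
private void PlayEmbeddedClip()
{
    // Application.GetResourceStream pulls the clip out of the .xap; the stream could
    // also be copied into a MemoryStream if you want to cache and reuse the bytes.
    var resource = Application.GetResourceStream(new Uri("sounds/laser.wma", UriKind.Relative));
    var player = new MediaElement { AutoPlay = false };
    LayoutRoot.Children.Add(player);   // a MediaElement must be in the visual tree to play
    player.SetSource(resource.Stream);
    player.Play();
}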
HTH,
Erik
Some comments:
From MSDN:
Try to limit the number of MediaElement objects you have in your application at once. If you have over one hundred MediaElement objects in your application tree, regardless of whether they are playing concurrently or not, MediaFailed events may be raised. The way to work around this is to add MediaElement objects to the tree as they are needed and remove them when they are not.
You could try to seek to the start of the sample to reset the point currently being played before re-using it with:
mediaelement.Position = new TimeSpan();
See also MSDN's MediaElement.Position.
One technique you can use, although I'm not sure how well it will work in Silverlight, is to create one large file with all of your samples joined together (probably with a half-second or so of silence between each). Figure out the timecode for each sample, then seek the media element to that position and play. You'll only need as many media elements as simultaneous sounds you want to play.
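A rough sketch of that idea (the clip names and offsets are made up; one MediaElement plays a single combined file):
using System;
using System.Collections.Generic;
using System.Windows.Controls;

// Start time of each sample inside the combined audio file.
private static readonly Dictionary<string, TimeSpan> ClipStarts = new Dictionary<string, TimeSpan>
{
    { "jump",  TimeSpan.Zero },
    { "laser", TimeSpan.FromSeconds(2.5) },
    { "boom",  TimeSpan.FromSeconds(5.0) },
};

private void PlayClip(MediaElement player, string name)
{
    player.Pause();
    player.Position = ClipStarts[name];   // seek to the clip's start
    player.Play();
    // Stop after the clip's known duration (e.g. with a DispatcherTimer) so
    // playback doesn't run into the next sample.
}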