MediaElement takes too much time during loading - WPF

I am new to WPF. I am using MediaElement to display video from the web. Everything works well except that it takes too much time to load. I use the following code to set the source and play the video:
Path_MediaPlayer.Source = new Uri(_videoPath, UriKind.RelativeOrAbsolute);
if (!IsVideoStart)
{
    progressImage.Visibility = Visibility.Visible;
    this.UpdateLayout();
}
Path_MediaPlayer.Play();
The original file size is around 85 MB. It takes around 7-10 minutes to buffer the data before playback starts. However, once the video has played, buffering takes much less time (around 30 seconds). So my question is: how can I minimize the first load time?
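For reference, MediaElement exposes events that report when the stream has actually opened and when it is buffering, so the progress image can track the real load state instead of a guess (this does not by itself shorten the initial buffering). A minimal sketch, assuming the same Path_MediaPlayer and progressImage names used above:

// Sketch: show the progress image until the stream has opened,
// and again whenever the player reports that it is buffering.
Path_MediaPlayer.MediaOpened += (s, e) => progressImage.Visibility = Visibility.Collapsed;
Path_MediaPlayer.BufferingStarted += (s, e) => progressImage.Visibility = Visibility.Visible;
Path_MediaPlayer.BufferingEnded += (s, e) => progressImage.Visibility = Visibility.Collapsed;
Path_MediaPlayer.MediaFailed += (s, e) =>
{
    // e.ErrorException describes why the stream could not be opened
    MessageBox.Show(e.ErrorException.Message);
};

Path_MediaPlayer.Source = new Uri(_videoPath, UriKind.RelativeOrAbsolute);
progressImage.Visibility = Visibility.Visible;
Path_MediaPlayer.Play();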

Related

Huge array of libSDL textures

I am developing an app that presents the user with a potentially very large user-generated image gallery, 10 or so images at a time.
The app is to be implemented in C using libSDL and 2D textures for accelerated rendering.
The overall gist of it in pseudocode is:
while cycle < MAX_CYCLES
    while i < MAX_STEPS
        show a gallery of 10 image thumbnails
        while (poll events)
            if event == user has pushed next
                break
        i++
    scramble image galleries using a genetic algorithm
    cycle++
I could load every image from disk at initialization time, creating all the required textures, so image presentation is fast. But of course this would be slow and potentially allocate a huge array of textures.
I will scale down the images for presentation, so this could mitigate the problem, but the total size of the collection depends on user preference. Surely I can cap the maximum value, but it cannot be small.
I was thinking about unloading every unused image at every step of every cycle, using SDL_FreeSurface and SDL_DestroyTexture. This would mean reloading the data from disk, recreating the surface and recreating the texture each time. Is this a viable approach?
Also I understand that SDL textures are stored in GPU memory, so the amount of available memory on the card should be my main concern. Am I right?
In summary, is there a recommended method to deal with this type of situation?
I would always keep 3 slides in memory:
Prev - Current - Next
While presenting the current slide, preload the next slide and unload slide (Current - 2), roughly as in the sketch below.
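A minimal sketch of that sliding window, assuming SDL2 with SDL_image; the load_texture() helper, the window_tex array and slide_paths are placeholder names, not part of the original code:

#include <SDL.h>
#include <SDL_image.h>

/* Load one slide from disk and upload it as a texture. */
static SDL_Texture *load_texture(SDL_Renderer *renderer, const char *path)
{
    SDL_Surface *surface = IMG_Load(path);
    if (!surface)
        return NULL;
    SDL_Texture *texture = SDL_CreateTextureFromSurface(renderer, surface);
    SDL_FreeSurface(surface);   /* the surface is no longer needed once the texture exists */
    return texture;
}

/* window_tex[0] = prev, window_tex[1] = current, window_tex[2] = next */
static SDL_Texture *window_tex[3];

/* Call when the user moves on to slide new_current. */
static void advance_slide(SDL_Renderer *renderer, const char **slide_paths,
                          int slide_count, int new_current)
{
    if (window_tex[0])
        SDL_DestroyTexture(window_tex[0]);   /* drop slide (current - 2) */

    window_tex[0] = window_tex[1];           /* old current becomes prev */
    window_tex[1] = window_tex[2];           /* old next becomes current */

    int next = new_current + 1;              /* preload the upcoming slide */
    window_tex[2] = (next < slide_count) ? load_texture(renderer, slide_paths[next]) : NULL;
}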
Also I understand that SDL textures are stored in GPU memory, so the amount of available memory on the card should be my main concern. Am I right?
Not quite: if the GPU driver deems it necessary, it will swap unused texture data out to system RAM.
For example, if you're presenting 10 images and thus keep 30 images in memory, then at 2K with alpha (1920 x 1080 x 4 bytes, about 8.3 MB per image) you will need approximately 250 MB.
As long as you don't run on an embedded system (or a very old, outdated one), this shouldn't be a big concern.

How to optimize the rendering in my three.js program

I've been working on a Rubik's Cube program, and the rendering works fine on my MacBook Pro in the Google Chrome browser. For computers with slower GPUs, however, my cube breaks because the rendering is slower than the animations. I've tried looking up ways to optimize the rendering and haven't had any success yet.
Rubik's Cube live link
Github code repository
I'm using React; the componentDidMount function is at line 1463, where the cube meshes get generated, and the animate function is at line 1539. I appreciate any help, thanks!
This all depends on how you're building your geometry. I ran a WebGL inspector on your live link, and I'm getting 182 drawcalls for a 3x3x3 cube. This is unnecessarily large, since a 3x3 should only have 54 faces. Each time you add a new Mesh to the scene, it creates a new drawcall on render, and having too many drawcalls is almost always the primary reason for slow performance in WebGL.
You should consider nesting each face into its corresponding "cubelet" (one black cube with 1, 2, or 3 colored faces), as done in this Rubik's demo. This way you'd only have to draw 27 cubes + 54 faces = 81 drawcalls! The pseudocode would go something like this:
for (let i = 0; i < 27; i++) {
  // cubeletGeom/blackMat and faceGeom/faceMat1..3 are shared geometries and
  // materials created once outside the loop
  let cubelet = new THREE.Mesh(cubeletGeom, blackMat); // the black body
  let face1 = new THREE.Mesh(faceGeom, faceMat1);      // up to 3 colored stickers
  let face2 = new THREE.Mesh(faceGeom, faceMat2);
  let face3 = new THREE.Mesh(faceGeom, faceMat3);
  cubelet.add(face1);
  cubelet.add(face2);
  cubelet.add(face3);
  scene.add(cubelet);
}
Of course, you'd need to set the positions and rotations of each face relative to its cubelet, and of each cubelet within the cube.
Secondly, you're going to have to rethink the way your animate() function is set up. You're starting the next rotation after a set time has elapsed, but you should instead start the next rotation only after the previous one is complete. Have you considered using an animation library like GSAP? A rough sketch of chaining the rotations that way follows.
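A sketch of that idea with GSAP 3; here pivot is assumed to be a THREE.Group that the nine cubelets of the turning layer have been re-parented into, and moveQueue is a hypothetical list of pending moves (neither name comes from the original code):

function playNextMove() {
  if (moveQueue.length === 0) return;
  const move = moveQueue.shift();  // e.g. { axis: "y", angle: Math.PI / 2 }

  gsap.to(pivot.rotation, {
    [move.axis]: pivot.rotation[move.axis] + move.angle,
    duration: 0.3,
    ease: "power2.inOut",
    onComplete: () => {
      // re-attach the cubelets from the pivot back to the scene here,
      // then start the following move only once this one has finished
      playNextMove();
    },
  });
}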

Load an image in multiple MovieClips using AS3

I have a particular question to ask all of you. I am creating an application in Flash that is a lot like this application: Zazzle Case Cover.
I am almost done with what I set out to do, but I am not experienced enough to handle all of it. I have listed some of the things I couldn't achieve; kindly help me if possible.
I know that in order to load an unlimited number of images into a MovieClip, we need an Array, but I am not sure how to structure it properly.
I have merged code from the internet so that it behaves like the application on that site for a single image in a single view, but when I try to addChild or display the same image in all the other views, I can't get the code right. It does not behave properly.
Last but not least, I am confused about displaying BitmapData in AS3. I want to show the uploaded panel image in the thumbnail area below, but I am not sure how.
To put the above problems as questions:
How do I load an unlimited number of images into a MovieClip using an Array?
Is it possible to display the same image in two MovieClips simultaneously using addChild?
There is a lot more code, but the part below relates to the second question and may even contain its answer, though I am not sure.
function onMovieClipLoaderComplete(event:Event):void
{
    // Hide progress bar
    progressBar.visible = false;
    var loadedContent:DisplayObject = event.target.content;
    var loader:Loader = event.target.loader as Loader;
    loadedContent.x = -37.625;
    loadedContent.y = -37.625;
    loadedContent.width = 75.25;
    loadedContent.height = 75.25;
    trace("loadedContent.x=" + loadedContent.x);
    trace("loadedContent.y=" + loadedContent.y);
    mcOnStage = true;
    con1.container.addChild(loader);
    clears.addEventListener(MouseEvent.CLICK, removeMC);
    function removeMC(event:MouseEvent):void
    {
        trace("Its Removed");
        if (mcOnStage)
        {
            con1.container.removeChild(loader);
            con1.textcontainer.removeChild(txt);
            mcOnStage = false;
        }
    }
}
"con1.container.addChild(loader);"
Can i add "con1.container2.addChild(loader);" for the same loaded image.
How do I clone a MovieClip's BitmapData and display it in another area or MovieClip?
Guide me if possible...
I have included the SWF file along with this Question...
https://docs.google.com/file/d/0B5jnHM1zpP4MOHRCeWFqX05sSTA/edit?usp=sharing
Could someone check the first site and give me some notes on how I can bring all those modules into this Flash AS3-based application?
Here's how you would display the same image twice, with reference to the code you included in your post:
//here's your code
var loadedContent:DisplayObject=event.target.content as DisplayObject;
//create a bitmap data instance the same size as the loaded content
var transparent:Boolean = true;
var fillColor:uint = 0xFFFFFFFF;
var bitmapData:BitmapData = new BitmapData(loadedContent.width, loadedContent.height, transparent, fillColor);
//draw the loaded content into the bitmap data
bitmapData.draw( loadedContent );
//create new bitmap
var bitmap:Bitmap = new Bitmap( bitmapData);
//add the loaded content
con1.container.addChild(loader);
//add your 'cloned' content
con1.container2.addChild( bitmap );

Which video encoding algorithm should I use for a video with just one static image and sound?

I'm doing video processing tasks and one of the problems I need to solve is choosing the appropriate encoding algorithm for a video that has just one static image throughout the entire video.
I have tried several codecs, such as DivX and XviD, but they produce a 3 MB video for a 1-minute clip. The audio is 64 kbit/s MP3, so the audio takes just 480 KB; the video alone is 2.5 MB!
As the image in the video never changes, it could be compressed very efficiently, since there is no motion. The image itself (a JPG) is just 50 KB.
So ideally I'd expect this video to be about 550-600 KB, not 3 MB.
Any ideas about how I could optimize the video so it's not that huge?
I hope this is the right stackexchange forum to ask this question.
Set the frames-per-second to be very low. Lower than 1fps if you can. Your goal would be to get as close to two keyframes (one at the start, and one at the end) as possible.
Whether you can do this depends on the scheme/codec you are using, and also the encoder.
Many codecs will have keyframe-related options. For example, here are some open-source encoders:
lavc (libavcodec):
keyint=<0-300> - maximum interval between keyframes in frames (default: 250 or one keyframe every ten seconds in a 25fps movie.
This is the recommended default for MPEG-4). Most codecs require regular keyframes in order to limit the accumulation of mismatch error. Keyframes are also needed for seeking, as seeking is only possible to a keyframe - but keyframes need more space than other frames, so larger numbers here mean slightly smaller files but less precise seeking. 0 is equivalent to 1, which makes every frame a keyframe. Values >300 are not recommended as the quality might be bad depending upon decoder, encoder and luck. It is common for MPEG-1/2 to use values <=30.
xvidenc:
max_key_interval= - maximum interval between keyframes (default: 10*fps)
Interestingly, this solution may reduce the ability to seek in the file, so you will want to test that.
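For illustration, here is a roughly equivalent setup using ffmpeg (a different tool than the encoders quoted above; the file names are placeholders): a 1 fps still-image video with a large keyframe interval, muxed with the existing MP3.

# Sketch only: loop a single still image at 1 fps, keep keyframes up to 300 frames apart,
# copy the existing MP3 audio unchanged, and stop when the audio ends.
ffmpeg -loop 1 -framerate 1 -i cover.jpg -i audio.mp3 \
       -c:v libx264 -tune stillimage -g 300 -pix_fmt yuv420p \
       -c:a copy -shortest output.mp4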
I think this problem is related to the implementation of the video encoder, not the video encoding standard itself.
Most video encoder implementations are not designed for videos of a static image, so they will not produce the ideal bitstream we might imagine when such a video is fed in; they are designed for processing "natural" video.
If you really need a better encoding result for a static-image video, you could hack an open-source video encoder so that, from the second frame on, all macroblocks (MBs) are marked as "skip"...

Showing processed images from an IP camera

I have an IP camera that serves images. These images are processed via EmguCV, and then I want to display the processed images.
To show the images, I use this code:
Window1()
{
    ...
    this.Dispatcher.Hooks.DispatcherInactive
        += new EventHandler(Hooks_DispatcherInactive);
}

Hooks_DispatcherInactive(...)
{
    Next();
}
Next() then calls the image processing methods and (should) display the image:
MatchResult? result = survey.Step();
if (result.HasValue)
{
    Bitmap bit = result.Value.image.Bitmap;
    ImageSource src = ConvertBitmap(bit);
    show.Source = src;
    ...
}
This works fine when I hook up a normal 30 fps webcam. But the IP camera's images take over a second to arrive (also when I access them via a browser), and in the meantime WPF shows nothing, not even the previous image that was processed.
How can I get WPF to at least show the previous image?
You can copy the image's buffer into a new BitmapSource of the same format (PixelFormat, Height, Width, Stride) using BitmapSource.Create (from an Array or from an IntPtr) and display that BitmapSource in WPF's Image control; see the sketch below,
or you can use DirectX to do that faster (for 30 fps, and certainly for 1 fps, the BitmapSource approach should do).
Also, consider NOT using events in the view; use bindings and commands instead.
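A minimal sketch of the BitmapSource.Create route, assuming the frames come out of EmguCV as 24-bit BGR System.Drawing.Bitmap objects (the ToBitmapSource name is just for illustration):

using System.Drawing;
using System.Drawing.Imaging;
using System.Windows.Media;
using System.Windows.Media.Imaging;

static BitmapSource ToBitmapSource(Bitmap bitmap)
{
    // Lock the GDI+ bitmap to get a pointer to its pixel buffer and its stride.
    var rect = new Rectangle(0, 0, bitmap.Width, bitmap.Height);
    BitmapData data = bitmap.LockBits(rect, ImageLockMode.ReadOnly,
                                      System.Drawing.Imaging.PixelFormat.Format24bppRgb);
    try
    {
        // Copy the buffer into a WPF BitmapSource of the same size and format.
        BitmapSource source = BitmapSource.Create(
            bitmap.Width, bitmap.Height,
            96, 96,                       // DPI
            PixelFormats.Bgr24, null,
            data.Scan0,                   // pointer to the locked pixels
            data.Stride * bitmap.Height,  // buffer size in bytes
            data.Stride);
        source.Freeze();                  // frozen, so it can be handed across threads
        return source;
    }
    finally
    {
        bitmap.UnlockBits(data);
    }
}

// Usage inside Next():
// show.Source = ToBitmapSource(result.Value.image.Bitmap);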
