MP4 VJ animation video lagging - hi-res looping video

I am trying to get a video to play inside a video tag at the top left-hand corner of my page. It loads OK, the resolution is good, and it seems to be looping, but it is lagging badly, definitely not achieving 60fps. It is in MP4 format, and the resolution of the original MP4 is 1920x1080; it is a high-resolution free VJ loop called GlassVein (you can see it if you search on YouTube). Right-clicking it and checking Properties brings up the following information:
Bitrate: 127 kbps
Data rate: 11270 kbps
Total bitrate: 11398 kbps
Audio sample rate: 44 kHz
File type: VLC media file (.mp4)
(but I do not want or need the audio)
It also says 30fps, but I'm not sure I believe this, as it runs smooth as butter in VLC media player: no lagging, just a smooth looping animation.
I have searched https://trac.ffmpeg.org/wiki/Encode/AAC for encoding information, but it is complete gobbledygook to me; I don't understand a word it's saying.
My code so far is as follows:
<video src="GlassVeinColorful.mp4" autoplay="1" preload="auto"
-movflags class="Vid" width="640" height="360" loop="1" viewport=""
faststart mpeg4 -s 320x240 -r 1080 -b 128k>
</video>
Does anyone know why this is lagging so much, or what I could do about it?
It is a quality animation and I don't really want to lose any of its resolution or crispness. The -s section was originally set to 1920x1080, as that is what the original file is, but I have changed it to try and get it to render quicker...
Any helpful sites, articles or answers would be great.
2020 Update
The solution to this problem was to convert the video to WebM, then use JavaScript and an HTML5 canvas element to render the video to the page, instead of using the video tag to embed the video.
HTML
<section id="Theater">
<video width="684" muted loop autoplay>
<source src="Imgs/Vid/PurpGlassVein.webm" type="video/webm">
</video>
<canvas style="filter:opacity(0);"></canvas>
</section><!-- Closing Section for the Header -->
CSS
video{
display:none !important;
visibility:hidden;
}
JavaScript
const Canv = document.querySelector("canvas");
const Video = document.querySelector("video");
const Ctx = Canv.getContext("2d");
Video.addEventListener('play', () => {
  // Copy each video frame onto the canvas on every animation frame.
  function step() {
    Ctx.drawImage(Video, 0, 0, Canv.width, Canv.height);
    requestAnimationFrame(step);
  }
  requestAnimationFrame(step);
});

// Fade the canvas into the page on load using the Web Animations API.
Canv.animate({
  filter: ['opacity(0) blur(5.28px)', 'opacity(1) blur(8.20px)']
}, {
  duration: 7288,
  fill: 'forwards',
  easing: 'ease-in',
  iterations: 1,
  delay: 728
});
I've also used the vanilla JavaScript .animate() API (the Web Animations API) to fade the element into the page when the page loads. One caveat is that both the canvas and the off-screen video tag must match the original video's resolution, otherwise it starts to lag again; however, you can use CSS to scale it down via transform: scale(0.5);, which doesn't seem to affect performance at all.
It runs smooth as butter and doesn't lose any of the high-resolution image.
I added a slight 0.34px blur onto it as well to smooth it even more.
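To illustrate that caveat, here is a minimal sketch (reusing the Canv and Video names from the code above; the 0.5 scale factor is just an example) of matching the canvas to the video's native resolution and then downscaling it visually with CSS:

Video.addEventListener('loadedmetadata', () => {
  // Match the canvas backing store to the video's native resolution...
  Canv.width = Video.videoWidth;    // e.g. 1920
  Canv.height = Video.videoHeight;  // e.g. 1080
  // ...then shrink it on screen with CSS only, so drawImage still
  // runs at full resolution but the element takes up less space.
  Canv.style.transform = 'scale(0.5)';
  Canv.style.transformOrigin = 'top left';
});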
I possibly could still have used ffmpeg to get a better [smaller file size] WebM output file, but that's something I'll have to look into at a later date.
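For anyone heading down that route, something like the following ffmpeg invocation is a reasonable starting point (a sketch, assuming a VP9-capable ffmpeg build; the CRF value is a quality/size trade-off to tune yourself):

ffmpeg -i GlassVeinColorful.mp4 -an -c:v libvpx-vp9 -b:v 0 -crf 32 GlassVeinColorful.webm

Here -an drops the unwanted audio track, and -b:v 0 combined with -crf enables VP9's constant-quality mode.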

Video over IP connections is subject to network conditions, and 60fps at that resolution is quite a high quality to try to maintain without any delay or buffering.
Most 'serious' video services, including YouTube, Netflix, etc., provide multiple bit-rate streams to allow for different network conditions and different device capabilities.
Because clients download the video chunk by chunk, they can switch between the streams mid-video, choosing the best resolution possible for the current network conditions each time they request a new chunk.
See here for an example: https://stackoverflow.com/a/42365034/334402
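As a rough illustration of that client-side switching, here is a minimal sketch using the hls.js library (the manifest URL is a placeholder; hls.js measures throughput and picks a quality level per chunk automatically):

import Hls from 'hls.js';

const video = document.querySelector('video');
if (Hls.isSupported()) {
  const hls = new Hls();
  hls.loadSource('https://example.com/stream/master.m3u8'); // placeholder URL
  hls.attachMedia(video); // hls.js now switches levels chunk by chunk
}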

I recently went back to this project and went back over the code. I found that converting the video to WebM and using the HTML canvas element to display the VJ loop made the performance 10x better. I will upload the code for writing the data to the canvas when I can find it; my projects folder is kinda messy and unorganised.
The main idea, though, is having an offscreen canvas with display:none, and then reading that data into another canvas that is displayed on the screen; a minimal sketch of the idea follows below.
This seems to have fixed the issue that I was facing.
See the above edit [in the question] if you are facing any of the same issues or problems.
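Here is that sketch (element names are assumptions; the real code is in the question's 2020 update):

// Hidden <video> -> offscreen canvas -> visible canvas.
const video = document.querySelector('video');       // display:none in CSS
const visible = document.querySelector('canvas');
const offscreen = document.createElement('canvas');  // never added to the DOM
const offCtx = offscreen.getContext('2d');
const visCtx = visible.getContext('2d');

video.addEventListener('loadedmetadata', () => {
  // Both canvases must match the video's native resolution to avoid lag.
  offscreen.width = visible.width = video.videoWidth;
  offscreen.height = visible.height = video.videoHeight;
});

(function step() {
  offCtx.drawImage(video, 0, 0, offscreen.width, offscreen.height);
  visCtx.drawImage(offscreen, 0, 0); // copy the offscreen frame on-screen
  requestAnimationFrame(step);
})();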

Related

How to get the duration of a YouTube video without playing the video in ReactJS

I am a newbie in ReactJS and I badly need some help.
I have a video catalog that only shows the thumbnails of the videos, with a label and an overlaid duration. I was previously using React-Player by Pete Cook, but I don't want my video player to have share, like and watch-later buttons, so I decided to drop it and use video-player instead. I just use an Image tag to show the thumbnail, and I pass the YouTube link to the video player when the image is clicked.
Now my problem is that I am having a hard time getting the video duration. When I was still using React-Player, I could get it after clicking the play button (not the result that I want, but at least I was able to get the duration). Any solution for this?
Your problem is that you're providing a YouTube URL to the video-player library's Player component, which clearly doesn't support it, as mentioned here.
I tried it myself: you'll get no duration value, nor will the video play.
However, if you change your YouTube link to a direct video file, for example https://www.w3schools.com/html/movie.mp4,
you'll find that everything works perfectly, and you can get the duration from the Player's state without even playing the video.
const { player } = this.player.getState();
console.log(player.duration);
I added these two lines to a method called changeSource, mentioned here under the Examples title.
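For context, a minimal sketch of how that might look (the sources map and the this.player ref are assumptions based on the library's examples, and the duration only becomes available once the video's metadata has loaded):

// Sketch only: swap the source, reload, then read the player state.
changeSource = (name) => {
  this.setState({ source: sources[name] }); // `sources` is hypothetical
  this.player.load();
  const { player } = this.player.getState();
  console.log(player.duration); // NaN until metadata has loaded
};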

How to display lightmaps in the app creator

Running on a MacBook with OS X (Yosemite) and the Chrome browser,
I am trying to view an Archilogic model:
Model Used
in https://appcreator.3d.io/
but the result doesn't seem to display the same lightmap mapping on the interiors:
https://app.3d.io/lLOkYR
This might be due to the fact that the lighting system in the Spaces editor is different from the lighting system in A-Frame. I recommend adjusting the lighting in your A-Frame scene using the following parameters (search for these values inside your HTML code in the app creator):
lightMapIntensity: 1.887; lightMapExposure: 0.55
You can also adjust the overall lighting intensity by modifying:
<a-scene io3d-lighting="intensity:0.9">

Rendering problems when saving multiple Dygraphs as a PNG on mobile devices

I have a web application that uses Dygraphs to create charts.
The application allows a user to create multiple Dygraph charts (each with its own Y-axis) that are stacked on top of each other.
Here's an example of what the multiple Dygraphs look like in a PC browser: notice that the example displays three different Dygraphs, each having its own Y-axis, but the X-axis is hidden for the top two charts and visible on the bottom chart.
I will allow the user to save the chart to disk as a PNG. The way I currently save the multiple Dygraphs as one PNG is (a rough code sketch follows the steps):
Create a target canvas that will be used to contain all the visible Dygraphs
Extract each canvas out of each Dygraph, then add each canvas to the target canvas **
Create a PNG via the .toDataURL() function on the target canvas
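In rough code, that procedure looks something like this (a sketch; the selector is an assumption about where each Dygraph's canvas lives in your DOM):

// Sketch of the three steps: stack each chart's canvas onto one target canvas.
const charts = Array.from(document.querySelectorAll('.chart canvas')); // assumed selector
const target = document.createElement('canvas');
target.width = charts[0].width;
target.height = charts.reduce((h, c) => h + c.height, 0);
const ctx = target.getContext('2d');
let y = 0;
for (const c of charts) {
  ctx.drawImage(c, 0, y); // draw each chart below the previous one
  y += c.height;
}
const png = target.toDataURL('image/png');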
Here's an example of what the above screenshot looks like when saved as one PNG: (This is exactly what I want from the PNG)
The procedure works fine on browsers on a PC. But when I attempt to save the multiple Dygraphs into one PNG on a phone/tablet browser, the resultant PNG doesn't match the graph that is visible on the screen.
Example:
Here's what the multiple Dygraphs look like on an iPad (screenshot)
And here's what the resultant PNG looks like (notice how the width and height of each chart do not match the actual iPad display).
I don't understand why the PNG is rendered correctly when I use a PC browser, but is not rendered correctly when I use a browser on a mobile device.
I'm not sure if this problem is due to limitations of the Canvas.toDataURL() function or if this is a Dygraphs problem or something else. I'm fishing for advice that may point me in the right direction and/or shed light on this particular problem.
**I should mention that I use Juan Manuel Caicedo Carvajal's Dygraph-Export extension
I'm guessing the problem occurs because the generated canvas isn't rendered fully on the iPad's responsive (high-DPI) screen.
You can try to export the original canvas (instead of generating a new one with the said library) yourself with toDataURL: https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement/toDataURL
Dygraphs generates two canvases, one for the legend and one for the actual graph, and lays them on top of each other. So make sure you choose the right one (not the _hidden_canvas). If that works, you can draw the legend onto the graph canvas with canvas.drawImage(otherCanvas); a short sketch follows below.
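Something like this (a sketch; the two canvas references are assumptions you would resolve by inspecting the Dygraph's container):

// Merge the legend canvas onto the graph canvas, then export that one directly.
const ctx = graphCanvas.getContext('2d'); // the non-hidden Dygraph canvas
ctx.drawImage(legendCanvas, 0, 0);        // overlay the legend
const png = graphCanvas.toDataURL('image/png');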
How to Copy Contents of One Canvas to Another Canvas Locally
Hope this helps. Keep me updated!
My workaround/hack for the problem stated in my OP was to make a change to the Dygraph source in the Dygraph.getContextPixelRatio function.
Notice in the code below that I set devicePixelRatio = 1
dygraph-combined.js
Dygraph.getContextPixelRatio = function (context) {
  try {
    //var devicePixelRatio = window.devicePixelRatio;
    var devicePixelRatio = 1; // Hack!!!
    var backingStoreRatio = context.webkitBackingStorePixelRatio ||
                            context.mozBackingStorePixelRatio ||
                            context.msBackingStorePixelRatio ||
                            context.oBackingStorePixelRatio ||
                            context.backingStorePixelRatio || 1;
    if (devicePixelRatio !== undefined) {
      return devicePixelRatio / backingStoreRatio;
    } else {
      // At least devicePixelRatio must be defined for this ratio to make sense.
      // We default backingStoreRatio to 1: this does not exist on some browsers
      // (i.e. desktop Chrome).
      return 1;
    }
  } catch (e) {
    return 1;
  }
};
In my case, this hack fixed my problem (stated in the OP) and didn't negatively affect any other parts of my application that use Dygraphs. That said, if you find a better/correct way to fix the problem stated in the OP, please share.

Recording a video with audio using GPUImageMovieWriter without jerkiness at start and end of recording?

I am using the GPUImage framework to record multiple videos one after another in close intervals, with various filters enabled in real time, using GPUImageVideoCamera and GPUImageMovieWriter.
When I record a video, it starts with a jerk (a freeze for half a second) and ends with a jerk as well. I know the reason behind this is the statements in which I pass the movieWriter object to the VideoCamera's audioEncodingTarget.
So in my case, when I record multiple videos one after another (with different objects of GPUImageMovieWriter), the video preview view freezes at the start and end of each recording.
If I remove the audio encoding target statement, the situation improves significantly, but of course I don't get the audio.
Currently I am using an AVAudioRecorder to save audio tracks while recording, but I believe this is not an ideal workaround.
Is there any way to solve this problem?
I looked at the RosyWriter example by Apple; their app works almost identically, but smoothly, at an almost constant 30 fps. I tried to use the RosyWriter code (after removing the code that adds the purple effect) to save the required videos while showing GPUImageVideoCamera's filtered view to the user, but in vain: the unmodified RosyWriter code records just two videos, and the rest of the videos fail. I also tried to pass the capture session from GPUImageVideoCamera into the RosyWriter code, but I only get videos with black frames and no audio.
Please help with how I can record GPUImage-filtered videos with audio without this jerkiness. Thanks in advance.
I faced the same issue, and here is my workaround.
As you pointed out, this problem happens because the setAudioEncodingTarget method internally calls addAudioInputsAndOutputs to set the audio input/output on the capture session.
To avoid this issue, I created a justSetAudioEncodingTarget method for the VideoCamera, as below
(on GPUImageVideoCamera.m)
// just set
- (void)justSetAudioEncodingTarget:(GPUImageMovieWriter *)newValue {
    if (newValue == nil) {
        return;
    }
    addedAudioInputsDueToEncodingTarget = YES;
    [super setAudioEncodingTarget:newValue];
}
The following steps are my scenario, and I have checked that it works smoothly (a usage sketch follows below):
Call the VideoCamera's addAudioInputsAndOutputs right after the VideoCamera is created.
This is not right before starting the recording. :)
Set the MovieWriter on the VideoCamera via the justSetAudioEncodingTarget method I made above.
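In code, that ordering looks roughly like this (a sketch; the session preset, outputURL and size are placeholders):

// 1. Create the camera and wire up the audio I/O immediately.
GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720
                                        cameraPosition:AVCaptureDevicePositionBack];
[videoCamera addAudioInputsAndOutputs];

// 2. Later, for each recording, attach the writer without re-adding audio I/O.
GPUImageMovieWriter *movieWriter =
    [[GPUImageMovieWriter alloc] initWithMovieURL:outputURL
                                             size:CGSizeMake(1280.0, 720.0)];
[videoCamera justSetAudioEncodingTarget:movieWriter];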

HTML5 Video Player to play a partial video file from browsers File System

I have written some HTML5 (JavaScript) code to download a video file from a set of servers. I am able to do this with the FileSystem API, which is currently only implemented in Chrome.
After I download and merge all the pieces, I then create an instance of the HTML5 player:
var video = document.createElement('video');
video.src = fileEntry.toURL();
video.autoplay = true;
console.log(fileEntry.toURL());
document.body.appendChild(video);
This works fine.
But now I want to make my player start playing the file after, say, 20 of 300 pieces are written. When I do so, the player starts playing the file but stops after the part covered by the 20th piece, and if I drag the player's progress bar forward or backward a bit, it plays the rest.
Is there some way of fixing this, so it plays smoothly without manual intervention from the user?
