I would like to append a short outro video to a list of videos, or add a "watermark" that covers the entire video during the last 5 seconds. How can I achieve this? I put ffmpeg in the title, but if there is another free way of doing it in bulk then I am all ears. My OS is Windows 10.
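To show roughly what I mean, here is the shape of command I imagine for both variants (file names and times are placeholders, cover.png is assumed to match the video dimensions, and the overlay start time would have to be computed per file):

REM measure the duration in seconds, e.g. 120.0
ffprobe -v error -show_entries format=duration -of csv=p=0 input.mp4

REM show cover.png on top of everything for the last 5 seconds (120 - 5 = 115)
ffmpeg -i input.mp4 -i cover.png -filter_complex "[0:v][1:v]overlay=0:0:enable='gte(t,115)'" -c:a copy marked.mp4

REM alternatively, append an outro without re-encoding; list.txt contains
REM   file 'input.mp4'
REM   file 'outro.mp4'
REM (this only works if both clips share codec, resolution and frame rate)
ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4

For bulk use, these commands could be wrapped in a batch or PowerShell loop that feeds each file's measured duration into the enable expression.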
Thanks!
I am trying to get a video to play inside a video tag at the top left-hand corner of my page. It loads OK, the resolution is good and it seems to be looping, but it is lagging badly, definitely not achieving 60fps. It is in MP4 format, and the resolution of the original MP4 is 1920x1080; it is a high-resolution free VJ loop called GlassVein (you can see it if you search on YouTube). Right-clicking and choosing Properties gives the following information:
Bitrate: 127 kbps
Data rate: 11270 kbps
Total bitrate: 11398 kbps
Audio sample rate: 44 kHz
File type: VLC media file (.mp4)
(But I do not want or need the audio.)
It also says 30fps, but I'm not sure I believe that, as it runs smooth as butter in VLC media player: no lagging, just a smooth looping animation.
I have searched https://trac.ffmpeg.org/wiki/Encode/AAC for encoding information, but it is complete gobbledygook to me; I don't understand a word it's saying.
My code so far is as follows:
<video src="GlassVeinColorful.mp4" class="Vid"
       width="640" height="360"
       autoplay loop muted preload="auto">
</video>
Does anyone know why this is lagging so much, or what I could do about it?
It is a quality animation and I don't really want to lose any of its resolution or crispness. The width and height were originally set to 1920x1080, as this is what the original file is, but I have changed them to try and get it to render quicker...
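For example, I am guessing that pre-shrinking the file with something like this ffmpeg command might help (the settings here are just a guess, and -an drops the audio I don't need anyway):

ffmpeg -i GlassVeinColorful.mp4 -an -vf scale=1280:720 -c:v libx264 -movflags +faststart GlassVein720.mp4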
Any helpful sites, articles or answers would be great.
2020 Update
The solution to this problem was to convert the video to WebM, then use JavaScript and an HTML5 canvas element to render the video to the page instead of using the video tag to embed it.
HTML
<section id="Theater">
  <video width="684" muted loop autoplay>
    <source src="Imgs/Vid/PurpGlassVein.webm" type="video/webm">
  </video>
  <canvas style="filter:opacity(0);"></canvas>
</section><!-- Closing Section for the Header -->
CSS
video {
  display: none !important;
  visibility: hidden;
}
JavaScript
const Canv = document.querySelector("canvas");
const Video = document.querySelector("video");
const Ctx = Canv.getContext("2d");

Video.addEventListener('play', () => {
  function step() {
    Ctx.drawImage(Video, 0, 0, Canv.width, Canv.height);
    requestAnimationFrame(step);
  }
  requestAnimationFrame(step);
});
Canv.animate({
  filter: ['opacity(0) blur(5.28px)', 'opacity(1) blur(8.20px)']
}, {
  duration: 7288,
  fill: 'forwards',
  easing: 'ease-in',
  iterations: 1,
  delay: 728
});
I've also used the vanilla JavaScript .animate() API to fade the element into the page when the page loads. One caveat is that both the canvas and the off-screen video tag must match the original video's resolution, otherwise it starts to lag again; however, you can use CSS to scale it down via transform: scale(0.5), which doesn't seem to affect performance at all.
It runs smooth as butter and doesn't lose any of the high-resolution image.
I added a slight 0.34px blur onto it as well to smooth it even more.
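The scale-down mentioned above would look something like this in CSS (the selector is assumed):

canvas {
  transform: scale(0.5);        /* halve the displayed size */
  transform-origin: top left;   /* keep it pinned to the corner */
}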
I could possibly still have used ffmpeg to get a better (smaller file size) WebM output file, but that's something I'll have to look into at a later date.
Video over IP connections is subject to network conditions, and 60fps at that resolution is quite a high quality to try to maintain without any delay or buffering.
Most 'serious' video services, including YouTube, Netflix etc., provide multiple bit-rate streams to allow for different network conditions and different device capabilities.
Clients can switch between the streams as they download the video chunk by chunk, so they can choose the best resolution possible for the current network conditions whenever they request a new chunk.
See here for an example: https://stackoverflow.com/a/42365034/334402
I recently went back to this project and went back over the code. I found that converting the video to WebM and using the HTML canvas element to display the VJ loop has made the performance 10x better. I will upload the code for writing the data to the canvas when I can find it; my projects folder is kinda messy and unorganised.
The main idea, though, is having an off-screen canvas with display:none, and then reading that data into another canvas that is displayed on the screen.
This seems to have fixed the issue that I was facing.
See the above edit [in the question] if you are facing any of the same issues or problems.
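Until I find the original, here is a minimal sketch of the two-canvas idea (element names and page setup are assumed):

const video = document.querySelector("video");
const offscreen = document.createElement("canvas"); // never added to the DOM
const onscreen = document.querySelector("canvas");
const offCtx = offscreen.getContext("2d");
const onCtx = onscreen.getContext("2d");

video.addEventListener("loadedmetadata", () => {
  // both canvases must match the source resolution, as noted above
  offscreen.width = onscreen.width = video.videoWidth;
  offscreen.height = onscreen.height = video.videoHeight;
});

video.addEventListener("play", () => {
  function step() {
    offCtx.drawImage(video, 0, 0);    // draw the current frame off-screen
    onCtx.drawImage(offscreen, 0, 0); // copy it to the visible canvas
    requestAnimationFrame(step);
  }
  requestAnimationFrame(step);
});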
I have a list of files (videos and images) I would like to show on the screen using GStreamer 1.0, meaning iterating over the elements (file paths) in the list and "playing" them sequentially in the C application, with delays in between. I have tried different examples which partly work, but I cannot put the whole picture together into an implementation.
So what is the conceptual solution for this? Should I use one "dynamic" pipeline or two (one for images, because I think imagefreeze is necessary before videoconvert there, and one for videos)? And how can I use decodebin to detect the format of the media automatically? decodebin works from the command line, but in the C application I get errors like "no video decoder found for 'jpeg'".
Try to make a universal pipeline (or two, one for videos and one for images), i.e. you feed it any file from your list and get video or image output. The pipeline(s) should work from gst-launch. After that, try to implement the pipeline in C code, or post your pipeline here.
My way:
Take a file from the list. If it is an image, create an image-decode pipeline; if it is a video, create a video-decode pipeline. Delete the pipeline. Delay. Go to the next file. A sketch of this loop follows.
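Here is a minimal C sketch of that loop, assuming gst-launch-style pipeline descriptions; the element choices (jpegdec, imagefreeze, uridecodebin) and the file paths are placeholders:

#include <gst/gst.h>

/* Build a pipeline from a textual description, run it, and tear it down.
   Images are shown for a fixed time; videos run until EOS. */
static void play_one(const char *desc, GstClockTime how_long)
{
    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(desc, &err);
    if (pipeline == NULL) {
        g_printerr("parse error: %s\n", err->message);
        g_clear_error(&err);
        return;
    }
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, how_long,
            GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
    if (msg != NULL)
        gst_message_unref(msg);
    gst_object_unref(bus);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
}

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);
    /* placeholder entries; pick the description per file extension */
    play_one("filesrc location=pic.jpg ! jpegdec ! imagefreeze "
             "! videoconvert ! autovideosink", 5 * GST_SECOND);
    play_one("uridecodebin uri=file:///path/clip.mp4 "
             "! videoconvert ! autovideosink", GST_CLOCK_TIME_NONE);
    return 0;
}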
I've got a link like this: "http://trwamtv.live.e96-jw.insyscd.net/trwamtv.smil/playlist.m3u8" and I would like to stream it inside a MediaElement. Does anyone know how to do it?
The MediaElement doesn't support M3U files inherently. M3Us are playlists, not media files (http://en.wikipedia.org/wiki/M3U). It may be possible to drill down through the playlist files until you get to actual media files, queue these, then have the MediaElement play them.
The playlist file you've provided contains three separate links to different resolution versions of the same video clip:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:BANDWIDTH=950000,RESOLUTION=960x540
chunklist_b950000.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=500000,RESOLUTION=640x360
chunklist_b500000.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=250000,RESOLUTION=320x180
chunklist_b250000.m3u8
Notice that each of the 'chunklist' references is to a further M3U playlist. Replacing 'playlist.m3u8' in your link with 'chunklist_b950000.m3u8' yields a further playlist file, which contains references to three MPEG-TS files:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-ALLOW-CACHE:NO
#EXT-X-TARGETDURATION:12
#EXT-X-MEDIA-SEQUENCE:21565
#EXTINF:10.0,
media-urj3zh3ic_b500000_21565.ts
#EXTINF:10.0,
media-urj3zh3ic_b500000_21566.ts
#EXTINF:10.0,
media-urj3zh3ic_b500000_21567.ts
The WPF MediaElement doesn't support Transport Stream (.ts) files, so unfortunately you'll need to look elsewhere for a solution. I'd recommend looking into Vlc.Dotnet, the WPF MediaKit, MPlayer.Net, or some other third-party media control.
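For what it's worth, the drill-down itself is easy to do programmatically. A rough sketch of the playlist-walking logic (Python here purely to illustrate the idea; the requests library is assumed):

import requests
from urllib.parse import urljoin

def drill(url):
    """Follow nested M3U playlists until the entries point at media files."""
    media = []
    for line in requests.get(url).text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue                      # skip blank lines and #EXT tags
        target = urljoin(url, line)       # entries may be relative
        if line.endswith('.m3u8'):
            media += drill(target)        # nested playlist: recurse
        else:
            media.append(target)          # an actual media chunk (.ts here)
    return media

chunks = drill("http://trwamtv.live.e96-jw.insyscd.net/trwamtv.smil/playlist.m3u8")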
I created the music in create() like this:
music_background = Gdx.audio.newMusic(Gdx.files.internal("background_music.mp3"));
music_background.setLooping(true);
The problem is that it's not playing in a loop.
I also tried without the loop, registering a setOnCompletionListener instead, but then it doesn't play either. When I tried to reload the file like this:
music_background = Gdx.audio.newMusic(Gdx.files.internal("background_music.mp3"));
inside the event, it worked, but only one time.
I think the problem is that when it's done playing, the file disposes itself...
How can I play music in a loop? What am I doing wrong?
You are doing it correctly, but MP3s are not good for looping; use OGG instead. MP3 adds a short silence at the start of the file, a limitation OGG and WAV don't have.
Here is my code that works perfectly:
menuMusic = Gdx.audio.newMusic(Gdx.files.internal("data/sounds/music_menu.ogg"));
menuMusic.setLooping(true);
menuMusic.play();
If you have all your files in MP3, just download Audacity, import your MP3s, edit away the blank audio and export as OGG.
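If you have to stay with MP3, you can also restart the track by hand from a completion listener, as mentioned in the question. A rough sketch, assuming the Music instance is kept alive and never disposed:

music_background.setLooping(false);
music_background.setOnCompletionListener(new Music.OnCompletionListener() {
    @Override
    public void onCompletion(Music music) {
        music.play(); // restart the same instance instead of reloading the file
    }
});
music_background.play();

Note that the MP3 silence will still be audible at every loop point, which is why converting to OGG remains the better fix.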
I am using the GPUImage framework to record multiple videos one after another at close intervals, with various filters enabled in real time, using GPUImageVideoCamera and GPUImageMovieWriter.
When I record a video, it starts with a jerk (a freeze for half a second) and ends with one as well. I know the reason behind this is the statement in which I pass the movieWriter object as the VideoCamera's audioEncodingTarget.
So in my case, when I record multiple videos one after another (with different GPUImageMovieWriter objects), the video preview view freezes at the start and end of each recording.
If I remove the audio encoding target statement, things improve significantly, but of course I don't get the audio.
Currently I am using an AVAudioRecorder while recording to save the audio track, but I believe this is not an ideal workaround.
Is there any way to solve this problem?
-- I looked at the RosyWriter example by Apple; their app works almost the same way, but smoothly, at an almost constant 30 fps. I tried to use the RosyWriter code (after removing the code that adds the purple effect) to save the required videos while showing GPUImageVideoCamera's filtered view to the user, but in vain: applied unmodified, the RosyWriter code records just two videos and the rest fail. I also tried to pass the capture session from GPUImageVideoCamera into the RosyWriter code, but I only get videos with black frames and no audio.
Please help with how I can record GPUImage-filtered videos with audio without this jerkiness. Thanks in advance.
I faced the same issue and here is my workaround.
As you pointed out, this problem happens because the setAudioEncodingTarget method internally calls addAudioInputsAndOutputs to add audio input/output to the capture session.
To avoid this, I created a justSetAudioEncodingTarget method for the VideoCamera, as below (in GPUImageVideoCamera.m):
// just set the encoding target; do not touch the capture session here
- (void)justSetAudioEncodingTarget:(GPUImageMovieWriter *)newValue {
    if (newValue == nil) {
        return;
    }
    addedAudioInputsDueToEncodingTarget = YES;
    [super setAudioEncodingTarget:newValue];
}
The following steps are my scenario, and I checked that it works smoothly:
Call the VideoCamera's addAudioInputsAndOutputs right after the VideoCamera is created, not right before starting the recording. :)
Then set the MovieWriter on the VideoCamera via the justSetAudioEncodingTarget method I made above. A sketch of the call order follows.
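Here is a rough sketch of that call order (the session preset, size and output URL are placeholders):

// 1. create the camera and add the audio inputs/outputs immediately
GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720
                                        cameraPosition:AVCaptureDevicePositionBack];
[videoCamera addAudioInputsAndOutputs];

// 2. later, for each clip, attach the writer through the new method
GPUImageMovieWriter *movieWriter =
    [[GPUImageMovieWriter alloc] initWithMovieURL:outputURL
                                             size:CGSizeMake(720.0, 1280.0)];
[videoCamera justSetAudioEncodingTarget:movieWriter];
[movieWriter startRecording];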