Overlay text on video with MLT - mlt

My question is similar to Adding text to videos using MLT Framework but I am using a different command.
My command is:
melt SampleVideo_1280x720_5mb.mp4 -attach watermark:title.txt producer.bgcolor=transparent in=50 out=500
This produces the text in title.txt overlaid on top of the original clip, but in a box with a black background.
melt -query filters
shows that I have the dynamictext filter installed, but not pango.
How can I achieve the desired effect with the dynamictext filter instead?

melt SampleVideo_1280x720_5mb.mp4 -attach dynamictext:"Some text I want to show" bgcolour=0x00000000 in=50 out=100
Notes:
dynamictext cannot read text from a file the way watermark can (a shell workaround is sketched after these notes)
dynamictext requires the pango or qtext producer to be installed
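Since dynamictext only takes a literal string, one possible workaround (a sketch, assuming a POSIX shell and a single-line title.txt) is to splice the file contents in with command substitution:
melt SampleVideo_1280x720_5mb.mp4 -attach dynamictext:"$(cat title.txt)" bgcolour=0x00000000 in=50 out=100
Multi-line files, or text containing characters that melt parses specially, may still need extra escaping.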

Related

ffmpeg (libav, libavfilter, etc.) - modify frame with image or text using C/C++ API

After reading a huge pile of docs and tutorials I still can't find a way to add an image or some text to each frame of a video, something like a logo in the frame corner or a text watermark.
I know how to do such things with ffmpeg from the CLI, but in this case C/C++ code is required.
It looks like ffmpeg's libav lets me work on the frame at the decode stage, using the AVFrame structure of the current frame and applying modifications to it with libavfilter. But how exactly can this be done?
First, you need the image in the same raw format as the AVFrame::format. Then you can patch the image anywhere onto the AVFrame. It will also be useful if the "image" has an alpha channel for transparency. Otherwise, you may resort to color keying.
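As a minimal sketch of that idea (not the answer's original code): assume the decoded frame has already been converted to AV_PIX_FMT_RGBA, for example with sws_scale, and that logo points to a raw RGBA buffer; the helper name overlay_rgba is made up for illustration.

#include <stdint.h>
#include <libavutil/frame.h>

/* Alpha-blend a logo_w x logo_h RGBA image onto an RGBA AVFrame at (dst_x, dst_y). */
static void overlay_rgba(AVFrame *frame, const uint8_t *logo,
                         int logo_w, int logo_h, int dst_x, int dst_y)
{
    for (int y = 0; y < logo_h && dst_y + y < frame->height; y++) {
        uint8_t *dst = frame->data[0] + (dst_y + y) * frame->linesize[0] + dst_x * 4;
        const uint8_t *src = logo + y * logo_w * 4;
        for (int x = 0; x < logo_w && dst_x + x < frame->width; x++) {
            uint8_t a = src[x * 4 + 3];                 /* logo alpha */
            for (int c = 0; c < 3; c++)                 /* blend R, G, B against the frame */
                dst[x * 4 + c] = (uint8_t)((src[x * 4 + c] * a +
                                            dst[x * 4 + c] * (255 - a)) / 255);
        }
    }
}

For text, the usual alternative is to let libavfilter do the work by inserting a drawtext or overlay filter into a filter graph instead of touching the pixels yourself.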

PDFJS: Text layer rendering twice

Here's the context:
pdfjs-dist: v2.2.228
angular/core: 8.2.0 (hybrid environment with AngularJS 1.5.x)
Objective: upgrade pdfjs-dist to the latest version (2.3.200, or at least 2.2.x)
I upgraded from pdfjs-dist 2.0.4xx, and here's what is being rendered:
So basically, instead of the PDF being rendered correctly with highlightable text and so on, the text appears duplicated: one version of the text is rendered correctly (graphically), while the other version seems to handle text selection and searching.
When doing any search, PDFFindController works on the highlightable layer (you can see this in the area with the greenish text in the upper part of the image).
Any idea what might cause this behavior?
This is because the newer versions of PDF.js require a CSS file to style the PDF's text layer.
You can find more details in the following question's answer:
PDFJS: error on Text rendering for the PDF
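For reference, the gist of that stylesheet (a rough, simplified sketch; the authoritative rules are in the pdf_viewer.css shipped with your pdfjs-dist version) is to stack the text layer exactly over the canvas and make its text transparent, so it stays selectable but invisible:
.textLayer {
  position: absolute;
  top: 0; left: 0; right: 0; bottom: 0;
  overflow: hidden;
  opacity: 0.2;
  line-height: 1.0;
}
.textLayer span, .textLayer div {
  color: transparent;
  position: absolute;
  white-space: pre;
  transform-origin: 0% 0%;
}
Without such rules, the text-layer DOM elements get default styling and show up as a second, visible copy of the text on top of the canvas.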

How to increase size of text editors inside Google-Cloud-Shell?

I first open Google Cloud Shell by clicking the console icon at the top right.
Welcome to Cloud Shell! Type "help" to get started.
Your Cloud Platform project in this session is set to citric-yen-197207.
Use “gcloud config set project” to change to a different project.
user_name#cloudshell:~ (citric-yen-197207)$ emacs -nw helloWorld.txt
user_name#cloudshell:~ (citric-yen-197207)$
In Cloud Shell, when I open emacs or other text editors, the whole screen is filled.
Later, I connect to my Google Compute instance with the following command: gcloud compute connect-to-serial-port 'INSTANCE_NAME'. After the connection is established (user_name#instance-3:~$), when I try to open emacs, vi, nano, or any other text editor, its size is around 80x32, which is pretty small. I am not sure what causes this problem.
An example view can be seen as follows:
The cursor location is messed up as well: I type at one point but the characters show up at some other point. So the text editor environment does not really let me add text; after a while all the characters merge together and previous lines pop up at the cursor position where I am typing.
[Q] Is there any way to increase the width and height of text editors in Google Cloud Shell in a web browser?

Getting Maya Custom Hotkeys List

I want to share my Maya hotkeys for custom commands with my team. Of course, I can use the "hotkeySet" command with -export and -import, but in that case it overrides all of the hotkeys with the file's contents. It means that
if I change "save file" to "Ctrl + Alt + S" (sure, it's so weird), I
don't want to force my team members to use that weird hotkey.
How can I get the list of my custom hotkeys? If I knew that, I could export and import them selectively.
If you want to share your Maya custom hotkeys with your colleagues, you'll need to copy the userHotkeys.mel (or userHotkeys_Maya_Default_Duplicate.mel) and userNamedCommands.mel files from your computer to your teammates' computers. One more file, userRunTimeCommands.mel, is typically empty.
These files are located in the following directories on different OS:
macOS – ~/Library/Preferences/Autodesk/maya/2016.5/prefs/hotkeys
Linux – ~<username>/maya/2016.5-x64/prefs
Windows – \Users\<username>\Documents\maya\2016.5-x64\en_US\prefs
If you open Maya's Script Editor, turn the Echo All Commands option on, and then save a custom shortcut in the Hotkey Editor, you'll notice that Maya saves/updates these three files when you create or edit your hotkey.
For instance, I've created the hotkey Alt+G for toggling the grid in the viewport. This is what I can see in the Script Editor:
hotkey -keyShortcut "g" -alt -name ("ToggleGridNameCommand");
After that you can share the saved userHotkeys.mel and userNamedCommands.mel files with your team. You can also edit these ASCII files in any text editor.
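If you'd rather not copy whole preference files, a minimal MEL sketch (not from the original answer; the runtime command name and its body below are placeholders) of selectively re-creating one hotkey on a teammate's machine, using the same commands Maya echoes, could look like this:
// create the command, give it a named command, then bind the key
runTimeCommand -annotation "Toggle the viewport grid" -category "Custom" -command "ToggleGrid" ToggleGridCommand;
nameCommand -annotation "Toggle the viewport grid" -command "ToggleGridCommand" ToggleGridNameCommand;
hotkey -keyShortcut "g" -alt -name ("ToggleGridNameCommand");
This only adds the one hotkey on the receiving machine instead of replacing the whole hotkey set.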

Gstreamer 1.0 image/video player. Which way to implement?

I have a list of files (videos and images) that I would like to show on the screen using GStreamer 1.0, meaning iterating over the elements (file paths) in the list and "playing" them sequentially in the C application with delays in between. I tried different examples that partly work, but I cannot put the whole picture together into an implementation.
So what is the conceptual solution for this? Should I use one "dynamic" pipeline or two (one for images - because I think an imagefreeze before videoconvert is necessary there - and one for videos)? And how can I use decodebin to detect the format of the media automatically? decodebin works from the command line, but in the C application I get errors like "no video decoder found for 'jpeg'".
Try to build a universal pipeline (or two: one for videos and one for images), i.e. you feed it any file from your list and it outputs the video or image. The pipeline(s) should work from gst-launch first. After that, try to implement the pipeline in C code, or post your pipeline here.
My way:
Take a file from the list. If it's an image -> create an image decode pipeline; if it's a video -> create a video decode pipeline. Play it. Delete the pipeline. Delay. Go to the next file.
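A minimal C sketch of that loop (my own sketch, not the answerer's code), assuming the list holds absolute paths and that deciding image vs. video by file extension - the is_image() helper below - is good enough; playbin handles the videos and a filesrc ! decodebin ! imagefreeze pipeline handles the images:

#include <gst/gst.h>

static gboolean is_image(const gchar *path)   /* hypothetical helper: decide by extension */
{
    return g_str_has_suffix(path, ".jpg") || g_str_has_suffix(path, ".jpeg") ||
           g_str_has_suffix(path, ".png");
}

int main(int argc, char *argv[])
{
    const gchar *files[] = { "/tmp/clip1.mp4", "/tmp/photo1.jpg", "/tmp/clip2.mp4" };  /* example list */

    gst_init(&argc, &argv);

    for (guint i = 0; i < G_N_ELEMENTS(files); i++) {
        GError *err = NULL;
        gchar *desc;

        if (is_image(files[i]))
            /* imagefreeze turns the single decoded picture into a continuous video stream */
            desc = g_strdup_printf("filesrc location=%s ! decodebin ! imagefreeze ! "
                                   "videoconvert ! autovideosink", files[i]);
        else
            /* playbin picks demuxers and decoders automatically */
            desc = g_strdup_printf("playbin uri=file://%s", files[i]);

        GstElement *pipeline = gst_parse_launch(desc, &err);
        g_free(desc);
        if (err) {
            g_printerr("Pipeline problem: %s\n", err->message);
            g_clear_error(&err);
        }
        if (!pipeline)
            continue;

        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        GstBus *bus = gst_element_get_bus(pipeline);
        GstMessage *msg;
        if (is_image(files[i]))
            /* imagefreeze never reaches EOS, so just show the still for 3 seconds */
            msg = gst_bus_timed_pop_filtered(bus, 3 * GST_SECOND, GST_MESSAGE_ERROR);
        else
            /* for videos, block until the stream finishes or fails */
            msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                                             GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
        if (msg)
            gst_message_unref(msg);
        gst_object_unref(bus);

        /* tear the pipeline down before moving on, then pause between items */
        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        g_usleep(G_USEC_PER_SEC);
    }
    return 0;
}

If decodebin works from gst-launch but reports "no video decoder found for 'jpeg'" in your program, check that the program sees the same GStreamer installation and plugin path (e.g. GST_PLUGIN_PATH) as your shell does.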
