mlt: why is dynamictext working, but not text or qtext?

Here's a minimal MLT file that writes some text to the video output:
<?xml version="1.0"?>
<mlt>
<profile width="320" height="240"/>
<multitrack>
<playlist>
<producer in="0" out="0">
<property name="mlt_service">color</property>
</producer>
</playlist>
</multitrack>
<filter in="0" out="0">
<property name="mlt_service">dynamictext</property>
<property name="argument">Hello world!</property>
<property name="fgcolour">white</property>
</filter>
</mlt>
It produces just a single frame, but if I save the above as "text.mlt", I can check the output by extracting that frame with ffmpeg and opening it in an image viewer (I'm using eog; substitute your own viewer if you run this code):
melt text.mlt -consumer avformat:text.mp4 acodec=aac vcodec=libx264 &&
ffmpeg -y -loglevel quiet -i text.mp4 -vframes 1 text.png &&
eog text.png
Here's the result: with dynamictext, the extracted frame shows "Hello world!" as expected.
In the documented list of MLT filter plugins, two other text-rendering filters are listed: "text" and "qtext", but if I replace "dynamictext" with either "text" or "qtext" in the mlt file above, no text appears. Is this a bug or expected behavior? If it's expected behavior, could someone please explain what's happening?
I'm on Ubuntu 18.04.4 LTS, using melt 6.6.0, downloaded from the official Ubuntu package repository. Here's my uname -a output:
Linux laptop 4.15.0-99-lowlatency #100-Ubuntu SMP PREEMPT Wed Apr 22 21:10:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Thanks!

Your version is too old.
qtext was added in 6.14.0
text was added in 6.12.0
https://github.com/mltframework/mlt/releases
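To confirm which version you actually have before upgrading, something like the following should work on Ubuntu (a sketch, assuming melt was installed from the distribution package as in the question):
melt -version                     # prints the melt/MLT version, if your build supports this flag
dpkg -s melt | grep '^Version'    # queries the Ubuntu package database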

Related

LaTeX on Mac: Texmaker can't find installed package

I have a Mac and just installed LaTeX and the editor Texmaker. To use the Arial font, I installed it via the MiKTeX Console. I also found out that, by doing so, the files for the installed packages lie here:
"/Users/Mirko/Library/Application Support/MiKTeX/texmfs/install/tex/latex"
My problem is that the editor (Texmaker) still doesn't find the Arial font.
Here is my code:
\documentclass[pdftex,openany,11pt,twoside,a4dutch]{report}
\usepackage{uarial}
\usepackage{csquotes}
\usepackage[ngerman]{babel}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{csquotes} % Setzen von Anführungsstrichen
\begin{document}
This is a test.
\end{document}
and here is the Error:
! LaTeX Error: File `uarial.sty' not found.
Type X to quit or <RETURN> to proceed,
or enter new name. (Default extension: sty)
Enter file name:
! Emergency stop.
<read *> \usepackage
But the folder arial and the file arial.sty are in the above-mentioned folder.
In the MiKTeX Console I have already added the folder under Directories > Settings.
I'm thankful for any help!
Thanks and greetings!
Here is the log-file from TexMaker:
This is pdfTeX, Version 3.14159265-2.6-1.40.20 (TeX Live 2019) (preloaded format=pdflatex 2019.10.27) 27 OCT 2019 13:34
entering extended mode
restricted \write18 enabled.
%&-line parsing enabled.
**test.tex
(./test.tex
LaTeX2e <2018-12-01>
(/usr/local/texlive/2019/texmf-dist/tex/latex/base/report.cls
Document Class: report 2018/09/03 v1.4i Standard LaTeX document class
(/usr/local/texlive/2019/texmf-dist/tex/latex/base/size11.clo
File: size11.clo 2018/09/03 v1.4i Standard LaTeX file (size option)
)
\c@part=\count80
\c@chapter=\count81
\c@section=\count82
\c@subsection=\count83
\c@subsubsection=\count84
\c@paragraph=\count85
\c@subparagraph=\count86
\c@figure=\count87
\c@table=\count88
\abovecaptionskip=\skip41
\belowcaptionskip=\skip42
\bibindent=\dimen102
)
! LaTeX Error: File `uarial.sty' not found.
Type X to quit or <RETURN> to proceed,
or enter new name. (Default extension: sty)
Enter file name:
! Emergency stop.
<read *>
l.5 \usepackage
{csquotes}^^M
*** (cannot \read from terminal in nonstop modes)
Here is how much of TeX's memory you used:
221 strings out of 492616
2365 string characters out of 6129480
60608 words of memory out of 5000000
4231 multiletter control sequences out of 15000+600000
3940 words of font info for 15 fonts, out of 8000000 for 9000
1141 hyphenation exceptions out of 8191
21i,0n,22p,110b,36s stack positions out of 5000i,500n,10000p,200000b,80000s
! ==> Fatal error occurred, no output PDF file produced!
As you can see from the first line of the log file
This is pdfTeX, Version 3.14159265-2.6-1.40.20 (TeX Live 2019)
Texmaker is not using the MiKTeX distribution for which you installed the uarial package, but a TeX Live distribution that also seems to be installed on your computer. In the long run it would be best to have only a single TeX distribution on your computer to avoid such problems. On a Mac I would suggest keeping the TeX Live distribution and getting rid of MiKTeX, because TeX Live is used by most Mac users and is therefore well tested compared to MiKTeX for Mac.
You could force Texmaker to use your MiKTeX installation by modifying the Texmaker preferences, but the easier way to use Arial in your document is to switch from pdflatex to xelatex or lualatex and use the Arial font installed on your computer directly:
% !TeX TS-program = xelatex
\documentclass[openany,11pt,twoside,a4dutch]{report}
\usepackage{fontspec}
\setmainfont{Arial}
\usepackage[ngerman]{babel}
\usepackage[utf8]{inputenc}
\usepackage{csquotes}
\begin{document}
This is a test.
\end{document}
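If you compile from a terminal rather than through Texmaker and the magic comment, the equivalent call would simply be (assuming the file is saved as test.tex, as in the log):
xelatex test.tex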

Discrepancy in behavior of Linux loaders (ld-linux-x86-64) between Glibc 2.12 and Glibc 2.17

I'm trying to compile the same library on two separate x86 machines.
Both use the same toolchain (exactly the same set of files) but have different Glibc versions.
When I run the command LD_DEBUG=libs /lib64/ld-linux-x86-64.so.2 --list ./libl2ps.so I notice the following discrepancy between the two Linux loaders:
Machine 1 (with Glibc 2.12):
19943: find library=libm.so.6 [0]; searching
19943: search path=/ebs/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/usr/lib64:...:/ebs/frperies/repo/gnb/uplane/build_bbp/l2_ps/build/. (RPATH from file ./libl2ps.so)
19943: trying file=/ebs/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/usr/lib64/libm.so.6
19943:
19943: find library=libgcc_s.so.1 [0]; searching
...
In this case the Linux loader selects libm.so.6 from the toolchain path, based on the RPATH of libl2ps.so.
Machine 2 (with Glibc 2.17):
10699: find library=libm.so.6 [0]; searching
10699: search path=/home/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/usr/lib64:/home/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/lib64:/home/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/usr/lib:/home/frperies/repo/gnb/uplane/build_bbp/l2_ps/build/. (RPATH from file ./libl2ps.so)
10699: trying file=/home/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/usr/lib64/libm.so.6
10699: trying file=/home/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/lib64/libm.so.6
10699: trying file=/home/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/usr/lib/libm.so.6
10699: trying file=/home/frperies/repo/gnb/uplane/build_bbp/l2_ps/build/./libm.so.6
10699: search cache=/etc/ld.so.cache
10699: trying file=/lib64/libm.so.6
As on Machine 1, the loader starts from the RPATH of libl2ps.so and tries to select libm.so.6 from the toolchain path, but skips it for some reason and tries the other paths. Finally it selects libm.so.6 from the system path /lib64/.
The RPATHs of the two libl2ps.so libraries are exactly the same. The two libm.so.6 files are also exactly the same on both machines (checked with md5sum).
I don't understand this difference in behavior between the two Linux loaders.
Do you see any reason that would explain this discrepancy?
Thank you very much for your answers.
Update:
Thank you yugr for your answer.
The output of readelf -h differs only in the "Entry point address" and "Start of section headers" fields; there are no other differences, so I don't think it will help.
Regarding using dlopen()/dlerror(), I've made a little executable with the following statement:
dlopen("/home/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/usr/lib64/libm-2.28.so", RTLD_LAZY);
On machine 1 it works as expected:
C++ dlopen demo
Opening libm-2.28.so...
Closing library...
On machine 2 it fails and dlerror() gives the following output:
Cannot open library: /home/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/usr/lib64/libm-2.28.so: cannot open shared object file: No such file or directory
but the file libm-2.28.so really does exist on my file system:
$ ls -l /home/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/usr/lib64/libm-2.28.so
-rwxr-xr-x 1 frperies linseeusers_lte_espoo 1682944 Oct 5 13:50 /home/frperies/repo/gnb/uplane/build/prefix-root/asik-x86_64-ps_lfs-dynamic-linker-on/toolchain/sysroots/core2-64-pc-linux-gnu/usr/lib64/libm-2.28.so
This is very weird. What could lead to this situation?
Thanks
Update 2:
It's true that I didn't point out that machine 1 is a RHEL 6.8 distro while machine 2 is a RHEL 7.4 distro. I (naively?) didn't think this really mattered...
On machine 1:
$ cat /proc/sys/kernel/osrelease
4.4.115-1.NSN.el6.x86_64
$ uname -a
Linux sq24-3 4.4.115-1.NSN.el6.x86_64 #1 SMP Mon Feb 12 12:35:46 CET 2018 x86_64 x86_64 x86_64 GNU/Linux
$ readelf -n libl2ps.so
Notes at offset 0x00000270 with length 0x00000024:
Owner Data size Description
GNU 0x00000014 NT_GNU_BUILD_ID (unique build ID bitstring)
Build ID: b598468830fdf2f61eda25553b9a367c4d28cdc9
On machine 2:
$ cat /proc/sys/kernel/osrelease
3.10.0-693.el7.x86_64
$ uname -a
Linux localhost.localdomain 3.10.0-693.el7.x86_64 #1 SMP Thu Jul 6 19:56:57 EDT 2017 x86_64 x86_64 x86_64 GNU/Linux
$ readelf -n libl2ps.so
Displaying notes found at file offset 0x00000270 with length 0x00000024:
Owner Data size Description
GNU 0x00000014 NT_GNU_BUILD_ID (unique build ID bitstring)
Build ID: 5829181bc0502233748149369108915ea7b10e8f
Does that help?
Thanks
Update 3:
$ readelf -n libm.so.6
Notes at offset 0x00000238 with length 0x00000024:
Owner Data size Description
GNU 0x00000014 NT_GNU_BUILD_ID (unique build ID bitstring)
Build ID: 0d84c7247dd76008c096719043e5592735a1c4bd
Notes at offset 0x0000025c with length 0x00000020:
Owner Data size Description
GNU 0x00000010 NT_GNU_ABI_TAG (ABI version tag)
OS: Linux, ABI: 4.4.0
So, how should I interpret this ABI version number being set to 4.4.0?
Thanks
Thank you yugr and Employed Russian for your answers!!
I will give it a try by upgrading my Kernel version on Machine 2.
Thanks
Regards
The error message that you see is the infamously confusing ENOENT errno. I see two instances of it in dl-load.c:
checking OS compatibility
loading non-setuid to setuid process
I suspect the first one fails in your case, which would mean that the OS kernel is incompatible between the two machines. The ld.so manpage indeed says that
Each shared object can inform the dynamic linker of the
minimum kernel ABI version that it requires. (This
requirement is encoded in an ELF note section that is viewable
via readelf -n as a section labeled NT_GNU_ABI_TAG.) At run
time, the dynamic linker determines the ABI version of the
running kernel and will reject loading shared objects that
specify minimum ABI versions that exceed that ABI version.
NT_GNU_ABI_TAG is 4.4.0, which means that you are running a program that expects a minimum 4.4 kernel on a 3.10 kernel. Theoretically a newer Glibc should run on older kernels as well, but in your case Glibc was probably built with an explicit --enable-kernel flag, which prevents its usage on kernels before 4.4 (see e.g. this explanation of --enable-kernel).
As a workaround, you may try to fool Glibc by overriding kernel version on machine 2 via
export LD_ASSUME_KERNEL=4.4.0
but it may not work if libm makes 4.4-specific syscalls that are not really present on 3.10.
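To double-check that this is what is happening on machine 2, you can compare the library's minimum kernel ABI with the running kernel before trying the override (a sketch; the path is abbreviated, and dlopen-demo is just a placeholder name for the dlopen() test program from the question):
$ readelf -n .../sysroots/core2-64-pc-linux-gnu/usr/lib64/libm.so.6 | grep -A1 NT_GNU_ABI_TAG   # expect "OS: Linux, ABI: 4.4.0"
$ uname -r                                                                                      # 3.10.0-693.el7.x86_64 on machine 2
$ LD_ASSUME_KERNEL=4.4.0 ./dlopen-demo                                                          # retry the dlopen() test with the override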

libvirt qemu-system-arm, error: XML error: No PCI buses available

I am trying to run a Linux image I created with Buildroot with libvirt.
If I use qemu-system-arm directly, everything works as intended:
/usr/bin/qemu-system-arm \
-M versatilepb \
-kernel output/images/zImage \
-dtb output/images/versatile-pb.dtb \
-drive index=0,file=output/images/rootfs.ext2,if=scsi,format=raw \
-append "root=/dev/sda console=ttyAMA0,115200" \
-net nic,model=rtl8139 \
-net user \
-nographic
However, when I try to create the XML from my QEMU command line, it fails:
$ virsh domxml-from-native qemu-argv qemu.args
error: XML error: No PCI buses available
I also tried to create a basic XML by hand:
<?xml version='1.0'?>
<domain type='qemu'>
<name>Linux ARM</name>
<uuid>ce1326f0-a9a0-11e3-a5e2-0800200c9a66</uuid>
<memory>131072</memory>
<currentMemory>131072</currentMemory>
<vcpu>1</vcpu>
<os>
<type machine='versatilepb'>hvm</type>
<kernel>zImage</kernel>
<cmdline>root=/dev/sda console=ttyAMA0,115200</cmdline>
<dtb>versatile-pb.dtb</dtb>
</os>
<devices>
<disk type='file' device='disk'>
<source file='rootfs.ext2'/>
<target dev="sda" bus="scsi"/>
</disk>
<interface type='network'>
<source network='default'/>
</interface>
</devices>
</domain>
which fails with the same error:
$ virsh create guest-test.xml
error: Failed to create domain from guest-test.xml
error: XML error: No PCI buses available
I already tried with the brand-new and latest libvirt 3.0.0, without any success.
What do I need to change in my command line/XML?
virsh domxml-from-native issue
The reason the domxml-from-native command does not work is that the underlying parsing code in libvirt expects the suffix of qemu-system- to be a canonical architecture name, and arm is not. In your case it seems you want arm to map to armv7l, which is a canonical architecture name. You can pull this off by creating a soft link qemu-system-armv7l that points to your system's qemu-system-arm and then using the location of the soft link in your qemu.args (see the sketch after the code references below).
code references
https://github.com/libvirt/libvirt/blob/v4.0.0/src/qemu/qemu_parse_command.c#L1906
https://github.com/libvirt/libvirt/blob/v4.0.0/src/util/virarch.c#L135
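For example (a rough sketch; the binary and symlink locations are assumptions, adjust them for your distribution):
$ sudo ln -s /usr/bin/qemu-system-arm /usr/local/bin/qemu-system-armv7l
$ # edit qemu.args so the emulator path is the symlink, then retry:
$ virsh domxml-from-native qemu-argv qemu.args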
xml issues
Your XML is giving you the same error for multiple unrelated reasons. In the type element under os you need to specify arch="armv7l" (or some other canonical ARM arch name). Note also that the kernel and dtb references need to be absolute paths. Finally, some of the devices you have require a PCI bus and will not work with the machine you are going for. Consider the following alternative.
<domain type='qemu'>
<name>Linux ARM</name>
<uuid>ce1326f0-a9a0-11e3-a5e2-0800200c9a66</uuid>
<memory>131072</memory>
<currentMemory>131072</currentMemory>
<vcpu>1</vcpu>
<os>
<type arch="armv7l" machine='versatilepb'>hvm</type>
<kernel>/path/to/zImage</kernel>
<cmdline>root=/dev/sda console=ttyAMA0,115200</cmdline>
<dtb>/path/to/versatile-pb.dtb</dtb>
</os>
<devices>
<disk type="file" device="disk">
<driver name="qemu" type="qcow2"></driver>
<source file="/path/to/root.qcow2"></source>
<target dev="sda" bus="sd"></target>
</disk>
<serial type="tcp">
<source mode="bind" host="localhost" service="4000"></source>
<protocol type="telnet"></protocol>
</serial>
</devices>
</domain>

Xdebug - command is not available

I'm debugging my project remotely in PhpStorm. The IDE shows 'Connected' for a moment and immediately goes back to 'Waiting for incoming connection...'.
Below is the Xdebug log from this session:
I: Connecting to configured address/port: X.x.x.x:9000.
I: Connected to client. :-)
> <init xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" fileuri="file:///xxx/info.php" language="PHP" protocol_version="1.0" appid="4365" idekey="10594"><engine version="2.2.2"><![CDATA[Xdebug]]></engine><author><![CDATA[Derick Rethans]]></author><url><![CDATA[http://xdebug.org]]></url><copyright><![CDATA[Copyright (c) 2002-2013 by Derick Rethans]]></copyright></init>
<- feature_set -i 0 -n show_hidden -v 1
> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="feature_set" transaction_id="0" feature="show_hidden" success="1"></response>
<- feature_set -i 1 -n max_depth -v 1
> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="feature_set" transaction_id="1" feature="max_depth" success="1"></response>
<- feature_set -i 2 -n max_children -v 100
> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="feature_set" transaction_id="2" feature="max_children" success="1"></response>
<- status -i 3
> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="status" transaction_id="3" status="starting" reason="ok"></response>
<- step_into -i 4
> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="step_into" transaction_id="4" status="stopping" reason="ok"></response>
<- breakpoint_set -i 5 -t line -f file://xxx/info.php -n 3
> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="breakpoint_set" transaction_id="5"><error code="5"><message><![CDATA[command is not available]]></message></error></response>
"
According to the Xdebug documentation, the "stopping" status is:
'State after completion of code execution. This typically happens at the end of code execution, allowing the IDE to further interact with the debugger engine (for example, to collect performance data, or use other extended commands).'
So my debugger stops before reaching the first breakpoint (set on the first line).
Could it be a question of server configuration?
You should go to php.ini and delete a line like this:
extension=php_xdebug-...
How did this line get created? You put the Xdebug file into the PHP extensions path, like this:
.../php5.X.XX/ext/
Now any _AMP UI tool (WAMP, XAMPP etc.) can turn this PHP extension on.
To prevent this painful misfortune, put the Xdebug file into
.../php5.X.XX/zend_ext/
instead. That keeps Xdebug hidden from any _AMP tool.
And correct your zend_extension parameter too, from
zend_extension = .../php5.X.XX/ext/php_xdebug-...
to
zend_extension = .../php5.X.XX/zend_ext/php_xdebug-...
which is the common default path for it.
Please remember: with PhpStorm, Eclipse, Zend etc. you may need to correct two php.ini files.
The first one is for your web server, commonly under the Apache folder:
...\Apache2.X.XX\bin\
The second one is for direct PHP-script debugging. It lies in the PHP folder:
...\php\php5.X.XX\
In my case, the cause of the "breakpoint_set" / "command is not available" problem was a disabled xdebug.extended_info option (it is enabled by default, but I had disabled it for profiling).
Breakpoints do not work when xdebug.extended_info is disabled.
Breakpoints worked again after re-enabling xdebug.extended_info.
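In other words, make sure the option is not switched off in php.ini (in Xdebug 2.x it defaults to on):
xdebug.extended_info = 1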
I had the same problem under Windows with PhpStorm, and I was googling for a long time. Eventually my solution was this, in php.ini:
xdebug.remote_mode = "jit"
From the PhpStorm tutorial, JIT means "Just-In-Time" mode:
https://www.jetbrains.com/help/phpstorm/2016.2/configuring-xdebug.html#d43035e303
UPD
No, this option did not actually help me. But I resolved my issue in the end:
I use PhpStorm on Windows 7, and I had configured the path mapping this way:
d:\serverroot\vhost\www => d:\serverroot\vhost\www
but in my old config I spied this mapping:
d:\serverroot\vhost\www => d:/serverroot/vhost/www
Finally: on Windows machines, in the path mappings in the server configuration, replace the \ with /.
I think the only reason this could happen is that your info.php has a syntax error. In that case there is no code to execute and the script goes directly to "stopping" upon issue of the "step_into" command.
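A quick way to rule that out is to lint the file on the server; php -l only checks the syntax and does not execute the script:
php -l info.php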
Zend_Opcache / OPcache can cause this issue as well; if you have it enabled, try disabling it.
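If you want to test that, the relevant php.ini switches look roughly like this (restart the web server or PHP-FPM afterwards):
opcache.enable=0
opcache.enable_cli=0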
This error can also be emitted when the Xdebug extension is compiled into a non-debug build of the PHP runtime. The process will not fail (as it shouldn't), but the Xdebug extension will stop doing anything for the duration of that process.

ffmpeg - operation not permitted error while conversion

I am developing an Android app. My requirement is to implement an RTSP streaming server on Android. It has to live-stream video and audio captured with MediaRecorder. Another requirement is that I have to use live555 as the streaming server. What I get from MediaRecorder is in MP4 or 3GP format, and live555 cannot stream either of them. It can, however, stream audio if I record it in 'RAW_AMR' format only. Since live555 supports the 'mpg' format for streaming, I decided to put something in the middle that can convert 'mp4' or '3gp' to 'mpg', and I chose ffmpeg.
I have ported live555 and ffmpeg to Android. ffmpeg is able to convert the file recorded by MediaRecorder once recording is finished. The problem is that ffmpeg cannot do it concurrently; that is, ffmpeg is not able to convert the file while it is being recorded. It shows an "Operation not permitted" error. I tried the same on my Linux machine, using VLC to record instead of MediaRecorder on Android. The result is the same: ffmpeg can convert once the recording is finished, but not while recording.
Here is the ffmpeg command I issued on my linux box:
ffmpeg -v 9 -loglevel 99 -i test.mp4 test.mpg
where test.mp4 is the file to which VLC is recording in MP4 format and test.mpg is my destination file. The following is the output from ffmpeg on the terminal:
ffmpeg version 0.8.9, Copyright (c) 2000-2011 the FFmpeg developers
built on Feb 1 2012 18:29:27 with gcc 4.6.2 20111027 (Red Hat 4.6.2-1)
configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --enable-bzlib --enable-libcelt --enable-libdc1394 --enable-libdirac --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-x11grab --enable-avfilter --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect
libavutil 51. 9. 1 / 51. 9. 1
libavcodec 53. 8. 0 / 53. 8. 0
libavformat 53. 5. 0 / 53. 5. 0
libavdevice 53. 1. 1 / 53. 1. 1
libavfilter 2. 23. 0 / 2. 23. 0
libswscale 2. 0. 0 / 2. 0. 0
libpostproc 51. 2. 0 / 51. 2. 0
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x1672600] Format mov,mp4,m4a,3gp,3g2,mj2 probed with size=2048 and score=100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x1672600] ISO: File Type Major Brand: isom
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x1672600] moov atom not found
test.mp4: Operation not permitted
Would anyone please tell me what is causing the problem? Or is the scenario above even possible with ffmpeg, that is, can ffmpeg do the conversion at the same time as the recording? If it is not possible with ffmpeg, could you please suggest alternative solutions?
NOTE: I am adding a C tag because if this is possible with some tweaking of ffmpeg in C, I am ready to do that (I want the solution that badly). But please provide some pointers in the right direction.
Both the 3GP and MP4 formats include a moov atom (a chunk of data) that is written when the file is finalized. Until then the file is incomplete.
You can use FLV as the "middle" format. Other formats that support live streaming can be used too. The -re option may be helpful to tell the encoder to run at the stream rate, if needed.
See also Is it possible to play an output video file from an encoder as it's being encoded?.
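A minimal sketch of that idea, assuming the recorder is changed to write test.flv instead of test.mp4 (the file names are illustrative):
ffmpeg -re -i test.flv test.mpg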
