ffmpeg won't execute properly in google app engine standard nodejs - google-app-engine

I have tried for three full days to get GAE (standard, Node.js) to run a simple video transcoder from MOV to MP4 using ffmpeg. I have tried using fluent-ffmpeg, kicking off a child process (e.g. spawn), and nothing works. As soon as it hits the call to the executable it always errors. I have confirmed ffmpeg is installed and even tried using ffmpeg-static. Moreover, I have it working on my local machine with no problems (using all of the aforementioned approaches).
I have also tried logging the errors, and nothing is really all that helpful. The behavior is the same no matter which installed package I go through, including ffmpeg (the system package).
Below is the pseudocode; step three is where the problem occurs.
1. Send the file name to a GAE endpoint
2. Download the file from Google Cloud Storage to a temp file
3. Transcode using ffmpeg
4. Upload the temp file to Google Cloud Storage
5. Remove the old Google Cloud Storage file
6. Remove the temp file
The file I am using to test is 6 MB: a 5-second video I took on my iPhone. Thank you in advance.
UPDATE: I successfully deployed the exact same code to the Node.js Flexible environment and everything works great. I wasn't able to get any errors in the standard environment that pointed me to where to look, but my guess is it has something to do with how the file I pipe into ffmpeg is stored on GAE Node Standard. The docs say it's a virtual file system that uses RAM. I'd love to hear if anybody has managed to get it working in the standard environment.
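For reference, a minimal sketch of what steps 2 through 4 might look like in Node.js, assuming @google-cloud/storage and fluent-ffmpeg, a placeholder bucket name, and temp files written under /tmp (the RAM-backed writable path on the standard runtime); this is an illustration, not the exact code from the post:

const path = require('path');
const os = require('os');
const fs = require('fs');
const ffmpeg = require('fluent-ffmpeg');
const { Storage } = require('@google-cloud/storage');

// Placeholder bucket name, not from the original post
const bucket = new Storage().bucket('my-video-bucket');

async function transcode(srcName, dstName) {
  const tmpIn = path.join(os.tmpdir(), srcName);
  const tmpOut = path.join(os.tmpdir(), dstName);

  // Step 2: download the source from Cloud Storage to a temp file
  await bucket.file(srcName).download({ destination: tmpIn });

  // Step 3: transcode MOV -> MP4 with ffmpeg
  await new Promise((resolve, reject) => {
    ffmpeg(tmpIn)
      .output(tmpOut)
      .on('end', resolve)
      .on('error', reject)
      .run();
  });

  // Step 4: upload the result, then clean up the temp files
  await bucket.upload(tmpOut, { destination: dstName });
  fs.unlinkSync(tmpIn);
  fs.unlinkSync(tmpOut);
}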

After a long battle, I finally figured out what was going on: I did not have enough compute resources. If anyone out there is going to build a transcoding service for images and videos, be sure to up your cores to at least 4 out of the gate. My jobs were randomly failing (but not reproducibly for the same files), web sockets were disconnecting and reconnecting, and so on.
To the person who downvoted my question because I did not post an error (which, as I stated, I did not really have): there isn't necessarily going to be an error in the logs when your CPU starts dropping jobs because it can't keep up with the load. As I mentioned in my question, I would get errors, but nothing meaningful.
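For anyone sizing a similar service: in the Flexible environment the CPU count is configured in app.yaml. A minimal sketch with illustrative values, not this project's exact configuration:

runtime: nodejs
env: flex

# Illustrative resource settings; tune for your workload
resources:
  cpu: 4
  memory_gb: 8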

You're right, ffmpeg is listed in the pre-installed packages for the Node.js Runtime.
However, they don't mention which ffmpeg version is installed.
I looked into the fluent-ffmpeg prerequisites and it requires ffmpeg >= 0.9 to work.
Try updating your ffmpeg version by running the command:
apt-get update && apt-get install --only-upgrade ffmpeg
in your instance's console. Tell us how it goes.
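If a console on the instance isn't available, one way to see which ffmpeg the runtime actually picks up is to shell out to ffmpeg -version from the app and log the result; a minimal sketch, assuming ffmpeg is on the PATH:

const { execFile } = require('child_process');

// Log the first line of `ffmpeg -version`, e.g. "ffmpeg version 4.x ..."
execFile('ffmpeg', ['-version'], (err, stdout) => {
  if (err) {
    console.error('ffmpeg could not be executed:', err);
    return;
  }
  console.log(stdout.split('\n')[0]);
});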

// create a reference to your storage bucket path
// (bucket here is a Bucket instance from @google-cloud/storage)
const outFile = bucket.file(`${storagePath}`)
const outStream = outFile.createWriteStream()
You can always attach an 'stderr' listener to your ffmpeg command.
I had similar problems transcoding on Google App Engine, so fluent-ffmpeg's stderr listener helped me a lot when debugging it.
const ffmpeg = require('fluent-ffmpeg')
const fs = require('fs-extra') // fs.remove() comes from fs-extra

ffmpeg()
  .addInput(`tmp/${app_engine_filepath}`)
  .format('mp3')
  .on('stderr', (stderrLine) => {
    console.log('Stderr output: ' + stderrLine)
  })
  .on('error', (error) => {
    console.log(error)
  })
  .pipe(outStream)
  .on('finish', () => {
    // the destination write stream emits 'finish' once the upload is done
    fs.remove(`tmp/${app_engine_filepath}`)
  })
You might also want to check your ffmpeg version in the standard environment (that should also be visible in the stderr logs).

Related

Why might Apache Flink write files on a Windows box, but not write files on a Linux container using simple FileSink and SimpleStringEncoder?

I'm working with the examples provided in the 'flink-training' GitHub repository. Specifically, I'm working on the 'ride-cleansing' example.
I've replaced the PrintSinkFunction with a simple FileSink configured as follows:
FileSink fileSink =
    FileSink.forRowFormat(new Path(args[0]), new SimpleStringEncoder<String>("UTF-8"))
        .withRollingPolicy(DefaultRollingPolicy.builder()
            .withRolloverInterval(Duration.ofMinutes(1))
            .withInactivityInterval(Duration.ofSeconds(30))
            .withMaxPartSize(512 * 512 * 512)
            .build())
        .build();
When I run this example on my local machine in IntelliJ, the expected directory is created and files are written reflecting the data streamed to the sink.
However, when I run this same example on a Linux box (on Google Colab), the directory is created, but no files are created, regardless of how long I leave it running (I've tried 10+ minutes).
On the Linux Container, I'm running the example using the gradle setup and the following command:
./gradlew :ride-cleansing:runJavaSolution --args="/content/datastream"
On the Windows box, I'm just executing the RideCleansingSolution 'main' with a simple 'Application' run configuration.
What might be different about my setup on the two systems that would decide whether data is written?
It might not work, but if you set up MonoDevelop on whatever *nix you're using and write it all in C# via Xamarin in VS.NET23, it MIGHT work seamlessly across all platforms and arches... but I'm just spitballing here \_o_/

WebkitGtk application is not loading file URL

I am building a kiosk application using webkitgtk on the raspberry pi 4.
This application will not be connected to the internet, and all the HTML, CSS, and JavaScript for the UI are located on the local filesystem.
I am using buildroot to setup the Linux system, starting with the pi 4 defconfig provided in buildroot.
I have enabled all the packages needed to get webkitgtk running.
Also, the kiosk application has been tested on my desktop, using the same software stack, and it works.
However, when I try to launch the application on the Raspberry Pi, a blank page pops up. I have played around with the WebKitWebSettings object associated with my WebKitWebView by enabling local file access. It still shows a blank screen.
Also included in my Pi 4 application bundle is a simple GTK3+ application. This launches successfully!
I would really appreciate some pointers as to why this is happening, as I have sort of reached a dead end.
UPDATE
I enabled the MiniBrowser app that comes with the Webkitgtk package.
When I enter the local URL, the page does not load. It only gives me a message at the top saying "Successfully downloaded".
It seems to be treating my input as a download.
UPDATE 2
After some more experimenting, I was finally able to get WebKitGTK working on the Pi 4.
The problem seems to originate from using the webkit_web_view_load_uri() API.
It does not seem to recognize my HTML document as a web page.
I got around it using the webkit_web_view_load_html() call. This involved some hacks: first reading the contents of the HTML document into a character buffer, then passing it to webkit_web_view_load_html().
You also have to provide a base path to this function call so that all the URLs (scripts, CSS, images, etc.) in your HTML document can be resolved.
Another problem I haven't been able to work around is that SVG images are not loading in WebKitGTK. I have used JPG formats and they work. I suspect this may be due to a configuration switch when building WebKitGTK.
It's hard for me to figure out what might be happening without having access to your environment and settings. My gut feeling is that pages are showing blank because perhaps some shared libraries are missing. You can check that with:
$ ldd WebKitBuild/GTK/Release/bin/MiniBrowser
I am using buildroot to setup the Linux system, starting with the pi 4 defconfig provided in buildroot.
There's a buildroot repository for building WPE for the RPi. WPE (Web Platform for Embedded) is like WebKitGTK but doesn't depend on the GTK toolkit. Another important difference is that WPE runs natively on Wayland.
If you're interested in having a webapp embedded in a browser running in a device with limited capabilities, WPE is a better choice than WebKitGTK. The buildroot repo for building WPE for RPi is here:
https://github.com/WebPlatformForEmbedded/buildroot
There's also this very interesting step-by-step guide on how to build WPE for the RPi3:
https://samdecrock.medium.com/building-wpe-webkit-for-raspberry-pi-3-cdbd7b5cb362
I'm not sure whether the buildroot recipe will work for the RPi4. It seems to work for all previous versions, so you might be breaking new ground if you try to build WPE on the RPi4.
If you have an RPi3 available I'd try to build WPE for RPi3 first, and make sure that works. Then try for RPi4.

Flink 1.5-SNAPSHOT web interface doesn't work

I recently came across a bug in Flink (reported at https://issues.apache.org/jira/browse/FLINK-8685) and found out that a pull request has already been created for it (https://github.com/apache/flink/pull/5174).
Now I clone 1.5-SNAPSHOT, apply the patch, and build Flink. Even though it builds (whether or not the patch is applied), when I run Flink (using start-cluster.sh) the web dashboard doesn't work, and the command
tail log/flink-*-jobmanager-*.log
returns "tail: cannot open 'log/flink-*-jobmanager-*.log' for reading: No such file or directory".
I tested with a batch program and surprisingly it returned results on the terminal, but streaming programs and other things still don't work.
Any suggestions on this issue?
Thank you.
In case the Flink dashboard does not start, change the port in the conf file and restart. The default Flink port could be occupied by another process on Windows.
Also, change the Flink log level to debug.

Saving Files in CodenameOne

I'm trying to save some screen dumps to internal storage for debugging purposes, but I can't seem to get access to them. When I call FileSystemStorage.getInstance().getAppHomePath(), I get a path that looks something like this:
/data/data/com.mycompany.myapp/files/
But I can't see this folder in the Android File Transfer tool, so I can't drag the files to my Mac. I also tried attaching them to an email using the Message class, but for some reason the attachments never showed up. I notice that a lot of applications store data in folders like this:
/Android/data/com.doubletwist.androidplayer/
If I try to create a folder like this, I run into two problems. First, it's not platform independent. (This doesn't matter much because I'm just doing this for debugging.) Second, it doesn't work. I get an error telling me I need to use the directory returned by FileSystemStorage.getInstance().getAppHomePath()
Is there any way I can save files to a folder that I can actually retrieve them from? It would be more helpful if I had a platform-independent way, but any way that works is fine for now.
The file system is a very "unportable" notion. By default the app home is a private folder, which some mobile OSes, including Android 4+, keep private and inaccessible.
Android has a concept of "sdcard" which used to be a physical storage where you could write anything in any directory without a problem. This is no longer applicable for later versions of Android but you can read from the sdcard directory and detect it.
FileSystemStorage has an API to get roots and their types, if you have an sdcard type you can read from there. You can use the FileTree to see the file hierarchy as exposed to your application which can be useful for debugging.

How to use Sphinx3 in an application

I used Sphinx4 for some time which really fits my needs. I load a recognizer, pass the audio data to it and use the recognized String in my application.
Right now I'm working on a C application (C++ is unfortunately not an option) where I need something similar and thought that I could use Sphinx3 which is written in C.
The problem is that I don't really know how it is used inside an application, and there is no "Hello World" example like the one Sphinx4 provides.
I already compiled and installed sphinxbase and sphinx3 and now I can include the sphinx header files in my application.
Now to my questions:
Is there a "simple" and well documented example application that uses sphinx3 from a C environment?
How can I load up the sphinx3 engine and call a recognizer with my binary audio data?
OR: Do I need to start an application like "sphinx3_decode" and call it from my own application? If so, is there an example application for that?
Thank you in advance!
Best regards,
Robert
It's not recommended to use Sphinx3. From the website:
Sphinx-3 is CMU’s large vocabulary speech recognition system. It’s
older C based decoder that we continue to maintain. It’s planned to
make it obsolete in the future, it’s still most accurate decoder for
large vocabulary tasks. We are using it as a baseline to check the
recognizer accuracy. This decoder is only intended for researchers who
want to evaluate bleeding edge methods in ASR like tree search method.
If you need to use a decoder you should use pocketsphinx. You can find the tutorial and the API documentation on the website
http://cmusphinx.sourceforge.net/wiki/tutorialpocketsphinx
http://cmusphinx.sourceforge.net/api/pocketsphinx/pocketsphinx_8h.html
I recently worked on an integrated project on the Punjabi language.
Here are some steps that we used:
First we recorded the Punjabi audio data in a quiet, sound-isolated room at a 16,000 Hz sample rate.
Then we took the recorded data and segmented it using the Praat software into small wav and raw files of 2 to 30 seconds, and saved them in a folder named train.
Then we took a Linux system (Ubuntu), installed the required tools such as autoconf, automake, etc., and untarred Sphinx 3 along with four packages: cmuclmtk, pocketsphinx, sphinxbase, and sphinxtrain.
Then, based on the small wav files, we made files such as the transcription, dic, phone, filler, fileids, ccs, etc.
Then we opened the terminal and typed "sphinx_fe" to check whether Sphinx was functional or not.
Then we created a folder named "man" and pointed the terminal at its path.
Then we ran the command "sphinxtrain -t man setup". Running this command creates a folder named "etc" inside the "man" folder, containing the files "feat.params" and "config".
Changes were made in the config file according to our data.
Then we moved all the files that we created before (i.e. transcription, dic) into the etc folder located in the man folder.
Then we placed the "lang1.sh" script in the etc folder and the remaining four scripts in the man folder.
Then we opened the etc folder path in the terminal and ran the command "lang1.sh".
Then we ran a series of commands in the terminal: "mfcgen2.sh", then "verify3.sh", then "hmm4.sh", and finally "end-test.sh" to get the final result.
If you have worked with Sphinx 4 you may already know about the files mentioned in the steps above. I hope this helps you.
