I have a C program that runs a scientific simulation and displays a visualisation in an OpenGL window. I want to make this visualisation into a video, which will eventually go on YouTube.
Question: What's the best way to make a video from a C / OpenGL program?
The way I've done it in the past is to use a screen capture program, but this is very labour-intensive (have to start/stop the screen capture program, save the video file, etc...). It seems like there should be a way to automate the process of making a video from within the C program. Then I can leave it running overnight and have 20 videos to look through in the morning, and choose the best one to put on YouTube.
YouTube recommend "MPEG4 (Divx, Xvid) format at 640x480 resolution".
I'm using GLUT 3.7.6_3, if that makes a difference. I can change windowing system if there's a good reason.
I'm running Windows (XP), so would prefer answers that work on Windows, but Linux answers are ok too. I can use Linux if it's not possible to do the video stuff easily on Windows. I have a friend who makes a .png image for each frame of the video and then stitches them together using "mencoder" on Linux.
You can use the glReadPixels function (see the sketch below).
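A minimal sketch of that approach, assuming a fixed 640x480 GLUT window (the size, file naming, and capture function name are placeholders): after drawing each frame, read the framebuffer back with glReadPixels, dump it to a numbered image file, and stitch the files into a video afterwards with a tool such as mencoder or ffmpeg.

    #include <GL/glut.h>
    #include <stdio.h>

    /* Window size - assumed to match YouTube's recommended 640x480. */
    #define WIDTH  640
    #define HEIGHT 480

    /* Call this at the end of your display function, after drawing but
     * before glutSwapBuffers(). It writes one binary PPM file per frame;
     * the numbered files (frame_00000.ppm, frame_00001.ppm, ...) can be
     * stitched together later with mencoder or ffmpeg. */
    void capture_frame(void)
    {
        static int frame = 0;
        static unsigned char pixels[WIDTH * HEIGHT * 3];
        char filename[64];
        FILE *fp;
        int y;

        /* Read the back buffer as tightly packed RGB bytes. */
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, pixels);

        sprintf(filename, "frame_%05d.ppm", frame++);
        fp = fopen(filename, "wb");
        if (!fp)
            return;

        fprintf(fp, "P6\n%d %d\n255\n", WIDTH, HEIGHT);

        /* OpenGL's origin is the bottom-left corner, so write the rows
         * in reverse order to get an upright image. */
        for (y = HEIGHT - 1; y >= 0; y--)
            fwrite(pixels + y * WIDTH * 3, 3, WIDTH, fp);

        fclose(fp);
    }

Uncompressed PPM frames are large and slow to write, but they keep the capture code trivial; encoding the numbered frames overnight gives you exactly the unattended batch of videos you described.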
But if the stuff you are trying to display is made of simple objects (e.g. spheres, rods, etc.), I would "export" each frame into a POV-Ray file, render those, and then make a video out of the resulting pictures. You will get much higher quality that way (a rough sketch of such an exporter follows).
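To make that concrete, here is a rough sketch of such an exporter; the Particle struct, camera, and lighting are made up for the example and stand in for whatever the simulation really uses. Each call writes one self-contained .pov scene that POV-Ray can render in a batch afterwards.

    #include <stdio.h>

    /* Hypothetical particle type - replace with whatever your simulation uses. */
    typedef struct {
        double x, y, z;
        double radius;
    } Particle;

    /* Write one POV-Ray scene file for the current frame. Render the
     * resulting frame_*.pov files with POV-Ray afterwards and stitch the
     * images into a video, just as with the screen-capture approach. */
    int export_frame_pov(const char *filename, const Particle *p, int count)
    {
        FILE *fp = fopen(filename, "w");
        int i;

        if (!fp)
            return -1;

        /* Fixed camera and light; adjust to taste. */
        fprintf(fp, "camera { location <0, 5, -15> look_at <0, 0, 0> }\n");
        fprintf(fp, "light_source { <10, 20, -10> color rgb <1, 1, 1> }\n");

        for (i = 0; i < count; i++)
            fprintf(fp,
                    "sphere { <%f, %f, %f>, %f pigment { color rgb <0.9, 0.2, 0.2> } }\n",
                    p[i].x, p[i].y, p[i].z, p[i].radius);

        fclose(fp);
        return 0;
    }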
Use a 3rd party application like FRAPS to do the job for you.
Fraps can capture audio and video up to 2560x1600 with custom frame rates from 1 to 120 frames per second! All movies are recorded in outstanding quality.
They have video samples on the site. They seem good.
EDIT:
You could execute a screen-recording tool from your C application by calling it with something like system("C:\\screen_recorder_app.exe -params") (note that the backslash has to be escaped in a C string literal). Check out CamStudio; it has a command-line version.
I am editing a large batch of photos using the same steps, and I want to create a program, run from the terminal, that will do the process for me. I am comfortable with writing in C, but I am unsure how to start on the code or what commands to use.
When I am in GIMP, I start by opening a .xcf file and importing the photo I wish to edit as the bottom layer. Next, I resize the layer to 1000px wide. After that, I edit the curves with a preset I have saved, and then do the same with the brightness controls. Finally, I export the file as a .png with a specific name: 01-0xx.png, based on the number of the photo in the set.
This sounds like a job for macros or the automation tools available in Gimp:
Ref: Gimp Automate Editing https://www.gimp.org/tutorials/Automate_Editing_in_GIMP/
This tutorial will describe and provide examples for two types of automation functions. The first function is a tool to capture and execute “Macro” commands. The second function is a set of Automation Tools to capture and run a “Flow” or “Process”. The code for this tutorial is written using Gimp-Python and should be platform portable – able to run on either Linux or Windows operating systems.
The goal of these functions is to provide tools that speed up the editing process, make the editing process more repeatable, and reduce the amount of button pushing the user has to do. Taking over the button pushing and book-keeping chores allows the user to focus on the more creative part of the editing process.
I haven't ever used GIMP, but programs of this sort typically have automation scripting support, and this is the right place to start.
Could be done with C, but the learning curve is steep.
You can write Gimp scripts in Scheme (Lisp) or Python, and if you know C you can learn enough Python in a couple of hours. See an example of a Python batch script here.
Side note #1: Curves+Brightness contrast can be done in one single call to Curves (with a different curve of course). Each operation entails some color loss, so the fewer, the better.
Side note #2: It may be simpler to do this without Gimp, using:
The ImageMagick toolbox (commands called from a shell script, or its C API; see the sketch below)
An image library in any language ("Pillow" for Python).
Your Curves preset is just what is called a "CLUT" (Color Look-Up Table).
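Since you're comfortable with C, here is a rough sketch of the resize-plus-adjustment step using ImageMagick's MagickWand C API. This assumes ImageMagick 7 (the header path and the MagickResizeImage signature differ slightly in version 6), and the brightness/contrast values are placeholders rather than your saved preset.

    #include <MagickWand/MagickWand.h>
    #include <stdio.h>

    int process_photo(const char *in_path, const char *out_path)
    {
        MagickWand *wand;
        size_t width, height;

        MagickWandGenesis();
        wand = NewMagickWand();

        if (MagickReadImage(wand, in_path) == MagickFalse) {
            fprintf(stderr, "could not read %s\n", in_path);
            DestroyMagickWand(wand);
            MagickWandTerminus();
            return -1;
        }

        /* Resize to 1000 px wide, keeping the aspect ratio. */
        width  = MagickGetImageWidth(wand);
        height = MagickGetImageHeight(wand);
        MagickResizeImage(wand, 1000, height * 1000 / width, LanczosFilter);

        /* Placeholder tonal adjustment; your saved Curves preset would be
         * applied here instead (e.g. as a CLUT image). */
        MagickBrightnessContrastImage(wand, 5.0, 10.0);

        MagickWriteImage(wand, out_path);

        DestroyMagickWand(wand);
        MagickWandTerminus();
        return 0;
    }

Looping this over the input photos and generating the 01-0xx.png names with sprintf() gives the terminal-driven batch you described; the Curves preset itself would have to be exported from GIMP in a form ImageMagick can apply, such as a CLUT image.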
This is a seemingly simple task that I have had a very hard time getting to work. My goal is straightforward: send an mp4 file from my server to my client, and start playing it while it is still buffering and downloading. That means I need to play a video.mp4 file while it is being written, and I need it to display on some platform that I can control, like wxPython or WPF-IronPython. Naturally, no such platform will let me play a file that is open for writing.
I have tried implementing an HTTP server (although it is totally unnecessary for my case, as I am writing an application-based server-client app) that accepts Range requests. When I run the server and load the URL in Chrome, it all works perfectly and I can seek and the buffering is great, but when I load it from a WPF MediaElement it fails to play the video at some point (I can't really tell why, as there is no documentation, API reference, or tutorials for this). I am really desperate.
I even thought about playing a video from a buffer and then just changing the buffer's content, but that doesn't seem to be possible.
I am really stuck at this and I would love to get some suggestions. Please note that I am not a professional in this so I would appreciate if you could explain this to me in simple terms.
Thanks!
Not possible with a plain MP4: the container's index (the moov atom) describes the whole file and is normally only finalised when writing is complete, so a player cannot reliably play a file that is still being written. MP4 is not the correct container for your application. You must use something like HLS, DASH, or fragmented MP4, which split the stream into self-contained chunks that can be played as they arrive.
I am using Codename One to record the microphone input and play it back to the connected earphones.
First of all, if I record audio from the mic to a file and play it back when the recording is over, it works as expected. That's why, based on this 2014 question, I implemented two periodic tasks (a Timer and a TimerTask), as well as two files: one for recording, one for playing. I set the periodic tasks' period to values between 100 ms and a few seconds, but the result was awful on the Android device: there were random gaps, and it was not smooth or understandable at all.
I assume the overhead of writing to a file every period is too high and is causing that behaviour, so using the proper high-level Codename One methods does not seem to be the way to go.
Then, in the same question from 2014, the asker suggests creating an InputStream from the recording Media and using it as the input for the playing Media. However, the method MediaManager.createMediaRecorderStream() does not seem to be available anymore. I tried to use the file being recorded as the InputStream for the playing Media through fs.openInputStream(recFilepath), but it output neither sound nor an error on the device.
So my question is whether I can achieve my goal with bare Codename One, or whether I have to use a native interface. Moreover, Shai (in the 2014 question mentioned above) wrote that the second approach with MediaManager.createMediaRecorderStream() might work on some platforms: is Android among them, or was only iOS intended?
Any help is appreciated, and sorry for not posting code: I deleted each attempt as soon as it did not appear to work, so my code no longer does anything I originally intended.
Cheers,
As far as I recall, Android back in the day didn't support an input stream for media, and later only allowed capturing input directly as uncompressed WAV, which makes full-duplex usage impractical. This might have changed since, as I recall they did some overhaul of their media libraries.
I'm not sure if this is exposed in our higher-level code. Besides using native interfaces, you can also help us improve Codename One by forking and hacking it; e.g. this is the relevant code in the Android project:
https://github.com/codenameone/CodenameOne/blob/master/Ports/Android/src/com/codename1/impl/android/AndroidImplementation.java#L2804-L2858
This is the contribution guide to Codename One; it covers running in the simulator, but it's a good start: https://www.codenameone.com/blog/how-to-use-the-codename-one-sources.html
You can test your changes on an Android device with instructions here: https://www.codenameone.com/blog/debug-a-codename-one-app-on-an-android-device.html
I asked this question in the Raspberry Pi section, so please forgive me for posting it here again. It's just that that section doesn't seem to be as active as this part of the forum. So, on to my question...
I have an idea and I'm working on it right now. I just wanted to see what the community's thoughts were on using a screensaver as digital signage.
Every tutorial I've read shows someone using Chromium in kiosk mode, and while that's fine and works well for some uses, it doesn't work for what I need. I have successfully built a Chromium kiosk, and it was cool. But the signage that I need to create now has to work without internet. I've thought about installing LAMP locally on the Pi and still using Chromium, and I still may have to if this idea doesn't pan out.
All I need from the signage is a Title Message in the top center and a message body underneath it, with roughly a 300-400 character limit. My idea is to write a screensaver module, in C, that will work with a screensaver such as xscreensaver. The module would need to be able to load messages from a directory on the Pi. Then, for my clients to update their signage text, I would write a simple client that sends commands as well as the text via SSH to the Pi.
I want to know what other people think about this. Is it a good idea? Bad idea? Should I "waste" my time doing something like this?
Thanks in advance.
I have already been using an rPi as digital signage for just over a year, with two different setups:
Version 1 uses Raspbian, loading the X desktop and the qiv image viewer to cycle images stored on the Pi itself, synchronized with a remote server. The problem I found was power and SD stability: the power will fail no matter what, the only question is when, and the SD card can become corrupt due to all the writing that Raspbian does all the time. It certainly does not really need to write to the SD card.
Version 2 uses a read-only filesystem and a command-line image tool. It uses the same process to show images stored locally and sync with the server, but a power failure causes no ill effects.
I am not using a screensaver to display the images; that seemed redundant to me, and there is no need to wait for the screensaver to start just to display them.
Some of the images are created using ImageMagick, which is nicely dynamic where needed.
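For the title-plus-body signage in the original question, the image itself can also be generated from C with ImageMagick's MagickWand API, so the viewer (qiv, or a screensaver) only ever has to display a finished PNG. A rough sketch, assuming ImageMagick 7; the canvas size, font sizes, colors, and text positions are all placeholders.

    #include <MagickWand/MagickWand.h>

    /* Render a simple signage image: a title near the top and a message
     * body underneath, written out as a PNG for the viewer to cycle. */
    int render_sign(const char *out_path, const char *title, const char *body)
    {
        MagickWand  *wand;
        DrawingWand *draw;
        PixelWand   *bg, *fg;

        MagickWandGenesis();
        wand = NewMagickWand();
        draw = NewDrawingWand();
        bg   = NewPixelWand();
        fg   = NewPixelWand();

        PixelSetColor(bg, "black");
        PixelSetColor(fg, "white");

        /* 1920x1080 canvas with a solid background. */
        MagickNewImage(wand, 1920, 1080, bg);

        DrawSetFillColor(draw, fg);

        /* Title near the top. */
        DrawSetFontSize(draw, 96);
        MagickAnnotateImage(wand, draw, 600, 150, 0.0, title);

        /* Message body underneath. */
        DrawSetFontSize(draw, 48);
        MagickAnnotateImage(wand, draw, 200, 350, 0.0, body);

        MagickWriteImage(wand, out_path);

        DestroyPixelWand(fg);
        DestroyPixelWand(bg);
        DestroyDrawingWand(draw);
        DestroyMagickWand(wand);
        MagickWandTerminus();
        return 0;
    }

Word-wrapping a 300-400 character body is the fiddly part: either insert the line breaks yourself before calling MagickAnnotateImage, or shell out to ImageMagick's caption: input format, which wraps text automatically.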
I currently send a live video mix to an output screen (a form on a particular screen). Consider it like a really advanced version of PowerPoint; I call it a video control room for the PC. I want to take 30 frames a second from a screen of my choice (I allow multiple screens), plus the computer's stereo audio, and save it to a hard disk. How do I do that?
I know I can draw the image of the interface using the RenderTargetBitmap class, but how do I put those images (as frames) into an AVI file or push them to a video server? An SDK or a code example to point me in the right direction would be nice! I also want to capture the sound of the current stereo mix, or the microphone (as determined by the user).
I don't want to use a third-party tool, and I'd prefer to do it in the program so I have maximum control over it. I'm OK with using a second program to do the compression and just saving a raw AVI file (with an audio stream); disk is cheap, as any programmer would say. If I have to, I'll save the video and audio streams separately, but I'd prefer not to.
Let me know.
Check out the RenderTargetBitmap class. It allows you to turn any visual into pixels that you can then pass along to an encoder or network stack, like the ones described here.
Also check out Windows Media Foundation for turning your pixels into an AVI file or a stream.