Detect if FFmpeg is running or has stopped running - c

I'm using C to scan a directory which contains frames extracted by ffmpeg. Now in the event that the last file is reached during the scan, I need to check for two conditions:
It's the last file because the video duration is over.
It's the last file because ffmpeg terminated abruptly and is no longer populating the directory.
My C program workflow is like:
while (<there exist files in the directory>)
{
    // iterate over them and do something with each frame
}
// coming out of the loop means no more files are available... so I need an if condition
if (<check if ffmpeg has stopped>) // <-- need to know what to put inside the condition
{
    // start it again
}
else
{
    // video is over, nothing more left to do
}
I'm thinking I can do this using the process ID of ffmpeg, but how would I get that info? Is there any other way of checking whether ffmpeg has stopped?
Some metadata
OS : Windows 7
IDE : Dev C++
Language Used : C

You can definitely wait for the FFmpeg process to finish. You normally obtain a handle to the ffmpeg.exe process and either wait on it with the wait functions (e.g. WaitForSingleObject), query its exit code with GetExitCodeProcess, or both.
If you don't have a handle, but you do have a process identifier, OpenProcess will get you the handle.
Of course, you would not have to go to that trouble if you used the native Windows APIs, where you would be dealing with frames directly, not through files and external processes.
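For illustration, a minimal Win32 sketch of the wait/exit-code approach might look like the following. It assumes you launch ffmpeg.exe yourself with CreateProcess (the command line shown is only a placeholder); if some other component started ffmpeg, you would obtain the handle via OpenProcess instead.

#include <stdio.h>
#include <windows.h>

int main(void)
{
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = { 0 };
    /* placeholder command line: adjust to your actual extraction job */
    char cmd[] = "ffmpeg.exe -i input.mp4 frames\\out%05d.png";

    if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
        return 1;
    }

    /* ... scan the frame directory and process frames here ... */

    /* non-blocking check: has ffmpeg exited yet? */
    if (WaitForSingleObject(pi.hProcess, 0) == WAIT_OBJECT_0) {
        DWORD code;
        GetExitCodeProcess(pi.hProcess, &code);
        printf("ffmpeg has stopped, exit code %lu\n", code);
    } else {
        printf("ffmpeg is still running\n");
    }

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}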

Not sure if this is the way to go in your case, but... in the Unix world, a program that runs in the background normally creates a publicly readable file containing its process ID, and removes it when finished.
You could create this file from your batch script, and then in your C program you could check whether the PID from that file is still running.
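A minimal sketch of that check, for the Unix-style setup this answer describes (the PID-file path is an assumption; whatever starts ffmpeg would have to write its PID there):

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>

/* returns 1 if the PID recorded in the file still refers to a live process */
int ffmpeg_running(void)
{
    long pid;
    FILE *f = fopen("/tmp/ffmpeg.pid", "r");   /* assumed PID-file location */
    if (!f)
        return 0;                              /* no PID file: treat as not running */
    if (fscanf(f, "%ld", &pid) != 1) {
        fclose(f);
        return 0;
    }
    fclose(f);
    /* signal 0 delivers nothing; it only tests whether the process exists */
    return kill((pid_t)pid, 0) == 0 || errno == EPERM;
}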

Related

How to know from within a Fortran code if a file is opened by any program (Windows 7)

I have a Fortran code that updates in real time the content of a text file by adding some new real-time measurements at the very bottom of it. This text file is used (read only) both by a fluid dynamics code (that continuously runs in real time) and by another executable built from MATLAB code (that performs plotting). I want to add a line in the Fortran code that says: update the text file ONLY IF it is not opened by any other program. I tried using INQUIRE:
do
    INQUIRE(FILE = filename, OPENED = ISopen)
    if (.not. ISopen) then
        ! ADD NEW MEASUREMENTS HERE
        exit
    endif
enddo
and before running this Fortran program I opened the file with TextPad. However, the variable ISopen is false, so I guess INQUIRE only tests whether the file is opened within the Fortran program itself. In fact, if I add at the beginning of the above snippet of code:
OPEN (33,FILE = filename)
then ISopen is true. I then created an executable from a Fortran program containing only:
OPEN (33,FILE = filename)
pause
CLOSE(33)
and ran it, leaving it in a paused state. I then ran the first code I posted above, and ISopen is still false. Any idea how to test whether a file is open in any other program from within Fortran? My operating system is Windows 7.
Thanks
In the end I solved it this way. I could not find a way to know whether the file is open, and even if there were a way, some other program could still open it right between the moment I check and the moment I modify it. Therefore, I just create a temporary copy of the file, modify that temporary copy, and then move it back, overwriting the original. The latter operation succeeds only if the file is not locked (i.e. no other program has the file open to read the data), so I keep retrying the move until it succeeds. I tested it in many situations and it works. The code is:
USE IFPORT
IMPLICIT NONE
character*256 :: DOScall
logical :: keepTRYING
integer :: resul

DOScall = 'copy D:\myfile.txt D:\myfile_TMP.txt'   ! create temporary copy
resul = SYSTEM(DOScall)

open(15, file = 'D:\myfile_TMP.txt', form = 'formatted')
! .... perform here some writing operations on myfile_TMP ........
close(15)

do
    resul = SYSTEM('move D:\myfile_TMP.txt D:\myfile.txt')   ! overwrite the original
    if (resul == 0) then
        exit
    else
        pause(10)
    endif
enddo
Note that this works perfectly fine for multiple programs reading and one program writing to the file (as in my case, where I have two programs reading and only one writing). If multiple programs write to the same file, I guess some other parallel techniques have to be used.
I do not think it is possible to check whether a file is opened by an external process. Some programming languages allow you to check whether the file is locked, but that is still far from telling you whether the file is open: both programs must acquire and release a system lock for it to really work. To my knowledge, standard Fortran does not have that feature directly, but you can use semaphores from C through the interoperability features.
Most user applications (editors mostly), however, before updating a file, check whether the content on disk has changed since they captured a copy, and alert the user. They also do that when they lose and regain focus. If you restrict your goal to updating only if the content has not changed since you opened it, you can do the same, or simply open, append and close any time you want to add a new entry. A good editor will notify the user on the other side that the content has been changed by another process.
An alternative is to simulate a lock yourself and buffer the data in Fortran. By buffering I mean: collect some new data (say 100, 1000, or whatever number is convenient) and send them to the file at once. For each update, you open, write and close the file.
Your lock can be made of two simple files (empty ones, for example), one created by the reader (MATLAB) and the other created by the writer (the Fortran program). Let's name them reading.ongoing for the reader and writing.ongoing for the writer.
On the Fortran side, do the following any time you have collected enough data to write:
1. Check for the existence of reading.ongoing (using the INQUIRE function); proceed only if it does not exist.
2. Create writing.ongoing.
3. Check for the existence of reading.ongoing again; if it exists, delete writing.ongoing and go back to step 1. If it does not exist, proceed.
4. Open the data file, write the data and close it.
5. Delete writing.ongoing.
On the MATLAB side, do the same thing, inverting the roles of reading.ongoing and writing.ongoing.
In an exceptional race condition you could be blocked because both sides are trying at the same time. In that case, you could modify step 1 on the MATLAB side to make it wait a few milliseconds before proceeding. This will keep you on the road as long as neither program gets killed between steps 1 and 5.
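As a rough illustration, here is what the writer side of that handshake could look like in C (the Fortran program could call something like this through the C interoperability features, or simply mirror the same steps with INQUIRE, OPEN and CLOSE). The file names and the data-file path are assumptions, and a real version would sleep briefly between retries:

#include <stdio.h>
#include <unistd.h>

static int file_exists(const char *path)
{
    return access(path, F_OK) == 0;   /* on Windows, use _access from <io.h> */
}

void write_buffered_data(const char *data)
{
    for (;;) {
        if (file_exists("reading.ongoing"))         /* step 1: reader busy? */
            continue;
        FILE *lock = fopen("writing.ongoing", "w"); /* step 2: announce the writer */
        if (lock)
            fclose(lock);
        if (file_exists("reading.ongoing")) {       /* step 3: re-check */
            remove("writing.ongoing");
            continue;                               /* back to step 1 */
        }
        FILE *f = fopen("data.txt", "a");           /* step 4: append the buffered data */
        if (f) {
            fputs(data, f);
            fclose(f);
        }
        remove("writing.ongoing");                  /* step 5: release */
        return;
    }
}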
You can also use semaphores through the C interoperability features in Fortran; see the following post for an example. You can do something similar on the MATLAB side, though I do not have an example. That will lower your headache and let the system manage the lock for you.

Make a file available on all nodes

I'm writing a MPI application that takes a filename as an argument and tries to read from the file using regular C functions. I run this application on several nodes of a cluster by using qsub, which in turn uses mpiexec.
The application runs just fine on a local node where the file is. For this I just call mpiexec directly:
mpiexec -n 4 ~/my_app ~/input_file.txt
But when I submit it with qsub to be run on other nodes of the cluster, the file-reading part fails. The application errors out at the fopen call: it can't open the file (likely because the file isn't present on that node).
The question is, how do I make the file available to all nodes? I have looked over the qsub manpage and couldn't find anything relevant.
I guess Vanilla Gorilla doesn't need an answer any more? However, let's consider the case of a pathological system with no parallel file system and a file system available only at one node. There is a way in ROMIO (a very common MPI-IO implementation) to achieve your goal:
how can i transfer file from one proccess to all other with mpi?
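If ROMIO hints are not an option, the simpler approach from the linked question (have the rank that can see the file read it and broadcast the contents to everyone else) is easy to sketch in C. A minimal illustration, with error handling and size checks omitted:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    long size = 0;
    char *buf = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {                      /* only rank 0 touches the file */
        FILE *f = fopen(argv[1], "rb");
        fseek(f, 0, SEEK_END);
        size = ftell(f);
        rewind(f);
        buf = malloc(size);
        fread(buf, 1, size, f);
        fclose(f);
    }

    MPI_Bcast(&size, 1, MPI_LONG, 0, MPI_COMM_WORLD);
    if (rank != 0)
        buf = malloc(size);
    MPI_Bcast(buf, (int)size, MPI_CHAR, 0, MPI_COMM_WORLD);

    /* ... every rank can now parse buf as if it had read the file ... */

    free(buf);
    MPI_Finalize();
    return 0;
}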

Less Hacky Way Than Using System() Call?

So I have this old, nasty piece of C code that I inherited on this project from a software engineer that has moved on to greener pastures. The good news is... IT RUNS! Even better news is that it appears to be bug free.
The problem is that it was designed to run on a server with a set of start up parameters input on the command line. Now, there is a NEW requirement that this server is reconfigurable (didn't see that one coming...). Basically, if the server receives a command over UDP, it either starts this program, stops it, or restarts it with new start up parameters passed in via the UDP port.
Basically the code that I'm considering using to run the obfuscated program is something like this (sorry I don't have the actual source in front of me, it's 12:48AM and I can't sleep, so I hope the pseudo-code below will suffice):
//my "bad_process_manager"
int manage_process_of_doom() {
while(true) {
if (socket_has_received_data) {
int return_val = ParsePacket(packet_buffer);
// if statement ordering is just for demonstration, the real one isn't as ugly...
if (packet indicates shutdown) {
system("killall bad_process"); // process name is totally unique so I'm good?
} else if (packet indicates restart) {
system("killall bad_process"); // stop old configuration
// start with new parameters that were from UDP packet...
system("./my_bad_process -a new_param1 -b new_param2 &");
} else { // just start
system("./my_bad_process -a new_param1 -b new_param2 &");
}
}
}
So as a result of the system() calls that I have to make, I'm wondering if there's a neater way of doing so without all the system() calls. I want to make sure that I've exhausted all possible options without having to crack open the C file. I'm afraid that actually manipulating all these values on the fly would result in having to rewrite the whole file I've inherited since it was never designed to be configurable while the program is running.
Also, in terms of starting the process, am I correct to assume that throwing the "&" in the system() call will return immediately, just like I would get control of the terminal back if I ran that line from the command line? Finally, is there a way to ensure that stderr (and maybe even stdout) gets printed to the same terminal screen that the "manager" is running on?
Thanks in advance for your help.
What you need from the server:
Ideally, the server process you're controlling should create some sort of PID file. Also ideally, it should hold an exclusive lock on that PID file as long as it is running. This lets us know whether the PID file is still valid or the server has died.
Receive shutdown message:
Try to get a lock on the PID file. If it succeeds, you have nothing to kill (the server has died; if you proceed with the kill regardless, you may kill the wrong process), so just remove the old PID file.
If the lock fails, read the PID file, do a kill() on the PID, and remove the old PID file.
Receive start message:
You'll need to fork() a new process, then choose your flavor of exec() to start the new server process. The server itself should of course recreate its PID file and take a lock on it.
Receive restart message:
Same as Shutdown followed by Start.
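A rough sketch of how the fork()/exec()/kill() replacement for those system() calls might look (the PID-file and locking details described above are left out for brevity; here the manager simply remembers the child PID itself, and the program path and arguments are the ones from the question). Because the child inherits the manager's stdout and stderr, its output ends up on the same terminal the manager runs on:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static pid_t child = -1;   /* PID of the currently running bad_process, if any */

static void start_bad_process(const char *param1, const char *param2)
{
    child = fork();
    if (child == 0) {
        /* child: become my_bad_process; stdout/stderr are inherited,
           so its output lands on the manager's terminal */
        execl("./my_bad_process", "my_bad_process",
              "-a", param1, "-b", param2, (char *)NULL);
        perror("execl");    /* only reached if exec failed */
        _exit(127);
    }
}

static void stop_bad_process(void)
{
    if (child > 0) {
        kill(child, SIGTERM);      /* ask it to shut down */
        waitpid(child, NULL, 0);   /* reap it so it doesn't linger as a zombie */
        child = -1;
    }
}

static void restart_bad_process(const char *param1, const char *param2)
{
    stop_bad_process();
    start_bad_process(param1, param2);
}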

Safely writing to and reading from the same file with multiple processes on Linux and Mac OS X

I have three processes designed to run constantly in both Linux and Mac OS X environments. One process (the Downloader) downloads and stores a local copy of a large XML file every 30 seconds. Two other processes (the Workers) use the stored XML file for input. Each Worker starts and runs at random times. Since the XML file is big, it takes a long time to download. The Workers also take a long time to read and parse it.
What is the safest way to setup the processes so the Downloader doesn't clobber the stored file while the Workers are trying to read it?
For Linux and Mac OS X machines that use inode-based file systems, use temporary files to store the data while it's being downloaded (and is in an incomplete state). Once the download is complete, move the temporary file into its final location with an atomic action.
For a little more detail, there are two main things to watch out for when one process (e.g. Downloader) writes a file that's actively read by other processes (e.g. Workers):
Make sure the Workers don't try to read the file before the Downloader has finished writing it.
Make sure the Downloader doesn't alter the file while the Workers are reading it.
Using temporary files accommodates both of these points.
For a more specific example, while the Downloader is actively pulling the XML file, have it write to a temporary location (e.g. 'data-storage.tmp') on the same device/disk* where the final file will be stored. Once the file is completely downloaded and written, have the Downloader move it to its final location (e.g. 'data-storage.xml') via an atomic (aka linearizable) rename, like the shell's mv command.
* Note that the reason the temporary file needs to be on the same device as the final file location is to ensure the inode number stays the same and the rename can be done atomically.
This methodology ensures that while the file is being downloaded/written the Workers won't see it, since it's in the .tmp location. Because of the way renaming works with inodes, it also makes sure that any Worker that has already opened the file continues to see the old content even after a new version of the data-storage file is put in place.
Downloader will point 'data-storage.xml' to a new inode number when it does the rename, but the Worker will continue to access 'data-storage.xml' from the previous inode number thereby continuing to work with the file in that state. At the same time, any Worker that opens a new copy 'data-storage.xml' after Downloader has done the rename will see contents from the new inode number since it's now what is referenced directly in the file system. So, two Workers can be reading from the same filename (data-storage.xml) but each will see a different (and complete) version of the contents of the file based on which inode the filename was pointed to when the file was first opened.
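For illustration, a minimal C sketch of the Downloader side might look like this (the file names mirror the example above; rename() is the POSIX equivalent of the mv step and is atomic within one filesystem):

#include <stdio.h>

int publish_download(const char *xml_data, size_t len)
{
    /* write to a temporary name on the same device as the final location */
    FILE *f = fopen("data-storage.tmp", "wb");
    if (!f)
        return -1;
    fwrite(xml_data, 1, len, f);
    fclose(f);                     /* data is now complete on disk */

    /* atomically replace the old file; Workers that already opened
       data-storage.xml keep reading the old inode's contents */
    return rename("data-storage.tmp", "data-storage.xml");
}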
To see this in action, I created a simple set of example scripts that demonstrate this functionality on github. They can also be used to test/verify that using a temporary file solution works in your environment.
An important note is that it's the file system on the particular device that matters. If you are using a Linux or Mac machine but working with a FAT file system (for example, a usb thumb drive), this method won't work.

Check if another program has a file open

After doing tons of research and not being able to find a solution to my problem, I decided to post here on Stack Overflow.
Well, my problem is kind of unusual, so I guess that's why I wasn't able to find any answer:
I have a program that is recording stuff to a file. Then I have another one that is responsible for transferring that file. Finally I have a third one that gets the file and processes it.
My problem is:
The file transfer program needs to send the file while it's still being recorded. The problem is that when the file transfer program reaches the end of the file, that doesn't mean the file is actually complete, since it is still being recorded.
It would be nice to have a way to check whether the recorder still has the file open or has already closed it, so I could judge whether the end of file is a real end of file or there simply isn't any further data to read yet.
Hope you can help me out with this one. Maybe you have another idea on how to solve this problem.
Thank you in advance.
GeKod
Simply put, you can't without using filesystem notification mechanisms; Windows, Linux and OS X all have flavors of this. I forget how Windows does it off the top of my head, but Linux has 'inotify' and OS X has 'kqueue'.
The easy way to handle this is to record to a tmp file and, when the recording is done, move it into the 'ready-to-transfer' directory. If you do this so that the files stay on the same filesystem, the move will be atomic and instant, ensuring that any time your transfer utility 'sees' a new file, it is wholly formed and ready to go.
Or just give your tmp files no extension, and when recording is done, rename the file to the extension that the transfer agent is polling for.
Have you considered using a stream interface between the recorder program and the one that grabs the recorded data/file? If you have access to a stream interface (say, an OS/stack service) that also provides a reliable end-of-stream signal/primitive, you could consider using that to replace the file interface.
There are no functions/libraries available in C to do this. But a simple alternative is to rename the file once an activity is over. For example, the recorder can open the file with the name file.record and, once done with recording, rename file.record to file.transfer. The transfer program looks for file.transfer, and once the transfer is done, it renames the file to file.read; the reader then reads that and finally renames it to file.done.
You can check whether the file is open as follows (shell example):
FILE_NAME="filename"
FILE_OPEN=$(lsof | grep "$FILE_NAME")
if [ -z "$FILE_OPEN" ]; then
    echo "File NOT open"
else
    echo "File open"
fi
Refer to http://linux.about.com/library/cmd/blcmdl8_lsof.htm
I think an advisory lock will help. If one process tries to use a file that another program is working on, it will either get blocked or get an error; if it accesses the file by force, the action succeeds but the result is unpredictable. To maintain consistency, all of the processes that want to access the file should obey the advisory lock rule. I think that will work.
When the file is closed, the lock is freed too, and other processes can try to take hold of the file.
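A sketch of that idea with flock() on Linux/OS X (both programs must cooperate: the recorder takes an exclusive lock while it is writing, and the transfer program tests the lock before treating end of file as final; the function and path names here are just for illustration):

#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

/* returns 1 if the recorder still holds its lock, 0 if the file is free,
   -1 if the file could not be opened */
int recorder_still_writing(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    /* try to grab the lock without blocking; failure means the
       recorder still holds it, i.e. recording is still in progress */
    if (flock(fd, LOCK_EX | LOCK_NB) == -1) {
        close(fd);
        return 1;
    }
    flock(fd, LOCK_UN);   /* got the lock: the recorder has finished */
    close(fd);
    return 0;
}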
