Where does pure-ftpd call its uploadscript?

I've been looking through pure-ftpd-1.0.42 source code:
https://github.com/jedisct1/pure-ftpd
Trying to find when it triggers:
https://github.com/jedisct1/pure-ftpd/blob/master/src/pure-uploadscript.c
i.e. when does it run the uploadscript after a file has been uploaded.
If you look in src/ftp_parser.c, the dostor() function is where a file starts its upload journey. From there it goes to ul_send() and then ul_handle_data(), but I get lost at that point. I never see where it says: okay, this file is now uploaded, time to call the uploadscript. Can someone show me the line?

In the pureftpd_start() function in src/ftpd.c, pure-ftpd starts up and parses all of its command-line options. It also opens a pipe to the pure-uploadscript process there, if one is configured. Rather than invoking the upload script on each upload (and incurring the fork() and exec() overhead per upload), pure-ftpd keeps the upload script process running separately and sends it each uploaded file's path over the pipe.
Knowing this, we then look for where that pipe is written to: the upload_pipe_push() function. Interestingly, that function is called by the displayrate() function, which is in turn called by both dostor() and doretr() in src/ftpd.c.
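To make that concrete, here is a minimal, self-contained sketch of the same pattern: spawn a long-lived helper once at startup, then push each uploaded file's path down a pipe. The helper name upload-helper and the one-path-per-line protocol are assumptions for illustration; pure-ftpd's real wire format is whatever upload_pipe_push() writes and pure-uploadscript.c parses.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

static int upload_pipe_fd = -1;

/* Spawn the long-lived helper once, keeping the pipe's write end. */
static void start_upload_helper(void)
{
    int fds[2];
    pid_t pid;

    if (pipe(fds) != 0) {
        perror("pipe");
        exit(1);
    }
    if ((pid = fork()) == 0) {          /* child: the "uploadscript" process */
        close(fds[1]);
        dup2(fds[0], STDIN_FILENO);     /* helper reads paths from stdin */
        close(fds[0]);
        execlp("./upload-helper", "upload-helper", (char *) NULL); /* hypothetical */
        perror("execlp");
        _exit(127);
    }
    close(fds[0]);
    upload_pipe_fd = fds[1];            /* parent keeps the write end open */
}

/* Called after each successful upload, like upload_pipe_push(). */
static void push_uploaded_path(const char *path)
{
    if (write(upload_pipe_fd, path, strlen(path)) < 0 ||
        write(upload_pipe_fd, "\n", 1) < 0) {
        perror("write");
    }
}

int main(void)
{
    start_upload_helper();
    push_uploaded_path("/tmp/uploaded-file.bin"); /* e.g. from dostor() */
    return 0;
}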
Hope this helps!

Related

Camel File Consumer - leave file after processing but accept files with same name

So this is the situation:
I have a workflow that waits for files in a folder, processes them, and then sends them to another system.
For different reasons we use an ActiveMQ broker between "sub-processes" in the workflow, where each route alters the message in some way before it is sent in the last step. Each "sub-process" only reads from and writes to ActiveMQ, except the first and last route.
It is also part of the workflow that there is a route after sending the message that takes care of the initial file, moving or deleting it. Only this route knows what to do with the file.
This means that the file has to stay in the folder after the consumer route has finished, because only the meta-data is written to ActiveMQ and the actual workflow is not done yet.
I got this to work using the noop=true parameter on the file consumer.
The problem with this is that after the "after sending" route deletes (or moves) the file, the file consumer will not react to new files with the same name until I restart the route.
It is clear that this is the expected and correct behavior, because the point of the noop parameter is to ignore a file that was consumed before, but this doesn't help me.
The question is now: how do I get the file consumer to process a file only once as long as it is present in the folder, but "forget" about it as soon as some other process (in this case a different route) removes the file?
As an alternative I could let the file component move the file into a temp folder, from where it gets processed later, leaving the consuming folder empty. But this introduces new problems that I'd like to avoid (e.g. a file with the same name arriving in the folder while the first one is not yet processed completely).
I'd love to hear some ideas on how to handle that case.
Greets Chris
You need to tell Camel not to use only the filename for the idempotency check.
In a similar situation, where I wanted to pick up changes to a file that was otherwise no-oped, I used the option
idempotentKey=${file:name}-${file:modified}
in my URI, which ensures that if you change the file, or a new file is created, Camel treats it as a different file and processes it.
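For context, a complete file endpoint URI combining this with noop might look like the following (the folder name is made up):
file:inbox?noop=true&idempotentKey=${file:name}-${file:modified}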
Do be careful to check how many files you might be processing, because the idempotent cache is limited by default (to 1000 records, I think). If you were processing more than 1000 files at a time, it might "forget" that it has already processed file 1 when file 1001 arrives, and try to reprocess file 1 again.

Redirect program output to my program

My program launches a helper program using fork() / execvp() and I'd like to show the helper program's output in my program's GUI. The helper's output should be shown line by line in a listview widget embedded in my program's GUI. Of course, I could just redirect the output to a file, wait until the helper has finished, and then read the whole file and show it. But that's not an optimal solution. Ideally, I'd like to show the helper's output as it is sent to stdout, i.e. line by line, while the helper is still working.
What is the suggested way of doing this?
Off the top of my head, what comes to mind is the following solution, but I'm not sure whether it will work at all, because one process will be writing to the file while the other is trying to read from it:
1) Start the helper like this using execvp() after a fork():
./helper > tmpfile
2) After that, my program tries to open "tmpfile" using open() and then uses select() to wait until there's something to read from that file. Once my program has obtained a line of output, it sends it to my GUI's listview widget.
Is this how it should be done or am I totally on the wrong track here?
Thanks!
You should open a pipe and monitor the progress of the child process using select(). You can also use popen() if you only need one-way communication; in that case you can get the file descriptor by calling fileno() on the returned FILE *.
See:
pipe
popen
select
fileno
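To illustrate, here is a minimal sketch of the pipe()/fork()/select() approach; the helper path ./helper is hypothetical, and a real GUI program would register the pipe's read end with its toolkit's event loop rather than blocking in select():

#include <stdio.h>
#include <sys/select.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    pid_t pid;
    char buf[4096];

    if (pipe(fds) != 0) {
        perror("pipe");
        return 1;
    }
    if ((pid = fork()) == 0) {          /* child: run the helper */
        close(fds[0]);
        dup2(fds[1], STDOUT_FILENO);    /* helper's stdout goes into the pipe */
        close(fds[1]);
        char *argv[] = { "./helper", NULL };
        execvp(argv[0], argv);
        perror("execvp");
        _exit(127);
    }
    close(fds[1]);                      /* parent only reads */

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fds[0], &rfds);
        if (select(fds[0] + 1, &rfds, NULL, NULL, NULL) < 0)
            break;
        ssize_t n = read(fds[0], buf, sizeof buf - 1);
        if (n <= 0)                     /* EOF: the helper exited */
            break;
        buf[n] = '\0';
        fputs(buf, stdout);             /* here: append lines to the listview */
    }
    close(fds[0]);
    waitpid(pid, NULL, 0);
    return 0;
}

One caveat: stdio inside the helper usually switches from line buffering to full buffering once its stdout is no longer a terminal, so output may arrive in large chunks rather than line by line unless the helper flushes explicitly.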

How to get modified data from a file in linux?

I am designing a logger plugin for my tool. I have a busybox syslog on a target board, and I want to get the syslog data from it so I can forward it to my host (not via remote port forwarding of syslog) via my own communication framework. Initially I made use of syslog's ability to forward messages it receives to a named pipe, but this only works via a patch addition, which is not feasible in my case. So now my idea is to configure syslog to forward all log messages it receives to a file, and track that file to get my data. I could use tail to monitor the file for changes, but my busybox tail does not support the "--follow" option, and syslog performs logrotate, which causes "tail -f" to fail. I am also not sure this is a good method in the first place. So what I wanted to ask is: is there another way to get the modified data from a file? I could use inotify, but that can only be used to track that the file changed, not what changed. Is there a way to do this?
You could try the "diff" utility (or git-diff, which has more facilities).
You could write a script/program that receives inotify events. On each event it reopens the file and reads until EOF, starting from the previously saved last-read file position.
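As a rough sketch, assuming the syslog output file is /var/log/messages and that a shrinking file means logrotate truncated it (a more robust version would also watch for the file being renamed, e.g. via IN_MOVE_SELF, and re-add the watch):

#include <fcntl.h>
#include <stdio.h>
#include <sys/inotify.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/var/log/messages";  /* assumed log file */
    off_t offset = 0;                        /* last read position */
    char evbuf[4096], data[4096];
    int infd = inotify_init();

    if (infd < 0 || inotify_add_watch(infd, path, IN_MODIFY) < 0) {
        perror("inotify");
        return 1;
    }
    for (;;) {
        if (read(infd, evbuf, sizeof evbuf) <= 0)   /* block until a change */
            break;
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            continue;                               /* rotated away; retry */
        struct stat st;
        if (fstat(fd, &st) == 0 && st.st_size < offset)
            offset = 0;                             /* truncated: start over */
        lseek(fd, offset, SEEK_SET);
        ssize_t n;
        while ((n = read(fd, data, sizeof data)) > 0) {
            fwrite(data, 1, n, stdout);             /* forward the new bytes */
            offset += n;
        }
        close(fd);
    }
    return 0;
}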

Check if another program has a file open

After doing tons of research and not being able to find a solution to my problem, I decided to post here on Stack Overflow.
Well, my problem is kind of unusual, so I guess that's why I wasn't able to find any answer:
I have a program that is recording stuff to a file. Then I have another one that is responsible for transferring that file. Finally I have a third one that gets the file and processes it.
My problem is:
The file transfer program needs to send the file while it is still being recorded. The problem is that when the file transfer program reaches the end of the file, that doesn't mean the file is actually complete, since it is still being recorded.
It would be nice to have something to check whether the recorder still has the file open or has already closed it, so I could judge whether the end of file is a real end of file or there simply isn't any further data to read yet.
Hope you can help me out with this one. Maybe you have another idea on how to solve this problem.
Thank you in advance.
GeKod
Simply put: you can't without using filesystem notification mechanisms; Windows, Linux and OS X all have flavors of this. I forget how Windows does it off the top of my head, but Linux has inotify and OS X has kqueue.
The easy way to handle this is to record to a tmp file, and when the recording is done, move the file into the 'ready-to-transfer' directory. If both locations are on the same filesystem, the move is atomic and instant, ensuring that any time your transfer utility 'sees' a new file, it is wholly formed and ready to go.
Or just have your tmp files carry no extension; then, when recording is done, rename the file to an extension that the transfer agent is polling for.
Have you considered using a stream interface between the recorder program and the one that grabs the recorded data/file? If you have access to a stream interface (say, an OS/stack service) that also provides a reliable end-of-stream signal/primitive, you could consider it as a replacement for the file interface.
There are no standard C functions/libraries to do this. But a simple alternative is to rename the file once an activity is over. For example, the recorder can open the file under the name file.record and, once done recording, rename file.record to file.transfer; the transfer program looks for file.transfer and, once the transfer is done, renames it to file.read; the reader reads that and finally renames it to file.done!
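A minimal sketch of that rename hand-off, as seen from the recorder's side (rename() is atomic when source and destination are on the same filesystem):

#include <stdio.h>

int main(void)
{
    /* the recorder wrote to file.record; recording is now finished */
    if (rename("file.record", "file.transfer") != 0) {
        perror("rename");
        return 1;
    }
    /* the transfer program polls for file.transfer and, when done,
     * renames it to file.read; the reader finally makes it file.done */
    return 0;
}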
You can check whether the file is open as follows:
FILE_NAME="filename"
FILE_OPEN=$(lsof | grep "$FILE_NAME")
if [ -z "$FILE_OPEN" ]; then
    echo "File NOT open"
else
    echo "File Open"
fi
Refer to http://linux.about.com/library/cmd/blcmdl8_lsof.htm
I think an advisory lock will help. If one process tries to use a file on which another process holds a lock, it will block or get an error; it can still access the file by force, but then the result is unpredictable. To maintain consistency, all processes that want to access the file have to obey the advisory-lock protocol. I think that will work.
When the file is closed, the lock is freed too, and other processes can then take the lock on the file.
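A sketch of what that could look like with flock(): the recorder would hold an exclusive lock while writing, and the transfer program tests it non-blockingly. The file name is hypothetical, and this only works if both programs cooperate, since the lock is advisory:

#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
    int fd = open("recording.dat", O_RDONLY);  /* hypothetical file */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (flock(fd, LOCK_EX | LOCK_NB) == 0) {   /* try without blocking */
        puts("lock acquired: recorder is done, safe to transfer");
        flock(fd, LOCK_UN);
    } else {
        puts("still locked: recorder has the file open");
    }
    close(fd);
    return 0;
}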

Semantics of Windows batch programming redirection

I have a Windows batch script (my.bat) which has the following line:
DTBookMonitor.exe 2>&1 > log\cmdProcessLog.txt
So, from my understanding, this runs DTBookMonitor, redirects STDERR to STDOUT and then redirects STDOUT to the file log\cmdProcessLog.txt.
I then run my.bat. DTBookMonitor runs for a significant amount of time, and when I run my.bat a second time (while it is already running), it immediately exits from the second instance of my.bat.
Is this purely because of the redirection to cmdProcessLog?
Better late than never :)
Windows redirection locks the output file so that no other process can open the file for writing at the same time. That is why the second instance fails when it tries to redirect output to the same file.
I'd guess it's either due to that, or because DTBookMonitor only allows one instance of itself to run at a time. The following tests should shed some light on the situation:
Run the first (long) instance of DTBookMonitor
Run a second instance without redirecting any of its output
Alternatively, run a second instance, but redirect the output to a file other than log\cmdProcessLog.txt (see the example command below)
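For that third test, the line in my.bat would look something like this (the alternate log file name is made up):
DTBookMonitor.exe 2>&1 > log\cmdProcessLog2.txt
If that second instance runs fine, the lock on cmdProcessLog.txt is the culprit; if it still exits immediately, DTBookMonitor itself refuses to run twice.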
Do you get similar results? Different results?
