Estimating the execution time of code on a local Linux system - c

I spend most of my time solving problems on Topcoder/SPOJ, so naturally I think about the performance (execution time) of my code on my own system before submitting it.
While searching I found the time command in Linux. The problem is that it also counts the time I spend typing in the values for the test cases, in addition to the processing time. So I thought of putting the input in a file and sending its contents to my program.
Something like
cat input.txt > ./myprogram
But this doesn't work (I am not good with Linux pipes). Can anyone point out the mistake, or suggest a better approach to measure my code's execution time?
EDIT
All of my programs read from stdin

You need this:
./myprogram < input.txt
Or if you insist on the Useless Use of Cat:
cat input.txt | ./myprogram
You can put time in front of ./myprogram in either case, e.g. time ./myprogram < input.txt, so the measurement no longer includes you typing the input.
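For context, a hypothetical sketch of the kind of program being timed: it reads all of its test cases from stdin (the format here, a count followed by pairs of numbers, is made up), so ./myprogram < input.txt, optionally prefixed with time, supplies the input without any interactive typing.

#include <stdio.h>

int main(void)
{
    int t;
    if (scanf("%d", &t) != 1)          /* number of test cases */
        return 1;
    while (t-- > 0) {
        long a, b;
        if (scanf("%ld %ld", &a, &b) != 2)
            return 1;
        printf("%ld\n", a + b);        /* stand-in for the real per-test work */
    }
    return 0;
}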

You might want to look at xargs.
Something along the lines of
cat input.txt | xargs ./myprogram
Note that xargs passes the contents of input.txt to ./myprogram as command-line arguments rather than on stdin, so this only helps if your program accepts its input that way.

You can also drive this from a script by assigning a file descriptor to the input file; here fd 3 is the input file:
exec 3< input.txt
Then use the read command in a while loop to read the file line by line:
while read -u 3 -r a
do
    echo "$a"    # process each line of input here
done
exec 3<&-        # close fd 3 when finished

Related

Output to the terminal is different when I redirect output to a file AND to the terminal

For some reason, when I run my script and let it print to the terminal as it normally would, I get the intended output. Yet when I redirect the output to a file, I don't get the full output.
Let's say I have an executable named "filename" and run it as "./filename"; the output on the terminal is, let's say:
a
b
c
Yet if I do "./filename > output.txt" or "./filename |& tee output.txt", the output on the terminal AND in the output.txt file is just, let's say:
a
b
I know this isn't very specific, but my output is huge, and I was thinking the question is general enough to allow general solutions/possible causes.
I'm using a program someone else made, so I don't know where this additional output is produced. It shouldn't matter, though, since the program's functionality doesn't change, only what gets output.
Without a minimal code sample to reproduce, it's very hard to guess what's going on.
But some things you could try:
Redirect all output streams to your file, i.e. your-script &> output.txt
Run it through strace and look for write and open calls to see what's going on
Read and debug the source code to figure out what's going on

Using another Linux command's (piped) output as input to my C program

I'm working on a small C program in Linux. Let me explain what I want to do with the sample Linux command below:
ls | grep hello
The above command is executed in the following fashion (let me know if I've got this wrong):
The ls command is executed first.
Its output is given to the grep command, which then generates output by matching "hello".
Now I would like to write a C program which takes the piped output of another command as its input, in the same fashion that the grep program gets its input from ls in my example above.
A similar question has been asked by another user here, but for some reason that thread was marked as "not a valid question".
I initially thought the C program could get this as a command-line argument, but that is not the case.
If you pipe the output from one command into another, that output will be available on the receiving process's standard input (stdin).
You can access it using the usual scanf or fread functions. scanf and the like operate on stdin by default (in the same way that printf operates on stdout by default; in the absence of a pipe, stdin is attached to the terminal), and the C standard library provides a FILE *stdin for functions like fread that read from a FILE stream.
POSIX also provides a STDIN_FILENO macro in unistd.h, for functions that operate on file descriptors instead. This will essentially always be 0, but it's bad form to rely on that being the case.
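As a rough sketch (the name ./myfilter and the use of fgets are only illustrative; scanf or fread work the same way), a C program that consumes whatever is piped into it could look like this:

#include <stdio.h>

int main(void)
{
    char line[4096];

    /* stdin is the read end of the pipe when invoked as:  ls | ./myfilter  */
    while (fgets(line, sizeof line, stdin) != NULL) {
        fputs(line, stdout);           /* do something with each line; here: echo it */
    }
    return 0;
}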
In fact, ls and grep start at the same time.
ls | grep hello means: use ls's standard output as grep's standard input. ls writes its results to standard output, and grep reads whatever arrives on its standard input as soon as it is available.
Still have doubts? Do an experiment: run
find / | grep usr
find / lists every file on the computer, so it takes a long time.
If find had to run to completion first, with the OS only then handing its output to grep, we would wait a long time staring at a blank screen until find finished and grep started. Instead we see results immediately, which shows the two run concurrently.
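For the curious, here is a rough sketch of how a shell itself might wire up ls | grep hello using pipe(), fork() and exec (error handling is kept minimal). Both children are started before either finishes, which is exactly why the two sides of a pipeline run concurrently.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t writer = fork();
    if (writer == 0) {                  /* child 1: runs ls */
        dup2(fd[1], STDOUT_FILENO);     /* its stdout is the pipe's write end */
        close(fd[0]); close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        perror("execlp ls"); _exit(127);
    }

    pid_t reader = fork();
    if (reader == 0) {                  /* child 2: runs grep hello */
        dup2(fd[0], STDIN_FILENO);      /* its stdin is the pipe's read end */
        close(fd[0]); close(fd[1]);
        execlp("grep", "grep", "hello", (char *)NULL);
        perror("execlp grep"); _exit(127);
    }

    close(fd[0]); close(fd[1]);         /* parent keeps no pipe ends open */
    waitpid(writer, NULL, 0);
    waitpid(reader, NULL, 0);
    return 0;
}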

Potential Dangers of Running Code in Parallel

I am working on OS X and using bash as my shell. I have a script which calls an executable hundreds of times, and each call is independent of the others, so I am going to run this code in parallel. However, each call to the executable appends its output to a shared text file, one line per call.
The ordering of the lines in the text file is not important (it would be nice, but it's totally not worth overcomplicating things since I can just use the Unix sort command afterwards); what is important is that every call of the executable actually gets its line into the file. My concern is that if I run the script in parallel, by some freak accident two processes will check out the text file, print to it, and then save different copies back to the original location, thus nullifying one of the writes to the file.
Does this actually happen, or is my understanding of printing to a file flawed? I don't fully know whether this is case-by-case, so I will provide some mock code of what is being done in my program below.
Script:
#!/bin/sh
abs=$1
input=$(echo "$abs" | awk '{print 0.004 + 0.005*$1 }')
./program "$input"
"./program":
~~Normal .c file stuff here~~
~~VALUE magically calculated here~~
~~run number is pulled out of input and assigned to index for sorting~~
FILE *fpp;
fpp = fopen("Doc.txt","a");
fprintf(fpp,"%d, %.3f\n", index, VALUE);
fclose(fpp);
~~Closing events of program.c~~
Commands to run script in parallel in bash:
printf "%s\n" {0..199} | xargs -P 8 -n 1 ./program
Thanks for any help you guys can offer.
A write() call (such as the one fwrite() eventually makes) on a file opened with the append flag set in open() (as fopen() does for mode "a") is guaranteed to avoid the race condition you describe.
O_APPEND
If set, the file offset shall be set to the end of the file prior to each write.
From the POSIX specification for open() (opengroup.org).
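A minimal sketch of that guarantee in practice, reusing Doc.txt and the index/value pair from the question (the helper name append_result and the main() driver are made up): the key points are opening with O_APPEND and emitting each complete line in a single write().

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int append_result(int index, double value)
{
    char line[64];
    int len = snprintf(line, sizeof line, "%d, %.3f\n", index, value);

    /* O_APPEND: the kernel moves the offset to end-of-file before each write */
    int fd = open("Doc.txt", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd == -1)
        return -1;

    ssize_t n = write(fd, line, (size_t)len);   /* one whole line, one write() */
    close(fd);
    return (n == len) ? 0 : -1;
}

int main(void)
{
    return append_result(0, 0.004) == 0 ? 0 : 1;
}

Because the line is assembled in memory first and handed to the kernel in one write() on an O_APPEND descriptor, concurrent runs interleave whole lines rather than partial ones.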
Race conditions are what you are thinking of.
Not 100% sure, but if you simply append to the end of the file rather than opening it and editing it in place, you should be all right.
If you have the option, make your program write to standard output instead of directly to a file. Then you can let the shell merge the output of your programs:
printf "%s\n" {0..199} | parallel -P 8 -n 1 ./program > merged_output.txt
Yeah, that looks like a recipe for disaster. If those processes both hit opening the file at roughly the same time, only one will "take".
I suggest either (easier) writing to separate files then catting them together when the processing is done, or (harder) sending all results to a consumer process that will write the file for everyone.
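A sketch of the separate-files variant with hypothetical names: each run writes its own result_<index>.txt, and the files are concatenated afterwards (for example with cat result_*.txt > Doc.txt).

#include <stdio.h>

/* Each invocation writes only its own file, so no two processes ever touch
 * the same file; the per-run files are merged once everything has finished. */
static int write_result(int index, double value)
{
    char name[64];
    snprintf(name, sizeof name, "result_%03d.txt", index);

    FILE *f = fopen(name, "w");
    if (f == NULL)
        return -1;
    fprintf(f, "%d, %.3f\n", index, value);
    fclose(f);
    return 0;
}

int main(void)
{
    return write_result(0, 0.004) == 0 ? 0 : 1;
}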

Transferring output of a program to a file in C

I have written a C program to get all the possible permutations of a string. For example, for abc it will print abc, bca, acb, etc. I want to get this output in a separate file. Which function should I use? I don't have any knowledge of file handling in C. If somebody could explain with a small piece of code, I would be very thankful.
Using the function fopen (and fprintf(f,"…",…); instead of printf("…",…); where f is the FILE* obtained from fopen) should give you that result. You should fclose() your file when you are finished; if you don't, open streams are still flushed and closed automatically on normal program exit, but closing the file yourself is better practice.
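A minimal sketch of that pattern; the file name output.txt and the hard-coded strings are only placeholders for the real permutation loop.

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("output.txt", "w");    /* open (and truncate) the output file */
    if (f == NULL) {
        perror("fopen");
        return 1;
    }

    /* wherever the program currently printf()s a permutation, fprintf to f instead */
    fprintf(f, "%s\n", "abc");
    fprintf(f, "%s\n", "acb");

    fclose(f);                             /* flush and close when finished */
    return 0;
}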
If you're running it from the command line, you can just redirect stdout to a file.
On Bash (Mac / Linux etc):
./myProgram > myFile.txt
or on Windows
myProgram.exe > myFile.txt
It's been a while since I did this, but IIRC there is a function, freopen, that lets you reopen a file on an existing stream. If you reopen myfile.txt on stdout (file descriptor 1), everything you write to stdout will go there.
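A small sketch of that idea: reopen stdout on a file, after which ordinary printf calls land in the file instead of on the terminal (the file name is just an example).

#include <stdio.h>

int main(void)
{
    /* reassociate the stdout stream with a file */
    if (freopen("myFile.txt", "w", stdout) == NULL) {
        perror("freopen");
        return 1;
    }
    printf("abc\n");    /* goes to myFile.txt, not to the terminal */
    return 0;
}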
You can use the tee command (standard on *nix; available on Windows via PowerShell or ports) - this allows output to be sent to both the standard output and a named file.
./myProgram | tee myFile.txt

How to redirect output away from /dev/null

I have an application that runs a command as below:
<command> <switches> >& /dev/null
I can configure <command>, but I have no control over <switches>. All the output generated by this command goes to /dev/null. I want the output to be visible on screen or redirected to a log file.
I tried to use freopen() and related functions to reopen /dev/null to another file, but could not get it working.
Do you have any other ideas? Is this possible at all?
Thanks for your time.
PS: I am working on Linux.
Terrible hack:
open the app in a text/hex editor in binary mode, find the string '/dev/null' and replace it with a path of the same length,
e.g. '~/tmp/log'
make a backup first
be careful
be very careful
did I mention the backup?
Since you can modify the command you run, you can use a simple shell script as a wrapper to redirect the output to a file.
#!/bin/bash
"$@" >> logfile
If you save this in your path as capture_output.sh then you can add capture_output.sh to the start of your command to append the output of your program to logfile.
Append # at the end of your command so it becomes <command> # >& /dev/null, thus commenting out the undesired part.
Your application is probably running a shell and passing it that command line.
You need to make it run a script written by you. That script will replace >/dev/null in the command line with >>/your/log and call the real shell with the modified command line.
The first step is to change the shell used by the application. Changing the environment variable SHELL should suffice, i.e., run your application as
SHELL=/home/user/bin/myshell theApp
If that doesn't work, try momentarily linking /bin/sh to your script.
myshell will call the original shell, but after pattern-replacing the parameters:
#!/bin/bash
sh ${1+"${@/\>\/dev\/null/>>\/your\/log}"}
Something along these lines should work.
You can do this with an already running process by using gdb. See the following page: http://etbe.coker.com.au/2008/02/27/redirecting-output-from-a-running-process/
Can you create an alias for that command? If so, alias it to another command that dumps output to a file.
The device file /dev/tty references your application's controlling terminal - if that hasn't changed, then this should work:
freopen("/dev/tty", "w", stdout);
freopen("/dev/tty", "w", stderr);
Alternatively, you can reopen them to point to a log file:
freopen("/var/log/myapp.log", "a", stdout);
freopen("/var/log/myapp.err", "a", stderr);
EDIT: This is NOT a good idea and certainly not worth trying unless you know what this can break. It works for me, may work for you as well.
OK, this is a really bad hack and probably not worth doing. It assumes that none of the other suggestions works, that you simply do not have access to the binary/application (which contains the command with /dev/null), and that you cannot redirect the output to another file by replacing /dev/null.
In that case you can delete /dev/null ($> rm /dev/null) and create your own file in its place (preferably as a soft link) to which all the data will be directed. When you are done, you can recreate /dev/null using the following command:
$> mknod -m 666 /dev/null c 1 3
Just to be very clear, this is a bad hack and certainly requires root permissions to work. There is a high chance that your redirected file will contain data from many other running applications/binaries that use /dev/null as a sink.
It may not exactly redirect, but it lets you see the output wherever it's being sent:
strace -ewrite -p $PID
It's not that clean (it shows lines like write(#, ...)), but it works, and it's a one-liner :D. You might also dislike the fact that arguments are abbreviated; to control that, use the -s parameter, which sets the maximum length of the strings displayed.
It catches all streams, so you might want to filter that somehow.
You can filter it:
strace -ewrite -p $PID 2>&1 | grep "write(1"
which shows only the calls on descriptor 1. The 2>&1 redirects stderr to stdout, since strace writes to stderr by default.
In perl, if you just want to redirect STDOUT to something slightly more useful, you can just do something like:
open STDOUT, '>>', '/var/log/myscript.log';
open STDERR, '>>', '/var/log/myscript.err';
at the beginning of your script, and that'll redirect it for the rest of your script.
Along the lines of e-t172's answer, can you set the last switch to (or append to it):
; echo
If you can put something inline before passing things to /dev/null (not sure if you are dealing with a hardcoded command), you could use tee to redirect to something of your choice.
Example from Wikipedia which allows escalation of a command:
echo "Body of file..." | sudo tee root_owned_file > /dev/null
http://en.wikipedia.org/wiki/Tee_(command)
