I'm using Ubuntu 14.04.
There are 4 files involved: 'compile.sh', 'execute.sh', 'work.c', 'tester.sh'.
'compile.sh' compiles 'work.c' and outputs an executable file called 'execute.sh'. In my own testing process I run ./compile.sh, then ./execute.sh to run my C program. This works.
Now, 'tester.sh' is a script that calls a Java program which does the same thing: it runs my 'compile.sh' first and then executes 'execute.sh', and checks the correctness of my program's output.
The problem is that when I do ./tester.sh, I get the error below:
Reading first line from program...
./execute.sh: ./execute.sh: cannot execute binary file
First line of execution should match: Created \d heaps of sizes .+
Failed to execute (error executing ./execute.sh)
You can ignore the third line "First line of execution..."; it checks whether my output matches the tester's expectations exactly. Since the binary cannot be executed, the first line naturally does not match.
So why does it say "cannot execute binary file"?
Content in compile.sh
#!/bin/bash
gcc -Wall work.c -o execute.sh
Content in tester.sh
#!/bin/bash
java -cp bin/tester.jar edu.ssu.cs153.work1.Tester
(bin/tester.jar is in my local machine; we can assume there is nothing wrong with the tester script.)
Diagnosis
It is weird, but not disallowed, to name an executable with the .sh extension. Your problem is that the Java code is trying to run it as a shell script (e.g. bash ./execute.sh), and it isn't a shell script so it fails. You need to change the Java to run the .sh file as an executable instead of as a shell script. Or, better (since you probably can't fix the Java), fix the compilation so that it produces an executable with a different name (e.g. work), and have execute.sh execute ./work.
File execute.sh is just the output from compiling work.c; it is like the default a.out from gcc. I can run ./execute.sh from the terminal and see all the correct outputs.
The trouble is that when you run it, you do ./execute.sh and the shell executes the binary directly. The Java is running it as bash ./execute.sh, and that generates the error. Try it at the command line.
Prescription
On the face of it, you need to change compile.sh, perhaps like this (generating a program work from work.c):
#!/bin/bash
gcc -o work -Wall work.c
And you write a shell script called execute.sh that reads:
#!/bin/bash
exec ./work "$@"
This script runs your program with any command line arguments it is given. The exec means the shell replaces itself with your program; there are minor advantages to doing it that way, though it'll be OK if you omit the exec from the script.
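If you cannot touch the tester at all, one way to keep its compile.sh → execute.sh flow working is to have compile.sh both build work and (re)write the execute.sh wrapper. A sketch, using the same file names as above:
#!/bin/bash
# compile.sh: build the real executable, then create the wrapper the tester runs
gcc -o work -Wall work.c || exit 1

cat > execute.sh <<'EOF'
#!/bin/bash
exec ./work "$@"
EOF
chmod +x execute.sh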
Related
I have a simple question. I want to execute a C program in a shell script. How do I do that? Thanks for your help in advance.
Assuming this is linux/unix we're talking about:
#!/bin/sh
/path/to/executable arg1 arg2
cc hello_world.c #produces a.out
./a.out #run your program
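Putting those two pieces together, a minimal sketch (assuming a source file named hello_world.c in the current directory):
#!/bin/sh
# Compile the C source into a.out, then run it by relative path
cc hello_world.c && ./a.out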
IMHO, your problem is the $PATH. Your current directory is not in PATH, so when you enter
a.out
your shell responds:
-bash: a.out: command not found
You should execute it as
./a.out
(or add "." to your PATH, but this is not recommended.)
Almost every program that you execute in a shell script is a C program (but some, often many, of the commands you execute may be built into the shell). You execute a C program in the same way as any other program:
By basename: command [arg1 ...]
The command must be in a directory searched by the shell - on your PATH, in other words.
By relative name: ./command [arg1 ...] or ../../bin/command [arg1 ...]
The program must exist and be executable (by you)
By absolute name: /some/directory/bin/command [arg1 ...]
The program must exist and be executable (by you)
One of the beauties of Unix is that programs you create, whether in C or any other language, attain the same status as the system-provided commands. The only difference is that the system-provided commands live in a different place (such as /bin or /usr/bin) from the commands you create (such as /usr/local/bin or $HOME/bin).
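For instance, all three of these run a program; the first relies on $PATH, the other two give an explicit location (the a.out here is just the example built earlier):
ls              # by basename: found via $PATH
./a.out         # by relative name: a program in the current directory
/bin/ls         # by absolute name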
I have an occasion where a C program invokes a shell script, which in-turn does some copying stuff from the CD mount location to an installation directory.
Now my question is: is there a straightforward way to get the absolute path of this C program inside the shell script?
I tried a couple of approaches, including using "$(ps -o comm= $PPID)" from within the script, but nothing has worked so far. I know that I could create a temporary file from the C program containing its own name (argv[0]) and have the shell script read that file, but I don't want to follow that approach here.
Of course, the path could be passed as an argument to the script, but I was wondering whether bash built-ins or something similar could be used here instead.
On Linux there is a /proc/self/exe symbolic link that points to the absolute path of the currently executing file, so you can export an environment variable containing that path before spawning the shell. Something like (headers and error handling omitted):
char buf[PATH_MAX];
ssize_t n = readlink("/proc/self/exe", buf, sizeof(buf) - 1);
buf[n > 0 ? n : 0] = '\0';   /* readlink() does not null-terminate */
setenv("MYEXE", buf, 1);     /* putenv() would need a single "NAME=value" string */
system("thescript");
and accessing the variable in the script:
echo $MYEXE
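A slightly fuller sketch of the script side (it assumes the C program exported MYEXE as above):
#!/bin/bash
# thescript: report the absolute path of the program that invoked us
if [ -n "$MYEXE" ]; then
    echo "Invoked by: $MYEXE"
else
    echo "MYEXE is not set" >&2
    exit 1
fi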
Before running a foo command, you could use which, like
fooprog=$(which foo)
to get the full path of the program (scanning your $PATH). For example which ls could give /bin/ls ....
On Linux specifically you could use proc(5).
In your shell process (running bash or some POSIX-compliant shell) started by your C program, $PPID gives the parent process id, hopefully the pid of the process running your C program.
Then the executable is /proc/$PPID/exe which is a symbolic link. Try for example the ls -l /proc/$PPID/exe command in some terminal.
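For example, a small sketch (Linux-specific, and it assumes the shell's parent really is your C program):
#!/bin/bash
# Resolve the /proc/$PPID/exe symbolic link to the parent's executable
parent_exe=$(readlink -f "/proc/$PPID/exe")
echo "Called from: $parent_exe"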
(Notice that you don't run C source files or, strictly speaking, C programs; you run an ELF executable that was built by compiling C code.)
You might hit weird cases (you'll often ignore them, but you might decide to handle them). Someone might move, replace, or remove your executable while it is running. The parent process (your executable) might die prematurely, leaving the shell process orphaned. Or the executable might remove itself.
I'm trying to compile and run a C program while looping over input files in bash. Here is the bash script I am using to automate it.
~!/bin/bash
file=1
outfile='outputnumber'$file
readsfile='readsfilename'$file'.txt'
compilefile=compiler$file'.o'
gcc -lgsl -lgslcblas -std=c99 filewithccode.c -o $compilefile
echo "Compilation over"
./$compilefile $outfile $readsfile
So what I'm basically trying to do is compile filewithccode.c so that the executable is stored as compiler1, which then takes outputnumber1 and readsfilename1.txt as input. I want to do this so that I can loop over "file" and automate the execution for multiple files (I have 45 of them). But I'm getting the error:
Segmentation fault (core dumped) ./$compilefile $outfile $readsfile
I am trying to use different names for the compiled file because I am trying to run them in parallel on a server, and I am not sure whether compiling with the same output name would cause an issue.
Any suggestions? I suspect the "./$" is causing the error, because bash does echo "Compilation over".
Your last line, being just variables placed on a line, might cause problems with how your script interprets it. You might try building it into a string and then using the exec command on that string, for example:
comm="./""$compilefile $outfile $readsfile"
exec $comm
This has saved me a lot of syntax trouble with referencing variables in the past.
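For the looping part of the question, a hedged sketch (it assumes one compilation is enough for all runs and that the numbered file names follow the pattern in the script above):
#!/bin/bash
# Compile once, then run the program over all 45 numbered input files in parallel
gcc -std=c99 filewithccode.c -o compiler -lgsl -lgslcblas || exit 1

for file in $(seq 1 45); do
    ./compiler "outputnumber$file" "readsfilename$file.txt" &
done
wait   # wait for all background runs to finish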
The code snippet I wrote is like this:
#include <stdlib.h>
int main()
{
system("/bin/bash ls");
}
When I compile and execute the binary, I get this result:
/bin/ls: /bin/ls: cannot execute binary file
So what is missing here?
ls is an actual system binary; it's not a built-in shell command. All you need is system("ls"). Right now you're telling bash to run the contents of the ls binary file as a script.
Do not use system() from a program, because strange values for some environment variables might be used to subvert system integrity. Use the exec(3) family of functions instead, but not execlp(3) or execvp(3). system() will not, in fact, work properly from programs with set-user-ID or set-group-ID privileges on systems on which /bin/sh is bash version 2, since bash 2 drops privileges on startup. (Debian uses a modified bash which does not do this when invoked as sh.)
In your case, ls is not a built-in shell command; /bin/bash ls tells bash to read the ls binary as a script, which is why system() is not working here.
You can check with the type <cmd_name> command to see whether cmd_name is a shell built-in or not.
For more, see man system.
If no options are specified, the argument to /bin/bash is the name of a file containing shell commands to execute.
To execute commands specified on the command line, use the -c option: /bin/bash -c ls.
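The difference is easy to reproduce at a shell prompt (illustrative):
bash /bin/ls    # bash tries to read the ls binary as a script: "cannot execute binary file"
bash -c ls      # -c makes bash treat "ls" as a command line to run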
As others have noted, there are security considerations when doing this, so you should seek alternatives.
I'm a student and this is my first exposure to bash scripting, besides messing with a simple Makefile for C.
#!/usr/bin/sh
gcc -g -std=c99 -Wall -c field.c
gcc -g -std=c99 -Wall -c testField.c
gcc -g field.o testField.o -o testField
#testField get 0xa 0 1 > PA1output.txt
#testField get 0xaa 0 3 >> PA1output.txt
That is my script. I want to compile field.c and testField.c into the executable testField.
Whether or not I leave the last two lines commented out, the Linux terminal hangs, and after 10 seconds of nothing happening I press Ctrl+C to stop it. Ultimately I want to redirect output to PA1output.txt, then concatenate things onto the end of the file, but I want to rewrite the file contents each time.
As far as I understand it, > overwrites the contents of the specified file, and >> appends to the end.
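For example, with the PA1output.txt file from my script:
echo first  > PA1output.txt    # > truncates PA1output.txt, then writes
echo second >> PA1output.txt   # >> appends to the end of the file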
This is not my homework; I want to automate testing of other homework I have. testField get 0xaa 0 3 are the arguments to my C program.
I tried the answers from "Bash script hangs", but that didn't fully answer my question.
My script is called 'as' to make it easy to type.
Why does the terminal hang and how do I get the script to do what I described above?
Thanks.
Your system has another program called ‘as’ which is an assembler. You are likely running this rather than your script, and it hangs because the assembler is waiting for input from your terminal.
If you insist on keeping the name, you should run your script with a full or partial pathname (like ‘./as’) so that the correct program is run.
You will probably find that your script will not run correctly without the ‘#!’ at the beginning of your first line. However, another way to run your script is ‘sh ./as’ from the command line, which does not depend on having the #! line.
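A quick way to see the clash and pick the right program (a sketch):
type -a as      # lists every 'as' the shell can find, e.g. /usr/bin/as (the assembler)
./as            # runs your script from the current directory
sh ./as         # or run it through sh, which does not need the #! line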
As Jeremy described, it's most likely a conflict of names.
If you are running your script from the command line (I really hope you are), you don't have to be afraid of giving your scripts (and all file names, for that matter) longer but more specific names. Most (if not all) command-line interfaces on Linux have some form of tab expansion. All you have to do is type enough of the name to make it unique, then press [Tab], and the shell should complete the name for you.
Here's a more thorough explanation for Bash.