Problem with gcc tracker/make/fork/exec/wait - c

This is a most singular problem, with many interdisciplinary ramifications.
It focuses on this piece of code (file name mainpp.c):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char *argv[])
{
    int status;
    if (fork())
    {
        FILE *f = fopen("/tmp/gcc-trace", "a");
        fprintf(f, "----------------------------------------------------------------\n");
        int i;
        for (i = 0; i < argc; i++)
        {
            fprintf(f, "%s:", argv[i]);
        }
        wait(&status);
        fprintf(f, "\nstatus=%d", status);
        fprintf(f, "\n");
        fclose(f);
    }
    else
    {
        execv("g++.old", argv);
    }
    sleep(10);
    return status;
}
This is used with a shell script:
#!/bin/sh
gcc -g main.c -o gcc
gcc -g mainpp.c -o g++
mv /usr/bin/gcc /usr/bin/gcc.old
mv /usr/bin/g++ /usr/bin/g++.old
cp ./gcc /usr/bin/gcc
cp ./g++ /usr/bin/g++
The purpose of this code (and a corresponding main.c for gcc) is hopefully clear: it replaces g++ and logs every call to g++ along with all command-line arguments, then calls the real g++ compiler (now renamed g++.old).
The plan is to use this to log all the calls to g++/gcc. (Since make -n does not trace recursive makes, this is a way of capturing calls "in the wild".)
I tried this out on several programs and it worked well. ( Including compiling the program itself. ) I then tried it out on the project I was interested in, libapt-pkg-dev ( Ubuntu repository ).
The build seemed to go well but when I checked some executables were missing. Counting files in the project directory I find that an unlogged version produces 1373 whereas a logged version produces 1294. Making a list of these files, I discover that all the missing files are executables, shared libraries or object files.
Capturing the standard out of both logged makes and unlogged makes gives the same output.
The recorded return value of all processes called by exec is 0.
I've placed sleeps in various positions in the code. They do not seem to make any difference. (The traced version seems to compile much faster per file. I suspected that the exec might cause the wrapper to terminate while leaving gcc running; that might cause failures because some object files might not be finished when others need them.)
I have only one more diagnostic left to try, and then I am out of ideas. Suggestions?

I'm not sure if this will solve your problem, but have you considered using strace instead of your custom code?
strace executes a command (or attaches to a running process) and lists all the system calls it makes. So for instance, instead of running make directly, you might run:
strace -f -q -e trace=execve make
-f means attach to new processes as they are forked
-q means suppress attach/detach messages
-e trace=execve means only report calls to execve
You can then grep through the output for messages about /usr/bin/gcc.

Related

Basic SDL2 app compiles with MinGW-w64 but doesn't run

I'm trying to set up a SDL2 and C development environment on Windows 10 with MinGW-w64.
When trying to run the basic c app with SDL initialization, it compiles without warnings but fails to run afterwards, again without any warnings. Executable just exits.
Here's the source:
#include <stdio.h>
#include <SDL2/SDL.h>

int main(int argc, char* argv[]) {
    puts("\nmain...\n");
    if (SDL_Init(SDL_INIT_VIDEO) < 0) {
        printf("\nInit error: %s\n", SDL_GetError());
    }
    else {
        puts("\nSDL init success...");
    }
    return 0;
}
... and the makefile:
OBJS = sdl_init.c
EXE_NAME = sdl_init_test
CFLAGS_W = -w -Wl,-subsystem,windows
LFLAGS_W = -lmingw32 -lSDL2main -lSDL2
INCS_W = -IC:\MinGW\devlibs\SDL2-2.0.12\x86_64-w64-mingw32\include
LIBS_W = -LC:\MinGW\devlibs\SDL2-2.0.12\x86_64-w64-mingw32\lib
windows_debug:
	gcc $(OBJS) $(INCS_W) $(LIBS_W) $(CFLAGS_W) $(LFLAGS_W) -g -o $(EXE_NAME).exe
... and the weird output from gdb:
Reading symbols from .\sdl_init_test.exe...
(gdb) list main
12 ../../src/mingw-w64-crt/crt/crt0_c.c: No such file or directory.
(gdb) b main
Breakpoint 1 at 0x402e70: file ../../src/mingw-w64-crt/crt/crt0_c.c, line 17.
I'm assuming I'm doing something wrong in the linking phase, but can't pinpoint it exactly.
On Linux, everything compiles, runs and debugs as expected.
Here's a corrected makefile, based on the answer, which compiles and works fine in the Windows console:
SRC = sdl_init.c
EXE_NAME = sdl_init_test
CFLAGS_W = -Wall -Wl,-subsystem,console
LFLAGS_W = -lmingw32 -lSDL2main -lSDL2
INCS_W = -IC:\MinGW\devlibs\SDL2-2.0.12\x86_64-w64-mingw32\include
LIBS_W = -LC:\MinGW\devlibs\SDL2-2.0.12\x86_64-w64-mingw32\lib
windows_debug:
	gcc $(SRC) $(INCS_W) $(LIBS_W) $(CFLAGS_W) $(LFLAGS_W) -g -o $(EXE_NAME).exe
Aside from the startup issue with the missing dynamic library, you seem to be misled (arguably by SDL itself being misleading in this respect) into thinking that b main in gdb sets a breakpoint in your main function. That's not the case: SDL redefines main to SDL_main, so if you have #include "SDL2.h" or something similar, and SDL has a main wrapper implemented for your operating system, your function gets renamed. Internally, main (or wmain, or WinMain, or whatever the target system uses as the user-defined code entry point) is implemented in the SDL2main library that you link with, and it calls SDL_main (your code).
TL;DR use b SDL_main in gdb instead.
The second point is why you don't see output text. That is once again Windows-specific: you've built a "GUI" app, which is different from a "console" app and doesn't have its stdout associated with console output. The output is still there, you just can't see it, but it can be redirected to another program or a file, e.g. your_program.exe | more or your_program.exe > stdout.txt. There are ways to reconnect stdout to the console (some freopen with CON magic, as I recall), or you can just build a console program instead with -Wl,-subsystem,console.
As a side note, the -w compiler flag (which could loosely be read as "don't ever warn me about any potential problems with my code, as I'm 100% sure it is absolutely perfect and all your warnings are unjustified complaints" (sorry)) is a really, really bad idea, with some very rare exceptions. Compilers, especially gcc and clang, are very good at warning in places where it really matters, allowing you to spot mistakes early. You want more warnings (e.g. -Wall -Wextra, probably more), not no warnings at all. And while we're at it, OBJS in a makefile logically should mean object files, not sources (you can of course call your variables anything you like; it is just misleading).

Embed a binary in C program

I am trying to write a program in C that can call certain binaries (e.g. lsof, netstat) with options. The purpose of this program is to collect forensic data from a computer; at the same time, it should not use the binaries of the computer under analysis, as they might be compromised. As a result, the certified/uncompromised binaries (e.g. lsof, netstat -antpu, etc.) need either to be embedded in the C program or to be called by the C program from, for example, a USB drive.
Taking the binary of the "ls" command as an example, I created an object file using the linker as follows:
$ ld -s -r -b binary -o testls.o bin-x86-2.4/ls
Using nm, I extracted the following entry points from the object file:
$ nm testls.o
000000000007a0dc D _binary_bin_x86_2_4_ls_end
000000000007a0dc A _binary_bin_x86_2_4_ls_size
0000000000000000 D _binary_bin_x86_2_4_ls_start
The next step would be to call the "function" from the main program with some options that I might need for example "ls -al". Thus I made a C program to call the entry point of the object file.
Then I compiled the program with the following gcc options
gcc -Wall -static testld.c testls.o -o testld
This is the main program:
#include <stdio.h>
extern int _binary_bin_x86_2_4_ls_start();
int main(void)
{
_binary_bin_x86_2_4_ls_start();
return 0;
}
When I run the program I get a segmentation fault. I checked the entry points using objdump on the testld binary and the linking seems to be successful. Why then am I getting a segmentation fault?
I also still need to call "ls" with options. How could I do this, i.e. call the "function" with the argument "-al"?
Thank you.
The ELF header of a binary isn't a function. You can't call it. If you could (like in some ancient binary formats) it would be a really bad idea because it would never return.
If you want to run another program midstream do this:
int junk;
pid_t pid;
if (!(pid = fork())) {
    /* path first, then argv[0]; running /bin/ls like this lists the
       current directory, which is probably what you want, but adjust
       as needed */
    execl("/bin/ls", "ls", (char *)NULL);
    _exit(3); /* reached only if execl failed */
}
if (pid > 0) waitpid(pid, &junk, 0);
Error handling omitted for brevity.
In your case, you should ship your own copies of your binaries alongside your program.

gcc on Windows: generated "a.exe" file vanishes

I'm using GCC version 4.7.1, but I've also tried this on GCC 4.8. Here is the code I'm trying to compile:
#include <stdio.h>
void print(int amount) {
    int i;
    for (i = 0; i < 5; i++) {
        printf("%d", i);
    }
}
int main(int argc, char** argv) {
    print(5);
    return 0;
}
It looks like it should work, and when I compile with...
gcc main.c
It takes a while to compile, produces an a.exe file, and then the a.exe file disappears. It isn't giving me any errors with my code.
Here's a gif of proof, as some people are misinterpreting this:
(Since ahoffer's deleted answer isn't quite correct, I'll post this, based on information in the comments.)
On Windows, gcc generates an executable named a.exe by default. (On UNIX-like systems, the default name, for historical reasons, is a.out.) Normally you'd specify a name using the -o option.
Apparently the generated a.exe file triggers a false positive in your antivirus software, so the file is automatically deleted shortly after it's created. I see you've already contacted the developers of Avast about this false positive.
Note that antivirus programs typically check the contents of a file, not its name, so generating the file with a name other than a.exe won't help. Making some changes to the program might change the contents of the executable enough to avoid the problem, though.
You might try compiling a simple "hello, world" program to see if the same thing happens.
Thanks to Chrono Kitsune for linking to this relevant Mingw-users discussion in a comment.
This is not relevant to your problem, but you should print a newline ('\n') at the end of your program's output. It probably doesn't matter much in your Windows environment, but in general a program's standard output should (almost) always have a newline character at the end of its last line.
Try to compile with gcc but without all standard libraries using a command like this:
gcc -nostdlib -c test.c -o test.o; gcc test.o -lgcc -o test.exe
One of the MinGW library binaries must be triggering the false positive; knowing which library would be useful.
There is no issue with your code; it is just exiting properly.
You have to run it from the command line, which will show you all the output.
start->run->cmd, then cd to your directory, then run a.exe. If you don't want to do that, you can add a sleep() before the return in main.
Moreover, the argument you pass in print(5) is never used by the function.
I confirm this is due to the antivirus. I did this test:
compile helloworld.c at t=0;
within 1 second, tell McAfee not to consider helloworld.exe a threat >> the file is still there.
If I am too slow, the file is deleted.
If you get this error for a.exe while running the file, follow the steps below:
1. Open Virus & threat protection.
2. Select Manage settings under Virus & threat protection settings.
3. If Real-time protection and Cloud-delivered protection are ON, turn them OFF.
(https://i.stack.imgur.com/mcIio.jpg)
a.exe is also the name of a virus. I suspect your computer's security software is deleting or quarantining the file because it believes it is a virus. Use redFIVE's suggestion to rename your output file to "print.exe" so that the virus scanner does not delete it.
Try:
gcc -o YOUR_PROGRAM.exe main.c
You can stop your antivirus software from deleting your .exe by specifying the full file path (e.g. c:\MyProject) in the 'paths to be excluded from scanning' section of the antivirus software.

Why are different nodes running different compiles of my executable? (MPI)

After I recompile my (C) program, some nodes are running old compiles (with the debug information still in it), and some nodes are running the new copy. The server is running Gentoo Linux and all nodes get the file from the same storage. I'm told the filesystem is NFS. The MPI I'm using is MPICH Version 1.2.7. Why are some nodes not using the newly compiled copy?
Some more details (in case you're having trouble sleeping):
I'm trying to create my first MPI program (and I'm new to C and Linux, too). I have the following in my code:
#if DEBUG
{
int i=9;
pid_t PID;
char hostname[256];
gethostname(hostname, sizeof(hostname));
printf("PID %d on %s ready for attach.\n", PID=getpid(), hostname);
fflush(stdout);
while (i>0) {
printf("PID %d on %s will wait for `gdb` to attach for %d more iterations.\n", PID, hostname, i);
fflush(stdout);
sleep(5);
i--;
}
}
#endif
Then I recompiled with (no -DDEBUG=1 option, so the above code is excluded)
$ mpicc -Wall -I<directories...> -c myprogram.c
$ mpicc -o myprogram myprogram.o -Wall <some other options...>
The program compiles with no problems. Then I execute it like this:
$ mpirun -np 3 myprogram
Sometimes (and more and more frequently), different copies of the executable run on different nodes of the cluster. On some nodes, the debugging code executes (and prints) and on some nodes it doesn't.
Note that the cluster is currently experiencing some "clock skew" (or something like that), which may be the cause. Is that the problem?
Also note that I actually just change the compile options by commenting/uncommenting lines in a Makefile because I haven't had time to implement these suggestions yet.
Edit: When the problem occurs, md5sum myprogram returns a different value on the nodes where the issue presents itself.
Your different nodes have retained a cached copy of the file and are using that instead of the latest version when you run the binary. This has little to nothing to do with Gentoo; it is an artifact of the Linux (kernel) caching and/or NFS implementation.
In other words, your binary is cached. Read this answer:
NFS cache-cleaning command?
Tweaking some settings may also help.
I happen to have a command here that syncs and flushes:
$ cat /home/jaroslav/bin/flush_cache
sudo sync
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'

"Too few arguments" error trying to run my compiled program

I'm trying to code to refresh my memory preparing myself for a course.
#include <stdio.h>

int main() {
    int x;
    for (x = 0; x < 10; x++) {
        printf("Hello world\n");
    }
    return 0;
}
But when I try to run this I get "Too few arguments".
I compiled the code above using gcc -o repeat file.c Then to run this I just type repeat
Sorry if this was a stupid question, it has been a while since I took the introduction class.
When you type
filename
at a prompt, your OS searches the path. By default, Linux doesn't include the current directory in the path, so you end up running something like /bin/filename, which complains because it wants arguments. To find out what file you actually ran, try
which filename
To run the filename file gcc created in the working directory, use
./filename
Your code compiles fine. Try:
gcc -o helloworld file.c
./helloworld
UPDATE :
Based on more recent comments, the problem is that the executable is named repeat, and you're using csh or tcsh, so repeat is a built-in command.
Type ./repeat rather than repeat.
And when asking questions, don't omit details like that; copy-and-paste your source code, any commands you typed, and any messages you received.
The executable is named file, which is also a command.
To run your own program, type
./file
EDIT :
The above was an educated guess, based on the assumption that:
The actual compilation command was gcc file.c -o file or gcc -o file file.c; and
The predefined file command (man file for information) would produce that error message if you invoke it without arguments.
The question originally said that the compilation command was gcc file.c; now it says gcc -o filename file.c. (And the file command prints a different error message if you run it without arguments).
The correct way to do this is:
gcc file.c -o filename && ./filename
(I'd usually call the executable file to match the name of the source file, but you can do it either way.)
The gcc command, if it succeeds, gives you an executable file in your current directory named filename. The && says to execute the second command only if the first one succeeds (no point in trying to run your program if it didn't compile). ./filename explicitly says to run the filename executable that's in the current directory (.); otherwise it will search your $PATH for it.
If you get an error message Too few arguments, it's not coming from your program; you won't see that message unless something prints it explicitly. The explanation must be that you're running some other program. Perhaps there's already a command on your system called filename.
So try doing this:
gcc file.c -o filename && ./filename
and see what happens; it should run your program. If that works, try typing just
filename
and see what that does. If that doesn't run your program, then type
type -a filename
or
which filename
to see what you're actually executing.
And just to avoid situations like this, cultivate the habit of using ./whatever to execute a program in the current directory.