Is it possible to compile a C++ (or the like) program without generating the executable file but writing it and executing it directly from memory?
For example with GCC and clang, something that has a similar effect to:
c++ hello.cpp -o hello.x && ./hello.x $@ && rm -f hello.x
In the command line.
But without the burden of writing an executable to disk only to immediately load and rerun it.
(If possible, the procedure should not use disk space, or at least no space in the current directory, which might be read-only.)
Possible? Not the way you seem to wish. The task has two parts:
1) How to get the binary into memory
When we specify /dev/stdout as the output file on Linux, we can pipe into our program x0, which reads an executable from stdin and executes it:
gcc -pipe YourFiles1.cpp YourFile2.cpp -o/dev/stdout -Wall | ./x0
In x0 we can just read from stdin until reaching the end of the file:
#include <stdlib.h> /* realloc */
#include <unistd.h> /* read, STDIN_FILENO */

int memexec(void *exe, size_t exe_size, char *const argv[]); /* defined below */

int main(int argc, char **argv)
{
    size_t ntotal = 0;
    char *buf = NULL;
    while (1)
    {
        /* grow the buffer dynamically since we do not know how many bytes to read */
        buf = (char*)realloc(buf, ntotal + 4096);
        ssize_t nread = read(STDIN_FILENO, buf + ntotal, 4096);
        if (nread <= 0) break; /* 0 means end of input */
        ntotal += nread;
    }
    memexec(buf, ntotal, argv);
}
It would also be possible for x0 to directly execute the compiler and read its output. This question has been answered here: Redirecting exec output to a buffer or file
Caveat: I just figured out that for some strange reason this does not work when I use a pipe (|), but works when I use redirection (./x0 < foo).
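For the variant where x0 runs the compiler itself, here is a minimal sketch (assuming the same compiler invocation as above and the memexec() listed further below); this is only an illustration, not the linked answer's code:

#include <stdio.h>  /* popen, pclose, perror */
#include <stdlib.h> /* realloc */

int memexec(void *exe, size_t exe_size, char *const argv[]); /* see below */

int main(int argc, char **argv)
{
    /* same compiler command as above, but started by x0 itself */
    FILE *p = popen("gcc -pipe YourFiles1.cpp YourFile2.cpp -o/dev/stdout -Wall", "r");
    if (!p) { perror("popen"); return 1; }

    size_t ntotal = 0;
    char *buf = NULL;
    for (;;) {
        buf = (char*)realloc(buf, ntotal + 4096);
        size_t nread = fread(buf + ntotal, 1, 4096, p);
        if (nread == 0) break;
        ntotal += nread;
    }
    pclose(p);

    memexec(buf, ntotal, argv); /* only returns on error */
    return 1;
}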
Note: If you are willing to modify your compiler, or you do JIT like LLVM, clang and other frameworks do, you could directly generate executable code. However, for the rest of this discussion I assume you want to use an existing compiler.
Note: Execution via temporary file
Other programs such as UPX achieve a similar behavior by executing a temporary file. This is easier and more portable than the approach outlined below. On systems where /tmp is mapped to a RAM disk (for example, on typical servers), the temporary file will be memory-based anyway.
#define _GNU_SOURCE   /* mkostemp, fexecve */
#include <stddef.h>   /* size_t */
#include <fcntl.h>    /* open, O_RDONLY, O_WRONLY */
#include <stdio.h>    /* perror */
#include <stdlib.h>   /* mkostemp */
#include <sys/stat.h> /* chmod, S_IRUSR, S_IXUSR */
#include <unistd.h>   /* write, close, unlink, fexecve */

int memexec(void *exe, size_t exe_size, char *const argv[])
{
    /* random temporary file name in /tmp */
    char name[] = "/tmp/fooXXXXXX";
    /* creates temporary file, returns writeable file descriptor */
    int fd_wr = mkostemp(name, O_WRONLY);
    /* makes file executable and read-only */
    chmod(name, S_IRUSR | S_IXUSR);
    /* creates read-only file descriptor before deleting the file */
    int fd_ro = open(name, O_RDONLY);
    /* removes file from file system, kernel buffers content in memory until all fds are closed */
    unlink(name);
    /* writes executable to file */
    write(fd_wr, exe, exe_size);
    /* fexecve will not work as long as there is an open writeable file descriptor */
    close(fd_wr);
    char *const newenviron[] = { NULL };
    /* fexecve only returns on error */
    fexecve(fd_ro, argv, newenviron);
    perror("fexecve");
    return -1;
}
Caveat: Error handling is left out for clarity's sake, and includes are kept to a minimum for the sake of brevity.
Note: By combining main() and memexec() into a single function and using splice(2) for copying directly between stdin and fd_wr, the program could be significantly optimized.
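For illustration, a minimal sketch of that splice(2) optimization (it assumes stdin really is a pipe, as in gcc ... | ./x0, and that fd_wr is the writeable descriptor created in memexec()):

#define _GNU_SOURCE
#include <fcntl.h>  /* splice, SPLICE_F_MORE */
#include <stdio.h>  /* perror */
#include <unistd.h> /* STDIN_FILENO */

/* copies everything from stdin (a pipe) straight into fd_wr, without a userspace buffer */
static int copy_stdin_to_fd(int fd_wr)
{
    for (;;) {
        ssize_t n = splice(STDIN_FILENO, NULL, fd_wr, NULL, 65536, SPLICE_F_MORE);
        if (n == 0) return 0;                      /* end of input */
        if (n < 0) { perror("splice"); return -1; }
    }
}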
2) Execution directly from memory
One does not simply load and execute an ELF binary from memory. Some preparation, mostly related to dynamic linking, has to happen. There is a lot of material explaining the various steps of the ELF linking process, and studying it makes me believe it is theoretically possible. See for example this closely related question on SO; however, no working solution seems to exist.
Update: UserModeExec seems to come very close.
Writing a working implementation would be very time consuming, and surely raise some interesting questions in its own right. I like to believe this is by design: for most applications it is strongly undesirable to (accidentally) execute their input data, because that allows code injection.
What happens exactly when an ELF is executed? Normally the kernel receives a file name and then creates a process, loads and maps the different sections of the executable into memory, performs a lot of sanity checks and marks it as executable before passing control and a file name back to the run-time linker ld-linux.so (part of libc). The latter takes care of relocating functions, handling additional libraries, setting up global objects and jumping to the executable's entry point. As I understand it, this heavy lifting is done by dl_main() (implemented in libc/elf/rtld.c).
Even fexecve is implemented using a file in /proc and it is this need for a file name that leads us to reimplement parts of this linking process.
Libraries
UserModeExec
libelf -- read, modify, create ELF files
eresi -- play with ELFs
OSKit (seems like a dead project though)
Reading
http://www.linuxjournal.com/article/1060?page=0,0 -- introduction
http://wiki.osdev.org/ELF -- good overview
http://s.eresi-project.org/inc/articles/elf-rtld.txt -- more detailed Linux-specific explanation
http://www.codeproject.com/Articles/33340/Code-Injection-into-Running-Linux-Application -- how to get to hello world
http://www.acsu.buffalo.edu/~charngda/elf.html -- nice reference of ELF structure
Loaders and Linkers by John Levine -- deeper explanation of linking
Related Questions at SO
Linux user-space ELF loader
ELF Dynamic loader symbol lookup ordering
load-time ELF relocation
How do global variables get initialized by the elf loader
So it seems possible; you decide whether it is also practical.
Yes, though doing it properly requires designing significant parts of the compiler with this in mind. The LLVM guys have done this, first with a kinda-separate JIT, and later with the MC subproject. I don't think there's a ready-made tool doing it. But in principle, it's just a matter of linking to clang and llvm, passing the source to clang, and passing the IR it creates to MCJIT. Maybe a demo does this (I vaguely recall a basic C interpreter that worked like this, though I think it was based on the legacy JIT).
Edit: Found the demo I recalled. Also, there's cling, which seems to do basically what I described, but better.
Linux can create virtual file systems in RAM using tmpfs. For example, I have my tmp directory set up in my file system table like so:
tmpfs /tmp tmpfs nodev,nosuid 0 0
Using this, any files I put in /tmp are stored in my RAM.
Windows doesn't seem to have any "official" way of doing this, but has many third-party options.
Without this "RAM disk" concept, you would likely have to heavily modify a compiler and linker to operate completely in memory.
If you are not specifically tied to C++, you may also consider other JIT based solutions:
in Common Lisp SBCL is able to generate machine code on the fly
you could use TinyCC and its libtcc.a, which quickly emits poor (i.e. unoptimized) machine code from C code in memory (see the libtcc sketch after this list).
consider also any JITing library, e.g. libjit, GNU Lightning, LLVM, GCCJIT, asmjit
of course emitting C++ code on some tmpfs and compiling it...
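For the TinyCC option above, here is a minimal sketch of in-memory compilation with libtcc (assuming the tcc 0.9.27 libtcc API; link with -ltcc; the generated function is purely illustrative):

#include <libtcc.h>
#include <stdio.h>

int main(void)
{
    TCCState *s = tcc_new();
    tcc_set_output_type(s, TCC_OUTPUT_MEMORY);  /* compile into memory, not into a file */
    tcc_compile_string(s, "int add(int a, int b) { return a + b; }");
    tcc_relocate(s, TCC_RELOCATE_AUTO);         /* resolve symbols in place */
    int (*add)(int, int) = (int (*)(int, int))tcc_get_symbol(s, "add");
    printf("add(2, 3) = %d\n", add ? add(2, 3) : -1);
    tcc_delete(s);
    return 0;
}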
But if you want good machine code, you'll need it to be optimized, and that is not fast (so the time to write to a filesystem is negligible).
If you are tied to C++ generated code, you need a good optimizing C++ compiler (e.g. g++ or clang++). They take significant time to compile C++ code to an optimized binary, so you should generate some file foo.cc (perhaps in a RAM file system like some tmpfs, but that gives only a minor gain, since most of the time is spent inside the g++ or clang++ optimization passes, not reading from disk), then compile that foo.cc to foo.so (using perhaps make, or at least forking g++ -Wall -shared -O2 foo.cc -o foo.so, perhaps with additional libraries). At last, have your main program dlopen that generated foo.so. FWIW, MELT was doing exactly that, and on a Linux workstation the manydl.c program shows that a process can generate then dlopen(3) many hundreds of thousands of temporary plugins, each one obtained by generating a temporary C file and compiling it. For C++, read the C++ dlopen mini HOWTO.
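A minimal sketch of that generate/compile/dlopen cycle (the file names and the generated function are purely illustrative; compile the host program with -ldl):

#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* 1. generate a tiny piece of C++ code */
    FILE *f = fopen("/tmp/foo.cc", "w");
    if (!f) { perror("fopen"); return 1; }
    fputs("extern \"C\" int generated_add(int a, int b) { return a + b; }\n", f);
    fclose(f);

    /* 2. compile it into a plugin (error handling kept minimal) */
    if (system("g++ -Wall -shared -fPIC -O2 /tmp/foo.cc -o /tmp/foo.so") != 0)
        return 1;

    /* 3. load the generated plugin and call into it */
    void *h = dlopen("/tmp/foo.so", RTLD_NOW);
    if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }
    int (*add)(int, int) = (int (*)(int, int))dlsym(h, "generated_add");
    if (!add) { fprintf(stderr, "%s\n", dlerror()); return 1; }
    printf("generated_add(2, 3) = %d\n", add(2, 3));
    dlclose(h);
    return 0;
}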
Alternatively, generate a self-contained source program foobar.cc, compile it to an executable foobarbin, e.g. with g++ -O2 foobar.cc -o foobarbin, and execute that foobarbin binary with execve.
When generating C++ code, you may want to avoid generating tiny C++ source files (e.g. a dozen lines only); if possible, generate C++ files of a few hundred lines at least, unless lots of template expansion happens through extensive use of existing C++ containers, in which case generating a small C++ function combining them makes sense. For instance, try if possible to put several generated C++ functions into the same generated C++ file (but avoid very big generated C++ functions, e.g. 10KLOC in a single function; they take a lot of time to be compiled by GCC). You could consider, if relevant, having only one single #include in that generated C++ file, and pre-compiling that commonly included header.
Jacques Pitrat's book Artificial Beings, the conscience of a conscious machine (ISBN 9781848211018) explains in detail why generating code at runtime is useful (in symbolic artificial intelligence systems like his CAIA system). The RefPerSys project is trying to follow that idea and generate some C++ code (and hopefully, more and more of it) at runtime. Partial evaluation is a relevant concept.
Your software is likely to spend more CPU time in generating C++ code than GCC in compiling it.
The tcc compiler's "-run" option allows for exactly this: compile into memory, run there, and finally discard the compiled code. No filesystem space is needed. "tcc -run" can be used in a shebang to allow for C scripts; from the tcc man page:
#!/usr/local/bin/tcc -run
#include <stdio.h>
int main()
{
printf("Hello World\n");
return 0;
}
C scripts allow for mixed bash/C scripts, with "tcc -run" not needing any temporary space:
#!/bin/bash
echo "foo"
sed -n "/^\/\*\*$/,\$p" $0 | tcc -run -
exit
/**
*/
#include <stdio.h>
int main()
{
printf("bar\n");
return 0;
}
Execution output:
$ ./shtcc2
foo
bar
$
C scripts with gcc are possible as well, but need temporary space, as others mentioned, to store the executable. This script produces the same output as the previous one:
#!/bin/bash
exc=/tmp/`basename $0`
if [ $0 -nt $exc ]; then sed -n "/^\/\*\*$/,\$p" $0 | gcc -x c - -o $exc; fi
echo "foo"
$exc
exit
/**
*/
#include <stdio.h>
int main()
{
printf("bar\n");
return 0;
}
C scripts with the suffix ".c" are nice; headtail.c was my first ".c" file that needed to be executable:
$ echo -e "1\n2\n3\n4\n5\n6\n7" | ./headtail.c
1
2
3
6
7
$
I like C scripts because you have just one file that you can easily move around, and changes in the bash or C part require no further action; they just work on the next execution.
P.S:
The "tcc -run" C script shown above has a problem: the script's stdin is not available to the executed C code. The reason is that the extracted C code was passed to "tcc -run" via a pipe. The new gist run_from_memory_stdin.c does it correctly:
...
echo "foo"
tcc -run <(sed -n "/^\/\*\*$/,\$p" $0) 42
...
"foo" is printed by bash part, "bar 42" from C part (42 is passed argv[1]), and piped script input gets printed from C code then:
$ route -n | ./run_from_memory_stdin.c
foo
bar 42
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.29.58.98 0.0.0.0 UG 306 0 0 wlan1
10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0
169.254.0.0 0.0.0.0 255.255.0.0 U 303 0 0 wlan0
172.29.58.96 0.0.0.0 255.255.255.252 U 306 0 0 wlan1
$
One can easily modify the compiler itself. It sounds hard at first, but thinking about it, it seems feasible: modifying the compiler sources to expose the relevant parts and build them as a shared library should not take that much effort (depending on the actual implementation).
Just replace every file access with a memory-mapped file.
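As an illustration of what such an in-memory "file" could look like, here is a small sketch using shm_open(3) (this is only an example of the concept, not an actual compiler modification; link with -lrt on older glibc):

#include <fcntl.h>    /* O_CREAT, O_RDWR */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h> /* shm_open, shm_unlink, mmap */
#include <unistd.h>   /* ftruncate, close */

int main(void)
{
    const char *name = "/inmem_obj";  /* illustrative name */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    shm_unlink(name);                 /* keep it anonymous */
    if (ftruncate(fd, 4096) != 0) { perror("ftruncate"); return 1; }

    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    strcpy(p, "bytes the compiler would otherwise write to disk");
    printf("%s\n", p);

    munmap(p, 4096);
    close(fd);
    return 0;
}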
It is something I am about to do: compiling something transparently in the background to opcodes and executing those from within Java.

But thinking about your original question, it seems you want to speed up compilation and your edit-and-run cycle. First of all, get an SSD; you get almost memory speed (use a PCIe version). And let's say it is C we are talking about: the linking step involves very complex operations that are likely to take more time than reading from and writing to disk. So just put everything on the SSD and live with the lag.
Finally, the answer to the OP's question is yes!
I found the memrun repo from guitmz, which demoed running an (x86_64) ELF from memory, with golang and assembler. I forked it and provided a C version of memrun, which runs ELF binaries (verified on x86_64 and armv7l) either from standard input or via process substitution as the first argument. The repo contains demos and documentation (memrun.c is only 47 lines of code):
https://github.com/Hermann-SW/memrun/tree/master/C#memrun
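For illustration only, here is a minimal sketch of one way such a tool can work (this is an assumption for the sake of the example, not necessarily how memrun.c is implemented): the ELF image is read from stdin into an anonymous in-memory file created with memfd_create(2) and then executed with fexecve(2), so nothing touches the filesystem:

#define _GNU_SOURCE
#include <stdio.h>    /* perror */
#include <sys/mman.h> /* memfd_create */
#include <unistd.h>   /* read, write, fexecve */

int main(int argc, char *argv[])
{
    /* anonymous in-memory file; it exists only as long as a descriptor references it */
    int memfd = memfd_create("elf", 0);
    if (memfd < 0) { perror("memfd_create"); return 1; }

    char buf[65536];
    ssize_t n;
    while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0)
        if (write(memfd, buf, (size_t)n) != n) { perror("write"); return 1; }

    char *const envp[] = { NULL };
    fexecve(memfd, argv, envp); /* only returns on error */
    perror("fexecve");
    return 1;
}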
Here is the simplest example: with "-o /dev/fd/1" the gcc-compiled ELF gets sent to stdout and piped to memrun, which executes it:
pi@raspberrypi400:~/memrun/C $ gcc info.c -o /dev/fd/1 | ./memrun
My process ID : 20043
argv[0] : ./memrun
no argv[1]
evecve --> /usr/bin/ls -l /proc/20043/fd
total 0
lr-x------ 1 pi pi 64 Sep 18 22:27 0 -> 'pipe:[1601148]'
lrwx------ 1 pi pi 64 Sep 18 22:27 1 -> /dev/pts/4
lrwx------ 1 pi pi 64 Sep 18 22:27 2 -> /dev/pts/4
lr-x------ 1 pi pi 64 Sep 18 22:27 3 -> /proc/20043/fd
pi@raspberrypi400:~/memrun/C $
The reason I was interested in this topic was usage in "C scripts". run_from_memory_stdin.c demonstrates everything together:
pi@raspberrypi400:~/memrun/C $ wc memrun.c | ./run_from_memory_stdin.c
foo
bar 42
47 141 1005 memrun.c
pi@raspberrypi400:~/memrun/C $
The C script producing the shown output is this small:
#!/bin/bash
echo "foo"
./memrun <(gcc -o /dev/fd/1 -x c <(sed -n "/^\/\*\*$/,\$p" $0)) 42
exit
/**
*/
#include <stdio.h>
int main(int argc, char *argv[])
{
printf("bar %s\n", argc>1 ? argv[1] : "(undef)");
for(int c=getchar(); EOF!=c; c=getchar()) { putchar(c); }
return 0;
}
P.S:
I added tcc's "-run" option to gcc and g++, for details see:
https://github.com/Hermann-SW/memrun/tree/master/C#adding-tcc--run-option-to-gcc-and-g
Just nice, and nothing gets stored in the filesystem:
pi@raspberrypi400:~/memrun/C $ uname -a | g++ -O3 -Wall -run demo.cpp 42
bar 42
Linux raspberrypi400 5.10.60-v7l+ #1449 SMP Wed Aug 25 15:00:44 BST 2021 armv7l GNU/Linux
pi@raspberrypi400:~/memrun/C $
Related
$ rustc <(echo 'fn main(){ print!("Hello world!");}')
$ ls
63
$ gcc <(echo '#include<stdio.h> int main(){ printf("Hello world!\n"); return 0;}')
/dev/fd/63: file not recognized: Illegal seek
collect2: error: ld returned 1 exit status
Why can't ld link the program?
The gcc command is mostly a dispatch engine. For each input file, it determines what sort of file it is from the filename's extension, and then passes the file on to an appropriate processor. So .c files are compiled by the C compiler, .h files are assembled into precompiled headers, .go files are sent to the cgo compiler, and so on.
If the filename has no extension or the extension is not recognised, gcc assumes that it is some kind of object file which should participate in the final link step. These files are passed to the collect2 utility, which then invokes ld, possibly twice. This will be the case with process substitution, which produces filenames like /dev/fd/63, which do not include extensions.
ld does not rely on the filename to identify the object file format. It is generally built with several different object file recognisers, each of which depends on some kind of "magic number" (that is, a special pattern at or near the beginning of the file). It calls these recognisers one at a time until it finds one which is happy to interpret the file. If the file is not recognised as a binary format, ld assumes that it is a linker script (which is a plain text file) and attempts to parse it as such.
Naturally, between attempts ld needs to rewind the file, and since process substitution arranges for a pipe to be passed instead of a file, the seek will fail. (The same thing would happen if you attempted to pass the file through redirection of stdin to a pipe, which you can do: gcc will process stdin as a file if you specify - as a filename. But it insists that you tell it what kind of file it is. See below.)
Since ld can't rewind the file, it will fail after the file doesn't match its first guess. Hence the error message from ld, which is a bit misleading since you might think that the file has already been compiled and the subsequent failure was in the link step. That's not the case; because the filename had no extension, gcc skipped directly to the link phase and almost immediately failed.
In the case of process substitution, pipes, stdin, and badly-named files, you can still manually tell gcc what the file is. You do that with the -x option, which is documented in the GCC manual section on options controlling the kind of output (although in this case, the option actually controls the kind of input).
There are a number of answers to questions like this floating around the Internet, including various answers here on StackOverflow, which claim that GCC attempts to detect the language of input files. It does not do that, and it never has. (And I doubt that it ever will, since some of the languages it compiles are sufficiently similar to each other that accurate detection would be impossible.) The only component which does automatic detection is ld, and it only does that once GCC has irrevocably decided to treat the input file as an object file or linker script.
At least in your case, you can use process substitution when specifying the input language manually, using -xc. However, you should put a newline after the include statement.
$ gcc -xc <(echo '#include<stdio.h>
int main(){ printf("Hello world!\n"); return 0;}')
$ ls
a.out
$ ./a.out
Hello world!
For a possible reason why this works, see Charles' answer and the comments on this answer.
I have a C program that needs to run when I turn on my machine (Red Pitaya).
The beginning of the program is presented here:
//my_test program
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include "redpitaya/rp.h"
int main(int argc, char **argv){
    int jj=1;
    while(1) {
        printf("Ready for experiment number %i\n",jj);
        int i, D;
        int32_t TrigDly;
and so on...
The program is run via a shell script called uri_test.sh, which contains the following:
cat /opt/redpitaya/fpga/fpga_0.94.bit>/dev/xdevcfg
LD_LIBRARY_PATH=/opt/redpitaya/lib ./my_test
Both files are located in a directory under /root. The program works perfectly when run manually in a PuTTY terminal:
/RedPitaya/Examples/C/Uri# ./my_test
or
/RedPitaya/Examples/C/Uri# ./uri_test.sh
I tried to follow the solution presented here:
https://askubuntu.com/questions/9853/how-can-i-make-rc-local-run-on-startup
without success.
Any suggestions? Thank you.
There are several ways to have a program running at startup, and it depends upon your init subsystem (are you using systemd or a SysV-style init?).
BTW, a source program in C is not a script and you generally compile it (using gcc -Wall -Wextra -g) into some executable. In your case, you probably want to set up its rpath at build time (in particular to avoid the LD_LIBRARY_PATH madness), perhaps by passing something like -Wl,-rpath,/opt/redpitaya/lib to your linking gcc command.
Perhaps a crontab(5) entry with @reboot could be enough.
Whatever way you are starting your program at startup time, it generally is the case that its stdin, stdout, stderr streams are redirected (e.g. to /dev/null, see null(4)) or not available. So it is likely that your printf output go nowhere. You might redirect stdout in your script, and I would recommend using syslog(3) in your C program, and logger(1) in your shell script (then look also into some *.log file under /var/log/). BTW, its environment is not the same as in some interactive shell (see environ(7)...), so your program is probably failing very early (perhaps at dynamic linking time, see ld-linux.so(8), since LD_LIBRARY_PATH might not be set to what you want it to be...).
You should consider handling program arguments in your C program (perhaps with getopt_long(3)) and might perhaps have some option (e.g. --daemonize) which would call daemon(3).
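A minimal sketch of such a --daemonize option (the option name and the skeleton are just an illustration):

#define _GNU_SOURCE
#include <getopt.h> /* getopt_long */
#include <stdio.h>
#include <unistd.h> /* daemon */

int main(int argc, char **argv)
{
    int daemonize = 0;
    static const struct option opts[] = {
        { "daemonize", no_argument, NULL, 'd' },
        { NULL, 0, NULL, 0 }
    };
    int c;
    while ((c = getopt_long(argc, argv, "d", opts, NULL)) != -1)
        if (c == 'd') daemonize = 1;
    if (daemonize && daemon(0, 0) != 0) { perror("daemon"); return 1; }
    /* ... the actual measurement loop would go here ... */
    return 0;
}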
You certainly should read Advanced Linux Programming or something similar.
I recommend first being able to successfully build and then run some "hello world"-like program at startup which uses syslog(3). Later on, you could improve that program to make it work with your Red Pitaya thing.
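A minimal "hello world"-style sketch using syslog(3), just to verify the startup mechanism (look for the messages in /var/log/syslog or with journalctl afterwards):

#include <syslog.h>
#include <unistd.h> /* getpid */

int main(void)
{
    openlog("hello_startup", LOG_PID, LOG_USER);
    syslog(LOG_INFO, "started, pid %d", (int)getpid());
    /* ... the real work would go here ... */
    syslog(LOG_INFO, "exiting");
    closelog();
    return 0;
}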
The scenario is that I hard-link a program into a directory, keep the directory reachable via /proc, and then replace that directory with an ELF executable.
First, I create a directory with name test
$ mkdir test
Hard-link a binary into it:
# ln /bin/ping test
# exit
Open a file descriptor to the target binary
$ exec 3< test
This descriptor should now be accessible via /proc:
$ ls -l /proc/$$/fd/3
lr-x------ 1 febri febri 64 Jul 17 11:09 /proc/2930/fd/3 -> /home/febri/test
Remove the directory previously created
$ rm -rf test
The /proc link should still exist, but will now be marked as deleted:
$ ls -l /proc/$$/fd/3
lr-x------ 1 febri febri 64 Jul 17 11:09 /proc/2930/fd/3 -> /home/febri/test (deleted)
Replace the directory with an example payload like:
$ cat hello.c
#include <stdio.h>
int main(int argc, char ** argv) {
    printf("hello!\n");
    return 0;
}
$ gcc -w -fPIC -shared -o test hello.c
$ ls -l test
-rwxrwxr-x 1 febri febri 6894 Jul 17 11:20 test
$ file test
test: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, BuildID[sha1]=361c522d3d9db35ad24de9f3162f80f8a26c9c5b, not stripped
So, I run the linked program and the output is:
$ ./test
Segmentation fault (core dumped)
My question is: why does the program crash when executed? Can anyone explain?
In fact, the directories and/or symbolic links you messed around with have absolutely nothing to do with the segmentation fault you're facing. Let's look at the command line options you're using to compile hello.c:
-w: Suppress all warnings. This is bad practice and will be the root of each and every single one of your bugs sooner or later. I've yet to find a good reason to suppress any warning. Anyways, this doesn't matter for this situation, as compiling a hello world program yields no warnings.
-fPIC: Generate [position-independent code](https://en.wikipedia.org/wiki/Position-independent_code).
-shared: Generate a shared library instead of an executable.
So, you're attempting to execute a shared library, which is not intended to be executed! However, GCC marks the output file with the executable bit. That makes no sense at all... until you meet HP-UX's mmap() implementation.
Seemingly, due to one of HP-UX's features (cough design flaws cough), the whole Unix(-like) family has inherited this convention of shared libraries being marked as executable, even though most of them will SIGSEGV if you actually try to execute them.
The actual cause of the segmentation fault, from the operating system's point of view, is an artifact of the way the Executable and Linkable Format was designed back in the late 1980's.
Curiously, it happens that shared libraries can indeed avoid SEGV'ing upon execution. However, black voodoo, such as the GNU C Library's, shall be performed in order to do so. The consequences of performing such a ritual are agonizing. For instance, you're left with no way to initialize the C runtime, so you have to use direct read()'s and write()'s instead of stdio. Other runtime-supported subsystems, such as malloc() and friends, are out of the question as well. Also, (because of no runtime support) there's no main(). You have to define your own entry point instead, and call _exit(0) explicitly.
tl;dr: Directories and symbolic links have nothing to do with the issue. You're attempting to execute a shared library, and, as that's not the expected behavior, you are SIGSEGV'd.
As a beginner, I am trying to write a simple C program to learn and execute the "write" function.
I am trying to execute a simple C program, simple_write.c:
#include <unistd.h>
#include <stdlib.h>

int main()
{
    if ((write(1, "Here is some data\n", 18)) != 18)
        write(2, "A write error has occurred on file descriptor 1\n", 46);
    exit(0);
}
I also executed chmod +x simple_write.c.
But when I execute ./simple_write.c, it gives me: syntax error near unexpected token '('
I couldn't figure out why this happens.
P.S: The expected output is:-
$ ./simple_write
Here is some data
$
You did
$ chmod +x simple_write.c
$ ./simple_write.c
when you should have done
$ cc simple_write.c -o simple_write
$ chmod +x simple_write # On second thought, you probably don’t need this.
$ ./simple_write
In words: compile the program to create an executable simple_write
(without .c) file, and then run that.
What you did was attempt to execute your C source code file
as a shell script.
Notes:
The simple_write file will be a binary file.
Do not look at it with tools meant for text files
(e.g., cat, less, or text editors such as gedit).
cc is the historical name for the C compiler.
If you get cc: not found (or something equivalent),
try the command again with gcc (GNU C compiler).
If that doesn’t work,
If you’re on a shared system (e.g., school or library),
ask a system administrator how to compile a C program.
If you’re on your personal computer (i.e., you’re the administrator),
you will need to install the compiler yourself (or get a friend to do it).
There’s lots of guidance written about this; just search for it.
When you get to writing more complicated programs,
you are going to want to use
make simple_write
which has the advantages of
being able to orchestrate a multi-step build,
which is typical for complex programs, and
it knows the standard ways of compiling programs on that system
(for example, it will probably “know” whether to use cc or gcc).
And, in fact, you should be able to use the above command now.
This may (or may not) simplify your life.
P.S. Now that this question is on Stack Overflow,
I’m allowed to talk about the programming aspect of it.
It looks to me like it should compile, but
The first write line has more parentheses than it needs.
if (write(1, "Here is some data\n", 18) != 18)
should work.
In the second write line,
I count the string as being 48 characters long, not 46.
By the way, do you know how to make the first write fail,
so the second one will execute? Try
./simple_write >&-
You cannot execute C source code in Linux (or other systems) directly.
C is a language that requires compilation to binary format.
You need to install a C compiler (the actual procedure differs depending on your system), compile your program, and only then can you execute it.
Currently it is interpreted by the shell. The first two lines, starting with #, are ignored as comments. The third line causes a syntax error.
OK,
I got what I was doing wrong.
These are the steps that I took to correct my problem:
$ gedit simple_write.c
Write the code into this file and save it (with .c extension).
$ make simple_write
$ ./simple_write
And I got the desired output.
Thanks!!
Inspired by this PCG challenge: https://codegolf.stackexchange.com/q/61836/31033
I asked myself: if one were trying to leave as few traces as possible when compiling such a kind of tool (no matter whether it is a browser or something else), is there some way (aimed at gcc/clang, as these are probably the preinstalled command-line compilers in such a working environment) to hand the source code to the compiler as a command-line argument or through an equivalent mechanism, without the source code being saved as a *.c file, as the user would usually do?
(Of course the compiler will produce temp files while compiling, but those probably won't get scanned.)
At least gcc can, as it is able to read source from the standard input. You can also use the bash here-string construction:
gcc -xc - <<< "int main() { exit(0); }"
or the sh here-document construction:
gcc -xc - <<MARK
int main() {
exit(0);
}
MARK
----EDIT----
You can also imagine using cryptography to encode your source, deciphering the content on the fly and injecting the result into the standard input of gcc, something like:
uncipher myfile.protected | gcc -xc -