Program crashes when executed after linking to a directory - c

The scenario: I create a directory, hard-link a binary into it, keep a file descriptor to the directory open via /proc, delete the directory, and then replace it with an ELF executable of the same name.
First, create a directory named test:
$ mkdir test
Hard-link a binary into it (as root, hence the # prompt):
# ln /bin/ping test
# exit
Open a file descriptor to the target binary
$ exec 3< test
This descriptor should now be accessible via /proc:
$ ls -l /proc/$$/fd/3
lr-x------ 1 febri febri 64 Jul 17 11:09 /proc/2930/fd/3 -> /home/febri/test
Remove the previously created directory:
$ rm -rf test
The /proc link still exists, but is now marked as deleted:
$ ls -l /proc/$$/fd/3
lr-x------ 1 febri febri 64 Jul 17 11:09 /proc/2930/fd/3 -> /home/febri/test (deleted)
Replace the directory with an example payload like this:
$ cat hello.c
#include <stdio.h>
int main(int argc, char ** argv) {
    printf("hello!\n");
    return 0;
}
$ gcc -w -fPIC -shared -o test hello.c
$ ls -l test
-rwxrwxr-x 1 febri febri 6894 Jul 17 11:20 test
$ file test
test: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, BuildID[sha1]=361c522d3d9db35ad24de9f3162f80f8a26c9c5b, not stripped
So, I run the program, and the output is:
$ ./test
Segmentation fault (core dumped)
My question is:
Why does the program crash when executed? Can anyone explain?

In fact, the directories and/or symbolic links you messed around with have absolutely nothing to do with the segmentation fault you're facing. Let's look at the command line options you're using to compile hello.c:
-w: Suppress all warnings. This is bad practice and will sooner or later be the root of each and every one of your bugs. I've yet to find a good reason to suppress any warning. Anyway, this doesn't matter here, as compiling a hello world program yields no warnings.
-fPIC: Generate [position-independent code](https://en.wikipedia.org/wiki/Position-independent_code).
-shared: Generate a shared library instead of an executable.
So, you're attempting to execute a shared library, which is not intended to be executed! However, GCC marks the output file with the executable bit. That makes no sense at all... until you meet HP-UX's mmap() implementation.
Seemingly, due to one of HP-UX's features (cough design flaws cough), the whole Unix(-like) family has inherited this convention of shared libraries being marked as executable, even though most of them will SIGSEGV if you actually try to execute them.
The actual cause of the segmentation fault, from the operating system's point of view, is an artifact of the way the Executable and Linkable Format was designed back in the late 1980s.
Curiously, shared libraries can indeed avoid SEGV'ing upon execution. However, black voodoo, such as the GNU C Library performs, must be employed to do so, and the consequences of performing such a ritual are agonizing. For instance, you're left with no way to initialize the C runtime, so you have to use direct read()s and write()s instead of stdio. Other runtime-supported subsystems, such as malloc() and friends, are out of the question as well. Also (because there is no runtime support) there's no main(). You have to define your own entry point instead, and call _exit(0) explicitly.
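To make this concrete, here is a minimal sketch of such a ritual, assuming x86-64 Linux and GCC; the entry symbol name "entry" and the build line are my own choices for illustration, not anything standard. Since no dynamic linker runs when the kernel executes a shared object directly, the code cannot call into libc and has to issue raw syscalls:
/* hello_solib.c -- a shared object that survives direct execution.
 * Build (illustrative flags):
 *   gcc -fPIC -shared -nostdlib -Wl,-e,entry -o test hello_solib.c
 * -Wl,-e,entry sets the ELF entry point to our symbol "entry". */
static long sys_write(int fd, const void *buf, unsigned long len)
{
    long ret;
    /* raw write(2): syscall number 1 in rax, args in rdi, rsi, rdx */
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"(1L), "D"((long)fd), "S"(buf), "d"(len)
                      : "rcx", "r11", "memory");
    return ret;
}
static void sys_exit(int code)
{
    /* raw _exit(2): syscall number 60 */
    __asm__ volatile ("syscall" : : "a"(60L), "D"((long)code));
    __builtin_unreachable();
}
void entry(void)
{
    static const char msg[] = "hello from a shared object!\n";
    sys_write(1, msg, sizeof msg - 1);
    sys_exit(0);   /* there is no C runtime to return to, so exit explicitly */
}
Built that way, ./test prints the message instead of segfaulting, which is exactly the kind of ceremony described above.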
tl;dr: Directories and symbolic links have nothing to do with the issue. You're attempting to execute a shared library, and, as that's not the expected behavior, you are SIGSEGV'd.

Related

How can I test if a specific .c file in the Linux kernel source builds?

I work a lot with kernels because I package them for my distro (Parabola), and sometimes a modification makes a single .c file fail to build. I wanted to know if there's a way to test one of those single .c files, to know whether it will fail when building the whole kernel. For example, let's say that drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c fails to build, so if I manually do:
$ gcc drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
it fails with:
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c:23:10: fatal error: linux/moduleparam.h: No such file or directory
23 | #include <linux/moduleparam.h>
| ^~~~~~~~~~~~~~~~~~~~~
compilation terminated.
Most includes are under the include directory, but I don't know how to make it work. Is it possible to do what I want, and how?
When gcc fails to build something, it should return a nonzero exit code. When it builds something successfully, it should return 0.
If you are doing this in the shell, you can check the exit code of the most recent command that was run; it lives in the $? variable. You could compare the exit code to 0, and if they don't match, then you can do whatever you want to do when there's an error, something like:
gcc somefile.c
if [ $? != 0 ]; then
# do whatever you want to do on errors
fi
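Equivalently, you can test the command directly in the if condition, which avoids the $? indirection:
if ! gcc somefile.c; then
    # do whatever you want to do on errors
fi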
If I understand the question correctly, you want to know how to build a single .o file within the kernel tree. To do that, just invoke make with that single .o file as the target, from the Linux kernel source root directory.
For example:
$ make ./drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.o
CALL scripts/checksyscalls.sh
CALL scripts/atomic/check-atomics.sh
DESCEND objtool
CC [M] drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.o
$
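Since make propagates the build failure through its exit status, this combines nicely with the exit-code check from the answer above; a one-line sketch:
$ make drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.o && echo "bnx2x_main.c builds"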

ar on MSYS2 shell receives truncated paths when called from Makefile?

I'm using git-bash.exe from a PortableGit install, with environment variables from a different MinGW. So I have:
WORKGROUP+user@AD-X MINGW32 /z/user/Downloads
$ which ar
//WORKGROUP.EX.COM/Users/user/Downloads/mingw-w64/i686-4.9.3-posix-dwarf-rt_v4-rev1/mingw32/bin/ar
WORKGROUP+user@AD-X MINGW32 /z/user/Downloads
$ ar --version | head -1
GNU ar (GNU Binutils) 2.25
Now there's a library I'm building, and at the end, the link step fails at the call of the ar command, which looks something like this:
ar -cr "Z:/user/Downloads/MyProjectNameABCDE/someLibraryABC/libs/someLibraryDEFGHI/lib/mingw/libsomeLibraryABCDebug.a" \
Z:/user/Downloads/MyProjectNameABCDE/someLibraryABC/libs/someLibraryDEFGHI/lib/mingw/obj/Debug/libs/someLibraryDEFGHI/test/someObject.o \
[...]
... and there's a bunch of objects listed in it - the command line is 10000 characters long, which is still below the getconf ARG_MAX of 32000 in the MSYS2 shell of PortableGit (git-bash.exe). However, the failure I get is No such file or directory:
\\WORKGROUP.EX.COM\Users\user\Downloads\mingw-w64\i686-4.9.3-posix-dwarf-rt_v4-rev1\mingw32\bin\ar.exe: Z:/user/Downloads/MyProjectNameABCDE/someLibraryABC/libs/someLibraryDEFGHI/lib/mingw/obj/Debug/libs/some: No such file or directory
... and the path given is quite clearly a truncated version of the path where the object files are. What's even stranger: when I copy the full ar command line printed by the make process and paste it back into the same terminal, it completes without error.
Would anyone have an idea why this happens, and what I could do to make sure ar completes when called from the Makefile?
OK, so first I found where in the Makefile the ar command is run, and I added the -v switch to it (so, -crv) for verbose output.
I could see that most of the command line is read and objects are added, until it comes to about 8192 bytes of the command line, after which it is truncated and the failure occurs. This is apparently a known issue:
How to avoid Max Command line size on Windows
Solving the 8192 Character Command Line Limit on Windows | MCU on Eclipse
... although I'm not quite clear on why it appears in a make process that already runs in git-bash.exe, that is, an MSYS2 shell?!
Anyway, the workaround/fix I used is the $(file ...) function of make, since just doing "@echo $CMD > arscript.sh" from the Makefile would again save only the truncated 8 KB command line to the file; so instead of the original call:
@$(AR) ${FLAGS_FOR_AR} "$@" $(FILES_FOR_AR)
... we save that line to a file, and then call bash to interpret it as a script; that is:
$(file >arscript.sh,$(AR) ${FLAGS_FOR_AR} "$@" $(FILES_FOR_AR))
bash -x arscript.sh
... and this finally worked for me.
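For what it's worth, there is another way around the limit, assuming your ar supports response files (GNU binutils tools generally do): write only the object list to a file with $(file ...) and pass it to ar as @file, so the command line itself stays short:
$(file >objlist.txt,$(FILES_FOR_AR))
@$(AR) ${FLAGS_FOR_AR} "$@" @objlist.txt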

How can I execute a decrypted file residing in memory? [duplicate]

Is it possible to compile a C++ (or the like) program without generating the executable file but writing it and executing it directly from memory?
For example with GCC and clang, something that has a similar effect to:
c++ hello.cpp -o hello.x && ./hello.x $@ && rm -f hello.x
In the command line.
But without the burden of writing an executable to disk to immediately load/rerun it.
(If possible, the procedure should not use disk space, or at least no space in the current directory, which might be read-only.)
Possible? Not the way you seem to wish. The task has two parts:
1) How to get the binary into memory
When we specify /dev/stdout as the output file on Linux, we can pipe the compiler's output into our program x0, which reads an executable from stdin and executes it:
gcc -pipe YourFiles1.cpp YourFile2.cpp -o/dev/stdout -Wall | ./x0
In x0 we can just read from stdin until reaching the end of the file:
#include <stdlib.h>   /* realloc */
#include <unistd.h>   /* read */
int memexec(void *exe, size_t exe_size, char *const argv[]); /* defined below */
int main(int argc, char **argv)
{
    size_t ntotal = 0;
    char *buf = NULL;
    while (1)
    {
        /* grow the buffer dynamically, since we do not know how many bytes to read */
        buf = realloc(buf, ntotal + 4096);
        ssize_t nread = read(STDIN_FILENO, buf + ntotal, 4096);
        if (nread <= 0) break;   /* 0 means end of file, negative means error */
        ntotal += nread;
    }
    return memexec(buf, ntotal, argv);
}
It would also be possible for x0 to directly execute the compiler and read its output. That question has been answered here: Redirecting exec output to a buffer or file
Caveat: I just figured out that for some strange reason this does not work when I use a pipe |, but works when I use x0 < foo.
Note: If you are willing to modify your compiler, or if you do JIT the way LLVM, clang and other frameworks do, you could directly generate executable code. However, for the rest of this discussion I assume you want to use an existing compiler.
Note: Execution via temporary file
Other programs, such as UPX, achieve similar behavior by executing a temporary file; this is easier and more portable than the approach outlined below. On systems where /tmp is mapped to a RAM disk (for example, typical servers), the temporary file will be memory-based anyway.
#include <fcntl.h>     /* open, O_RDONLY */
#include <stdio.h>     /* perror */
#include <stdlib.h>    /* mkstemp, size_t */
#include <sys/stat.h>  /* chmod, S_IRUSR, S_IXUSR */
#include <unistd.h>    /* write, close, unlink, fexecve */
int memexec(void *exe, size_t exe_size, char *const argv[])
{
    /* random temporary file name in /tmp */
    char name[15] = "/tmp/fooXXXXXX";
    /* creates the temporary file, returns a read/write file descriptor */
    int fd_wr = mkstemp(name);
    /* makes the file executable and read-only */
    chmod(name, S_IRUSR | S_IXUSR);
    /* creates a read-only file descriptor before deleting the file */
    int fd_ro = open(name, O_RDONLY);
    /* removes the file from the file system; the kernel keeps the content
       in memory until all file descriptors are closed */
    unlink(name);
    /* writes the executable image to the file */
    write(fd_wr, exe, exe_size);
    /* fexecve will not work as long as there is an open writable file descriptor */
    close(fd_wr);
    char *const newenviron[] = { NULL };
    fexecve(fd_ro, argv, newenviron);
    perror("fexecve failed");
    return -1;
}
Caveat: Error handling is left out for clarity's sake.
Note: By combining main() and memexec() into a single function and using splice(2) to copy directly between stdin and fd_wr, the program could be significantly optimized.
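A hedged sketch of that splice(2) variant (the function name is mine; it only works while stdin actually is a pipe, as in the gcc pipeline above):
#define _GNU_SOURCE   /* splice() is Linux-specific */
#include <fcntl.h>    /* splice, SPLICE_F_MOVE */
#include <unistd.h>   /* STDIN_FILENO, ssize_t */
/* Drain the stdin pipe straight into fd_out without a userspace buffer.
   Returns 0 on success, -1 on error. */
static int drain_stdin_to(int fd_out)
{
    ssize_t n;
    while ((n = splice(STDIN_FILENO, NULL, fd_out, NULL, 65536, SPLICE_F_MOVE)) > 0)
        ;   /* each call moves up to 64 KiB inside the kernel */
    return n < 0 ? -1 : 0;
}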
2) Execution directly from memory
One does not simply load and execute an ELF binary from memory. Some preparation, mostly related to dynamic linking, has to happen. There is a lot of material explaining the various steps of the ELF linking process, and studying it makes me believe it is theoretically possible. See for example this closely related question on SO; however, no working solution seems to exist.
Update: UserModeExec seems to come very close.
Writing a working implementation would be very time-consuming, and would surely raise some interesting questions in its own right. I like to believe this is by design: for most applications it is strongly undesirable to (accidentally) execute their input data, because that allows code injection.
What happens exactly when an ELF is executed? Normally the kernel receives a file name, then creates a process, loads and maps the different sections of the executable into memory, performs a lot of sanity checks and marks it as executable before passing control and a file name back to the run-time linker ld-linux.so (part of libc). It takes care of relocating functions, handling additional libraries, setting up global objects and jumping to the executable's entry point. As I understand it, this heavy lifting is done by dl_main() (implemented in libc/elf/rtld.c).
Even fexecve is implemented using a file in /proc, and it is this need for a file name that leads us to reimplement parts of this linking process.
Libraries
UserModeExec
libelf -- read, modify, create ELF files
eresi -- play with ELFs
OSKit (seems like a dead project though)
Reading
http://www.linuxjournal.com/article/1060?page=0,0 -- introduction
http://wiki.osdev.org/ELF -- good overview
http://s.eresi-project.org/inc/articles/elf-rtld.txt -- more detailed Linux-specific explanation
http://www.codeproject.com/Articles/33340/Code-Injection-into-Running-Linux-Application -- how to get to hello world
http://www.acsu.buffalo.edu/~charngda/elf.html -- nice reference of ELF structure
Loaders and Linkers by John Levine -- deeper explanation of linking
Related Questions at SO
Linux user-space ELF loader
ELF Dynamic loader symbol lookup ordering
load-time ELF relocation
How do global variables get initialized by the elf loader
So it seems possible; you decide whether it is also practical.
Yes, though doing it properly requires designing significant parts of the compiler with this in mind. The LLVM guys have done this, first with a kinda-separate JIT, and later with the MC subproject. I don't think there's a ready-made tool doing it. But in principle, it's just a matter of linking to clang and llvm, passing the source to clang, and passing the IR it creates to MCJIT. There may be a demo that does this (I vaguely recall a basic C interpreter that worked like this, though I think it was based on the legacy JIT).
Edit: Found the demo I recalled. Also, there's cling, which seems to do basically what I described, but better.
Linux can create virtual file systems in RAM using tmpfs. For example, I have my /tmp directory set up in my file system table like so:
tmpfs /tmp tmpfs nodev,nosuid 0 0
Using this, any files I put in /tmp are stored in my RAM.
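You can also create such a file system ad hoc, without an fstab entry (the mount point and size here are just examples):
$ sudo mount -t tmpfs -o size=64m tmpfs /mnt/ramdisk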
Windows doesn't seem to have any "official" way of doing this, but has many third-party options.
Without this "RAM disk" concept, you would likely have to heavily modify a compiler and linker to operate completely in memory.
If you are not specifically tied to C++, you may also consider other JIT based solutions:
in Common Lisp, SBCL is able to generate machine code on the fly
you could use TinyCC and its libtcc.a, which quickly emits poor (i.e. unoptimized) machine code from C code in memory.
consider also any JITing library, e.g. libjit, GNU Lightning, LLVM, GCCJIT, asmjit
of course emitting C++ code on some tmpfs and compiling it...
But if you want good machine code, you'll need it to be optimized, and that is not fast (so the time to write to a filesystem is negligible).
If you are tied to C++ generated code, you need a good optimizing C++ compiler (e.g. g++ or clang++); they take significant time to compile C++ code to an optimized binary. So you should generate some file foo.cc (perhaps in a RAM file system like tmpfs, but that gives only a minor gain, since most of the time is spent inside the g++ or clang++ optimization passes, not reading from disk), then compile that foo.cc to foo.so (using perhaps make, or at least forking g++ -Wall -shared -O2 foo.cc -o foo.so, perhaps with additional libraries). At last, have your main program dlopen that generated foo.so. FWIW, MELT was doing exactly that, and on a Linux workstation the manydl.c program shows that a process can generate and then dlopen(3) many hundreds of thousands of temporary plugins, each one obtained by generating a temporary C file and compiling it. For C++, read the C++ dlopen mini HOWTO.
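A minimal sketch of that generate/compile/dlopen cycle (the paths and the exported symbol name plugin_entry are made up for illustration; link the host program with -ldl on older glibc):
#include <dlfcn.h>    /* dlopen, dlsym, dlclose, dlerror */
#include <stdio.h>    /* fopen, fprintf, printf */
#include <stdlib.h>   /* system */
int main(void)
{
    /* 1) generate some C++ source */
    FILE *src = fopen("/tmp/foo.cc", "w");
    fprintf(src, "extern \"C\" int plugin_entry(int x) { return 2 * x; }\n");
    fclose(src);
    /* 2) compile it into a plugin, much like the g++ command line above */
    if (system("g++ -Wall -shared -fPIC -O2 /tmp/foo.cc -o /tmp/foo.so") != 0)
        return 1;
    /* 3) load the plugin and call the freshly generated function */
    void *handle = dlopen("/tmp/foo.so", RTLD_NOW);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }
    int (*entry)(int) = (int (*)(int))dlsym(handle, "plugin_entry");
    printf("plugin_entry(21) = %d\n", entry(21));
    dlclose(handle);
    return 0;
}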
Alternatively, generate a self-contained source program foobar.cc, compile it to an executable foobarbin (e.g. with g++ -O2 foobar.cc -o foobarbin), and execute that foobarbin binary with execve.
When generating C++ code, you may want to avoid generating tiny C++ source files (e.g. a dozen lines only); if possible, generate C++ files of a few hundred lines at least, unless lots of template expansion happens through extensive use of existing C++ containers, in which case generating a small C++ function combining them makes sense. For instance, try if possible to put several generated C++ functions in the same generated C++ file (but avoid very big generated C++ functions, e.g. 10 KLOC in a single function; they take a lot of time to compile with GCC). You could consider, if relevant, having only one single #include in that generated C++ file, and pre-compiling that commonly included header.
Jacques Pitrat's book Artificial Beings, the conscience of a conscious machine (ISBN 9781848211018) explains in detail why generating code at runtime is useful (in symbolic artificial intelligence systems like his CAIA system). The RefPerSys project is trying to follow that idea and generate some C++ code (and hopefully more and more of it) at runtime. Partial evaluation is a relevant concept.
Your software is likely to spend more CPU time in generating C++ code than GCC in compiling it.
The tcc compiler's "-run" option allows for exactly this: compile into memory, run from there, and finally discard the compiled code. No filesystem space is needed. "tcc -run" can be used in a shebang to allow for C scripts; from the tcc man page:
#!/usr/local/bin/tcc -run
#include <stdio.h>
int main()
{
    printf("Hello World\n");
    return 0;
}
C scripts allow for mixed bash/C scripts, with "tcc -run" not needing any temporary space:
#!/bin/bash
echo "foo"
sed -n "/^\/\*\*$/,\$p" $0 | tcc -run -
exit
/**
*/
#include <stdio.h>
int main()
{
    printf("bar\n");
    return 0;
}
Execution output:
$ ./shtcc2
foo
bar
$
C scripts with gcc are possible as well, but they need temporary space, as others mentioned, to store the executable. This script produces the same output as the previous one:
#!/bin/bash
exc=/tmp/`basename $0`
if [ $0 -nt $exc ]; then sed -n "/^\/\*\*$/,\$p" $0 | gcc -x c - -o $exc; fi
echo "foo"
$exc
exit
/**
*/
#include <stdio.h>
int main()
{
    printf("bar\n");
    return 0;
}
C scripts with the suffix ".c" are nice; headtail.c was my first ".c" file that needed to be executable:
$ echo -e "1\n2\n3\n4\n5\n6\n7" | ./headtail.c
1
2
3
6
7
$
I like C scripts, because you have just one file that you can easily move around, and changes in the bash or C part require no further action; they just work on the next execution.
P.S.: The "tcc -run" C script shown above has a problem: the script's stdin is not available to the executed C code, because I passed the extracted C code to "tcc -run" via a pipe. The new gist run_from_memory_stdin.c does it correctly:
...
echo "foo"
tcc -run <(sed -n "/^\/\*\*$/,\$p" $0) 42
...
"foo" is printed by bash part, "bar 42" from C part (42 is passed argv[⁠1]), and piped script input gets printed from C code then:
$ route -n | ./run_from_memory_stdin.c
foo
bar 42
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.29.58.98 0.0.0.0 UG 306 0 0 wlan1
10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0
169.254.0.0 0.0.0.0 255.255.0.0 U 303 0 0 wlan0
172.29.58.96 0.0.0.0 255.255.255.252 U 306 0 0 wlan1
$
One can also modify the compiler itself. It sounds hard at first, but thinking about it, it seems obvious: modifying the compiler sources to expose its functionality as a shared library should not take that much effort (depending on the actual implementation).
Just replace every file access with a memory-mapped-file solution.
It is something I am about to do: transparently compiling something in the background to opcodes and executing those from within Java.
But thinking about your original question, it seems you want to speed up compilation and your edit-and-run cycle. First of all, get an SSD; you get almost memory speed (use a PCIe version). And let's say it's C we are talking about: the linking step involves very complex operations that are likely to take more time than reading from and writing to disk. So just put everything on the SSD and live with the lag.
Finally, the answer to the OP's question is yes!
I found the memrun repo from guitmz, which demoed running an (x86_64) ELF from memory with Go and assembler. I forked it and provided a C version of memrun that runs ELF binaries (verified on x86_64 and armv7l), either from standard input or via process substitution as the first argument. The repo contains demos and documentation (memrun.c is only 47 lines of code):
https://github.com/Hermann-SW/memrun/tree/master/C#memrun
Here is the simplest example: with "-o /dev/fd/1", the gcc-compiled ELF gets sent to stdout and piped into memrun, which executes it:
pi@raspberrypi400:~/memrun/C $ gcc info.c -o /dev/fd/1 | ./memrun
My process ID : 20043
argv[0] : ./memrun
no argv[1]
evecve --> /usr/bin/ls -l /proc/20043/fd
total 0
lr-x------ 1 pi pi 64 Sep 18 22:27 0 -> 'pipe:[1601148]'
lrwx------ 1 pi pi 64 Sep 18 22:27 1 -> /dev/pts/4
lrwx------ 1 pi pi 64 Sep 18 22:27 2 -> /dev/pts/4
lr-x------ 1 pi pi 64 Sep 18 22:27 3 -> /proc/20043/fd
pi@raspberrypi400:~/memrun/C $
The reason I was interested in this topic was its use in "C scripts". run_from_memory_stdin.c demonstrates it all together:
pi@raspberrypi400:~/memrun/C $ wc memrun.c | ./run_from_memory_stdin.c
foo
bar 42
47 141 1005 memrun.c
pi@raspberrypi400:~/memrun/C $
The C script producing the output shown is this small:
#!/bin/bash
echo "foo"
./memrun <(gcc -o /dev/fd/1 -x c <(sed -n "/^\/\*\*$/,\$p" $0)) 42
exit
/**
*/
#include <stdio.h>
int main(int argc, char *argv[])
{
    printf("bar %s\n", argc > 1 ? argv[1] : "(undef)");
    for (int c = getchar(); EOF != c; c = getchar()) { putchar(c); }
    return 0;
}
P.S.: I added tcc's "-run" option to gcc and g++; for details see:
https://github.com/Hermann-SW/memrun/tree/master/C#adding-tcc--run-option-to-gcc-and-g
Just nice, and nothing gets stored in the filesystem:
pi@raspberrypi400:~/memrun/C $ uname -a | g++ -O3 -Wall -run demo.cpp 42
bar 42
Linux raspberrypi400 5.10.60-v7l+ #1449 SMP Wed Aug 25 15:00:44 BST 2021 armv7l GNU/Linux
pi@raspberrypi400:~/memrun/C $

Glibc link difference causing segmentation fault

Something about the server I build on is broken (I am not the only one who uses it...). It is SLES 11 (no SP). I have tried uninstalling and reinstalling gcc, glibc etc with no success.
The problem is that my built program seg-faults as soon as it hits a library function such as memset or strlen (note it is the call to the function that faults, not the function itself; the parameters are fine). I think it is definitely linked wrong, and I can prove with readelf that it differs from how it was before, e.g.:
# readelf -s myprog | grep memset
247: 081461d0 52 <OS specific>: 10 GLOBAL DEFAULT 27 memset@GLIBC_2.0 (3)
3530: 081461d0 52 <OS specific>: 10 GLOBAL DEFAULT 27 memset@@GLIBC_2.0
vs a previous working version that says:
69: 00000000 0 FUNC GLOBAL DEFAULT UND memset@GLIBC_2.0 (2)
2035: 00000000 0 FUNC GLOBAL DEFAULT UND memset@@GLIBC_2.0
It's a fairly standard makefile, and nothing has changed. The linker flags are:
LDFLAGS = -L$(companylibrarypath) -lourcompanylibrary -L$(mysql_lib_path) -lmysqlclient -L/usr/tls/ -lpthread -pthread -lz -L$(curl_lib_path) -lcurl -lxslt
Your program somehow redefines functions like memset instead of using the versions provided by the standard library. It is likely caused by some headers, which may be "standard"... Or maybe your compiler (gcc?) somehow generates (ELF) code not for your platform...
Also, you say the link process is failing; do you mean the linker fails and cannot produce an executable?
The functions you say fail (memset, printf) are extensively used; if your glibc were really that broken, you wouldn't even reach a shell when booting, and you definitely wouldn't be able to compile anything. I'd first look at the libraries it picks up via the -L... flags. Check if an LD_PRELOAD=... snuck in somehow. See what ldd and nm tell you. Perhaps a strace myprog 2> /tmp/log or running it under a debugger clears up the mystery...
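For example, a few starting points (commands only; the output depends on the system):
$ ldd myprog                      # which shared libraries actually get picked up?
$ nm -D myprog | grep memset      # is memset imported (U) or defined locally?
$ LD_DEBUG=libs ./myprog          # glibc's dynamic-linker search trace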

C - program compiling, but unable to provide arguments

I'm on a Mac, and in the terminal I'm compiling my program
gcc -Wall -g -o example example.c
it compiles (there are no errors), but when I try to provide command line arguments
example 5 hello how are you
terminal responds with "-bash: example: command not found"
How am I supposed to provide the arguments I want after compiling?
Run it like this with path:
./example 5 hello how are you
Unless the directory containing the example binary is part of the PATH variable, what you typed won't work, even if the binary is in the current directory.
It is not a compilation issue, but an issue with your shell. The current directory is not in your PATH (look with echo $PATH and use which to find out how the shell uses it for some particular program, e.g. which gcc).
I suggest testing your program with an explicit file path for the program like
./example 5 hello how are you
You could perhaps edit your ~/.bashrc to add . at the end of your PATH. There are pros and cons (in particular, possible security issues if your current directory happens to be a "malicious" one, as /tmp might be: bad guys might put there a gcc which is a symlink to /bin/rm, which is why . should go at the end of your PATH, if you add it at all).
Don't forget to learn how to use a debugger (like gdb). This skill is essential when coding in C (or C++). Perhaps also consider upgrading your gcc (Apple doesn't much like its current GPLv3 license, so they don't distribute a recent one; try gcc -v and note that the latest released GCC today is 4.8.1).
./example 5 Hello how are you is the syntax you're looking for.
This article gives a good explanation of why this is important.
Basically, when you hit Enter, the shell checks whether the command contains a slash (an explicit path). If it doesn't, the shell searches the directories listed in the PATH variable for an executable with that name. If one is found, it is run; otherwise it will crash and burn and you will become very sad.
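A quick way to see that lookup in action (illustrative transcript; exact messages vary by shell):
$ which example                     # prints nothing: not on PATH
$ ./example 5 hello how are you     # explicit path, so the shell runs it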
