A C program (compiled and running on Linux CentOS 6.3) has the line:
execve(cmd, argv, envp);
execve does not return, but I want to modify the code to know when it is finished. So I do this:
if (child = fork()) {
waitpid(child, NULL, 0);
/*now I know execve is finished*/
exit(0);
}
execve(cmd, argv, envp);
When I do this, the resulting program works 99% of the time, but very rarely it exhibits strange errors.
Is anything wrong with the above? I expect the modified code to behave exactly as before (just a little slower). Am I correct?
If you want to know the background: the modified code is dash. The execve call is used to run a simple command, after dash has figured out the string to run. When I modify it exactly as above (without even running anything after the wait), recompile, and run programs under the modified dash, most of the time they run fine. However, recompiling one particular kernel module called "biosutility" gives me this error:
cc1: error: unrecognized command line option "-mfentry"
Carefully read the documentation of execve(2), fork(2), and waitpid(2). Both execve and fork are tricky, and fork is difficult to understand. I strongly suggest reading Advanced Linux Programming (freely available online, but you could buy the paper book), which has several chapters on these questions.
(Don't be afraid of spending a few days reading and understanding these system calls; they are tricky.)
Some important points.
every system call can fail and you should always handle its failure, at least by showing some error message with perror(3) and immediately exit(3)-ing.
the execve(2) syscall usually never returns, since it returns only on failure (when successful it does not return, because the calling program has been replaced and hence wiped out!). Hence most calls to it (and to the similar exec(3) functions) look like:
if (execve(cmd, argv, envp)) { perror (cmd); exit(127); };
/* else branch cannot be reached! */
it is customary to use an unusual exit code like 127 (usually unused, except as above) on execve failure, and very often there is nothing else you can do.
When used with fork (as it almost always is), you'll call execve in the child process.
the fork(2) syscall returns twice on success (once in the parent process, once in the child process); this is tricky to understand, so read the references I gave. It returns only once on failure. You should always keep the result of fork, so typical code would be:
pid_t pid = fork();
if (pid < 0) {            // fork has failed
    perror("fork"); exit(EXIT_FAILURE);
}
else if (pid == 0) {      // successful fork, in the child process
    // very often you call execve in the child, so you don't continue here.
    // example code:
    if (execve(cmd, argv, envp)) { perror(cmd); exit(127); };
    // not reached!
};
// here pid is positive, we are in the parent and fork succeeded....
/// do something sensible; at some point you need to call waitpid and use pid
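Putting those points together, a minimal self-contained sketch might look like this (the command /bin/date is only an illustration):
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char *argv[] = { "/bin/date", NULL };   /* example command */
    char *envp[] = { NULL };

    pid_t pid = fork();
    if (pid < 0) {                          /* fork failed */
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {                         /* child: replace ourselves with the command */
        execve(argv[0], argv, envp);
        perror(argv[0]);                    /* reached only if execve failed */
        _exit(127);
    }
    int status;                             /* parent: reap the child */
    if (waitpid(pid, &status, 0) < 0) {
        perror("waitpid");
        exit(EXIT_FAILURE);
    }
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    else if (WIFSIGNALED(status))
        printf("child killed by signal %d\n", WTERMSIG(status));
    return 0;
}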
Suggestion: use strace(1) on some programs, perhaps try strace -f bash -c 'date; pwd' and study the output. It mentions many syscalls(2)....
Your sample code might (sometimes) work by just adding some else like
// better code, but still wrong because of unhandled failures....
if ((child = fork())>0) {
waitpid(child, NULL, 0);
/*now I know execve is finished*/
exit(0);
}
/// missing handling of `fork` failure!
else if (!child) {
execve(cmd, argv, envp);
/// missing handling of `execve` failure
}
but that code is still incorrect because failures are not handled.
Here's one possibility.
dash does, in fact, need to know when a child process terminates. It must reap the child (by waiting for it) to avoid filling the process table with zombies, and in any case it cares about the exit status of the process.
Now, it knows what the PID of the process it started was, and it can use that when it does a wait to figure out which process terminated and therefore what to do with the exit status.
But you are doing an extra fork. So dash thinks it started some process with PID, say, 368. But you fork a new child, say PID 723. Then you wait for that child, but you ignore the status code. Finally, your process terminates successfully. So then dash notices that process 368 terminated successfully. Even if it didn't.
Now suppose dash was actually executing a script like
do_something && do_something_else
The programmer has specified that the shell definitely shouldn't do_something_else if do_something failed. Terrible things could happen. Or at least mysterious things. Yet, you have hidden that failure. So dash cheerfully fires up do_something_else. Et voilà
Well, it's just a theory. I have no idea, really, but it shows the sort of thing that can happen.
The bottom line is that dash has some mechanism which lets it know when child processes have finished, and if you want to hook into the exit handling of a child process, you'd be much better off figuring out how that mechanism works so that you can hook into it. Trying to add your own additional mechanism is almost certain to end in tears.
Following Rici's excellent comments and answer, I found the root cause of the problem.
The original code exits with whatever status cmd exited with. I changed that to always exit with 0. That is why the code behaves differently.
The following fix does not exhibit the error:
int status;
if (child = fork()) {
waitpid(child, &status, 0);
/*now we know execve is finished*/
if (WIFEXITED(status))
exit(WEXITSTATUS(status));
exit(1);
}
execve(cmd, argv, envp);
For the first question, "is anything wrong with the above??", and regarding this code:
if (child = fork()) {
waitpid(child, NULL, 0);
/*now I know execve is finished*/
exit(0);
}
execve(cmd, argv, envp);
The fork() function has three kinds of return value:
-1 means an error occurred
=0 means fork() was successful and the child process is running
>0 means fork() was successful and the parent process is running
the call to execve() needs to be followed (for the rare case of the call failing) with:
    perror( "execve failed" );
    exit( EXIT_FAILURE );
the call to fork() returns a pid_t.
After the call, the code needs to be similar to: (using child as the pid variable)
if( 0 > child )
{
    perror( "fork failed" );
    exit( EXIT_FAILURE );
}
else if( 0 == child )
{   // child process
    execve(cmd, argv, envp);
    perror( "execve failed" );
    exit( EXIT_FAILURE );
}
else
{   // parent process
    waitpid(child, NULL, 0);
    exit( EXIT_SUCCESS );
}
For your second question, about the error message:
cc1: error: unrecognized command line option "-mfentry"
This error message is not related to the changes you made to dash: dash does not itself invoke any compile operations, so I suspect the two questions are unrelated.
Suggest looking at the makefile for the biosutility module to find out why an option that this cc1 does not recognize is being passed to it.
For child processes, the wait() and waitpid() functions can be used to suspend execution of the current process until a child has exited. But these functions cannot be used for non-child processes.
Is there another function which can wait for the exit of any process?
Nothing equivalent to wait(). The usual practice is to poll using kill(pid, 0) and to look for a return value of -1 with errno set to ESRCH, which indicates that the process is gone.
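A minimal sketch of that polling approach (the one-second poll interval is arbitrary, and keep in mind that the pid could in principle be reused while you are polling):
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* kill(pid, 0) sends no signal; it only checks whether the pid exists.
 * The process is gone only when kill fails with ESRCH (EPERM still
 * means it exists, we just may not signal it). */
static void wait_for_exit(pid_t pid)
{
    while (kill(pid, 0) == 0 || errno != ESRCH)
        sleep(1);                  /* poll once per second */
}

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    wait_for_exit((pid_t)atoi(argv[1]));
    printf("process %s is gone\n", argv[1]);
    return 0;
}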
Update: Since linux kernel 5.3 there is a pidfd_open syscall, which creates an fd for a given pid, which can be polled to get notification when pid has exited.
On BSDs and OS X, you can use kqueue with EVFILT_PROC+NOTE_EXIT to do exactly that. No polling required. Unfortunately there's no Linux equivalent.
So far I've found three ways to do this on Linux:
Polling: you check for the existence of the process every so often, either by using kill or by testing for the existence of /proc/$pid, as in most of the other answers
Use the ptrace system call to attach to the process like a debugger so you get notified when it exits, as in a3nm's answer
Use the netlink interface to listen for PROC_EVENT_EXIT messages - this way the kernel tells your program every time a process exits and you just wait for the right process ID. I've only seen this described in one place on the internet.
Shameless plug: I'm working on a program (open source of course; GPLv2) that does any of the three.
You could also create a socket or a FIFO and read from it. The FIFO is especially simple: connect the standard output of your child to the FIFO and read. The read will block until the child exits (for any reason) or until it emits some data, so you'll need a little loop to discard the unwanted text data.
If you have access to the source of the child, open the FIFO for writing when it starts and then simply forget about it. The OS will clean the open file descriptor when the child terminates and your waiting "parent" process will wake up.
Now this might be a process which you didn't start or own. In that case, you can replace the binary executable with a script that starts the real binary but also adds monitoring as explained above.
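A bare-bones sketch of the FIFO idea (the path is arbitrary; the monitored process would be started with its standard output redirected into the FIFO, e.g. somecommand > /tmp/watch.fifo &):
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/watch.fifo";     /* hypothetical FIFO path */
    mkfifo(path, 0600);

    int fd = open(path, O_RDONLY);            /* blocks until a writer opens the FIFO */
    char buf[256];
    while (read(fd, buf, sizeof buf) > 0)
        ;                                     /* discard whatever the child writes */
    /* read() returned 0: the last writer closed its end, i.e. the process exited. */
    close(fd);
    unlink(path);
    return 0;
}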
Here is a way to wait for any process (not necessarily a child) on Linux to exit (or get killed) without polling:
Using inotify to wait for /proc/[pid] to be deleted would be the perfect solution, but unfortunately inotify does not work with pseudo filesystems like /proc.
However, we can use it with the executable file of the process.
While the process still exists, that file is being held open.
So we can use inotify with IN_CLOSE_NOWRITE to block until the file is closed.
Of course it can be closed for other reasons (e.g. if another process with the same executable exits), so we have to filter those events by other means.
We can use kill(pid, 0), but that cannot guarantee that it is still the same process. If we are really paranoid about this, we can do something else.
Here is a way that should be 100% safe against pid-reuse trouble: we open the pseudo directory /proc/[pid], and keep it open until we are done. If a new process is created in the meantime with the same pid, the directory file descriptor that we hold will still refer to the original one (or become invalid, if the old process ceases to exist), but will NEVER refer to the new process with the reused pid. Then we can check whether the original process still exists by checking, with openat(), whether a pseudo entry such as "cmdline" (or the "fd" directory, as in the code below) still exists in the directory. When a process exits or is killed, those pseudo files cease to exist too, so openat() will fail.
Here is some example code:
#include <fcntl.h>
#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

// return -1 on error, or 0 if everything went well
int wait_for_pid(int pid)
{
    char path[32];
    int in_fd = inotify_init();
    if (in_fd < 0)
        return -1;
    sprintf(path, "/proc/%i/exe", pid);
    if (inotify_add_watch(in_fd, path, IN_CLOSE_NOWRITE) < 0) {
        close(in_fd);
        return -1;
    }
    sprintf(path, "/proc/%i", pid);
    int dir_fd = open(path, 0);
    if (dir_fd < 0) {
        close(in_fd);
        return -1;
    }
    int res = 0;
    while (1) {
        struct inotify_event event;
        if (read(in_fd, &event, sizeof(event)) < 0) {
            res = -1;
            break;
        }
        int f = openat(dir_fd, "fd", 0);
        if (f < 0) break;   // the pseudo directory is empty: the process is gone
        close(f);
    }
    close(dir_fd);
    close(in_fd);
    return res;
}
You could attach to the process with ptrace(2). From the shell, strace -p PID >/dev/null 2>&1 seems to work. This avoids busy-waiting, though it will slow down the traced process, and it will not work on all processes (only your own, which is a bit better than only child processes).
None I am aware of. Apart from the solution from chaos, you can use semaphores if you can change the program you want to wait for.
The library functions are sem_open(3), sem_init(3), sem_wait(3), ...
sem_wait(3) performs a wait, so you don't have to do busy waiting as in chaos' solution. Of course, using semaphores makes your programs more complex and it may not be worth the trouble.
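A sketch of the waiting side with a named POSIX semaphore (the name /watched-exit is arbitrary; this assumes the watched program can be changed to sem_post() that semaphore just before exiting; link with -pthread):
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>

int main(void)
{
    sem_t *sem = sem_open("/watched-exit", O_CREAT, 0600, 0);
    if (sem == SEM_FAILED) { perror("sem_open"); return 1; }

    sem_wait(sem);                 /* blocks until the watched program posts it */
    printf("watched program signalled its exit\n");

    sem_close(sem);
    sem_unlink("/watched-exit");
    return 0;
}
Note that, unlike the other approaches, this only works if the watched program gets the chance to post the semaphore; a process killed outright never will.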
Maybe it could be possible to wait for /proc/[pid] or /proc/[pid]/[something] to disappear?
There are poll() and other file event waiting functions, maybe that could help?
Simply poll fields number 22 and 2 of /proc/[PID]/stat.
Field 2 contains the name of the executable and field 22 contains the process start time.
If they change, some other process has taken the same (freed) PID. Thus the method is very reliable.
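A sketch of that check: comparing the start time (together with the pid itself) is already enough to detect pid reuse. Field 2 is parenthesised and may contain spaces, so the code skips to the last ')' before counting the remaining fields (field numbers as in proc(5)):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Return the start time (field 22 of /proc/<pid>/stat), or 0 on failure. */
static unsigned long long start_time(int pid)
{
    char path[64], buf[1024];
    snprintf(path, sizeof path, "/proc/%d/stat", pid);
    FILE *f = fopen(path, "r");
    if (!f) return 0;
    size_t n = fread(buf, 1, sizeof buf - 1, f);
    fclose(f);
    buf[n] = '\0';
    char *p = strrchr(buf, ')');                /* end of field 2, the (comm) */
    if (!p) return 0;
    unsigned long long v = 0;
    int field = 2;                              /* we are just past field 2 */
    for (char *tok = strtok(p + 1, " "); tok; tok = strtok(NULL, " "))
        if (++field == 22) { sscanf(tok, "%llu", &v); break; }
    return v;
}

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    int pid = atoi(argv[1]);
    unsigned long long orig = start_time(pid);
    if (orig == 0) return 1;                    /* no such process */
    while (start_time(pid) == orig)             /* same pid and same start time */
        sleep(1);
    printf("the process originally known as pid %d is gone\n", pid);
    return 0;
}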
You can use eBPF to achieve this.
The bcc toolkit implements many excellent monitoring capabilities based on eBPF. Among them, exitsnoop traces process termination, showing the command name and reason for termination, either an exit or a fatal signal.
It catches processes of all users, processes in containers, as well as processes that become zombies.
This works by tracing the kernel sched_process_exit() function using dynamic tracing, and will need updating to match any changes to this function.
Since this uses BPF, only the root user can use this tool.
You can refer to this tool for a related implementation, and you can get more information about it from the links below:
Github repo: tools/exitsnoop: Trace process termination (exit and fatal signals). Examples.
Linux Extended BPF (eBPF) Tracing Tools
ubuntu manpages: exitsnoop-bpfcc
You can first install this tool and use it to see if it meets your needs, and then refer to its implementation for coding, or use some of the libraries it provides to implement your own functions.
exitsnoop examples:
Trace all process termination
# exitsnoop
Trace all process termination, and include timestamps:
# exitsnoop -t
Exclude successful exits, only include non-zero exit codes and fatal signals:
# exitsnoop -x
Trace PID 181 only:
# exitsnoop -p 181
Label each output line with 'EXIT':
# exitsnoop --label EXIT
Another option
Wait for a (non-child) process' exit using Linux's PROC_EVENTS
Reference project:
https://github.com/stormc/waitforpid
mentioned in the project:
Wait for a (non-child) process' exit using Linux's PROC_EVENTS. Thanks
to the CAP_NET_ADMIN POSIX capability permitted to the waitforpid
binary, it does not need to be set suid root. You need a Linux kernel
having CONFIG_PROC_EVENTS enabled.
Appreciate @Hongli's answer for macOS with kqueue. I implemented it with Swift:
/// Wait any pids, including non-child pid. Block until all pids exit.
/// - Parameters:
/// - timeout: wait until interval, nil means no timeout
/// - Throws: WaitOtherPidError
/// - Returns: isTimeout
func waitOtherPids(_ pids: [Int32], timeout: TimeInterval? = nil) throws -> Bool {
// create a kqueue
let kq = kqueue()
if kq == -1 {
throw WaitOtherPidError.createKqueueFailed(String(cString: strerror(errno)!))
}
// input
// multiple changes is OR relation, kevent will return if any is match
var changes: [Darwin.kevent] = pids.map({ pid in
Darwin.kevent.init(ident: UInt(pid), filter: Int16(EVFILT_PROC), flags: UInt16(EV_ADD | EV_ENABLE), fflags: NOTE_EXIT, data: 0, udata: nil)
})
let timeoutDeadline = timeout.map({ Date(timeIntervalSinceNow: $0)})
let remainTimeout: () ->timespec? = {
if let deadline = timeoutDeadline {
let d = max(deadline.timeIntervalSinceNow, 0)
let fractionalPart = d - TimeInterval(Int(d))
return timespec(tv_sec: Int(d), tv_nsec: Int(fractionalPart * 1000 * 1000 * 1000))
} else {
return nil
}
}
// output
var events = changes.map{ _ in Darwin.kevent.init() }
while !changes.isEmpty {
// watch changes
// sync method
let numOfEvent: Int32
if var timeout = remainTimeout() {
numOfEvent = kevent(kq, changes, Int32(changes.count), &events, Int32(events.count), &timeout);
} else {
numOfEvent = kevent(kq, changes, Int32(changes.count), &events, Int32(events.count), nil);
}
if numOfEvent < 0 {
throw WaitOtherPidError.keventFailed(String(cString: strerror(errno)!))
}
if numOfEvent == 0 {
// timeout. Return directly.
return true
}
// handle the result
let realEvents = events[0..<Int(numOfEvent)]
let handledPids = Set(realEvents.map({ $0.ident }))
changes = changes.filter({ c in
!handledPids.contains(c.ident)
})
for event in realEvents {
if Int32(event.flags) & EV_ERROR > 0 { // #see 'man kevent'
let errorCode = event.data
if errorCode == ESRCH {
// "The specified process to attach to does not exist"
// ignored
} else {
print("[Error] kevent result failed with code \(errorCode), pid \(event.ident)")
}
} else {
// succeeded event, pid exit
}
}
}
return false
}
enum WaitOtherPidError: Error {
case createKqueueFailed(String)
case keventFailed(String)
}
PR_SET_PDEATHSIG can be used to wait for parent process termination
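For completeness, a small sketch of that prctl option (Linux-specific; note that it only covers the parent-dies case, and the child should re-check getppid() to close the race where the parent died before the prctl call):
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <unistd.h>

static void on_parent_death(int sig)
{
    (void)sig;
    write(2, "parent died\n", 12);    /* async-signal-safe */
    _exit(0);
}

int main(void)
{
    if (fork() == 0) {                             /* child */
        signal(SIGTERM, on_parent_death);
        prctl(PR_SET_PDEATHSIG, SIGTERM);          /* deliver SIGTERM when the parent dies */
        if (getppid() == 1)                        /* parent already gone? */
            on_parent_death(SIGTERM);
        for (;;)
            pause();                               /* wait for the signal */
    }
    sleep(2);                                      /* parent exits after two seconds */
    return 0;
}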
Is it possible to wait for all processes launched by a child process in Windows? I can't modify the child or grandchild processes.
Specifically, here's what I want to do. My process launches uninstallA.exe. The process uninistallA.exe launches uninstallB.exe and immediately exits, and uninstallB.exe runs for a while. I'd like to wait for uninstallB.exe to exit so that I can know when the uninstall is finished.
Create a Job Object with CreateJobObject. Use CreateProcess to start UninstallA.exe in a suspended state. Assign that new process to your job object with AssignProcessToJobObject. Start UninstallA.exe running by calling ResumeThread on the handle of the thread you got back from CreateProcess.
Then the hard part: wait for the job object to complete its execution. Unfortunately, this is quite a bit more complex than anybody would reasonably hope for. The basic idea is that you create an I/O completion port, then create the job object, associate it with the I/O completion port, and finally wait on the I/O completion port (getting its status with GetQueuedCompletionStatus). Raymond Chen has a demonstration (and explanation of how this came about) on his blog.
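A condensed sketch of that sequence, modelled on the approach Raymond Chen describes (the uninstallA.exe command line is just the example from the question, and most error handling is omitted for brevity):
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Completion port that the job object will post its messages to. */
    HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 1);
    HANDLE job  = CreateJobObject(NULL, NULL);

    JOBOBJECT_ASSOCIATE_COMPLETION_PORT assoc;
    assoc.CompletionKey  = job;
    assoc.CompletionPort = iocp;
    SetInformationJobObject(job, JobObjectAssociateCompletionPortInformation,
                            &assoc, sizeof(assoc));

    /* Start the process suspended so it cannot spawn children before it is
     * inside the job, then add it to the job and let it run. */
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    wchar_t cmd[] = L"uninstallA.exe";
    if (!CreateProcessW(NULL, cmd, NULL, NULL, FALSE, CREATE_SUSPENDED,
                        NULL, NULL, &si, &pi))
        return 1;
    AssignProcessToJobObject(job, pi.hProcess);
    ResumeThread(pi.hThread);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);

    /* Wait until the job reports that no processes remain in it. */
    DWORD msg;
    ULONG_PTR key;
    LPOVERLAPPED ov;
    while (GetQueuedCompletionStatus(iocp, &msg, &key, &ov, INFINITE) &&
           !(key == (ULONG_PTR)job && msg == JOB_OBJECT_MSG_ACTIVE_PROCESS_ZERO))
        ;
    printf("all processes in the job have exited\n");
    return 0;
}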
Here's a technique that, while not infallible, can be useful if for some reason you can't use a job object. The idea is to create an anonymous pipe and let the child process inherit the handle to the write end of the pipe.
Typically, grandchild processes will also inherit the write end of the pipe. In particular, processes launched by cmd.exe (e.g., from a batch file) will inherit handles.
Once the child process has exited, the parent process closes its handle to the write end of the pipe, and then attempts to read from the pipe. Since nobody is writing to the pipe, the read operation will block indefinitely. (Of course you can use threads or asynchronous I/O if you want to keep doing stuff while waiting for the grandchildren.)
When (and only when) the last handle to the write end of the pipe is closed, the write end of the pipe is automatically destroyed. This breaks the pipe and the read operation completes and reports an ERROR_BROKEN_PIPE failure.
I've been using this code (and earlier versions of the same code) in production for a number of years.
// pwatch.c
//
// Written in 2011 by Harry Johnston, University of Waikato, New Zealand.
// This code has been placed in the public domain. It may be freely
// used, modified, and distributed. However it is provided with no
// warranty, either express or implied.
//
// Launches a process with an inherited pipe handle,
// and doesn't exit until (a) the process has exited
// and (b) all instances of the pipe handle have been closed.
//
// This effectively waits for any child processes to exit,
// PROVIDED the child processes were created with handle
// inheritance enabled. This is usually but not always
// true.
//
// In particular if you launch a command shell (cmd.exe)
// any commands launched from that command shell will be
// waited on.
#include <windows.h>
#include <stdio.h>
void error(const wchar_t * message, DWORD err) {
wchar_t msg[512];
swprintf_s(msg, sizeof(msg)/sizeof(*msg), message, err);
printf("pwatch: %ws\n", msg);
MessageBox(NULL, msg, L"Error in pwatch utility", MB_OK | MB_ICONEXCLAMATION | MB_SYSTEMMODAL);
ExitProcess(err);
}
int main(int argc, char ** argv) {
LPWSTR lpCmdLine = GetCommandLine();
wchar_t ch;
DWORD dw, returncode;
HANDLE piperead, pipewrite;
STARTUPINFO si;
PROCESS_INFORMATION pi;
SECURITY_ATTRIBUTES sa;
char buffer[1];
while (ch = *(lpCmdLine++)) {
if (ch == '"') while (ch = *(lpCmdLine++)) if (ch == '"') break;
if (ch == ' ') break;
}
while (*lpCmdLine == ' ') lpCmdLine++;
sa.nLength = sizeof(sa);
sa.bInheritHandle = TRUE;
sa.lpSecurityDescriptor = NULL;
if (!CreatePipe(&piperead, &pipewrite, &sa, 1)) error(L"Unable to create pipes: %u", GetLastError());
GetStartupInfo(&si);
if (!CreateProcess(NULL, lpCmdLine, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi))
error(L"Error %u creating process.", GetLastError());
if (WaitForSingleObject(pi.hProcess, INFINITE) == WAIT_FAILED) error(L"Error %u waiting for process.", GetLastError());
if (!GetExitCodeProcess(pi.hProcess, &returncode)) error(L"Error %u getting exit code.", GetLastError());
CloseHandle(pipewrite);
if (ReadFile(piperead, buffer, 1, &dw, NULL)) {
error(L"Unexpected data received from pipe; bug in application being watched?", ERROR_INVALID_HANDLE);
}
dw = GetLastError();
if (dw != ERROR_BROKEN_PIPE) error(L"Unexpected error %u reading from pipe.", dw);
return returncode;
}
There is not a generic way to wait for all grandchildren but for your specific case you may be able to hack something together. You know you are looking for a specific process instance. I would first wait for uninstallA.exe to exit (using WaitForSingleObject) because at that point you know that uninstallB.exe has been started. Then use EnumProcesses and GetProcessImageFileName from PSAPI to find the running uninstallB.exe instance. If you don't find it you know it has already finished, otherwise you can wait for it.
An additional complication is that if you need to support versions of Windows older than XP you can't use GetProcessImageFileName, and for Windows NT you can't use PSAPI at all. For Windows 2000 you can use GetModuleFileNameEx but it has some caveats that mean it might fail sometimes (check docs). If you have to support NT then look up Toolhelp32.
Yes this is super ugly.
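For what it's worth, a rough sketch of that EnumProcesses/GetProcessImageFileName approach (the image name uninstallB.exe is the example from the question; link with psapi, and note that if the process exits between enumeration and the wait, WaitForSingleObject simply returns immediately):
#include <windows.h>
#include <psapi.h>
#include <stdio.h>
#include <string.h>

/* Return a waitable handle to the first process whose image file name
 * ends with `name`, or NULL if no such process is running. */
static HANDLE find_process(const char *name)
{
    DWORD pids[4096], bytes;
    if (!EnumProcesses(pids, sizeof(pids), &bytes))
        return NULL;
    for (DWORD i = 0; i < bytes / sizeof(DWORD); i++) {
        HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION | SYNCHRONIZE, FALSE, pids[i]);
        if (!h) continue;
        char image[MAX_PATH];
        if (GetProcessImageFileNameA(h, image, MAX_PATH)) {
            size_t il = strlen(image), nl = strlen(name);
            if (il >= nl && _stricmp(image + il - nl, name) == 0)
                return h;                      /* caller waits on it and closes it */
        }
        CloseHandle(h);
    }
    return NULL;
}

int main(void)
{
    HANDLE h = find_process("uninstallB.exe");
    if (h == NULL) {
        printf("uninstallB.exe is not (or no longer) running\n");
        return 0;
    }
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    printf("uninstallB.exe has exited\n");
    return 0;
}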
Use a named mutex.
One possibility is to install Cygwin and then use the ps command to watch for the grandchild to exit
I'm writing a program that monitors system calls ( among other things ). But I'm having some trouble getting ptrace to recognize the process ID that I'm passing to it. Upon executing the program, I get this error message:
:No such process
However, I have verified the process ID right before the call by printing it to the console and verifying it with ps -all.
Here's some of the code that may be relevant ( I can post more if necessary ):
Heading the child process:
/* Call to be traced */
if (ptrace (PTRACE_TRACEME, 0, 0, 0) < 0){
perror ("Process couldn't be traced");
exit (-1);
}
/* Execute process image */
if (execv (ProcessArgs[0], &ProcessArgs[1]) < 0){
perror ("Couldn't execute process");
exit (-1);
}
In a thread of the parent process:
DbgdProcess * _Process = ( DbgdProcess * ) _ProcessPass;
int SystemCall = 0,
Status = 0;
/* I have tried sleep(1) here to wait for PTRACE_ME to no avail */
while (!_Process->CloseSignal){
if ( wait (&Status) < 0) // error handler
if ( WIFEXITED (Status)) // error handler
if (!WIFSTOPPED (Status)) continue;
SystemCall = ptrace (PTRACE_PEEKUSER, _Process->ID, 4 * ORIG_RAX, 0);
if (SystemCall < 0) // error handler
printf ("Process made system call %d\n", SystemCall);
if (ptrace (PTRACE_CONT, _Process->ID, 0, 0) < 0) // error handler
}
Can anyone explain this behavior to me?
Some extra notes:
The process being debugged is a direct child of the parent
I'm pretty sure this is a 64 bit compilation because sys/reg.h only defines RAX
All error handlers include a perror() message
Update:
I've read this from the man page:
Most ptrace commands (all except PTRACE_ATTACH, PTRACE_SEIZE,
PTRACE_TRACEME, PTRACE_INTERRUPT, and PTRACE_KILL) require the tracee
to be in a ptrace-stop, otherwise they fail with ESRCH.
ESRCH, I believe, gives the message 'No such process'. So maybe the process is not ptrace-stopped when I make the ptrace call?
Update:
I was testing the code in this example. I did get it to work after doing the following:
- updating the header from to a
- changing (eax_orig * 4) to (rax_orig * 8)
But those changes are in my program as well, and it's still not working.
Update:
I've got my code working. I'm not entirely sure why but it started working after I called PTRACE_ATTACH within the same thread that makes the polling calls with ptrace(2). I guess that would mean that ptrace must be used within the same thread of the parent process but I'm not entirely sure. My question now is, does anyone know if that's true? Or, if not, why ptrace behaves this way?
Update:
I found this link, which seems to suggest my problem is not unheard of.
sleep(1) is sometimes not enough; try sleep(5).
Why are you doing a PTRACE_SYSCALL prior to checking whether the process has stopped or not?
Ideally, in the parent thread, you should wait for the child to stop by using wait.
Only once the child has stopped (WIFSTOPPED) should you use any other ptrace calls.
It appears that ESRCH is being returned by PTRACE_SYSCALL. Can you please confirm that?
For an assignment, I am working on creating a time aware shell. The shell forks and executes commands and kills them if they run for more than a set amount of time. For example.
input# /bin/ls
a.out code.c
input# /bin/cat
Error - Expired After 10 Seconds.
input#
Now, my question is: is there a way to prevent the alarm from starting if an error occurs while launching the program, that is, when execve returns -1?
Since the child process runs separately, and since after hours of experimenting and research I have yet to find anything that discusses or even hints at this type of task, I have a feeling it may be impossible. If it is indeed impossible, how can I prevent something like the following from happening?
input# /bin/fgdsfgs
Error executing program
input# Error - Expired After 10 Seconds.
For context, here is the code I am currently working with, with my attempt at doing this myself removed. Thanks for the help in advance!
while(1){
write(1, prompt, sizeof(prompt)); //Prompt user
byteCount = read(0, cmd, 1024); //Retrieve command from user, and count bytes
cmd[byteCount-1] = '\0'; //Prepare command for execution
//Create Thread
child = fork();
if(child == -1){
write(2, error_fork, sizeof(error_fork));
}
if(child == 0){ //Working in child
if(-1 == execve(cmd,arg,env)){ //Execute program or error
write(2, error_exe, sizeof(error_exe));
}
}else if(child != 0){ //Working in the parent
signal(SIGALRM, handler); //Handle the alarm when it goes off
alarm(time);
wait();
alarm(0);
}
}
According to the man page:
Description
The alarm() function shall cause the system to generate a SIGALRM signal for the process after the number of realtime seconds specified by seconds have elapsed. Processor scheduling delays may prevent the process from handling the signal as soon as it is generated.
If seconds is 0, a pending alarm request, if any, is canceled.
Alarm requests are not stacked; only one SIGALRM generation can be scheduled in this manner. If the SIGALRM signal has not yet been generated, the call shall result in rescheduling the time at which the SIGALRM signal is generated.
Interactions between alarm() and any of setitimer(), ualarm(), or usleep() are unspecified.
So, to cancel an alarm: alarm(0). It is even present in your sample code.
The main problem
By the way, you're missing an important piece here:
if(child == 0){ //Working in child
if(-1 == execve(cmd,arg,env)){ //Execute program or error
write(2, error_exe, sizeof(error_exe));
_exit(EXIT_FAILURE); // EXIT OR A FORKED SHELL WILL KEEP GOING
}
}else if(child != 0){ //Working in the parent
The wait() system call takes an argument; why aren't you compiling with the right headers in the source file and with compiler warnings (preferably errors) for undeclared functions? Or, if you are getting such warnings, pay heed to them before submitting code for review on places like StackOverflow.
You don't need to test the return status from execve() (or any of the exec*() functions); if it returns, it failed.
It is good to write an error on failure. It would be better if the child process exited as well, so that it doesn't go back into the while (1) loop, competing with your main shell for input data.
if (child == 0)
{
execve(cmd, arg, env);
write(2, error_exe, sizeof(error_exe));
exit((errno == ENOEXEC) ? 126 : 127);
}
In fact, the non-exiting of your child is the primary cause of your problem; the wait doesn't return until the alarm goes off because the child hasn't exited. The exit statuses shown are intended to match the POSIX shell specification.
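Putting those pieces together, a self-contained sketch of the timed execution (the command and the 10-second limit are just the question's example; sigaction is used without SA_RESTART so the alarm actually interrupts waitpid):
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static void on_alarm(int sig) { (void)sig; }   /* just interrupt waitpid() */

int main(void)
{
    char *argv[] = { "/bin/cat", NULL };       /* example command */
    char *envp[] = { NULL };

    pid_t child = fork();
    if (child < 0) { perror("fork"); return 1; }
    if (child == 0) {
        execve(argv[0], argv, envp);
        perror("execve");                      /* only reached on failure */
        _exit((errno == ENOEXEC) ? 126 : 127); /* exit, or the forked shell keeps going */
    }

    struct sigaction sa;
    sa.sa_handler = on_alarm;                  /* no SA_RESTART, so waitpid gets EINTR */
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGALRM, &sa, NULL);

    alarm(10);
    int status;
    if (waitpid(child, &status, 0) < 0 && errno == EINTR) {
        kill(child, SIGKILL);                  /* timed out: kill and reap the child */
        waitpid(child, &status, 0);
        fprintf(stderr, "Error - Expired After 10 Seconds.\n");
    }
    alarm(0);                                  /* cancel any pending alarm */
    return 0;
}
Because a failed execve makes the child _exit immediately with 126 or 127, waitpid returns right away in that case, and alarm(0) cancels the pending alarm before it can fire, so no spurious timeout message is printed.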