(I know, too many answers already, but I need help.)
After I executed my payload to spawn a shell, I got this error. Does anyone know what the problem is?
close failed in file object destructor:
sys.excepthook is missing
lost sys.stderr
First off, apologies in advance for any sloppy code - I'm relatively new to C. I'm currently working my way through some coding for my introductory OS class, but having spent far too many hours of my weekend trying to brute-force my way through this one problem, I figure it's time I swallow my pride and try to get a nudge in the right direction here. It deals with compressing basic text files and is meant to make use of Unix system calls and pipes. Following a fork() call, one process is supposed to handle reading a text file (specified as a command line argument) and then send the data via pipe to the other process, which handles compression and writing to a destination file. Having tested out a non-pipe version of this program, I'm fairly sure the compression stuff works as intended, but I think my issue lies with the pipe data sharing. I don't think anything is getting passed through, based on some amateur debugging.
The program also terminates prematurely with the following line:
Segmentation fault (core dumped)
And here's the code itself:
(redacted)
Can someone figure out what the issue may be? I'd be unbelievably appreciative.
Create the pipe before you fork. As it is, you are creating a separate pipe in each process.
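For illustration, here is a minimal sketch of that structure (hypothetical stand-in code, not the asker's program; the child just echoes what it reads in place of the real compression step):

/* Sketch: the pipe is created once, BEFORE fork(), so both processes
 * share the same pair of descriptors. Error handling is abbreviated. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) {            /* create the pipe before forking */
        perror("pipe");
        return EXIT_FAILURE;
    }

    pid_t pid = fork();
    if (pid == -1) {
        perror("fork");
        return EXIT_FAILURE;
    }

    if (pid == 0) {                  /* child: reads from the pipe */
        close(fd[1]);                /* close the unused write end */
        char buf[128];
        ssize_t n;
        while ((n = read(fd[0], buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);   /* stand-in for compression */
        close(fd[0]);
        _exit(EXIT_SUCCESS);
    }

    /* parent: writes the "file contents" into the pipe */
    close(fd[0]);                    /* close the unused read end */
    const char *text = "example file contents\n";
    write(fd[1], text, strlen(text));
    close(fd[1]);                    /* signals EOF to the reader */
    wait(NULL);
    return EXIT_SUCCESS;
}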
In my program I get a freeze sometimes when writing to stderr in this case:
Program starts (e.g. from Terminal)
The program forks itself two times and uses execvp to start each process with different parameters (the original binary is read from /proc/self/exe)
The first started program quits.
Now the two forked processes are still running
Close the terminal the first program was started in
A few writes to stderr with fprintf still work, but at some point I get a complete lockup in my program. The debugger tells me it is stuck in fprintf.
What is happening here? I have already tried setting SIGPIPE to SIG_IGN to prevent the program from crashing as soon as nobody is listening on the pipes anymore. But now I am stuck (the freeze behaviour is the same with SIG_IGN and without it).
Any help is appreciated.
In a nutshell: The system sends your program signals to save you from a problem. You ignore those signals. Bad things happen.
When your parent program was run, it had stdin (fd 0), stdout (fd 1) and stderr (fd 2) connected to the TTY of the shell that ran you (the terminal). These function much like pipes. When you closed the terminal, these fds are left hanging, with no one on the other side to be able to communicate with them.
At first, nothing bad happens. You write to stderr, and the standard library caches those writes. No system calls are performed, so no problem.
But then the buffers fill up, and stdlib tries to flush them. When it does that, it fills up the kernel buffers for the pipe or TTY. At first, that works fine as well. Sooner or later, however, these buffers fill up as well. When that happens, the kernel suspends your processes and waits for someone to read from the other end of those pipes. Since you closed the terminal, however, no one ever will, and your programs are suspended indefinitely.
The standard way to avoid this problem is to disconnect the 0-2 file descriptors from the controlling TTY. Instead of telling you how to do that, I would like to suggest that what you are trying to do here, run a program so that it is completely disconnected from a TTY, has a name: daemonizing.
Check out this question for more details on how to do that.
Edited to add:
It was not clear from your function whether the programs you are execveing are your own or not. If they are not, please be aware that many user programs are not designed to run as a daemon. The most obvious caveat is that if a program with no controlling TTY opens a TTY file without passing O_NOCTTY to open, that TTY becomes the controlling TTY of the program. Depending on circumstances, that might lead to unexpected results.
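To illustrate just the file-descriptor part, here is a minimal sketch (the helper name is made up, and this is not a full daemonize(), which would also fork, call setsid(), change directory, and so on):

/* Sketch: point stdin/stdout/stderr at /dev/null so later writes cannot
 * block on a vanished terminal. */
#include <fcntl.h>
#include <unistd.h>

static int detach_stdio(void)
{
    int devnull = open("/dev/null", O_RDWR);
    if (devnull == -1)
        return -1;
    dup2(devnull, STDIN_FILENO);
    dup2(devnull, STDOUT_FILENO);
    dup2(devnull, STDERR_FILENO);
    if (devnull > STDERR_FILENO)
        close(devnull);                  /* keep only fds 0-2 */
    return 0;
}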
Is it possible to programmatically capture stdout (and stdin) of an already running process on Linux? (Maybe redirect it to a pipe?)
It would be best if the solution worked in userspace (meaning without needing root privileges).
I've seen an answer apparently using gdb, but I'd like to do it without gdb.
EDIT: To clarify: no, I don't have access to the code, nor do I want to change the binary; I want the solution to work from a separate process. The target process is already running anyway.
From inside the process itself (assuming you can change its code in C) you might try freopen(3), perhaps as
FILE *newout = freopen("/some/path", "w", stdout);
if (!newout) { perror("freopen"); exit(EXIT_FAILURE); }
stdout = newout;   /* freopen reuses the stdout stream, so this assignment is mostly for clarity */
See also stdio(3). (Otherwise, dup2(2) a freshly opened descriptor onto STDOUT_FILENO.)
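A sketch of that dup2 route, again from inside the process (the path is only an example):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int fd = open("/some/path", O_WRONLY | O_CREAT | O_TRUNC, 0644);
if (fd == -1) { perror("open"); exit(EXIT_FAILURE); }
fflush(stdout);                                  /* don't lose buffered output */
if (dup2(fd, STDOUT_FILENO) == -1) { perror("dup2"); exit(EXIT_FAILURE); }
close(fd);                                       /* fd 1 now points at the file */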
From outside of the process you might perhaps play with /proc/$PID/fd/, that is, dup2(2)-ing, redirecting, or tee(1)-ing /proc/$PID/fd/0 (the stdin of your process $PID), /proc/$PID/fd/1 (the stdout of your process $PID), etc. See proc(5) for more.
Hmmm, note to self: after reading some other similar questions, here are some promising projects which might (?) help me find the answer:
neercs
reptyr
injcode
If you've already started the process you can use gdb to actually redirect stdout, or strace to just intercept the calls to write et al.
If you don't want to use gdb or strace you would probably need to do basically the same thing they do. The magic they use is the ptrace function to trace the other process. This requires that you have the right to trace the program (which gdb and strace also require); as a normal user you can't trace other users' programs.
Linux Journal has an article about playing with ptrace that shows how to do it. One thing to note is that this will be platform-dependent: you have to write the code specifically for the platform you're using; the article's examples seem to be for the (32-bit) x86 platform.
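A rough sketch of that ptrace approach, assuming x86-64 Linux rather than the article's 32-bit x86 (error handling and entry/exit bookkeeping are simplified, so each write is reported twice, once on syscall entry and once on exit):

#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return EXIT_FAILURE;
    }
    pid_t pid = (pid_t)atoi(argv[1]);

    if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1) {
        perror("ptrace(PTRACE_ATTACH)");
        return EXIT_FAILURE;
    }
    waitpid(pid, NULL, 0);                   /* wait for the attach stop */

    for (;;) {
        if (ptrace(PTRACE_SYSCALL, pid, NULL, NULL) == -1)
            break;                           /* run until the next syscall stop */
        int status;
        if (waitpid(pid, &status, 0) == -1 || WIFEXITED(status))
            break;

        struct user_regs_struct regs;
        if (ptrace(PTRACE_GETREGS, pid, NULL, &regs) == -1)
            break;
        if (regs.orig_rax == 1)              /* 1 == write(2) on x86-64 */
            printf("write(fd=%lld, len=%lld)\n",
                   (long long)regs.rdi, (long long)regs.rdx);
    }

    ptrace(PTRACE_DETACH, pid, NULL, NULL);
    return EXIT_SUCCESS;
}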
I have a binary, a daemon developed in C, and I want to add a check at the beginning of my program to guarantee that the binary is launched only once. My binary runs on Linux.
Any suggestions?
A common method is to put a PID file in /var/run. After your daemon starts successfully, it writes its PID to this file (typically while holding it with flock). At startup, you check the value of the PID in this file, if the file exists. If no process with that PID is currently running, it's safe for the application to start up. If such a process exists, check whether that PID is actually an instance of your executable; if it's not, it is also safe to start up. You should delete the file on exit, but it's not strictly necessary.
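A rough sketch of that startup check (the helper is hypothetical; locking and the "is that PID really my executable" step are left out):

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>

/* Returns 1 if the PID recorded in pidfile belongs to a live process. */
static int already_running(const char *pidfile)
{
    FILE *f = fopen(pidfile, "r");
    if (!f)
        return 0;                       /* no file: assume not running */
    long pid = 0;
    int got = fscanf(f, "%ld", &pid);
    fclose(f);
    if (got != 1 || pid <= 0)
        return 0;
    /* signal 0 checks for existence/permission without sending anything */
    return kill((pid_t)pid, 0) == 0 || errno == EPERM;
}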
The best way to do this, in my opinion, is not to do it. Let your initialization scheme serialize instances of the daemon: systemd, runit, supervise, upstart, launchd, and so on can make sure there are no double invocations.
If you need to invoke your daemon "by hand," try the Linux utility flock(1) or a 3rd-party utility like setlock. Both of these will run the daemon under the protection of a (perhaps inherited) lockfile which remains locked for the life of the program.
If you insist upon adding this functionality to the daemon itself (which, in my opinion, is a complication most daemons don't need), choose a lockfile and keep it exclusively flock(2)d. Unlike most pidfile/process table approaches, this approach is not race-prone. Unlike POSIX system semaphores, this mechanism correctly handles the case of a crashed daemon (the lock vanishes when the process does).
There may be other easy serializations, too. If your daemon binds to a socket, you know that EADDRINUSE probably means that another instance is running...
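For the flock(2) variant, a minimal in-daemon sketch (the lock file path is just an example):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/file.h>
#include <unistd.h>

int lockfd = open("/var/run/mydaemon.lock", O_RDWR | O_CREAT, 0640);
if (lockfd == -1) { perror("open"); exit(EXIT_FAILURE); }
if (flock(lockfd, LOCK_EX | LOCK_NB) == -1) {
    fprintf(stderr, "another instance appears to be running\n");
    exit(EXIT_FAILURE);
}
/* Keep lockfd open for the daemon's lifetime; the lock disappears
 * automatically if the process dies. */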
Fork and execute this:
pidof nameOfProgram
If it returns a value, you know your program is running!
The other classic method is to have a lock file - the program creates a file, but only if that file does not already exist. If the file does exist, it presumes there's another copy of the program running. Because the program could crash after creating the file, smarter versions of this have ways to detect that situation.
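That exclusive-create check can be done atomically with open(2); a tiny sketch (the path is an example, and the stale-file problem mentioned above is not handled):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

int fd = open("/tmp/myprog.lock", O_CREAT | O_EXCL | O_WRONLY, 0644);
if (fd == -1) {
    fprintf(stderr, "lock file exists; assuming another copy is running\n");
    exit(EXIT_FAILURE);
}
/* ... run normally; unlink("/tmp/myprog.lock") on clean exit ... */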
How can I detect hung processes in Linux using C?
Under Linux the way to do this is by examining the contents of /proc/[PID]/*; a good one-stop location is /proc/[PID]/status. Its first two lines are:
Name: [program name]
State: R (running)
Of course, detecting hung processes is an entirely separate issue.
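A small sketch of reading that State: line from C (the helper name is made up):

#include <stdio.h>
#include <string.h>

static int print_state(long pid)
{
    char path[64], line[256];
    snprintf(path, sizeof path, "/proc/%ld/status", pid);
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;                       /* no such process, or no access */
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "State:", 6) == 0) {
            fputs(line, stdout);         /* e.g. "State: R (running)" */
            break;
        }
    }
    fclose(f);
    return 0;
}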
/proc/[PID]/stat is a more machine-readable format of the same info as /proc/[PID]/status, and is, in fact, what the ps(1) command reads to produce its output.
Monitoring and/or killing a process is just a matter of system calls. I'd think the toughest part of your question would really be reliably determining that a process is "hung", rather than merely very busy (or waiting for a temporary condition).
In the general case, I'd think this would be rather difficult. Even Windows asks for a decision from the user when it thinks a program might be "hung" (on my system it is often wrong about that, too).
However, if you have a specific program that likes to hang in a specific way, I'd think you ought to be able to reliably detect that.
Seeing as the question has changed:
http://procps.sourceforge.net/
is the source of ps and other process tools. They do indeed use /proc (indicating it is probably the conventional and best way to read process information). Their source is quite readable; see, for example, the file
/procps-3.2.8/proc/readproc.c
You can also link your program to libproc, which should be available in your repo (or already installed, I would say), but you will need the "-dev" variant for the headers and what-not. Using this API you can read process information and status.
You can use the psState() function through libproc to check for things like
#define PS_RUN 1 /* process is running */
#define PS_STOP 2 /* process is stopped */
#define PS_LOST 3 /* process is lost to control (EAGAIN) */
#define PS_UNDEAD 4 /* process is terminated (zombie) */
#define PS_DEAD 5 /* process is terminated (core file) */
#define PS_IDLE 6 /* process has not been run */
In response to comment
IIRC, unless your program is on the CPU and you can prod it from within the kernel with signals ... you can't really tell how responsive it is. Even then, after the trap a signal handler is called, which may run fine even though the rest of the program is hung.
Best bet is to schedule another process on another core that can poke the process in some way while it is running (or in a loop, or non-responsive). But I could be wrong here, and it would be tricky.
Good Luck
You may be able to use whatever mechanism strace uses to determine what system calls the process is making. Then you could determine which system calls it ends up blocked in for things like pthread_mutex deadlocks, or whatever... You could then use a heuristic approach and just decide that if a process has been stuck in a lock-related system call for more than 30 seconds, it's deadlocked.
You can run 'strace -p <pid>' on a process pid to determine what (if any) system calls it is making. If a process is not making any system calls but is using CPU time then it is either hung, or is running in a tight calculation loop inside userspace. You'd really need to know the expected behaviour of the individual program to know for sure. If it is not making system calls but is not using CPU, it could also just be idle or deadlocked.
The only bulletproof way to do this is to modify the program being monitored to either send a "ping" every so often to a "watchdog" process, or to respond to a ping request when asked, e.g., a socket connection where you can ask it "Are you alive?" and get back "Yes". The program can be coded in such a way that it is unlikely to do the ping if it has gone off into the weeds somewhere and is not executing properly. I'm pretty sure this is how Windows knows a process is hung, because every Windows program has some sort of event queue where it processes a known set of APIs from the operating system.
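A toy sketch of that watchdog/heartbeat idea in one file (everything here is hypothetical: the "monitored" process is just a forked child that heartbeats three times and then hangs, and five seconds of silence counts as hung):

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/select.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int hb[2];
    if (pipe(hb) == -1) { perror("pipe"); return EXIT_FAILURE; }

    pid_t pid = fork();
    if (pid == 0) {                     /* monitored process */
        close(hb[0]);
        for (int i = 0; i < 3; i++) {   /* send a heartbeat every second */
            write(hb[1], "x", 1);
            sleep(1);
        }
        pause();                        /* then "hang" forever */
    }

    close(hb[1]);                       /* watchdog side */
    for (;;) {
        fd_set rd;
        FD_ZERO(&rd);
        FD_SET(hb[0], &rd);
        struct timeval tv = { 5, 0 };   /* 5 seconds of silence == hung */
        int r = select(hb[0] + 1, &rd, NULL, NULL, &tv);
        char c;
        if (r <= 0 || read(hb[0], &c, 1) <= 0) {
            fprintf(stderr, "no heartbeat: assuming %d is hung\n", (int)pid);
            kill(pid, SIGKILL);
            break;
        }
    }
    wait(NULL);
    return 0;
}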
Not necessarily a programmatic way, but one way to tell if a program is 'hung' is to break into it with gdb and pull a backtrace and see if it is stuck somewhere.