First off, apologies in advance for any sloppy code - I'm relatively new to C. I'm currently working my way through some coding for my introductory OS class, but having spent far too many hours of my weekend trying to brute-force my way through this one problem, I figure it's time I swallow my pride and try to get a nudge in the right direction here.

The assignment deals with compressing basic text files and is meant to make use of Unix system calls and pipes. Following a fork() call, one process is supposed to read a text file (specified as a command line argument) and send the data through a pipe to the other process, which handles compression and writes the result to a destination file.

Having tested a non-pipe version of this program, I'm fairly sure the compression logic works as intended; I think my issue lies with the pipe data sharing. Based on some amateur debugging, I don't think anything is getting passed through at all.
The program also terminates prematurely with the following line:
Segmentation fault (core dumped)
And here's the code itself:
(redacted)
Can someone figure out what the issue may be? I'd be unbelievably appreciative.
Create the pipe before you fork. As it is, you are creating a separate pipe in each process.
Let's say there is an existing program that listens on stdin for its input. I want to create a pthread within the same program that now listens to stdin instead, and, depending on what comes through, lets it pass through to the original program.
For this, I would create a pipe() and have the pthread write to the pipe's write end, while the original program reads from the read end. Is this a correct way to do it? I understand piping between processes, but is it possible to pipe like this within a single process?
Sure, you can use pipe(), but the data has to pass through the kernel even though both endpoints are within the same process.
If you have the source code for this (which I assume you do), you don't mind making non-trivial changes, and performance is a priority, I would suggest using shared memory to send the data to the original program. It will be much faster than using pipe().
I am currently using
$./launch argv1 argv2 argv3
template to send command line arguments to my C program. However, I want to launch my program once and then send it arguments without typing the "./launch" part (basically a loop that asks for input on every round). Basically, whenever I type something, I want my program to interpret it as arguments being sent to it. I know that I should use pthreads and such, but I don't really know how, and I'm kind of new to this, so any help is appreciated. Thanks in advance.
Basically, whenever I type something, I want my program to interpret it as arguments being sent to it.
It does not work that way, sorry. Program arguments are passed to a program only when it starts, as part of starting it.
Afterward, your program can read additional data from its standard input or other sources, but such data is not received in the form of program arguments. If you wish, you can process it the same way you do your program's arguments, though that would be unusual; programs typically use arguments and I/O for different purposes.
I know that I should use pthread and stuff
I have no clue what gave you the idea that pthreads should have a role to play here, and I urge you to develop a good working understanding of how single-threaded code and programs work before you consider delving into pthreads or any other multi-threading API.
I have a problem similar to bash script flush but would appreciate confirmation and clarification:
I have a console-based, interactive perl script that writes to stdout. Along the way, it uses system() to call a C program, and its output (also to stdout) is massaged by the script and output to the user.
On Windows there have been no issues. But on Linux, in one very specific and very repeatable instance, only the trailing part of the program's message arrives at the script. I found that by adding fflush calls in the C program, the issue goes away.
EDIT: Further testing reveals that a single fflush just before returning clears up the issue, at least for this specific instance. So the point of this post is now: where does the community think the issue occurs, so I can avoid it in other situations in the future?
Is it the interplay between the script and the C program, both of which use stdout? Or is it that system() in perl creates a sub-shell to execute the program, and maybe the issue arises there?
If the former: is fflush the best option? Just one call seems quite reasonable in terms of overhead.
If the latter, could stdbuf somehow be used to alter the stdout behavior of the sub-shell running the program, especially if it's a program that can't be rewritten?
Thanks!
I've tested, and found the issue can be resolved with either of these approaches:
Add fflush at the end of the C program.
Run the C program without fflush using (in Perl) `stdbuf -oL myprog` (I had incorrectly referenced system() before, sorry).
Some commenters note that C should flush the buffer on exit, but CERT vulnerability FI023.C is specifically about unflushed data in stdout in C. When capturing stdout using Perl backticks, I'm left wondering if that's a sort of redirection where this situation can occur.
I am currently working on moving some bash scripts over to C, and one of them calls an external Python script, for instance:
./remoteclient01a.py "remotecommand player /something something"
and I've been looking for a way to execute this command in C, but I'm not really sure which to use. system() seems to be the best one, but a few pages say it is a bad choice in some cases, so if someone could recommend a method for this I would really appreciate it. Thanks!
The only portable ways to do this in C are system and popen. The function popen allows you to read output from the command.
For simple problems in unixish environments try popen().
From the man page
The popen() function opens a process by creating a pipe, forking, and invoking the shell.
There are other ways to call an external program in C. See the following tutorial explaining them: Executing programs with C (Linux)
Possible Duplicate:
Linux API to list running processes?
How can I detect hung processes in Linux using C?
Under Linux, the way to do this is by examining the contents of /proc/[PID]/*; a good one-stop location is /proc/[PID]/status. Its first two lines are:
Name: [program name]
State: R (running)
Of course, detecting hung processes is an entirely separate issue.
/proc/[PID]/stat is a more machine-readable format of the same info as /proc/[PID]/status, and is, in fact, what the ps(1) command reads to produce its output.
Monitoring and/or killing a process is just a matter of system calls. I'd think the toughest part of your question would really be reliably determining that a process is "hung" rather than merely very busy (or waiting on a temporary condition).
In the general case, I'd think this would be rather difficult. Even Windows asks for a decision from the user when it thinks a program might be "hung" (on my system it is often wrong about that, too).
However, if you have a specific program that likes to hang in a specific way, I'd think you ought to be able to reliably detect that.
Seeing as the question has changed:
http://procps.sourceforge.net/
is the source of ps and other process tools. They do indeed use /proc (indicating it is probably the conventional and best way to read process information), and their source is quite readable. The file /procps-3.2.8/proc/readproc.c is a good place to start.
You can also link your program against libproc, which should be available in your repo (or already installed, I would say), but you will need the "-dev" variant for the headers and whatnot. Using this API you can read process information and status.
You can use the psState() function through libproc to check for things like
#define PS_RUN 1 /* process is running */
#define PS_STOP 2 /* process is stopped */
#define PS_LOST 3 /* process is lost to control (EAGAIN) */
#define PS_UNDEAD 4 /* process is terminated (zombie) */
#define PS_DEAD 5 /* process is terminated (core file) */
#define PS_IDLE 6 /* process has not been run */
In response to a comment:
IIRC, unless your program is on the CPU and you can prod it from within the kernel with signals, you can't really tell how responsive it is. Even then, the trap invokes a signal handler, which may run fine even if the rest of the program is wedged.
Your best bet is to schedule another process on another core that can poke the monitored process in some way while it is running (or looping, or non-responsive). But I could be wrong here, and it would be tricky.
Good Luck
You may be able to use the mechanism strace uses (ptrace(2)) to determine what system calls the process is making. You could then see which system calls it ends up in for things like pthread_mutex deadlocks, and take a heuristic approach: decide that if a process has been stuck in a lock-related system call for more than 30 seconds, it's deadlocked.
You can run strace -p <pid> on a process to determine what (if any) system calls it is making. If a process is not making any system calls but is using CPU time, then it is either hung or running a tight calculation loop in userspace. You'd really need to know the expected behaviour of the individual program to know for sure. If it is making no system calls and using no CPU, it could also just be idle or deadlocked.
The only bulletproof way to do this is to modify the program being monitored to either send a 'ping' every so often to a 'watchdog' process, or to respond to a ping when requested, e.g., over a socket connection where you can ask "Are you alive?" and get back "Yes". The program can be written so that it is unlikely to send the ping if it has gone off into the weeds and is not executing properly. I'm pretty sure this is how Windows detects hung processes: every Windows GUI program has an event queue through which it processes a known set of messages from the operating system.
Not necessarily a programmatic way, but one way to tell if a program is 'hung' is to break into it with gdb and pull a backtrace and see if it is stuck somewhere.