I'm attempting to write an output to a bash fifo so I can pipe it into another program. However, as soon as I run it I get a segmentation fault. Any thoughts? (C amateur here)
(errorout here is a global array and is used successfully elsewhere)
void print_log(){
    printf("about to create file pointer");
    FILE* image_fifo;
    printf("open fifo");
    image_fifo = fopen("image",O_WRONLY);
    if(image_fifo == NULL){
        printf("unable to open fifo");
    }//end if
    else{
        printf("writing to fifo");
        int j;
        for(j=0;j<1024;j++){
            fprintf(image_fifo,"%u",errorout[j]);
        }//end for
    }
    fclose(image_fifo);
}//end print_log
For now, I'm reading it out using this (isn't Python great?):
with open("image","r") as f:
    print(f.read())
I think the segmentation fault is caused by this:
image_fifo = fopen("image",O_WRONLY);
O_WRONLY is a flag for the Linux/POSIX open() call from fcntl.h (or sys/stat.h), where the second parameter is an int holding the open flags (and O_WRONLY is indeed defined as an int value in that header). The C fopen() from stdio.h takes a mode string (a char *) instead, e.g. fopen("name", "w");.
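For reference, a minimal corrected sketch of the question's function, assuming errorout is a global array of 1024 unsigned ints (that type is my assumption; only the name comes from the question):

#include <stdio.h>

/* assumed type and size for errorout, based on the %u format and loop bound in the question */
extern unsigned int errorout[1024];

void print_log(void) {
    FILE *image_fifo = fopen("image", "w");   /* mode string instead of O_WRONLY */
    if (image_fifo == NULL) {
        perror("unable to open fifo");
        return;                               /* also avoids calling fclose(NULL) when the open fails */
    }
    for (int j = 0; j < 1024; j++) {
        fprintf(image_fifo, "%u", errorout[j]);   /* consider adding a separator such as "\n" */
    }
    fclose(image_fifo);
}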
To be sure, next time include everything the program prints before it dies. You might also want to look at valgrind output; it might give you some clues about what's gone wrong.
As for "bash style fifo", I am not sure what you're trying to do, but if you're just trying to write to a file, you're doing it almost correctly. If you're trying to make a pipeline (I've heard someone call them fifo's too, so might be the case), you should look at the libpipeline library and its usage.
I've got a utility that outputs a list of files required by a game. How can I run that utility within a C program and grab its output so I can act on it within the same program?
UPDATE: Good call on the lack of information. The utility spits out a series of strings, and this is supposed to be portable across Mac/Windows/Linux. Please note, I'm looking for a programmatic way to execute the utility and retain its output (which goes to stdout).
As others have pointed out, popen() is the most standard way. And since no answer provided an example using this method, here it goes:
#include <stdio.h>

#define BUFSIZE 128

int parse_output(void) {
    char *cmd = "ls -l";
    char buf[BUFSIZE];
    FILE *fp;

    if ((fp = popen(cmd, "r")) == NULL) {
        printf("Error opening pipe!\n");
        return -1;
    }

    while (fgets(buf, BUFSIZE, fp) != NULL) {
        // Do whatever you want here...
        printf("OUTPUT: %s", buf);
    }

    if (pclose(fp)) {
        printf("Command not found or exited with error status\n");
        return -1;
    }

    return 0;
}
Sample output:
OUTPUT: total 16
OUTPUT: -rwxr-xr-x 1 14077 14077 8832 Oct 19 04:32 a.out
OUTPUT: -rw-r--r-- 1 14077 14077 1549 Oct 19 04:32 main.c
For simple problems in Unix-ish environments try popen().
From the man page:
The popen() function opens a process by creating a pipe, forking and invoking the shell.
If you use the read mode this is exactly what you asked for. I don't know if it is implemented in Windows.
For more complicated problems you want to look up inter-process communication.
popen is supported on Windows (as _popen); see here:
http://msdn.microsoft.com/en-us/library/96ayss4b.aspx
If you want it to be cross-platform, popen is the way to go.
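A small portability sketch along those lines; it assumes only that the underscore-prefixed names from the MSDN page above are used on Windows:

#include <stdio.h>

/* On Windows the function is called _popen/_pclose; map the POSIX names onto them. */
#ifdef _WIN32
#define popen  _popen
#define pclose _pclose
#endif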
Well, assuming you're on a command line in a Windows environment, you can use pipes or command line redirects. For instance,
commandThatOutputs.exe > someFileToStoreResults.txt
or
commandThatOutputs.exe | yourProgramToProcessInput.exe
Within your program, you could use the C standard input functions to read the other program's output (scanf, etc.): http://irc.essex.ac.uk/www.iota-six.co.uk/c/c1_standard_input_and_output.asp . You could also follow the file example and use fscanf. This should also work in Unix/Linux.
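For instance, a minimal sketch of such a filter (my illustration, not code from the question), reading whatever the upstream command pipes into it:

#include <stdio.h>

/* Usage:  commandThatOutputs.exe | thisProgram.exe */
int main(void) {
    char line[256];
    while (fgets(line, sizeof line, stdin) != NULL) {
        printf("got: %s", line);    /* process each line of the other program's output here */
    }
    return 0;
}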
This is a very generic question; you may want to include more details, like what type of output it is (just text, or a binary file?) and how you want to process it.
Edit: Hooray clarification!
Redirecting STDOUT looks to be troublesome; I've had to do it in .NET, and it gave me all sorts of headaches. It looks like the proper C way is to spawn a child process, get a file pointer, and all of a sudden my head hurts.
So here's a hack that uses temporary files. It's simple, but it should work. This will work well if speed isn't an issue (hitting the disk is slow), or if it's throw-away code. If you're building an enterprise program, looking into the STDOUT redirection is probably best, using what other people recommended.
#include <stdlib.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    FILE *fptr;          /* file holder */
    int c;               /* fgetc() returns int so EOF can be detected */

    system("dir >> temp.txt");        /* call dir and put its contents in a temp file using redirects */

    fptr = fopen("temp.txt", "r");    /* open said file for reading */
    if (fptr == NULL) {               /* check for fptr being NULL */
        perror("temp.txt");
        return 1;
    }

    while ((c = fgetc(fptr)) != EOF) {
        printf("%c", c);              /* do what you need to; exit when you hit the end of the file */
    }

    fclose(fptr);
    remove("temp.txt");               /* clean up */
    getchar();                        /* stop so I can see if it worked */
    return 0;
}
Make sure to check your file permissions: right now this will simply throw the file in the same directory as the exe. You might want to look into using /tmp on *nix, or C:\Users\username\Local Settings\Temp in Vista, or C:\Documents and Settings\username\Local Settings\Temp in 2K/XP. I think /tmp will work in OSX, but I've never used one.
In Linux and OS X, popen() really is your best bet, as dmckee pointed out, since both OSs support that call. In Windows, this should help: http://msdn.microsoft.com/en-us/library/ms682499.aspx
MSDN documentation says
If used in a Windows program, the _popen function returns an invalid file pointer that causes the program to stop responding indefinitely. _popen works properly in a console application. To create a Windows application that redirects input and output, see Creating a Child Process with Redirected Input and Output in the Windows SDK.
You can use system() as in:
system("ls song > song.txt");
where ls is the command for listing the contents of the folder song, and song is a folder in the current directory. The resulting file song.txt will be created in the current directory.
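A minimal sketch of reading that captured listing back in (my illustration; file and folder names taken from the example above):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char line[512];
    FILE *fp;

    if (system("ls song > song.txt") != 0) {   /* run the command, redirecting its output to song.txt */
        fprintf(stderr, "command failed\n");
        return 1;
    }
    if ((fp = fopen("song.txt", "r")) == NULL) {
        perror("song.txt");
        return 1;
    }
    while (fgets(line, sizeof line, fp) != NULL)
        printf("%s", line);                    /* act on each listed file here */
    fclose(fp);
    return 0;
}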
// execute an external process and read its output exactly, binary or text
// (can read an image from a Zip file, for example)
#include <cstdio>
#include <string>

std::string run(const char* cmd) {
    FILE* pipe = popen(cmd, "r");
    if (!pipe) return "ERROR";

    char buffer[262144];
    std::string data;
    size_t size;

    //TIME_START
    while ((size = fread(buffer, 1, sizeof(buffer), pipe)) > 0) {
        data.append(buffer, size);   // append exactly the bytes read, so binary data survives
    }
    //TIME_PRINT_

    pclose(pipe);
    return data;
}
I have the following code to find the release of the Linux distribution that I am using.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main()
{
    return print_osinfo();
}

int print_osinfo()
{
    FILE *fp;
    extern FILE* popen();
    char buffer[128];
    int index = 0;

    memset(buffer,0,sizeof(buffer));
    fp = popen("/etc/centos-release", "r");
    if(!fp)
    {
        pclose(fp);
        fp = popen("/etc/redhat-release", "r");
        if(!fp)
        {
            pclose(fp);
            return 1;
        }
    }
    while(fgets(buffer, sizeof(buffer), fp)!= NULL)
    {
        printf("%s\n",buffer);
    }
    pclose(fp);
    return 0;
}
If I run the above code on Ubuntu 14.04 I get the following error.
sh: 1: /etc/centos-release: not found
I fail to understand why it is not trying to open redhat-release and then return -1. Also, is there a way to prevent the above error from being displayed on the screen?
popen is a function more suited for accessing the output of a subprocess than for simply accessing the contents of a file. For that, you should use fopen. fopen takes a file path and a mode as arguments, so all you would need to do is replace your popens with fopens and it should work perfectly.
If you really want to use popen, it takes a shell command as its first argument, not a filename. Try popen("cat /etc/centos-release","r"); instead.
Now, you might be a bit confused, because both of these functions return a FILE pointer. fopen returns a pointer to the file you passed as an argument. popen, however, returns a pipe pointing to the output of the command you passed to it, which C sees as a FILE pointer. This is because, in C, all i/o is file access; C's only connection to the outside world is through files. So, in order to pass the output of some shell command, popen creates what C sees as a FILE in memory, containing the output of said shell command. Since it is rather absurd to run a whole other program (the shell command) just to do what fopen does perfectly well, it makes far more sense to just use fopen to read from files that already exist on disk.
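For illustration, a minimal fopen-based sketch of the release check (my rewrite of the question's function, using the same file names):

#include <stdio.h>

/* Try each release file in turn and print the contents of the first one that opens. */
int print_osinfo(void)
{
    const char *paths[] = { "/etc/centos-release", "/etc/redhat-release" };
    char buffer[128];
    size_t i;

    for (i = 0; i < sizeof(paths) / sizeof(paths[0]); i++) {
        FILE *fp = fopen(paths[i], "r");   /* fopen takes a path, not a shell command */
        if (fp == NULL)
            continue;                      /* no error message printed; just try the next candidate */
        while (fgets(buffer, sizeof(buffer), fp) != NULL)
            printf("%s", buffer);
        fclose(fp);
        return 0;
    }
    return 1;                              /* neither file could be opened */
}

int main(void)
{
    return print_osinfo();
}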
If I wanted to run a shell command on Linux from a C program, I would use
system("ls");
Is there a way I can accomplish this in Wind River VxWorks?
I found the example below, but I'm wondering: do I need to include VxWorks header files for this to work? I assume I do, but how do I figure out which ones?
Example:
// This function runs a shell command and captures the output to the
// specified file
//
extern int consoleFd;
typedef unsigned int (*UINTFUNCPTR) ();

extern "C" int shellToFile(char * shellCmd, char * outputFile)
{
    int rtn;
    int STDFd;
    int outFileFd;

    outFileFd = creat( outputFile, O_RDWR);
    printf("creat returned %x as a file desc\n",outFileFd);
    if (outFileFd != -1)
    {
        STDFd=ioGlobalStdGet(STD_OUT);
        ioGlobalStdSet(STD_OUT,outFileFd);
        rtn=execute(shellCmd);
        if (rtn !=0)
            printf("execute returned %d \n",outFileFd);
        ioGlobalStdSet(STD_OUT,STDFd);
    }
    close(outFileFd);
    return (rtn);
}
I found the code segment below worked for me. For some reason changing the global stdout didn't work, and the execute function did not work for me either. But by setting the specific task's output to my file, I was able to obtain the data I needed.
/* This function directs the output from the devs command into a new file */
int devsToFile(const char * outputFile)
{
    int stdTaskFd;
    int outputFileFd;

    outputFileFd = creat( outputFile, O_RDWR);
    if (outputFileFd != ERROR)
    {
        stdTaskFd = ioTaskStdGet(0,1);
        ioTaskStdSet(0,1,outputFileFd);
        devs();
        ioTaskStdSet(0,1,stdTaskFd);
        close(outputFileFd);
        return (OK);
    }
    else
        return (ERROR);
}
If this is a target/kernel shell (i.e. running on the target itself), then remember that all the shell commands are simply translated to function calls.
Thus "ls" really is a call to ls(), which I believe is declared in dirLib.h
I think that the ExecCmd function is what you are looking for.
http://www.dholloway.com/vxworks/6.5/man/cat2/ExecCmd.shtml
As ever, read the documentation. ioLib.h is required for most of the functions used in that example, and stdio.h of course for printf().
As to the general question of whether you need to include any particular headers for any code to compile, you do need to declare all symbols used, and generally that means including appropriate headers. The compiler will soon tell you about any undefined symbols, either by warning or error (in C89/90 undefined functions are not an error, just a bad idea).
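As a rough illustration (an assumption on my part; the exact header layout varies between VxWorks versions), the includes for the example above would look something like:

#include <stdio.h>   /* printf() */
#include <fcntl.h>   /* O_RDWR */
#include <ioLib.h>   /* creat(), close(), ioGlobalStdGet(), ioGlobalStdSet() */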
fclose() is causing a segfault. I have:
char buffer[L_tmpnam];
char *pipeName = tmpnam(buffer);
FILE *pipeFD = fopen(pipeName, "w"); // open for writing
...
...
...
fclose(pipeFD);
I don't do any file-related stuff in the ... yet, so that doesn't affect it. However, my MAIN process communicates with another process through shared memory where pipeName is stored; the other process fopens this pipe for reading to communicate with MAIN.
Any ideas why this is causing a segfault?
Thanks,
Hristo
Pass pipeFD to fclose. fclose closes the file by its FILE* handle, not by its char* filename. C compilers (unlike C++ compilers) typically only warn about such incompatible pointer conversions (in this case char* to FILE*), so that's where the bug comes from.
Check if pipeFD is non-NULL before calling fclose.
Edit: You confirmed that the error was due to fopen failing; you need to check for the error like so:
pipeFD = fopen(pipeName, "w");

if (pipeFD == NULL)
{
    perror("The following error occurred");
}
else
{
    fclose(pipeFD);
}
Example output:
The following error occurred: No such file or directory
A crash in fclose implies the FILE * passed to it has been corrupted somehow. This can happen if the pointer itself is corrupted (check in your debugger to make sure it has the same value at the fclose as was returned by the fopen), or if the FILE data structure gets corrupted by some random pointer write or buffer overflow somewhere.
You could try using valgrind or some other memory corruption checker to see if it can tell you anything. Or use a data breakpoint in your debugger on the address of the pipeFD variable. Using a data breakpoint on the FILE itself is tricky, as it spans multiple words and is modified by normal file I/O operations.
You should close pipeFD instead of pipeName.
I'm looking at some legacy Linux code which uses pthreads.
In one thread a file is read via fgets(). The FILE variable is a global variable shared across all threads. (Hey, I didn't write this...)
In another thread every now and again the FILE is closed and reopened with another filename.
For several seconds after this has happened, the thread calling fgets() acts as if it is continuing to read the last record it read from the previous file: almost as if there were an error but fgets() was not returning NULL. Then it sorts itself out and starts reading from the new file.
The code looks a bit like this (snipped for brevity so I hope it's still intelligible):
In one thread:
while(gRunState != S_EXIT){
    nanosleep(&timer_delay,0);
    flag = fgets(buff, sizeof(buff), gFile);
    if (flag != NULL){
        // do something with buff...
    }
}
In the other thread:
fclose(gFile);
gFile = fopen(newFileName,"r");
There's no lock to make sure that the fgets() is not called at the same time as the fclose()/fopen().
Any thoughts as to failure modes which might cause fgets() to fail but not return NULL?
How the described code goes wrong
The stdio library buffers data, allocating memory to store the buffered data. The GNU C library dynamically allocates file structures (some libraries, notably on Solaris, use pointers to statically allocated file structures, but the buffer is still dynamically allocated unless you set the buffering otherwise).
If your thread works with a copy of a pointer to the global file pointer (because you passed the file pointer to the function as an argument), then it is conceivable that the code would continue to access the data structure that was originally allocated (even though it was freed by the close), and would read data from the buffer that was already present. It would only be when you exit the function, or read beyond the contents of the buffer, that things start going wrong - or when the space that was previously allocated to the file structure is reallocated for a new use.
FILE *global_fp;

void somefunc(FILE *fp, ...)
{
    ...
    while (fgets(buffer, sizeof(buffer), fp) != 0)
        ...
}

void another_function(...)
{
    ...
    /* Pass global file pointer by value */
    somefunc(global_fp, ...);
    ...
}
Proof of Concept Code
Tested on MacOS X 10.5.8 (Leopard) with GCC 4.0.1:
#include <stdio.h>
#include <stdlib.h>

FILE *global_fp;
const char etc_passwd[] = "/etc/passwd";

static void error(const char *fmt, const char *str)
{
    fprintf(stderr, fmt, str);
    exit(1);
}

static void abuse(FILE *fp, const char *filename)
{
    char buffer1[1024];
    char buffer2[1024];

    if (fgets(buffer1, sizeof(buffer1), fp) == 0)
        error("Failed to read buffer1 from %s\n", filename);
    printf("buffer1: %s", buffer1);

    /* Dangerous!!! */
    fclose(global_fp);
    if ((global_fp = fopen(etc_passwd, "r")) == 0)
        error("Failed to open file %s\n", etc_passwd);

    if (fgets(buffer2, sizeof(buffer2), fp) == 0)
        error("Failed to read buffer2 from %s\n", filename);
    printf("buffer2: %s", buffer2);
}

int main(int argc, char **argv)
{
    if (argc != 2)
        error("Usage: %s file\n", argv[0]);
    if ((global_fp = fopen(argv[1], "r")) == 0)
        error("Failed to open file %s\n", argv[1]);
    abuse(global_fp, argv[1]);
    return(0);
}
When run on its own source code, the output was:
Osiris JL: ./xx xx.c
buffer1: #include <stdio.h>
buffer2: ##
Osiris JL:
So, empirical proof that on some systems, the scenario I outlined can occur.
How to fix the code
The fix to the code is discussed well in other answers. If you avoid the problem I illustrated (for example, by avoiding global file pointers), that is simplest. Assuming that is not possible, it may be sufficient to compile with the appropriate flags (on many Unix-like systems, the compiler flag '-D_REENTRANT' does the job), and you will end up using thread-safe versions of the basic standard I/O functions. Failing that, you may need to put explicit thread-safe management policies around the access to the file pointers; a mutex or something similar (and modify the code to ensure that the threads use the mutex before using the corresponding file pointer).
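For illustration, a minimal sketch of the mutex approach, reusing the gFile name from the question (the lock itself and the helper functions are my additions):

#include <pthread.h>
#include <stdio.h>

FILE *gFile;                                           /* shared file pointer from the question */
pthread_mutex_t gFileLock = PTHREAD_MUTEX_INITIALIZER; /* added: serializes all access to gFile */

/* reader thread: hold the lock across each fgets() call */
void read_one_line(char *buff, int len)
{
    pthread_mutex_lock(&gFileLock);
    if (gFile != NULL && fgets(buff, len, gFile) != NULL) {
        /* do something with buff... */
    }
    pthread_mutex_unlock(&gFileLock);
}

/* other thread: hold the same lock while closing and reopening the file */
void switch_file(const char *newFileName)
{
    pthread_mutex_lock(&gFileLock);
    if (gFile != NULL)
        fclose(gFile);
    gFile = fopen(newFileName, "r");
    pthread_mutex_unlock(&gFileLock);
}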
A FILE * is just a pointer to the various resources. If fclose does not zero out those resources, it's possible that the values still make enough sense that fgets does not immediately notice it.
That said, until you add some locking, I would consider this code completely broken.
Umm, you really need to control access to the FILE stream with a mutex, at the minimum. You aren't looking at some clever implementation of lock-free methods; you are looking at really bad (and dusty) code.
Using thread-local FILE streams is the obvious and most elegant fix; just use locks appropriately to ensure no two threads operate on the same offset of the same file at once. Or, more simply, ensure that threads block (or do other work) while waiting for the file lock to clear. POSIX advisory locks would be best for this, or you're dealing with dynamically growing a tree of mutexes... or initializing a file lock mutex per thread and making each thread check the other's lock (yuck!) (since files can be re-named).
I think you are staring down the barrel of some major fixes... unfortunately (from what you have indicated) there is no choice but to make them. In this case, it's actually easier to debug a threaded program written in this manner than it would be to debug something using forks, so consider yourself lucky :)
You could also use a condition wait (pthread_cond_wait) instead of just a nanosleep; the condition can then be signaled when intended, e.g. when a new file gets fopened.
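Sketching that idea (my illustration; the generation counter and helper names are assumptions, not from the original code):

#include <pthread.h>
#include <stdio.h>

FILE *gFile;
pthread_mutex_t gLock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t gFileChanged = PTHREAD_COND_INITIALIZER;
int gFileGeneration = 0;                 /* bumped every time gFile is reopened */

/* reader thread: block here instead of nanosleep() until the file is swapped */
void wait_for_new_file(int *last_generation)
{
    pthread_mutex_lock(&gLock);
    while (gFileGeneration == *last_generation)
        pthread_cond_wait(&gFileChanged, &gLock);   /* releases gLock while waiting */
    *last_generation = gFileGeneration;
    pthread_mutex_unlock(&gLock);
}

/* other thread: reopen the file, then wake the reader */
void reopen_file(const char *newFileName)
{
    pthread_mutex_lock(&gLock);
    if (gFile != NULL)
        fclose(gFile);
    gFile = fopen(newFileName, "r");
    gFileGeneration++;
    pthread_cond_signal(&gFileChanged);
    pthread_mutex_unlock(&gLock);
}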