C: not able to open message queue

I'm working on openSUSE Leap 42.3. This is my first contact with Unix message queues and I'm having some basic trouble simply opening a new queue. My original issue was that I wasn't able to open two queues, but after a few tries I reduced the problem to this strange behaviour:
If I compile and run this
#include <stdio.h>
#include <mqueue.h>

int main() {
    // printf("Hello world!\n");

    /* create queue */
    char *q_name = "/myQueue";
    mqd_t desc = mq_open(q_name, O_RDWR | O_CREAT);
    if (desc == (mqd_t) -1)
        perror("Error in mq_open");
    printf("We have opened %d\n", desc);

    /* close descriptor and unlink name */
    if (mq_close(desc)) perror("Error in close:");
    if (mq_unlink(q_name)) perror("Error in unlink:");

    return 0;
}
it works great with standard output:
We have opened 3
The queue is closed correctly and I can rerun it with no error.
But if I uncomment the line
printf("Hello world!\n");
it obviously still correctly compiles but when run it outputs
Hello world!
Error in mq_open: Invalid argument
We have opened -1
Error in close:: Bad file descriptor
Error in unlink:: No such file or directory
If instead of the simple 'Hello world!' I try to print:
printf("Hello world! My pid = %d\n", getpid());
then instead of Invalid argument the error
Error in mq_open: Bad address
is produced.
Any idea why this simple printf crashes the queue opening?

From the mq_open manual page:
If O_CREAT is specified in oflag, then two additional arguments must
be supplied. [...]
You don't supply them, so you have undefined behaviour. What seems to happen is that the missing arguments are read from wherever in memory they would have been passed, and what happens to be there differs depending on what your program did just before (which is why the extra printf changes the error).
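For reference, here is a minimal sketch of a call that does supply them: a permission mode plus an (optional) attribute structure. The attribute values below are purely illustrative, and passing NULL instead gives the system defaults; on older glibc you may also need to link with -lrt.

#include <stdio.h>
#include <mqueue.h>
#include <fcntl.h>      /* O_* constants */
#include <sys/stat.h>   /* mode constants */

int main(void) {
    struct mq_attr attr;
    attr.mq_flags   = 0;
    attr.mq_maxmsg  = 10;    /* illustrative: max number of messages on the queue */
    attr.mq_msgsize = 256;   /* illustrative: max size of a single message in bytes */
    attr.mq_curmsgs = 0;

    /* with O_CREAT, mq_open() takes a mode_t and a struct mq_attr * as well */
    mqd_t desc = mq_open("/myQueue", O_RDWR | O_CREAT, 0644, &attr);
    if (desc == (mqd_t) -1) {
        perror("Error in mq_open");
        return 1;
    }
    printf("We have opened %d\n", (int) desc);

    mq_close(desc);
    mq_unlink("/myQueue");
    return 0;
}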

Related

fprintf() to stdout not working after creating and opening a FIFO

When my program starts, it just creates a FIFO and opens it. After that I want to output some information to the screen; however, nothing gets printed out. Here's a snippet of my code:
void listen(const server_config_t* server_conf)
{
    // Create FIFO
    if (mkfifo(SERVER_FIFO_PATH, 0660) == -1) {
        fprintf(stdout, "server FIFO not created as it already exists. continuing...\n");
    }

    // Open FIFO (for reading)
    int fd;
    if ((fd = open(SERVER_FIFO_PATH, O_RDONLY)) == -1) {
        // fprintf(stderr, "error: could not open server FIFO\n");
        perror("FIFO");
        exit(1);
    }

    // Open dummy FIFO (for writing, prevent busy waiting)
    // TODO: find way to wait without another file descriptor?
    int fd_dummy;
    if ((fd_dummy = open(SERVER_FIFO_PATH, O_WRONLY)) == -1) {
        perror("DUMMY FIFO");
        exit(1);
    }

    // TODO: this should print immediately after starting,
    // but doesn't for some reason
    fprintf(stdout, "server listening... %d %s\n", server_conf->num_threads,
            server_conf->password);
    fflush(stdout);
    .
    .
    .
}
Here's my output:
I've tried commenting out the fifo creation and opening, and when I do that the message gets printed correctly to the screen.
Opening a FIFO normally blocks until the other end is opened as well, see http://man7.org/linux/man-pages/man7/fifo.7.html. So your program probably waits in open(SERVER_FIFO_PATH, O_RDONLY) and never reaches any of the later fprintf or perror calls.
Your attempt to open the FIFO first for reading and then for writing does not work, because the first open never returns.
You should be able to see this when you step through your program with a debugger.
BTW: when mkfifo returns -1 you should check whether errno is EEXIST. Other errors also result in a return value of -1, see https://linux.die.net/man/3/mkfifo
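A minimal, self-contained sketch of that check (the path here is a placeholder, not the asker's SERVER_FIFO_PATH):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

int main(void) {
    /* placeholder path; the asker's code uses SERVER_FIFO_PATH */
    if (mkfifo("/tmp/server_fifo", 0660) == -1) {
        if (errno == EEXIST) {
            printf("server FIFO already exists. continuing...\n");
        } else {
            perror("mkfifo");        /* some other failure: report it and stop */
            exit(EXIT_FAILURE);
        }
    }
    return 0;
}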
As you can see from your output, the open is blocking. That is, your process cannot go on until the other end of the FIFO is opened for writing. You should glance at the fifo man page.
As to your mkfifo error, there are two possible cases: either the directory in which you want to place the FIFO doesn't permit you to create it there, or some other system error occurred. To find out which, change your fprintf as follows:
#include <errno.h>   /* errno */
#include <string.h>  /* strerror */
#include <stdlib.h>  /* exit, EXIT_FAILURE */
..
..
fprintf(stderr, "server FIFO not created as it already exists. Error: %s\n", strerror(errno));
exit(EXIT_FAILURE);

mkfifo() not able to create file in C

I'm trying to create a named pipe in C, but have not had any success.
Here is my code:
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>

#define FIFO_NAME "/tmp/myfifo"

int main() {
    int fd;
    fd = mkfifo(FIFO_NAME, 0666);
    if (fd < 0) {
        fprintf(stderr, "Error creating fifo\n");
        exit(0);
    }
Every time I run the above code, the output is:
Error creating fifo
Please help.
You want to replace fprintf(stderr,"Error creating fifo\n"); with perror("mkfifo() failed");. This gives you the long-text error message that corresponds to the value of errno set by mkfifo() on failure.
– alk
Thanks, that worked. There was an existing file with the same name.
– vidit jain
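For reference, alk's suggestion applied to the snippet above looks roughly like this (a self-contained sketch, not the asker's exact program):

#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

#define FIFO_NAME "/tmp/myfifo"

int main(void) {
    if (mkfifo(FIFO_NAME, 0666) < 0) {
        perror("mkfifo() failed");   /* e.g. "mkfifo() failed: File exists" */
        exit(EXIT_FAILURE);
    }
    printf("fifo %s created\n", FIFO_NAME);
    return 0;
}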

Fail to read command output using popen function

In Linux, I am finding the pid of a process by opening a pipe with the "pidof process_name" command and then reading its output using the fgets function. But once in a while it fails to find the pid. Below is my code for finding the pid of my process.
int FindPidByProcessName(char *pName)
{
    int pid = -1;
    char line[30] = { 0 };
    char buf[64] = { 0 };

    sprintf(buf, "pidof %s", pName);

    // pipe stream to process
    FILE *cmd = popen(buf, "r");
    if (NULL != cmd)
    {
        // get line from pipe stream
        fgets(line, 30, cmd);

        // close pipe
        pclose(cmd); cmd = NULL;

        // convert string to unsigned LONG integer
        pid = strtoul(line, NULL, 10);
    }

    return pid;
}
In the output, sometimes pid = 0 comes back even though the process is visible in the "ps" command output.
So I tried to find the root cause behind this issue, and I suspect that some input/output buffering mechanism may be creating the problem in my scenario.
I then tried calling sync() before popen(), and strangely my function started working with 100% accuracy.
But sync() takes too much time (sometimes approximately 2 minutes) to complete, which is not acceptable. So I tried fflush(), fsync() and fdatasync(), but none of these work appropriately.
So please, can anyone tell me what the exact root cause behind this issue is, and how to solve it appropriately?
OK, the root cause of the error is stored in the errno variable (which, by the way, you do not need to initialize). You can get an informative message using the function
perror("Error: ");
If you use perror, the errno variable is interpreted and you get a descriptive message.
Another way (the right way!) of finding the root cause is compiling your program with the -g flag and running the binary with gdb.
Edit: I strongly suggest using the gdb debugger so that you can see exactly which path your code follows and explain the strange behaviour you described.
Second Edit: errno stores the last error (return value). Instead of calling the functions as you do, you should check the return value, and errno, immediately:
if ((<function>) < 0) {
    perror("<function>: ");
    exit(1);
}
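Applied to the function from the question, that advice might look something like the sketch below; how each failure is handled here is illustrative, not the only reasonable choice:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int FindPidByProcessName(char *pName)
{
    char line[30] = { 0 };
    char buf[64] = { 0 };

    snprintf(buf, sizeof buf, "pidof %s", pName);

    FILE *cmd = popen(buf, "r");
    if (cmd == NULL) {
        perror("popen");                 /* report why the pipe could not be opened */
        return -1;
    }

    if (fgets(line, sizeof line, cmd) == NULL) {
        /* pidof printed nothing: either it failed or the process was not found */
        fprintf(stderr, "no output from: %s\n", buf);
        pclose(cmd);
        return -1;
    }

    if (pclose(cmd) == -1)
        perror("pclose");

    return (int) strtoul(line, NULL, 10);
}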

C: setup pseudoterminal and open with xterm

The following simplified piece of code is executed by a thread in the background. The thread runs until it is told to exit (by user input).
In the code below I have removed some error checking for better readability. Even with error checking the code works well and both the master and the slave are created and/or opened.
...
int master, slave;
char *slavename;
char *cc;
master = posix_openpt(O_RDWR);
grantpt(master);
unlockpt(master);
slavename = ptsname(master);
slave = open(slavename, O_RDWR);
printf("master: %d\n",master);
printf("slavename: %s\n",slavename);
On my machine the output is the following:
master: 3
slavename: /dev/pts/4
So I thought that opening an xterm with the command xterm -S4/3 (4 = pt-slave, 3 = pt-master) while my program is running should open a new xterm window for the created pseudoterminal. But xterm just starts running without giving an error or any further information, and does not open a window at all. Any suggestions?
EDIT:
Now, with Wumpus Q. Wumbley's help, xterm starts normally, but I can't redirect any output to it. I tried:
dup2(slave, 1);
dup2(slave, 2);
printf("Some test message\n");
and opening the slave with fopen and then using fprintf. Neither worked.
The xterm process needs to get access to the file descriptor somehow. The intended usage of this feature is probably to launch xterm as a child process of the one that created the pty. There are other ways, though. You could use SCM_RIGHTS file descriptor passing (pretty complicated) or, if you have a Linux-style /proc filesystem try this:
xterm -S4/3 3<>/proc/$PID_OF_YOUR_OTHER_PROGRAM/fd/3
You've probably seen shell redirection operators before: < for stdin, > for stdout, 2> for stderr (file descriptor 2). Maybe you've also seen other file descriptors being opened for input or output with things like 3<inputfile 4>outputfile. Well, the 3<> operator here is another one: it opens file descriptor 3 in read/write mode. And /proc/PID/fd/NUM is a convenient way to access files opened by another process.
I don't know about the rest of the question. I haven't tried to use this mode of xterm before.
OK, the trick with /proc was a bad idea. It's equivalent to a fresh open of /dev/ptmx, creating a new unrelated pty.
You're going to have to make the xterm a child of your pty-creating program.
Here's the test program I used to explore the feature. It's sloppy but it revealed some interesting things. One interesting thing is that xterm writes its window ID to the pty master after successful initialization. This is something you'll need to deal with. It appears as a line of input on the tty before the actual user input begins.
Another interesting thing is that xterm (the version in Debian at least) crashes if you use -S/dev/pts/2/3 in spite of that being specifically mentioned in the man page as an allowed format.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    int master;
    char *slavename, window[64], buf[64];
    FILE *slave;

    master = posix_openpt(O_RDWR);
    grantpt(master);
    unlockpt(master);
    slavename = ptsname(master);
    printf("master: %d\n", master);
    printf("slavename: %s\n", slavename);

    snprintf(buf, sizeof buf, "-S%s/%d", strrchr(slavename, '/') + 1, master);
    if (!fork()) {
        execlp("xterm", "xterm", buf, (char *)0);
        _exit(1);
    }

    slave = fopen(slavename, "r+");
    fgets(window, sizeof window, slave);
    printf("window: %s\n", window);

    fputs("say something: ", slave);
    fgets(buf, sizeof buf, slave);
    fprintf(slave, "you said %s\nexiting in 3 seconds...\n", buf);
    sleep(3);

    return 0;
}

Perl, how do I create a pipe to my exec'd child?

I am trying to pass data from my Perl script to my C program using a pipe (uni-directional).
I need to find a way to do this without messing with the child program's STDIN or STDOUT, so I try creating a new handle and passing the fd.
I create two IO::Handles and create a pipe. I write to one end of the pipe and attempt to pass the file descriptor of the other end of the pipe to my child program that is being exec'd. I pass the file descriptor by setting an ENV variable. Why does this not work? (It does not print out 'hello world'.) As far as I know, file descriptors and pipes are inherited by the child when exec'd.
Perl script:
#!/opt/local/bin/perl
use IO::Pipe;
use IO::Handle;
my $reader = IO::Handle->new();
my $writer = IO::Handle->new();
$reader->autoflush(1);
$writer->autoflush(1);
my $pipe = IO::Pipe->new($reader, $writer);
print $writer "hello world";
my $fh = $reader->fileno;
$ENV{'MY_FD'} = $fh;
exec('./child') or print "error opening app\n";
# No more code after this since exec replaces the current process
C Program, app.c (Compiled with gcc app.c -o child):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv) {
    int fd = atoi(getenv("MY_FD"));
    char buf[12];

    read(fd, buf, 11);
    buf[11] = '\0';

    printf("fd: %d\n", fd);
    printf("message: %s\n", buf);
}
Output:
fd: 3
message:
The message is never passed through the pipe to the C program. Any suggestions?
Your pipe file descriptors are set FD_CLOEXEC, and so are closed upon exec().
Perl's $^F controls this behavior. Try something like this, before you call IO::Pipe->new:
$^F = 10; # Assumes we don't already have a zillion FDs open
Alternatively, you can use Fcntl to clear the FD_CLOEXEC flag yourself after creating the pipe.
I found the solution. Some people said that it was not possible with exec, that it would not see pipes or file descriptors, but that is not correct.
It turns out that Perl marks every file descriptor above 2 close-on-exec unless you say otherwise, so they get closed/invalidated across the exec.
Adding the following to the FD fixes this problem (where READ is the read handle of the pipe, NOT STDIN):
use Fcntl;   # for F_GETFD, F_SETFD, FD_CLOEXEC

my $flags = fcntl(READ, F_GETFD, 0);
fcntl(READ, F_SETFD, $flags & ~FD_CLOEXEC);
Your program is failing because exec calls another program and never returns. It isn't designed for communication with another process at all.
You probably wrote the above code based on the IO::Pipe documentation, which says "ARGS are passed to exec". That isn't what it means, though. IO::Pipe is for communication between two processes within your Perl script, which are created by fork. They mean the execution of the new process, rather than a call to exec in your own code.
Edit: for one-directional communication, all you need is open with a pipe:
open my $prog, '|-', './child' or die "can't run program: $!";
print {$prog} "Hello, world!";
Rodrigo, I can tell you that your file descriptor is no longer valid when you exec into the C app.
Please be aware that I only say it is INVALID; the variable still exists in the environment. MY_FD=3 will keep existing there until the whole process ends.
You can check the fd with fcntl. The code is listed below:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

int main(int argc, char **argv) {
    int fd = atoi(getenv("MY_FD"));
    char buf[12];

    read(fd, buf, 11);
    buf[11] = '\0';

    printf("fd: %d, if fd still valid: %d\n", fd, fcntl(fd, F_GETFD));
    printf("strlen %d\n", (int)strlen(buf));
    printf("message: %s\n", buf);
}
You can see that MY_FD=3 will always be in the environment, since the variable isn't removed, so you can still read fd as 3. But that file descriptor is no longer valid: the result of fcntl(fd, F_GETFD) will be -1, and the length you read from fd will be 0.
That's why you never see the "hello world" sentence.
One more thing: #dan1111 is right, but you don't need to open a new pipe, as you have already done so.
All you need to do is set MY_FD to 0, like
$ENV{'MY_FD'} = 0;
STDIN/STDOUT are always there for the child and survive the exec, so the pipe is not broken when your Perl app execs into the C app. That's why you can read whatever you type into the app.
If your requirement is to write through another file handle, try to make that file handle something that exists independently and persists, just like STDIN.
