I have a server and client application that communicate using IPC message queues. The server (for now) simply sends back the text it received from the client. I would like the server to change the letters in the message from lower to upper case, and I'm wondering how to achieve that. Do I have to create a pipe? I'm thinking about 'grabbing' the text from the received queue, running the tr command on it and sending the result back to the client. But if I use a pipe, where do I get the file descriptors from? I mean, int fds[2]; and pipe(fds); gives me a pipe, but it doesn't work on two char arrays like this:
int fds[2];
pipe(fds);
char a[100];
char b[100];
fds[0] = open(a, O_RDONLY);
fds[1] = open(b, O_WRONLY);
How can I execute a tr command on a text held by a message queue?
I wouldn't fork a program for this:
char *p = str;          /* str holds the message text */
while (*p) {
    *p = toupper(*p);   /* from <ctype.h> */
    p++;
}
More seriously, you should probably use popen, which automatically (and robustly) forks and uses a pipe to set up a FILE * for you.
FILE *cmd = popen("tr ... ", "r");
And then simply fgets from it (don't forget to pclose it). Sadly, on Linux you can't write to and read from a popened stream at the same time (you can on FreeBSD).
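For a quick read-only use, a minimal sketch might look like the following; the echoed text is just a placeholder for whatever the server pulls out of the queue, and error checking is kept short:

#include <stdio.h>

int main(void)
{
    char line[256];
    /* placeholder text; a real server would have to escape it or feed it
       through a pipe instead of embedding it in the command string */
    FILE *cmd = popen("echo 'hello from the queue' | tr 'a-z' 'A-Z'", "r");
    if (cmd == NULL)
        return 1;
    if (fgets(line, sizeof line, cmd) != NULL)
        printf("upper-cased: %s", line);
    pclose(cmd);
    return 0;
}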
EDIT
Since this is a homework question (and frankly because I don't think it's trivial to get it completely right at this time of night), here is what popen actually does:
Create a pipe
Fork a shell that will run the command
Return a FILE * (possibly via fdopen)
The last step is really optional as you could always read from the file descriptor directly.
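If you do need both directions (write the queue text into tr and read the upper-cased result back), a rough sketch of those same steps done by hand could look like this; the message text and buffer size are placeholders and most error checking is omitted:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    const char *msg = "text from the message queue\n"; /* placeholder text */
    int in[2], out[2];
    char buf[128];
    ssize_t n;

    pipe(in);   /* parent writes in[1]  -> tr reads in[0]   */
    pipe(out);  /* tr writes out[1]     -> parent reads out[0] */

    if (fork() == 0) {
        dup2(in[0], 0);
        dup2(out[1], 1);
        close(in[0]);  close(in[1]);
        close(out[0]); close(out[1]);
        execlp("tr", "tr", "a-z", "A-Z", (char *)0);
        _exit(1);
    }
    close(in[0]);
    close(out[1]);

    write(in[1], msg, strlen(msg));
    close(in[1]);                        /* EOF for tr */

    n = read(out[0], buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("%s", buf);
    }
    close(out[0]);
    wait(NULL);
    return 0;
}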
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main()
{
    int fd = open("/dev/pts/0", O_RDWR);
    if (fd == -1) {
        printf("Error");
        exit(1);
    }
    dup2(fd, 0);          /* make the opened file stdin */
    char c[20];
    printf("reading from file\n");
    scanf("%s", c);
}
In the above code "/dev/pts/0" is set as stdin, and scanf behaves normally.
But when I set it to a filename like "inp.txt", it doesn't wait; it directly reads whatever it finds.
Why is that? What do I do if I want to make it wait?
When you read from a file, the data is already there, so why would scanf() (or any other way of reading the file) wait?
/dev/pts/N devices are Unix 98 pseudoterminals, which by their very nature are interactive. A blocking read from one waits for interactive input. A non-blocking read would just tell you that there is no data to read right now.
If you create a pipe between processes and associate the read end with a stdio FILE handle via fdopen(), you can use scanf() to read data from the pipe. That, too, will wait for input, unless all write ends of the pipe are closed (then scanf() fails with end-of-input). So there is nothing special about the pseudoterminals in this respect.
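As a minimal sketch of that last point (the one-second delay and the buffer size are arbitrary, error checking omitted), the fscanf() below blocks until the child writes:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];
    pipe(fds);

    if (fork() == 0) {               /* child: writes one word after a delay */
        close(fds[0]);
        sleep(1);
        write(fds[1], "hello\n", 6);
        close(fds[1]);
        _exit(0);
    }

    close(fds[1]);                   /* parent keeps only the read end open */
    FILE *in = fdopen(fds[0], "r");
    char word[32];
    if (fscanf(in, "%31s", word) == 1)   /* waits for the child's write */
        printf("got: %s\n", word);
    fclose(in);
    wait(NULL);
    return 0;
}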
The following simplified piece of code is executed by a thread in the background. The thread runs until it is told to exit (by user input).
In the code below I have removed some error checking for better readability. Even with error checking the code works well and both the master and the slave are created and/or opened.
...
int master, slave;
char *slavename;
char *cc;
master = posix_openpt(O_RDWR);
grantpt(master);
unlockpt(master);
slavename = ptsname(master);
slave = open(slavename, O_RDWR);
printf("master: %d\n",master);
printf("slavename: %s\n",slavename);
On my machine the output is the following:
master: 3
slavename: /dev/pts/4
So I thought that opening an xterm with the command xterm -S4/3 (4 = pt-slave, 3 = pt-master) while my program is running should open a new xterm window for the created pseudoterminal. But xterm just starts running without giving an error or any further information, and does not open a window at all. Any suggestions?
EDIT:
Now, with Wumpus Q. Wumbley's help, xterm starts normally, but I can't redirect any output to it. I tried:
dup2(slave, 1);
dup2(slave, 2);
printf("Some test message\n");
and opening the slave with fopen and then using fprintf. Neither worked.
The xterm process needs to get access to the file descriptor somehow. The intended usage of this feature is probably to launch xterm as a child process of the one that created the pty. There are other ways, though. You could use SCM_RIGHTS file descriptor passing (pretty complicated) or, if you have a Linux-style /proc filesystem, try this:
xterm -S4/3 3<>/proc/$PID_OF_YOUR_OTHER_PROGRAM/fd/3
You've probably seen shell redirection operators before: < for stdin, > for stdout, 2> for stderr (file descriptor 2). Maybe you've also seen other file descriptors being opened for input or output with things like 3<inputfile 4>outputfile. Well, the 3<> operator here is another one: it opens file descriptor 3 in read/write mode. And /proc/PID/fd/NUM is a convenient way to access files opened by another process.
I don't know about the rest of the question. I haven't tried to use this mode of xterm before.
OK, the trick with /proc was a bad idea. It's equivalent to a fresh open of /dev/ptmx, creating a new unrelated pty.
You're going to have to make the xterm a child of your pty-creating program.
Here's the test program I used to explore the feature. It's sloppy but it revealed some interesting things. One interesting thing is that xterm writes its window ID to the pty master after successful initialization. This is something you'll need to deal with. It appears as a line of input on the tty before the actual user input begins.
Another interesting thing is that xterm (the version in Debian at least) crashes if you use -S/dev/pts/2/3 in spite of that being specifically mentioned in the man page as an allowed format.
#define _XOPEN_SOURCE 600   /* for posix_openpt() and friends */
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    int master;
    char *slavename, window[64], buf[64];
    FILE *slave;

    /* create the pty pair */
    master = posix_openpt(O_RDWR);
    grantpt(master);
    unlockpt(master);
    slavename = ptsname(master);

    printf("master: %d\n", master);
    printf("slavename: %s\n", slavename);

    /* build "-S<pty name>/<master fd>" and launch xterm as a child,
       so it inherits the master file descriptor */
    snprintf(buf, sizeof buf, "-S%s/%d", strrchr(slavename, '/') + 1, master);
    if (!fork()) {
        execlp("xterm", "xterm", buf, (char *)0);
        _exit(1);
    }

    slave = fopen(slavename, "r+");

    /* xterm writes its window ID as the first line on the slave side */
    fgets(window, sizeof window, slave);
    printf("window: %s\n", window);

    fputs("say something: ", slave);
    fgets(buf, sizeof buf, slave);
    fprintf(slave, "you said %s\nexiting in 3 seconds...\n", buf);
    sleep(3);
    return 0;
}
I'm just starting to learn C programming and I have some uncertainty about fork(), exec(), pipe(), etc.
I've developed this code, but when I execute it, the variable c remains empty, so I don't know if the child isn't writing to the pipe, or the parent isn't reading from it.
Could you help me please? This is the code:
int main() {
    int pid = 0;
    int pipefd[2];
    char *c = (char *) malloc(sizeof(char));
    FILE *fp;

    pipe(pipefd);
    pid = fork();

    if (pid == 0) {
        close(pipefd[0]);
        dup2(pipefd[1], 1);
        close(pipefd[1]);
        execl("ls -l | cut -c28", "ls -l | cut -c28", (char *) 0);
    }
    else {
        close(pipefd[1]);
        read(pipefd[0], c, 1);

        char *path = "/home/random";
        char *txt = ".txt";
        char *root = malloc(strlen(path) + strlen(txt) + sizeof(char));

        strcpy(root, path);
        strcat(root, c);
        strcat(root, txt);

        close(pipefd[0]);
        fp = fopen(root, "w+");
        (...)
    }
The problem is that the final root string is only "/home/random.txt", because there is nothing in the char c, and what I want is to open the file "/home/random(number stored in char c).txt".
execl executes a single command, and is not aware of shell concepts such as pipes. If you want to execute a shell command, you will have to execute a shell, as follows:
execl("/bin/sh","/bin/sh","-c","ls -l | cut -c28", (char*) 0);
Always check the return value of system calls (like execve(2) and derived functions like execl(3)), and use errno(3) to figure out what went wrong.
In your case the execl line fails.
Using strcpy/strcat seems a bit excessively complex; snprintf can turn those three lines into one.
snprintf( root, size_of_buf, "/home/random%s", c );
Additionally, check your error codes. As noted, execl is failing and you don't know it. fork, dup2, ... can also fail; you want to know sooner rather than later.
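Putting the two answers together, a minimal, self-contained sketch of the intended flow might look like the following; the pipeline string and the /home/random path come from the question, everything else (buffer sizes, messages) is made up, and error handling is kept short:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int pipefd[2];
    char c[2] = {0};
    char root[64];

    if (pipe(pipefd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); return 1; }

    if (pid == 0) {                       /* child: run the shell pipeline */
        close(pipefd[0]);
        dup2(pipefd[1], 1);
        close(pipefd[1]);
        execl("/bin/sh", "/bin/sh", "-c", "ls -l | cut -c28", (char *)0);
        perror("execl");                  /* only reached on failure */
        _exit(127);
    }

    close(pipefd[1]);                     /* parent: read one character */
    if (read(pipefd[0], c, 1) != 1)
        fprintf(stderr, "nothing read from the pipe\n");
    close(pipefd[0]);
    wait(NULL);

    snprintf(root, sizeof root, "/home/random%s.txt", c);
    printf("would open: %s\n", root);
    return 0;
}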
I want to execute a Linux command in a C program and read (parse) stdout from this command in the program. The code below works, but I don't know how to limit the execution time of the command, in addition to the string and bytes-read limits. Any ideas?
FILE *ps_pipe;
ssize_t bytes_read;
size_t nbytes = 100;
char *my_string = NULL;
char message[1024];

sprintf(message, "any command here");
ps_pipe = popen(message, "r");
my_string = (char *) malloc(nbytes + 1);
/* note: getdelim() takes a single delimiter character, not a word */
bytes_read = getdelim(&my_string, &nbytes, "delimiter_word", ps_pipe);
pclose(ps_pipe);
free(my_string);
You could do that with select(). Select can "wait" on one or more file descriptors for an event to happen (readable, writable, ...), with an optional time-out. Since it operates on file descriptors, you'll also need fileno(ps_pipe).
Keep in mind however that you won't be able to kill the forked process easily, because popen hides certain details of the child process. If you need such control, you'll need to use lower level functions fork(), pipe(), dup(), exec(), wait() and possibly kill().
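As a rough sketch of the select() idea (the command string and the 5-second limit are arbitrary, and error checking is trimmed):

#include <stdio.h>
#include <sys/select.h>

int main(void)
{
    FILE *ps_pipe = popen("sleep 2; echo done", "r");  /* placeholder command */
    if (ps_pipe == NULL)
        return 1;

    fd_set set;
    struct timeval timeout = { 5, 0 };                 /* 5-second limit */
    int fd = fileno(ps_pipe);

    FD_ZERO(&set);
    FD_SET(fd, &set);

    int ready = select(fd + 1, &set, NULL, NULL, &timeout);
    if (ready > 0) {
        char buf[128];
        if (fgets(buf, sizeof buf, ps_pipe) != NULL)
            printf("read: %s", buf);
    } else if (ready == 0) {
        /* timed out; the child keeps running, see the note above about kill() */
        fprintf(stderr, "timed out\n");
    }
    pclose(ps_pipe);
    return 0;
}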
I have a C application with many worker threads. It is essential that these do not block so where the worker threads need to write to a file on disk, I have them write to a circular buffer in memory, and then have a dedicated thread for writing that buffer to disk.
The worker threads do not block any more. The dedicated thread can safely block while writing to disk without affecting the worker threads (it does not hold a lock while writing to disk). My memory buffer is tuned to be sufficiently large that the writer thread can keep up.
This all works great. My question is, how do I implement something similar for stdout?
I could replace printf() with a macro that writes into a memory buffer, but I don't have control over all the code that might write to stdout (some of it is in third-party libraries).
Thoughts?
NickB, I like the idea of using freopen. You might also be able to redirect stdout to a pipe using dup and dup2, and then use read to grab data from the pipe.
Something like so:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#define MAX_LEN 40
int main( int argc, char *argv[] ) {
char buffer[MAX_LEN+1] = {0};
int out_pipe[2];
int saved_stdout;
saved_stdout = dup(STDOUT_FILENO); /* save stdout for display later */
if( pipe(out_pipe) != 0 ) { /* make a pipe */
exit(1);
}
dup2(out_pipe[1], STDOUT_FILENO); /* redirect stdout to the pipe */
close(out_pipe[1]);
/* anything sent to printf should now go down the pipe */
printf("ceci n'est pas une pipe");
fflush(stdout);
read(out_pipe[0], buffer, MAX_LEN); /* read from pipe into buffer */
dup2(saved_stdout, STDOUT_FILENO); /* reconnect stdout for testing */
printf("read: %s\n", buffer);
return 0;
}
If you're working with the GNU libc, you might use memory streams (string streams).
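For example, a minimal sketch with open_memstream(); note that this only captures writes made through that particular FILE *, not everything that goes to stdout:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *buf = NULL;
    size_t len = 0;
    FILE *f = open_memstream(&buf, &len);   /* writes accumulate in buf */
    if (f == NULL)
        return 1;

    fprintf(f, "buffered in memory, not written to any file descriptor\n");
    fclose(f);                              /* flushes and finalizes buf/len */

    printf("captured %zu bytes: %s", len, buf);
    free(buf);
    return 0;
}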
You can "redirect" stdout into a file using freopen().
man freopen says:
The freopen() function opens the file whose name is the string pointed to by path and associates the stream pointed to by stream with it. The original stream (if it exists) is closed. The mode argument is used just as in the fopen() function. The primary use of the freopen() function is to change the file associated with a standard text stream (stderr, stdin, or stdout).
This file could well be a pipe: the worker threads will write to that pipe and the writer thread will listen on it.
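A minimal sketch of that suggestion, assuming a made-up FIFO path /tmp/app_stdout_fifo; keep in mind that opening a FIFO for writing blocks until something opens it for reading:

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(void)
{
    /* create the FIFO once; ignore "already exists" errors here */
    mkfifo("/tmp/app_stdout_fifo", 0600);

    /* blocks until a reader (the writer thread or another process) opens it */
    if (freopen("/tmp/app_stdout_fifo", "w", stdout) == NULL)
        return 1;

    printf("this now goes into the FIFO, not the terminal\n");
    fflush(stdout);
    return 0;
}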
Why don't you wrap your entire application in another? Basically, what you want is a smart cat that copies stdin to stdout, buffering as necessary. Then use standard stdin/stdout redirection. This can be done without modifying your current application at all.
~MSalters/# YourCurrentApp | bufcat
You can change how buffering works with setvbuf() or setbuf(). There's a description here: http://publications.gbdirect.co.uk/c_book/chapter9/input_and_output.html.
[Edit]
stdout really is a FILE*. If the existing code works with FILE*s, I don't see what prevents it from working with stdout.
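For example, a small sketch that gives stdout a large, fully buffered buffer; the 1 MiB size is arbitrary, and setvbuf must be called before any output on the stream:

#include <stdio.h>

int main(void)
{
    static char big[1 << 20];                 /* 1 MiB buffer for stdout */
    if (setvbuf(stdout, big, _IOFBF, sizeof big) != 0)
        return 1;

    printf("this sits in the buffer until it fills or we flush\n");
    fflush(stdout);                           /* force it out explicitly */
    return 0;
}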
One solution (for both things you're doing) would be to use a gathering write via writev.
Each thread could, for example, sprintf into an iovec buffer and then pass the iovec pointers to the writer thread and have it simply call writev with stdout.
Here is an example of using writev from Advanced Unix Programming
Under Windows you would use WSASend for similar functionality.
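A minimal sketch of such a gathering write (the strings and the worker number are placeholders):

#include <stdio.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
    char header[32], body[64];
    struct iovec iov[2];

    snprintf(header, sizeof header, "[worker %d] ", 7);
    snprintf(body, sizeof body, "finished a unit of work\n");

    iov[0].iov_base = header;
    iov[0].iov_len  = strlen(header);
    iov[1].iov_base = body;
    iov[1].iov_len  = strlen(body);

    /* both pieces go out to stdout in a single writev() call */
    if (writev(STDOUT_FILENO, iov, 2) == -1)
        perror("writev");
    return 0;
}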
The method using the 4096 bigbuf will only sort of work. I've tried this code, and while it does successfully capture stdout into the buffer, it's unusable in a real-world case. You have no way of knowing how long the captured output is, so no way of knowing where to terminate the string with '\0'. If you try to use the buffer, you get 4000 characters of garbage spat out even if you had successfully captured only 96 characters of stdout output.
In my application, I'm using a Perl interpreter in the C program. I have no idea how much output is going to be spat out of whatever document is thrown at the C program, and hence the code above would never allow me to cleanly print that output anywhere.