How to communicate from one program to another long-running program in C?

I have a long running program in C under Linux:
longrun.c
#include <stdio.h>
#include <unistd.h>   /* sleep() */

int main(void)
{
    int mode = 0;
    int c = 0;
    while (1)
    {
        printf("\nrun # mode %d value : %d ", mode, c);
        fflush(stdout);   /* output has no trailing newline, so flush it */
        if (c > 100)
            c = 0;
        if (mode == 0)
            c++;
        else
            c = c + 2;
        sleep(3);
    }
    return 0;
}
It will display
run # mode 0 value : 0
run # mode 0 value : 1
run # mode 0 value : 2
I need to write another program in C (something like changemode.c) that can communicate with the running longrun.c
and set its mode to some other value, so that the running program starts
displaying values in increments of 2.
That is, if I run it after some x minutes, the output should follow this pattern:
run # mode 0 value : nnn
run # mode 0 value : nnn+2
run # mode 0 value : (nnn+2)+2
I can do it using the file method: changemode.c would create a file saying mode=2,
and longrun.c would open and check it on every iteration before proceeding. Is there some better way to solve this, like interprocess communication?
If possible, can anyone write a sample of the changemode.c?

One of the most basic ideas in Unix programming is process forking and the establishment of a pipe between the two processes. longrun could start by creating a pipe, calling fork, and using the parent process as the changemode 'monitor' process and the child process as longrun works now. You would need to periodically read/write on either end.
A google search will return many examples. Here's another.
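A minimal sketch of that fork-plus-pipe layout (my own illustration, not code from a linked example): the parent forwards mode digits typed on its stdin into the pipe, and the child runs the counting loop with a non-blocking read. Error checking is mostly omitted.

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                          /* child: the counting loop */
        close(fd[1]);                        /* child only reads */
        fcntl(fd[0], F_SETFL, O_NONBLOCK);   /* don't block waiting for a mode change */
        int mode = 0, c = 0;
        char ch;
        while (1) {
            if (read(fd[0], &ch, 1) == 1)    /* a new mode digit arrived */
                mode = ch - '0';
            printf("\nrun # mode %d value : %d ", mode, c);
            fflush(stdout);
            if (c > 100) c = 0;
            c += (mode == 0) ? 1 : 2;
            sleep(3);
        }
    }
    close(fd[0]);                            /* parent only writes */
    char ch;
    while (read(STDIN_FILENO, &ch, 1) == 1)  /* forward digits typed by the user */
        if (ch >= '0' && ch <= '9')
            write(fd[1], &ch, 1);
    return 0;
}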

The solution has two parts:
A communication channel between the two processes. Unix Domain Sockets are a good tool for it, and they behave similarly to TCP/IP sockets.
Replacing sleep with select. select will listen on the socket, handling communication with the other program. You can also specify a 3 second timeout, so when it returns 0 (meaning no activity on the socket), you know it's time to print some output.
As an alternative to #2, you could use two threads - one sleeping and producing output, the other handling the socket. Note that any data shared by the threads should be synchronized (in your very simple case, where there's just one integer, you probably need nothing, but you sure do when it gets more complicated).
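To make this concrete, here is a rough sketch along those lines using a Unix domain datagram socket; the socket path /tmp/longrun.sock and the one-character mode message are assumptions made up for the example, and error checking is omitted. longrun replaces sleep(3) with a 3-second select() timeout, and changemode just sends the new mode.

/* longrun.c (sketch): count as before, but wake up early if a new mode arrives */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <sys/select.h>

int main(void)
{
    int s = socket(AF_UNIX, SOCK_DGRAM, 0);
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strcpy(addr.sun_path, "/tmp/longrun.sock");   /* assumed path */
    unlink(addr.sun_path);
    bind(s, (struct sockaddr *)&addr, sizeof addr);

    int mode = 0, c = 0;
    while (1) {
        printf("\nrun # mode %d value : %d ", mode, c);
        fflush(stdout);
        if (c > 100) c = 0;
        c += (mode == 0) ? 1 : 2;

        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(s, &rfds);
        struct timeval tv = { 3, 0 };             /* replaces the old sleep(3) */
        if (select(s + 1, &rfds, NULL, NULL, &tv) > 0) {
            char m;
            if (recv(s, &m, 1, 0) == 1)
                mode = m - '0';                   /* e.g. '2' -> mode 2 */
        }
    }
}

/* changemode.c (sketch): send the new mode as a single character */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(int argc, char *argv[])
{
    if (argc < 2) { fprintf(stderr, "usage: %s <mode>\n", argv[0]); return 1; }
    int s = socket(AF_UNIX, SOCK_DGRAM, 0);
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strcpy(addr.sun_path, "/tmp/longrun.sock");   /* must match longrun's path */
    char m = argv[1][0];
    sendto(s, &m, 1, 0, (struct sockaddr *)&addr, sizeof addr);
    close(s);
    return 0;
}

Run longrun in one terminal and ./changemode 2 in another; at the next select() wake-up, longrun picks up the new mode and starts stepping by 2.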

As mentioned in other answers, you need some kind of inter-process communication. You can find more info on the topic in "Beej's Guide to Unix IPC" (it's a "classic"), available at:
http://beej.us/guide/bgipc/
Fernando

Related

Let a daemon simulate keypress with xdo

I'm trying to make a daemon simulate a keypress. I already got it working for a non-daemon process.
#include <xdo.h>
int main()
{
xdo_t * x = xdo_new(NULL);
xdo_enter_text_window(x, CURRENTWINDOW, "Hallo xdo!", 500000);
return 0;
}
If I try the same code for my daemon I get the following error
Error: Can't open display: (null)
Is there a way to still make it work with xdo or something else?
Your process must have the $DISPLAY environment variable set, specifying the X session to send these keys to, and your daemon doesn't seem to have that variable in its environment (hence the "(null)" in the error).
It also needs the necessary credentials, which means access to the session owner's ~/.Xauthority (typically via $XAUTHORITY) and to the sockets in /tmp/.X11-unix.
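A minimal sketch of what the daemon could do before calling xdo_new (link with -lxdo); the display name ":0" and the ~/.Xauthority path are assumptions for illustration, and in practice you would discover them from the target session:

#include <stdlib.h>
#include <xdo.h>

int main(void)
{
    /* Assumed values: adjust these to the actual session you want to target. */
    setenv("DISPLAY", ":0", 1);
    setenv("XAUTHORITY", "/home/user/.Xauthority", 1);

    xdo_t *x = xdo_new(NULL);               /* now picks up $DISPLAY */
    if (x == NULL)
        return 1;                           /* still can't open the display */
    xdo_enter_text_window(x, CURRENTWINDOW, "Hallo xdo!", 500000);
    xdo_free(x);
    return 0;
}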

posix_spawn pipe dmesg to python script

I've got several USB to 422 adapters in my test system. I've used FTProg to give each adapter a specific name: Sensor1, Sensor2, etc. They will all be plugged in at power on. I don't want to hard code each adapter to a specific ttyUSBx. I want the drivers to figure out which tty they need to use. I'm developing in C for a Linux system. My first thought was to do something like this in my startup code:
system("dmesg | find_usb.py");
The Python script would find the devices, since each one has a unique Product Description, then use the USB tree to associate each device with its ttyUSBx. The script would then create /tmp/USBDevs, which would just be a simple device:tty pairing that would be easy for the C code to search.
I've been told...DoN't UsE sYsTeM...use posix_spawn(). But I'm having problems getting the output of dmesg piped to my python script. This isn't working
char *my_args[] = {"dmesg", "|", "find_usb.py", NULL};
pid_t pid;
int status;

status = posix_spawn(&pid, "/bin/dmesg", NULL, NULL, my_args, NULL);
if (status == 0) {
    if (waitpid(pid, &status, 0) != -1) {
        printf("posix_spawn exited: %i\n", status);
    }
}
I've been trying to figure out how to do this with posix_spawn_file_actions(), but I'm not allowed to hit the peak of the 'Ballmer Curve' at work.
Thanks in advance
Instead of using /dev/ttyUSB* devices, write udev rules to generate named symlinks to the devices. For a brief how-to, see here. Basically, you'll have a udev rule for each device, ending with say SYMLINK+=Sensor-name, and in your program, use /dev/Sensor-name for each sensor. (I do recommend using the Sensor- prefix, noting the initial capital letter, as all device names are currently lowercase. This avoids any clashes with existing devices.)
These symlinks will then only exist when the matching device is plugged in, and will point to the correct device (/dev/ttyUSB* in this case). When the device is removed, udev automagically deletes the symlink also. Just make sure your udev rule identifies the device precisely (not just vendor:device, but serial number also). I'd expect the rule to look something like
SUBSYSTEM=="tty", ATTRS{idVendor}=="VVVV", ATTRS{idProduct}=="PPPP", ATTRS{serial}=="SSSSSSSS", SYMLINK+="Sensor-name"
where VVVV is the USB Vendor ID (four hexadecimal digits), PPPP is the USB Product ID (four hexadecimal digits), and SSSSSSSS is the serial number string. You can see these values using e.g. udevadm info -a -n /dev/ttyUSB* when the device is plugged in.
If you still insist on parsing dmesg output, using your own script is a good idea.
You could use FILE *handle = popen("dmesg | find_usb.py", "r"); and read from handle like it was a file. When complete, close the handle using int exitstatus = pclose(handle);. See man popen and man pclose for the details, and man 2 wait for the WIFEXITED(), WEXITSTATUS(), WIFSIGNALED(), WTERMSIG() macros you'll need to use to examine exitstatus (although in your case, I suppose you can just ignore any errors).
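For illustration, a rough sketch of that popen() approach, assuming find_usb.py is on the PATH and prints its device:tty pairs on stdout:

#include <stdio.h>
#include <sys/wait.h>

int main(void)
{
    FILE *handle = popen("dmesg | find_usb.py", "r");
    if (handle == NULL)
        return 1;

    char line[256];
    while (fgets(line, sizeof line, handle) != NULL)
        fputs(line, stdout);                /* or parse the device:tty pairs here */

    int exitstatus = pclose(handle);
    if (WIFEXITED(exitstatus))
        printf("pipeline exited with status %d\n", WEXITSTATUS(exitstatus));
    return 0;
}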
If you do want to use posix_spawn() (or roughly equivalently, fork() and execvp()), you'd need to set up at least one pipe (to read the output of the spawned command) – two if you spawn/fork+exec both dmesg and your Python script –, and that gets a bit more complicated. See man pipe for details on that. Personally, I would rewrite the Python script so that it executes dmesg itself internally, and only outputs the device name(s). With posix_spawn(), you'd init a posix_file_actions_t, with three actions: _adddup2() to duplicate the write end of the pipe to STDOUT_FILENO, and two _addclose()s to close both ends of the pipe. However, I myself prefer to use fork() and exec() instead, somewhat similar to the example by Glärbo in this answer.
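And a sketch of the posix_spawn() route outlined above: one pipe, one file-actions object with the _adddup2() and two _addclose() actions, spawning only dmesg and reading its output in the parent (the Python step would either run dmesg itself or be spawned the same way):

#include <stdio.h>
#include <unistd.h>
#include <spawn.h>
#include <sys/types.h>
#include <sys/wait.h>

extern char **environ;

int main(void)
{
    int pfd[2];
    if (pipe(pfd) == -1) { perror("pipe"); return 1; }

    posix_spawn_file_actions_t fa;
    posix_spawn_file_actions_init(&fa);
    /* In the child: stdout becomes the write end of the pipe, and both
       original pipe descriptors are closed. */
    posix_spawn_file_actions_adddup2(&fa, pfd[1], STDOUT_FILENO);
    posix_spawn_file_actions_addclose(&fa, pfd[0]);
    posix_spawn_file_actions_addclose(&fa, pfd[1]);

    char *args[] = { "dmesg", NULL };
    pid_t pid;
    int status = posix_spawnp(&pid, "dmesg", &fa, NULL, args, environ);
    posix_spawn_file_actions_destroy(&fa);
    close(pfd[1]);                       /* parent keeps only the read end */
    if (status != 0) { fprintf(stderr, "posix_spawnp failed: %d\n", status); return 1; }

    FILE *out = fdopen(pfd[0], "r");
    char line[256];
    while (fgets(line, sizeof line, out) != NULL)
        fputs(line, stdout);             /* feed this to the parser instead */
    fclose(out);
    waitpid(pid, &status, 0);
    return 0;
}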

How to make a C program run as a daemon in Ubuntu?

Hi, I am new to the Linux environment. I am trying to create a daemon process.
#include <stdio.h>

int sum(int a, int b);

int main(void)
{
    int a = 10, b = 10, c;
    c = sum(a, b);
    printf("%d\n", c);
    return 0;
}

int sum(int a, int b)
{
    return a + b;
}
I want to turn it into a daemon process. May I know how I can do this? Any help would be appreciated. Thank you.
A daemon generally doesn't use its standard input and output streams, so it is unclear how your program could be run as a daemon. And a daemon program usually doesn't have any terminal, so it cannot use things like clrscr. Read also the tty demystified page, and also daemon(7).
I recommend reading some good introduction to Linux programming, like the old freely downloadable ALP (or something newer). We can't explain all of it here, and you need to read an entire book. See also intro(2) and syscalls(2).
I also recommend reading more about OSes, e.g. the freely available Operating Systems: Three Easy Pieces textbook.
You could use the daemon(3) function in your C program to run it as a daemon (but then, you are likely to not have any input and output). You may want to log messages using syslog(3).
You might consider job control facilities of your shell. You could run your program in the background (e.g. type myprog myarg & in your interactive shell). You could use the batch command. However neither background processes nor batch jobs are technically daemons.
Perhaps you want to code some ONC-RPC or JSONRPC or Web API server and client. You'll find libraries for that. See also pipe(7), socket(7)
(take several days or several weeks to read much more)
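As a rough sketch of the daemon(3) plus syslog(3) route mentioned above (the identifier "sumdaemon" is just a made-up name, and error handling is minimal):

#define _DEFAULT_SOURCE          /* for daemon() with glibc */
#include <unistd.h>
#include <syslog.h>

int sum(int a, int b) { return a + b; }

int main(void)
{
    if (daemon(0, 0) == -1)      /* chdir to /, detach from the terminal */
        return 1;

    openlog("sumdaemon", LOG_PID, LOG_DAEMON);
    syslog(LOG_INFO, "sum = %d", sum(10, 10));   /* stdout is gone, so log instead */
    closelog();
    return 0;
}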
First, find out what the properties of a daemon process are. To my knowledge, a daemon process has these properties:
It is detached from its original parent (the parent exits and the daemon is re-parented to init).
The process itself is a session leader.
Its working directory is changed to the root directory.
Its file mode creation mask is set to zero.
It has no controlling terminal.
All terminal file descriptors (stdin, stdout, stderr) are closed.
Its working directory is on a filesystem that cannot be unmounted.
Implement the code by considering the above properties, which gives something like:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>

int i = 0;

int main(void)
{
    pid_t pid;

    pid = fork();
    if (pid != 0) {
        exit(0);            /* parent exits; the child keeps running in the background */
    }
    else
    {
        setsid();           /* become a session leader, drop the controlling terminal */
        chdir("/");         /* root directory, which can't be unmounted */
        umask(0);           /* clear the file mode creation mask */
        close(0);           /* detach from the terminal: close stdin, stdout, stderr */
        close(1);
        close(2);
        /* your background task goes here */
        while (1)
        {
            printf("i = %d \n", i);   /* note: stdout is closed, so this goes nowhere;
                                         use syslog(3) or a log file instead */
            i++;
            sleep(1);                 /* avoid spinning the CPU */
        }
    }
    return 0;
}
Or you can go through the man page of daemon():
int daemon(int nochdir, int noclose);
I hope it helps.
Instead of writing the code to make the C program a daemon, I would go with an already mature tool like supervisor:
http://supervisord.org/
I think the below will work:
screen cmd arg1 arg2
You can also try
nohup cmd arg1

How to configure GDB in Eclipse such that all processes keep on running, including the process being debugged?

I am new to C programming and I have been trying hard to customize an open-source tool written in C according to my organization's needs.
IDE: Eclipse,
Debugger: GDB,
OS: RHEL
The tool is multi-process in nature (the main process executes first and spawns several child processes using fork()) and they share values at run time.
While debugging in Eclipse (using GDB), I find that only the process being debugged is running, while the other processes are suspended. Thus, the one running process is not able to do its intended job because the other processes are suspended.
I saw somewhere that using the GDB MI command "set non-stop on" could keep the other processes running. I used that command in the gdbinit file shown below:
Note: I have overridden the above .gdbinit file with another gdbinit, because the default .gdbinit was not letting me debug child processes, as the debugger terminates after the execution of the main process.
But unfortunately, the debugger stops responding after using this command.
Please see below the commands I am using in the gdbinit file:
Commenting out non-stop enables Eclipse to continue the usual debugging of the current process.
Adding: you can see in the image below that only one process is running while the others are suspended.
Can anyone please help me to configure GDB according to my requirement?
Thanks in advance.
OK #n.m.: actually, you were right. I should have spent more time understanding the flow of the code.
The tool first creates 3 processes, and then the third process creates 5 threads and keeps calling wait() for any child to terminate.
The top 5 entries (highlighted in blue) shown in the image below are threads, and they are children of process ID 17991.
The first two processes are intended to initiate the basic functionality of the tool, and hence they just call exit(0). You can see below:
if (0 != (pid = zbx_fork()))
    exit(0);

setsid();
signal(SIGHUP, SIG_IGN);

if (0 != (pid = zbx_fork()))
    exit(0);
That was the reason I was not actually able to step into these 3 processes. Whenever I tried to do so, the whole main process terminated immediately, which consequently led to the termination of all the other processes.
So, I learned that I was supposed to "step-into" threads only. And yes, actually I can now debug :)
And this could be achieved only after I removed the MI command "set follow-fork-mode child". So I just used the default .gdbinit file with "Automatically debug forked process" enabled.
Thanks everyone for your input. Stackoverflow is an awesome place to learn and share. :)

Making sure two processes interleave

In a C program on Linux, I call fork() followed by execve() twice to create two processes running two separate programs. How do I make sure that the execution of the two child processes interleaves?
Thanks
I tried to do the above task as an answer given below suggested, but it seems that on encountering sched_setscheduler() the process hangs. Code included below... replay1 and replay2 are two programs which simply print "Replay1" and "Replay2" respectively.
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <signal.h>
#include <sched.h>

int main(void)
{
    int i, pid[5], pidparent, new = 0;
    char *newargv1[] = {"./replay1", NULL};
    char *newargv2[] = {"./replay2", NULL};
    char *newenviron[] = {NULL};
    struct sched_param mysched;

    mysched.sched_priority = 1;
    sched_setscheduler(0, SCHED_FIFO, &mysched);

    pidparent = getpid();
    for (i = 0; i < 2; i++)
    {
        if (getpid() == pidparent)
        {
            pid[i] = fork();
            if (pid[i] != 0)
                kill(pid[i], SIGSTOP);
            if (i == 0 && pid[i] == 0)
                execve(newargv1[0], newargv1, newenviron);
            if (i == 1 && pid[i] == 0)
                execve(newargv2[0], newargv2, newenviron);
        }
    }

    for (i = 0; i < 10; i++)
    {
        if (new == 0)
            new = 1;
        else
            new = 0;
        kill(pid[new], SIGCONT);
        sleep(100);
        kill(pid[new], SIGSTOP);
    }
    return 0;
}
Since you need random interleaving, here's a horrible hack to do it:
Immediately after forking, send a SIGSTOP to each application.
Set your parent application to have real-time priority with sched_setscheduler. This will allow you to have more fine-grained timers.
Send a SIGCONT to one of the child processes.
Loop: Wait a random, short time. Send a SIGSTOP to the currently-running application, and a SIGCONT to the other. Repeat.
This will help force execution to interleave. It will also make things quite slow. You may also want to try using sched_setaffinity to assign each process to a different CPU (if you have a dual-core or hyperthreaded CPU) - this will cause them to effectively run simultaneously, modulo wait times for I/O. I/O wait times (which could cause them to wait for the hard disk, at which point they're likely to wake up sequentially and thus not interleave) can be avoided by making sure whatever data they're manipulating is on a ramdisk (on linux, use tmpfs).
If this is too coarse-grained for you, you can use ptrace's PTRACE_SINGLESTEP operation to step one CPU operation at a time, interleaving as you see fit.
As this is for testing purposes, you could place sched_yield(); calls after every line of code in the child processes.
Another potential idea is to have a parent process ptrace() the child processes, and use PTRACE_SINGLESTEP to interleave the two process's execution on an instruction-by-instruction basis.
If you need to synchronize them and they are your own processes, use semaphores. If you do not have access to the source, then there is no way to synchronize them.
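For illustration, a minimal sketch of strict ping-pong interleaving between two cooperating processes using POSIX named semaphores; the names /turn_a and /turn_b are made up for the example (compile with -pthread, and call sem_unlink() afterwards to clean up):

/* interleave_sem.c: run as "./interleave_sem A" in one shell and
   "./interleave_sem B" in another; they print strictly alternately. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <semaphore.h>

int main(int argc, char *argv[])
{
    int is_a = (argc > 1 && strcmp(argv[1], "A") == 0);

    /* /turn_a starts at 1 so process A goes first; /turn_b starts at 0 */
    sem_t *turn_a = sem_open("/turn_a", O_CREAT, 0644, 1);
    sem_t *turn_b = sem_open("/turn_b", O_CREAT, 0644, 0);

    sem_t *mine   = is_a ? turn_a : turn_b;   /* wait for my turn */
    sem_t *theirs = is_a ? turn_b : turn_a;   /* then hand the turn over */

    for (int i = 0; i < 5; i++) {
        sem_wait(mine);
        printf("%s step %d\n", is_a ? "A" : "B", i);
        sem_post(theirs);
    }

    sem_close(turn_a);
    sem_close(turn_b);
    return 0;
}

This gives strict alternation, which may or may not be what "interleave" means for your test; for the original question's goal (two exec'd programs you don't control), the SIGSTOP/SIGCONT approach above is closer.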
If your aim is to do concurrency testing, I know of only two techniques:
Test exact scenarios using synchronization. For example, process 1 opens a connection and executes a query, then process 2 comes in and executes a query, then process1 gets active again and gets the results, etc. You do this with synchronization techniques mentioned by others. However, getting good test scenarios is very difficult. I have rarely used this method in the past.
In random you trust: fire up a high number of test processes that execute a long running test suite. I used this method for both multithreading and multiprocess testing (my case was testing device driver access from multiple processes without blue screening out). Usually you want to make the number of processes and number of iterations of the test suite per process configurable so that you can either do a quick pass or do a longer test before a release (running this kind of test with 10 processes for 10-12 hours was not uncommon for us). A usual run for this sort of testing is measured in hours. You just fire up the processes, let them run for a few hours, and hope that they will catch all the timing windows. The interleaving is usually handled by the OS, so you don't really need to worry about it in the test processes.
Job control is much simpler in Bash than in C. Try this:
#! /bin/bash
stop ()
{
echo "$1 stopping"
kill -SIGSTOP $2
}
cont ()
{
echo "$1 continuing"
kill -SIGCONT $2
}
replay1 ()
{
while sleep 1 ; do echo "replay 1 running" ; done
}
replay2 ()
{
while sleep 1 ; do echo "replay 2 running" ; done
}
replay1 &
P1=$!
stop "replay 1" $P1
replay2 &
P2=$!
stop "replay 2" $P2
trap "kill $P1;kill $P2" EXIT
while sleep 1 ; do
cont "replay 1 " $P1
cont "replay 2" $P2
sleep 3
stop "replay 1 " $P1
stop "replay 2" $P2
done
The two processes are running in parallel:
$ ./interleave.sh
replay 1 stopping
replay 2 stopping
replay 1 continuing
replay 2 continuing
replay 2 running
replay 1 running
replay 1 running
replay 2 running
replay 1 stopping
replay 2 stopping
replay 1 continuing
replay 2 continuing
replay 1 running
replay 2 running
replay 2 running
replay 1 running
replay 2 running
replay 1 running
replay 1 stopping
replay 2 stopping
replay 1 continuing
replay 2 continuing
replay 1 running
replay 2 running
replay 1 running
replay 2 running
replay 1 running
replay 2 running
replay 1 stopping
replay 2 stopping
^C
