I'm writing a C program in which I need to pass messages between child processes and the main process. But the thing is, I need to do it without using functions like msgget() and msgsnd().
How can I implement this? What techniques can I use?
There are multiple ways to communicate with child processes; the right choice depends on your application, and very much on its level of abstraction.
-- If the level of abstraction is low:
If you need very fast communication, you could use shared memory (e.g. shm_open()). But that would be complicated to synchronize correctly.
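For reference, a minimal sketch of setting up such a region with shm_open() (the name "/my_shm" is made up for illustration, and the synchronization a real program needs is deliberately omitted):

#include <fcntl.h>     /* O_* constants */
#include <string.h>
#include <sys/mman.h>  /* shm_open, mmap */
#include <sys/stat.h>  /* mode constants */
#include <unistd.h>    /* ftruncate, close */

int main(void) {
    int fd = shm_open("/my_shm", O_CREAT | O_RDWR, 0600);  /* made-up name */
    if (fd == -1) return 1;
    if (ftruncate(fd, 4096) == -1) return 1;  /* size the region */
    char *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) return 1;
    /* Any process that shm_open()s the same name sees this buffer.
       Real code must synchronize access (e.g. with a POSIX semaphore). */
    strcpy(mem, "hello from shared memory");
    munmap(mem, 4096);
    close(fd);
    shm_unlink("/my_shm");  /* remove the name when done */
    return 0;
}

(On older glibc you may need to link with -lrt.)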
The most common method, and the one I'd use if I were in your shoes, is pipes.
They're simple and fast, and since pipe file descriptors are supported by epoll() and similar asynchronous I/O APIs, you can take advantage of that.
Another plus is that, if your application grows, and you need to communicate with remote processes (processes that are not in your local machine), adapting pipes to sockets is very easy, basically it's still the same reading/writing from/to a file descriptor.
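As a rough sketch (error handling mostly omitted), the basic pipe() + fork() pattern for parent-to-child messaging looks like this:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];  /* fds[0] = read end, fds[1] = write end */
    if (pipe(fds) == -1) return 1;
    pid_t pid = fork();
    if (pid == 0) {            /* child: reads from the pipe */
        close(fds[1]);         /* close the unused write end */
        char buf[128];
        ssize_t n = read(fds[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child got: %s\n", buf);
        }
        close(fds[0]);
        return 0;
    }
    close(fds[0]);             /* parent: writes to the pipe */
    const char *msg = "hello child";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);
    return 0;
}

For two-way communication you'd create two pipes (or use socketpair()).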
Also, Unix-domain sockets (roughly what other platforms call "named pipes") let you have a server process that creates a listening socket with a well-known name (e.g. an entry in the filesystem, such as /tmp/my_socket), and all clients on the local machine can connect to that.
Pipes, networking sockets, and Unix-domain sockets are largely interchangeable solutions because, as said before, they all involve reading/writing data from/to a file descriptor, so you can reuse the code.
The disadvantage of a file descriptor is that you're writing to a stream of bytes, so you need to implement the message framing yourself in order to "unstream" your messages (marshalling/unmarshalling). That's not complicated in most cases, and it also depends on the kind of messages you're sending.
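A common way to do that framing, sketched below, is a 4-byte length prefix before each payload; the reader loops until it has the whole message (the helper names here are mine, not a standard API):

#include <arpa/inet.h>  /* htonl, ntohl */
#include <stdint.h>
#include <unistd.h>

/* Read exactly n bytes, looping over short reads. */
static int read_all(int fd, void *buf, size_t n) {
    char *p = buf;
    while (n > 0) {
        ssize_t r = read(fd, p, n);
        if (r <= 0) return -1;  /* error or EOF */
        p += r;
        n -= (size_t)r;
    }
    return 0;
}

/* Send one message as: 4-byte big-endian length, then the payload.
   (A production version would also loop over short writes.) */
static int send_msg(int fd, const void *payload, uint32_t len) {
    uint32_t len_net = htonl(len);
    if (write(fd, &len_net, sizeof len_net) != sizeof len_net) return -1;
    if (write(fd, payload, len) != (ssize_t)len) return -1;
    return 0;
}

/* Receive one message into buf (capacity cap); returns its length or -1. */
static int recv_msg(int fd, char *buf, size_t cap) {
    uint32_t len_net;
    if (read_all(fd, &len_net, sizeof len_net) == -1) return -1;
    uint32_t len = ntohl(len_net);
    if (len > cap) return -1;
    if (read_all(fd, buf, len) == -1) return -1;
    return (int)len;
}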
I'd pass on other solutions such as memory mapped files and so on.
-- If the level of abstraction is higher:
You could use a third-party message-passing system, such as RabbitMQ, ZMQ, and so on.
Related
I am implementing a very basic C server that allows clients to chat. Right now I am using fork(), but I am having trouble having clients see each other's messages.
It also seems that all clients get the same file descriptor from accept(). Basically, I have a while loop where I test if someone wants to connect with select(), accept() their connection, and fork(). After that I read input and try to pass them to all users (whom I am keeping in a list). I can copy/paste my code if necessary.
So, is it possible to have the clients communicate with processes, or do I have to use pthreads?
Inter-process communication (IPC) in general doesn't care about client vs. server roles (except at the connect phase). A given process can have both a client and a server role (on different sockets), and would use poll(2) or the older select() on several sockets in some event loop.
Notice that processes each have their own virtual address space, while threads share the same virtual address space (that of their containing process). Read a pthreads tutorial and some book on POSIX programming (perhaps the old ALP). Be aware that a lot of information regarding processes can be queried on Linux through /proc/ (see proc(5) for more). In particular, the virtual address space of the process with pid 1234 can be obtained through /proc/1234/maps and its opened file descriptors through /proc/1234/fd/ and /proc/1234/fdinfo/ etc.
However, it is simpler to code a common server keeping the shared state, and dispatching messages to clients.
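A sketch of that single-process pattern for the chat case (the listening socket is assumed to be set up already; error handling is trimmed): keep every client fd in an array, poll() them all, and relay what one client sends to all the others.

#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_CLIENTS 64

void chat_loop(int listen_fd) {   /* listen_fd: a bound, listening TCP socket */
    struct pollfd fds[MAX_CLIENTS + 1];
    int nfds = 1;
    fds[0].fd = listen_fd;
    fds[0].events = POLLIN;
    for (;;) {
        if (poll(fds, nfds, -1) == -1) break;
        if ((fds[0].revents & POLLIN) && nfds < MAX_CLIENTS + 1) {
            int c = accept(listen_fd, NULL, NULL);  /* new client */
            if (c != -1) {
                fds[nfds].fd = c;
                fds[nfds].events = POLLIN;
                nfds++;
            }
        }
        for (int i = 1; i < nfds; i++) {  /* relay to everyone else */
            if (!(fds[i].revents & POLLIN)) continue;
            char buf[512];
            ssize_t n = read(fds[i].fd, buf, sizeof buf);
            if (n <= 0) {                 /* disconnect: drop the slot */
                close(fds[i].fd);
                fds[i] = fds[--nfds];
                i--;
                continue;
            }
            for (int j = 1; j < nfds; j++)
                if (j != i)
                    write(fds[j].fd, buf, (size_t)n);
        }
    }
}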
You could design a protocol where the clients have some way to initiate that IPC. For example, if all the processes are on the same machine, you could have a protocol which transmits a file path used as a unix(7) socket address, or as a fifo(7), and each "client" process later initiates (with some connect) a direct communication with another "client". It might be unwise to do so.
Look also into libraries like 0mq. They are often free software, so you can study their source code.
I would like to inject a shared library into a process (I'm using ptrace() to do that part) and then be able to get output from the shared library back into the debugger I'm writing using some form of IPC. My instinct is to use a pipe, but the only real requirements are:
I don't want to store anything on the filesystem to facilitate the communication as it will only last as long as the debugger is running.
I want a portable Unix solution (so Unix-standard syscalls would be ideal).
The problem I'm running into is that as far as I can see, if I call pipe() in the debugger, there is no way to pass the "sending" end of the pipe to the target process, and vice versa with the receiving end. I could set up shared memory, but I think that would require creating a file somewhere so I could reference the memory segment from both processes. How do other debuggers capture output when they attach to a process after it has already begun running?
I assume you need a debugging system for your business-logic code (i.e., the application). From my experience, this kind of problem is tackled with the system design explained below. (My experience is in C++; the same should hold for a C-based system.)
Have a logger system (a separate process). This contains the logger manager and the logging code, which take responsibility for dumping the log to disk.
Each application instance (a process running on Unix) communicates with this process over sockets. So you can have your own messaging protocol and talk to the logger system over socket-based communication.
Later, give each application a switch that can turn logging on and off, so you can have a tool that sends a signal to the process to toggle message logging.
At a high level, this is the most generic way to develop a logging system. If you need more information, do comment and I will try to answer.
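As a minimal sketch of the client side of such a design (the socket path /tmp/logger.sock and the function name are made up), each application could send a log line to the logger process over a Unix-domain socket like this:

#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Hypothetical helper: send one message to the logger process. */
int log_message(const char *msg) {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd == -1) return -1;
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/logger.sock", sizeof addr.sun_path - 1);
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == -1) {
        close(fd);
        return -1;
    }
    write(fd, msg, strlen(msg));
    close(fd);
    return 0;
}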
Using better search terms showed me this question is a duplicate of these:
Can I share a file descriptor to another process on linux or are they local to the process?
Can I open a socket and pass it to another process in Linux
How to use sendmsg() to send a file-descriptor via sockets between 2 processes?
The top answers were what I was looking for. You can use a Unix-domain socket to hand a file descriptor off to a different process. This could work either from debugger to library or vice versa, but is probably easier to do from debugger to library because the debugger can write the socket's address into the target process while it injects the library.
However, once I pass the socket's address into the target process, I might as well just use the socket itself instead of using a pipe in addition.
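The core of the trick is a sendmsg() call carrying an SCM_RIGHTS control message over the Unix-domain socket. A minimal sketch of the sending side (the receiving side mirrors it with recvmsg() and reads the new descriptor out of CMSG_DATA()):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send file descriptor fd_to_send over the connected AF_UNIX socket sock. */
int send_fd(int sock, int fd_to_send) {
    char data = 'F';  /* must send at least one byte of normal data */
    struct iovec iov = { .iov_base = &data, .iov_len = 1 };
    char ctrl[CMSG_SPACE(sizeof(int))];
    memset(ctrl, 0, sizeof ctrl);
    struct msghdr msg = {0};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof ctrl;
    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;          /* "this message carries fds" */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));
    return sendmsg(sock, &msg, 0) == -1 ? -1 : 0;
}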
I wrote a program that creates a TCP and UDP socket in C and starts both servers up. The goal of the application is to monitor requests over the TCP socket as to what UDP packets to send it (i.e. monitor for something like "0x01 0x02" and if I see it, then have the UDP server parse the payload, and forward it over to the TCP server for processing). The problem is, the UDP server will be busy keeping another device up, literally sending thousands of packets back and forth with this device. So what is the best way to continuously monitor requests from the TCP server, but send it certain payloads from the UDP server when requested since the UDP server will be busy?
I looked into pthreads with semaphores and/or mutex (not sure all the socket operations are thread safe, though, and if this is the right way to approach it) as well as fork / pipe. Forking the UDP server off as a child process seems easy enough, but I don't see exactly how I would be passing the kind of data I need among both servers (need request data from TCP and payload data from the UDP).
Firstly, would it make sense to put these two servers into one program? If so, you won't have to communicate between processes, and the whole logic becomes substantially easier. You will have to think about doing asynchronous input and output, and the select() function is designed for just this. There will be many explanations around on how to do this, and a quick look finds this page.
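A sketch of that single-process shape for this case (tcp_fd is assumed to be a listening TCP socket and udp_fd a bound UDP socket; the actual parsing is elided):

#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

void serve(int tcp_fd, int udp_fd) {
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(tcp_fd, &rfds);
        FD_SET(udp_fd, &rfds);
        int maxfd = tcp_fd > udp_fd ? tcp_fd : udp_fd;
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) == -1) break;
        if (FD_ISSET(udp_fd, &rfds)) {
            char buf[1500];
            ssize_t n = recvfrom(udp_fd, buf, sizeof buf, 0, NULL, NULL);
            (void)n;  /* ... parse the UDP payload, stash what TCP asked for ... */
        }
        if (FD_ISSET(tcp_fd, &rfds)) {
            int c = accept(tcp_fd, NULL, NULL);
            /* ... read the request, reply with the stashed payloads ... */
            if (c != -1) close(c);
        }
    }
}

Since both sockets live in one process, "passing data between the servers" becomes ordinary shared variables.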
However, if you must have two separate processes, then you will need to choose a mechanism for inter-process communication, of which there are several, and your choice will be affected by your operating system. A pipe, if available, might be suitable, as might a Unix named pipe. Or you could look into third-party message passing frameworks, or just use shared memory and/or semaphores (but be very careful!).
What you should look at is libevent; with anything else you are reinventing the wheel, writing this low-level code yourself. Here is a tutorial; see also Google and Krugle.
You should also use some predefined protocol between the servers. There are lots to choose from, ranging from the extremely simple XDR to Protocol Buffers.
You could use pipes on Unix. See http://tldp.org/LDP/lpg/node11.html
Well, you certainly picked an interesting introduction to C!
You might try shared memory. What OS?
If I use UDP sockets for interprocess communication, can I expect that all sent data is received by the other process in the same order?
I know this is not true for UDP in general.
No. I have been bitten by this before. You may wonder how it can possibly fail, but you'll run into issues of buffers of pending packets filling up, and consequently packets will be dropped. How the network subsystem drops packets is implementation-dependent and not specified anywhere.
In short, no. You shouldn't be making any assumptions about the order of data received on a UDP socket, even over localhost. It might work, it might not, and it's not guaranteed to.
No, there is no such guarantee, even with local sockets. If you want an IPC mechanism that guarantees in-order delivery, you might look into pipes (note that popen() only opens a pipe in one direction; for full-duplex communication, use two pipes or a socketpair()). A pipe to the child process guarantees in-order delivery and can be used with synchronous or asynchronous I/O (select() or poll()), depending on how you want to build the application.
On Unix there are other options such as Unix-domain sockets or System V message queues (some of which may be faster), but reading/writing from a pipe is dead simple and works. As a bonus, it's easy to test your server process because it is just reading and writing from stdio.
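If you do want both directions over one descriptor pair, socketpair() is the usual tool; a rough sketch of parent/child ping-pong (error handling trimmed):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int sv[2];  /* sv[0] for the parent, sv[1] for the child */
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) return 1;
    if (fork() == 0) {           /* child */
        close(sv[0]);
        char buf[64];
        ssize_t n = read(sv[1], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child got: %s\n", buf);
            write(sv[1], "ack", 3);  /* reply on the same descriptor */
        }
        close(sv[1]);
        return 0;
    }
    close(sv[1]);                /* parent */
    write(sv[0], "ping", 4);
    char buf[64];
    ssize_t n = read(sv[0], buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("parent got: %s\n", buf);
    }
    close(sv[0]);
    wait(NULL);
    return 0;
}

SOCK_STREAM gives the same in-order, reliable delivery as a pipe.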
On Windows you could look into named pipes, which work somewhat differently from their Unix namesake but are used for precisely this sort of interprocess communication.
Loopback UDP is incredibly unreliable on many platforms, you can easily see 50%+ data loss. Various excuses have been given to the effect that there are far better transport mechanisms to use.
There are many middleware stacks available these days to make IPC easier to use and cross platform. Have a look at something like ZeroMQ or 29 West's LBM which use the same API for intra-process, inter-process (IPC), and network communications.
The socket interface will probably not flow control the originator of the data, so you will probably see reliable transmission if you have higher level flow control but there is always the possibility that a memory crunch could still cause a dropped datagram.
Without flow control limiting kernel memory allocation for datagrams I imagine it will be just as unreliable as network UDP.
What are they and how do they work?
Context happens to be SQL Server
Both on Windows and POSIX systems, named pipes provide a way for inter-process communication among processes running on the same machine. What named pipes give you is a way to send your data without the performance penalty of involving the network stack.
Just like you have a server listening on an IP address/port for incoming requests, a server can also set up a named pipe which listens for requests. In either case, the client process (or the DB access library) must know the specific address (or pipe name) to send the request to. Often a commonly used default exists (much like port 80 for HTTP; SQL Server uses port 1433 in TCP/IP and \\.\pipe\sql\query for its named pipe).
By setting up additional named pipes, you can have multiple DB servers running, each with its own request listeners.
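For a flavor of the Win32 side, here is a minimal single-client named-pipe server sketch (the pipe name \\.\pipe\demo_pipe is made up for illustration):

#include <stdio.h>
#include <windows.h>

int main(void) {
    HANDLE hPipe = CreateNamedPipeA(
        "\\\\.\\pipe\\demo_pipe",  /* made-up pipe name */
        PIPE_ACCESS_DUPLEX,
        PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
        1,          /* one instance */
        512, 512,   /* out/in buffer sizes */
        0, NULL);   /* default timeout and security */
    if (hPipe == INVALID_HANDLE_VALUE) return 1;
    if (ConnectNamedPipe(hPipe, NULL)) {  /* wait for a client */
        char buf[512];
        DWORD nread;
        if (ReadFile(hPipe, buf, sizeof buf - 1, &nread, NULL)) {
            buf[nread] = '\0';
            printf("got: %s\n", buf);
        }
    }
    CloseHandle(hPipe);
    return 0;
}

A client opens the same name with CreateFileA() and then uses ReadFile()/WriteFile() as with any handle.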
The advantage of named pipes is that they are usually much faster and free up network stack resources.
--
BTW, in the Windows world, you can also have named pipes to remote machines -- but in that case, the named pipe is transported over TCP/IP, so you will lose performance. Use named pipes for local machine communication.
Unix and Windows both have things called "Named pipes", but they behave differently. On Unix, a named pipe is a one-way street which typically has just one reader and one writer - the writer writes, and the reader reads, you get it?
On Windows, the thing called a "Named pipe" is an IPC object more like a TCP socket - things can flow both ways and there is some metadata (You can obtain the credentials of the thing on the other end etc).
Unix named pipes appear as a special file in the filesystem and can be accessed with normal file IO commands including the shell. Windows ones don't, and need to be opened with a special system call (after which they behave mostly like a normal win32 handle).
Even more confusing, Unix has something called a "Unix socket" or AF_UNIX socket, which works more like (but not completely like) a win32 "named pipe", being bidirectional.
Linux Pipes
First In First Out (FIFO) interprocess communication mechanism.
Unnamed Pipes
On the command line, represented by a "|" between two commands.
Named Pipes
A FIFO special file. Once created, you can use the pipe just like a normal file (open, close, write, read, etc.).
To create a named pipe, called "myPipe", from the command line (man page):
mkfifo myPipe
To create a named pipe from C, where "pathname" is the name you would like the pipe to have and "mode" contains the permissions you want the pipe to have (man page):
#include <sys/types.h>
#include <sys/stat.h>
int mkfifo(const char *pathname, mode_t mode);
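A tiny usage sketch (the path /tmp/myPipe is arbitrary): create the FIFO, then open and write to it like any file; a reader in another process (even "cat /tmp/myPipe" in a shell) opens the same path.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    if (mkfifo("/tmp/myPipe", 0666) == -1) {  /* mode is filtered by umask */
        perror("mkfifo");
        return 1;
    }
    /* open() blocks until some process opens the FIFO for reading */
    int fd = open("/tmp/myPipe", O_WRONLY);
    if (fd == -1) return 1;
    const char *msg = "hello through the FIFO\n";
    write(fd, msg, strlen(msg));
    close(fd);
    unlink("/tmp/myPipe");  /* named pipes persist until unlinked */
    return 0;
}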
According to Wikipedia:
[...] A traditional pipe is "unnamed" because it exists anonymously and persists only for as long as the process is running. A named pipe is system-persistent and exists beyond the life of the process and must be "unlinked" or deleted once it is no longer being used. Processes generally attach to the named pipe (usually appearing as a file) to perform IPC (inter-process communication).
Compare
echo "test" | wc
to
mknod apipe p
wc apipe
wc will block until
echo "test" > apipe
executes
This is an excerpt from TechNet (so I'm not sure why the marked answer says named pipes are faster?):
Named Pipes vs. TCP/IP Sockets
In a fast local area network (LAN) environment, Transmission Control Protocol/Internet Protocol (TCP/IP) Sockets and Named Pipes clients are comparable with regard to performance. However, the performance difference between the TCP/IP Sockets and Named Pipes clients becomes apparent with slower networks, such as across wide area networks (WANs) or dial-up networks. This is because of the different ways the interprocess communication (IPC) mechanisms communicate between peers.
For named pipes, network communications are typically more interactive. A peer does not send data until another peer asks for it using a read command. A network read typically involves a series of peek named pipes messages before it starts to read the data. These can be very costly in a slow network and cause excessive network traffic, which in turn affects other network clients.
It is also important to clarify if you are talking about local pipes or network pipes. If the server application is running locally on the computer that is running an instance of SQL Server, the local Named Pipes protocol is an option. Local named pipes runs in kernel mode and is very fast.
For TCP/IP Sockets, data transmissions are more streamlined and have less overhead. Data transmissions can also take advantage of TCP/IP Sockets performance enhancement mechanisms such as windowing, delayed acknowledgements, and so on. This can be very helpful in a slow network. Depending on the type of applications, such performance differences can be significant.
TCP/IP Sockets also support a backlog queue. This can provide a limited smoothing effect compared to named pipes that could lead to pipe-busy errors when you are trying to connect to SQL Server.
Generally, TCP/IP is preferred in a slow LAN, WAN, or dial-up network, whereas named pipes can be a better choice when network speed is not the issue, as it offers more functionality, ease of use, and configuration options.
Pipes are a way of streaming data between applications. Under Linux I use this all the time to stream the output of one process into another. This is anonymous because the destination app has no idea where that input-stream comes from. It doesn't need to.
A named pipe is just a way of actively hooking onto an existing pipe and hoovering-up its data. It's for situations where the provider doesn't know what clients will be eating the data.
Inter-process communication (mostly) for Windows applications. Similar to using sockets to communicate between applications on Unix.
MSDN
Named pipes in a Unix/Linux context can be used to make two different shells communicate, since a shell can't share anything with another.
Furthermore, one script instantiated twice in the same shell can't share anything between the two instances. I found a use for named pipes when coding a daemon that contains start() and stop() functions, where I wanted to use the same script to perform both actions.
Without named pipes (or some kind of semaphore), starting the script in the background is not a problem. The trouble is that when it finishes, you just can't access the background instance anymore.
So when you want to send it the stop command, you can't: running the same script without named pipes and calling the stop() function won't do anything, since you are actually running another instance.
The solution was to implement two pipes, one READ and one WRITE, when you start the daemon. Then have it, among its other tasks, listen to the READ pipe. The stop() function then writes a message into the pipe, which is handled by the script running in the background, which performs an exit 0. This way the second instance of the same script has only one task to do: tell the first instance to stop.
This way one and only one script can start and stop itself.
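The same start/stop idea, sketched in C rather than shell (the control-pipe path is made up): the daemon blocks reading the FIFO, and a second invocation simply writes "stop" into it.

#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

#define CTL_PIPE "/tmp/mydaemon.ctl"  /* made-up control pipe path */

static void run_daemon(void) {
    mkfifo(CTL_PIPE, 0600);                 /* ignore EEXIST for brevity */
    for (;;) {
        int fd = open(CTL_PIPE, O_RDONLY);  /* blocks until a writer appears */
        char cmd[16];
        ssize_t n = read(fd, cmd, sizeof cmd - 1);
        close(fd);
        if (n > 0 && strncmp(cmd, "stop", 4) == 0) break;
        /* ... do the daemon's real work between commands ... */
    }
    unlink(CTL_PIPE);
}

static void send_stop(void) {               /* run from the second instance */
    int fd = open(CTL_PIPE, O_WRONLY);
    if (fd != -1) {
        write(fd, "stop", 4);
        close(fd);
    }
}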
Of course there are different ways to do it, for example by triggering the stop via a touch. But this one is nice and interesting to code.
Named pipes are a Windows mechanism for inter-process communication. In the case of SQL Server, if the server is on the same machine as the client, it is possible to use named pipes to transfer the data, as opposed to TCP/IP.