I wrote a program in C that creates a TCP and a UDP socket and starts both servers up. The goal of the application is to monitor requests arriving on the TCP socket that tell it which UDP packets to forward (i.e. watch for something like "0x01 0x02", and if I see it, have the UDP server parse the payload and forward it over to the TCP server for processing). The problem is that the UDP server will be busy keeping another device up, literally sending thousands of packets back and forth with that device. So what is the best way to continuously monitor requests from the TCP server, but send it certain payloads from the UDP server when requested, given that the UDP server will be busy?
I looked into pthreads with semaphores and/or mutexes (not sure all the socket operations are thread-safe, though, or whether this is the right way to approach it), as well as fork/pipe. Forking the UDP server off as a child process seems easy enough, but I don't see exactly how I would pass the kind of data I need between both servers (I need request data from the TCP side and payload data from the UDP side).
Firstly, would it make sense to put these two servers into one program? If so, you won't have to communicate between processes, and the whole logic becomes substantially simpler. You will have to think about doing asynchronous input and output, and the select() function is designed for exactly this. There are many explanations around on how to do this, and a quick look finds this page.
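To sketch what that single-process approach looks like, here is a minimal select() loop that watches both a TCP listening socket and a UDP socket; the ports and buffer size are arbitrary placeholders, not taken from the question:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>

int main(void)
{
    int tcp_fd = socket(AF_INET, SOCK_STREAM, 0);   /* TCP listening socket */
    int udp_fd = socket(AF_INET, SOCK_DGRAM, 0);    /* UDP socket */
    if (tcp_fd < 0 || udp_fd < 0) { perror("socket"); exit(1); }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);

    addr.sin_port = htons(5000);                    /* placeholder TCP port */
    if (bind(tcp_fd, (struct sockaddr *)&addr, sizeof addr) < 0) { perror("bind tcp"); exit(1); }
    listen(tcp_fd, 5);

    addr.sin_port = htons(5001);                    /* placeholder UDP port */
    if (bind(udp_fd, (struct sockaddr *)&addr, sizeof addr) < 0) { perror("bind udp"); exit(1); }

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(tcp_fd, &rfds);
        FD_SET(udp_fd, &rfds);
        int maxfd = tcp_fd > udp_fd ? tcp_fd : udp_fd;

        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0) { perror("select"); exit(1); }

        if (FD_ISSET(tcp_fd, &rfds)) {
            /* a TCP client is connecting; a real program would keep the
               accepted fd and watch it for requests as well */
            int client = accept(tcp_fd, NULL, NULL);
            if (client >= 0) close(client);
        }
        if (FD_ISSET(udp_fd, &rfds)) {
            char buf[1500];
            ssize_t n = recvfrom(udp_fd, buf, sizeof buf, 0, NULL, NULL);
            if (n > 0) {
                /* parse the datagram and decide whether to hand it to the TCP side */
            }
        }
    }
}

In a real program you would also add each accepted TCP client descriptor to the fd_set so requests from the TCP side can be watched alongside the UDP traffic.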
However, if you must have two separate processes, then you will need to choose a mechanism for inter-process communication, of which there are several, and your choice will be affected by your operating system. A pipe, if available, might be suitable, as might a Unix named pipe. Or you could look into third-party message passing frameworks, or just use shared memory and/or semaphores (but be very careful!).
You should look at libevent; with anything else you are reinventing the wheel, writing this low-level code yourself. Here is a Tutorial, Google, Krugle.
Also, you should use some predefined protocol between the servers. There are lots to choose from, ranging from the extremely simple XDR to Protocol Buffers.
You could use pipes on Unix. See http://tldp.org/LDP/lpg/node11.html
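For example, a bare-bones pipe()/fork() sketch (one direction only; for two-way traffic you would use two pipes or a socketpair()):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];                      /* fds[0] = read end, fds[1] = write end */
    if (pipe(fds) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                  /* child: reads what the parent sends */
        close(fds[1]);
        char buf[128];
        ssize_t n = read(fds[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child got: %s\n", buf);
        }
        close(fds[0]);
        return 0;
    }

    /* parent: writes a message, then waits for the child */
    close(fds[0]);
    const char *msg = "payload from parent";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);
    return 0;
}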
Well, you certainly picked an interesting introduction to C!
You might try shared memory. What OS?
I'm writing a C program in which I need to pass messages between child processes and the main process. The thing is, I need to do it without using functions like msgget() and msgsnd().
How can I implement this? What kind of techniques can I use?
There are multiple ways to communicate with child processes; it depends on your application.
It very much depends on the level of abstraction of your application.
-- If the level of abstraction is low:
If you need very fast communication, you could use shared memory (e.g. shm_open()). But that would be complicated to synchronize correctly.
The most used method, and the method I'd use if I were in your shoes, is pipes (see the sketch at the end of this section).
It's simple and fast, and since pipe file descriptors are supported by epoll() and those kinds of asynchronous I/O APIs, you can take advantage of that.
Another plus is that, if your application grows and you need to communicate with remote processes (processes that are not on your local machine), adapting pipes to sockets is very easy; basically it's still the same reading/writing from/to a file descriptor.
Also, Unix-domain sockets (which on other platforms are called "named pipes") let you have a server process that creates a listening socket with a well-known name (e.g. an entry in the filesystem, such as /tmp/my_socket), and all clients on the local machine can connect to it.
Pipes, networking sockets, and Unix-domain sockets are very interchangeable solutions because, as said before, all of them involve reading/writing data from/to a file descriptor, so you can reuse the code.
The disadvantage of a file descriptor is that you're writing data to a stream of bytes, so you need to implement the framing of your messages yourself, i.e. turn the byte stream back into discrete messages (marshalling/unmarshalling). That's not so complicated in most cases, and it also depends on the kind of messages you're sending.
I'd pass on other solutions such as memory mapped files and so on.
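To make the framing point concrete, here is a minimal sketch of length-prefixed messages over a socketpair() between a parent and a forked child; the 4-byte host-order length header is just one simple choice:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

/* write one framed message: 4-byte length prefix, then the payload */
static int send_msg(int fd, const void *buf, uint32_t len)
{
    if (write(fd, &len, sizeof len) != sizeof len) return -1;
    return write(fd, buf, len) == (ssize_t)len ? 0 : -1;
}

/* read exactly n bytes, looping over short reads */
static int read_all(int fd, void *buf, size_t n)
{
    size_t got = 0;
    while (got < n) {
        ssize_t r = read(fd, (char *)buf + got, n - got);
        if (r <= 0) return -1;
        got += (size_t)r;
    }
    return 0;
}

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) { perror("socketpair"); return 1; }

    if (fork() == 0) {               /* child: receive one framed message */
        close(sv[0]);
        uint32_t len;
        char buf[256];
        if (read_all(sv[1], &len, sizeof len) == 0 && len < sizeof buf &&
            read_all(sv[1], buf, len) == 0) {
            buf[len] = '\0';
            printf("child got %u bytes: %s\n", len, buf);
        }
        return 0;
    }

    close(sv[1]);                    /* parent: send one framed message */
    const char *msg = "hello child";
    send_msg(sv[0], msg, (uint32_t)strlen(msg));
    close(sv[0]);
    wait(NULL);
    return 0;
}

Switching this to a named Unix-domain socket or a TCP connection later only changes how the file descriptor is obtained; the framing code stays the same.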
-- If the level of abstraction is higher:
You could use a 3rd party message passing system, such as RabbitMQ, ZMQ, and so on.
I've made a test C program that creates an AF_PACKET socket, spawns x threads via pthreads, and within each thread performs epoll on the socket's file descriptor. This program was made for Linux and I've compiled it using GCC on Ubuntu 18.04. I've submitted a GitHub Gist of the program here since it's 200+ lines of code. I am still fairly new to C and network programming, so I'm sure there are many improvements I can make to the code. I am open to suggestions!
I have two main questions:
Is there a better way to receive and process high amounts of packets/traffic in a user space program than the above? I've read using pthreads along with epoll would be the best option, but I've also looked into select and standard poll.
When the program above is executed without any debug output via fprintf(), each thread consumes 100% CPU on the epoll_wait() function within the while loop. Is this normal behavior or am I using epoll incorrectly? I've looked at some other examples and I use epoll the same way as the examples do. I've taken a look at the manual page for epoll and I believe I'm using it correctly in my case. I've also tried setting a timeout for the epoll_wait() function, but it was still consuming 100% CPU per thread (which I'd expect due to the while loop).
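For reference, the per-thread loop I'm describing has roughly this shape (a simplified sketch rather than my exact code; the buffer and event-array sizes are placeholders):

#include <stdio.h>
#include <unistd.h>
#include <sys/epoll.h>
#include <sys/socket.h>

/* one worker thread's receive loop on a shared AF_PACKET socket */
static void rx_loop(int sock_fd)
{
    int ep = epoll_create1(0);
    if (ep < 0) { perror("epoll_create1"); return; }

    struct epoll_event ev = {0};
    ev.events = EPOLLIN;
    ev.data.fd = sock_fd;
    if (epoll_ctl(ep, EPOLL_CTL_ADD, sock_fd, &ev) < 0) { perror("epoll_ctl"); return; }

    struct epoll_event events[64];
    for (;;) {
        /* -1 = block until the socket is readable; the thread should sleep
           here rather than spin when no packets are arriving */
        int n = epoll_wait(ep, events, 64, -1);
        if (n < 0) { perror("epoll_wait"); break; }

        for (int i = 0; i < n; i++) {
            char buf[2048];
            ssize_t len = recv(events[i].data.fd, buf, sizeof buf, 0);
            if (len > 0) {
                /* inspect/forward the packet here */
            }
        }
    }
    close(ep);
}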
I plan to make a program that redirects traffic after inspecting it, and I expect a lot of incoming packets, which is why I wanted to see if there is a better way to receive and process high volumes of traffic. I also understand I could just use standard SOCK_DGRAM or SOCK_STREAM sockets and bind them to an IP and port. However, I do want to process and inspect all incoming traffic on an interface and forward it if necessary (e.g. if the destination address matches a forwarding rule). I also wasn't sure whether I should create multiple sockets in this case (perhaps one socket per thread). I did do this initially, but it resulted in unexpected behavior, and only one socket descriptor was ever being read from anyway. Perhaps I wasn't creating the new sockets properly.
Any help is highly appreciated and if you need any more information, please let me know.
Thank you for your time.
I am writing a simple instant messenger program in C on Linux.
Right now I have a program that binds a socket to a port on the local machine, and listens for text data being sent by another program that connected to my local machine IP and port.
Well, I can have this client send text data to my program, and have it displayed using stdout on my local machine; however, I cannot program a way to send data back to the client machine, because my program is busy listening and displaying the text sent by the client machine.
How would I go about either creating a new process (one that listens for and displays the text sent to it by the client machine, then passes that text to the other program's stdout, while the other program takes care of sending stdin to the client machine), or creating two programs that do the separate jobs (sending, receiving, and displaying) and send the appropriate data to one another?
Sorry if that is weirdly worded; I will clarify if need be. I looked into exec, execve, fork, etc., but am confused as to whether this is the appropriate path to look into, or if there is a simpler way that I am missing.
Any help would be greatly appreciated, Thank you.
EDIT: In retrospect, I figured that this would be much more easily accomplished with two separate programs: one the IM server, and the others the IM clients.
The IM clients would connect to the IM server program and send whatever text they wanted to the IM server. Then, the IM server would just record the data sent to it in a buffer/file, with the names/IPs of the clients appended to the text each one sent, and send that text (in the format name:text) to each client that is connected.
This would remove the need for complicated inter-process/program communication for stdin and stdout, and instead use a simple client/server way of communicating, with the client programs displaying text sent to them from the server via stdout, and using stdin to send text to the server.
With this said, I am still interested in someone answering my original question: for science. Thank you all for reading, and hopefully someone will benefit from my mental brainstorming, or whatever answers come from the community.
however, i cannot program a way to send data back to the client machine, because my program is busy listening and displaying the text sent by the client machine.
The same socket that was returned by accept() on a listening socket can be used for both sending and receiving data. So your socket is never "busy" just because you're reading from it ... you can write back on the same socket.
If you need to both read and write concurrently, share the socket returned from accept() across two different threads. Since the networking stack uses separate buffers for sending and receiving on the socket, a dedicated thread for reading and another dedicated thread for writing to the socket will be thread-safe without the use of mutexes.
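A rough sketch of that two-thread split, assuming the connected socket descriptor has already been obtained from accept() (or connect() on the client side):

#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>

/* receiver thread: reads from the socket and prints to stdout */
static void *recv_thread(void *arg)
{
    int fd = *(int *)arg;
    char buf[512];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof buf - 1, 0)) > 0) {
        buf[n] = '\0';
        fputs(buf, stdout);
    }
    return NULL;
}

/* sender thread: reads lines from stdin and writes them to the socket */
static void *send_thread(void *arg)
{
    int fd = *(int *)arg;
    char line[512];
    while (fgets(line, sizeof line, stdin) != NULL) {
        send(fd, line, strlen(line), 0);
    }
    return NULL;
}

/* call with the fd returned by accept() (server) or connect() (client) */
void chat_on_socket(int fd)
{
    pthread_t r, s;
    pthread_create(&r, NULL, recv_thread, &fd);
    pthread_create(&s, NULL, send_thread, &fd);
    pthread_join(s, NULL);
    pthread_join(r, NULL);
}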
I would go with fork() - create a child process, and now you have two different processes that can do two different things on two different sockets - one can receive and the other can send. I have no personal experience with coding a client/server like this yet, but that would be my first stab at solving your issue...
As @bdonlan mentioned in a comment, you definitely need a multiplexing call like select or, preferably, poll (or related syscalls like pselect, ppoll, ...). These multiplexing calls are the primitives for waiting on several channels at once (pselect and ppoll can atomically wait for both I/O events and signals). Read also the select tutorial man page. Of course, you can wait on several file descriptors, and you can wait for both readability and writability (even on the same socket, if needed), in the same select or poll syscall.
All event-based loops and frameworks use these multiplexing calls (like poll or select). You could also use libevent, or even (particularly when coding a graphical user interface application) a GUI toolkit like Gtk or Qt, which are all built around a central event loop.
I don't think that having a multi-process or multi-threaded application is useful in your case. You just need some event loop.
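For example, a stripped-down poll()-based loop for the client side of your messenger, watching both stdin and the connected socket (assuming the socket has already been connected elsewhere):

#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>

/* single-threaded event loop: forward stdin to the peer, print what the peer sends */
void chat_loop(int sock_fd)
{
    struct pollfd fds[2];
    fds[0].fd = STDIN_FILENO;  fds[0].events = POLLIN;
    fds[1].fd = sock_fd;       fds[1].events = POLLIN;

    for (;;) {
        if (poll(fds, 2, -1) < 0) { perror("poll"); break; }

        if (fds[0].revents & POLLIN) {              /* user typed a line */
            char line[512];
            if (fgets(line, sizeof line, stdin) == NULL) break;
            send(sock_fd, line, strlen(line), 0);
        }
        if (fds[1].revents & (POLLIN | POLLHUP)) {  /* peer sent text (or closed) */
            char buf[512];
            ssize_t n = recv(sock_fd, buf, sizeof buf - 1, 0);
            if (n <= 0) break;                      /* connection closed or error */
            buf[n] = '\0';
            fputs(buf, stdout);
        }
    }
}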
You might also ask to get a SIGIO signal when data arrives on your socket using fcntl with F_SETOWN, but this is not very useful for you. Then you often want to have your socket non-blocking.
I'm writing a TCP server/client application on Windows, to become familiar with the Winsock API. I come from a UNIX background and would like to know which of these would be the best approach to implement the application:
First the specification
Must scale well on multiprocessor and single-processor systems.
No hard limit on the number of connections.
Application can both listen for connections, acting as server, and act as client.
Multi threaded.
First approach:
Non-blocking select-like socket for listening, in the 'server' thread.
for each client connecting we spawn a separate thread.
Second approach:
Blocking socket for listening, in the 'server' thread.
for each client connecting we spawn a separate thread.
Third approach:
Non-blocking select-like socket for listening, in the 'server' thread.
No separate thread for each incoming connection; the protocol would need state information kept across sessions, I suppose.
I wonder which is the most efficient and scalable approach, and especially whether it can work with a UDP socket too.
Note: I'm writing the application in plain old C. No .NET or C++ involved; C++ exceptions are disabled too.
As Gary says, I/O Completion Ports are the most efficient way to manage multiple network connections in a non-blocking/async manner on Windows platforms.
With IOCP you get notified when your networking operations complete, and you can process these completions with a small number of threads. You decide how many threads to allocate for processing completions, and the kernel decides when to use the threads that you're providing. It uses them in LIFO order to reduce context switching, so at any point you are only using the minimal number of threads required, and you're reusing the same threads rather than cycling through all of the threads you have available.
The asynchronous nature of IOCP programming can be a little confusing to start with, but once you get the hang of it, it's fairly straightforward.
I have some free IOCP server code which demonstrates the basics and provides some example servers that are pretty easy to build on. You can find the code here: http://www.serverframework.com/products---the-free-framework.html. That page also links to some articles that I wrote to explain the code.
Relating this to the details of your question: you should be looking at a variation on your third approach. Use AcceptEx() to accept new connections; this can be used in an asynchronous manner, so you don't need a separate thread for connection acceptance and can use the threads that are also processing your overlapped/async read and write operations.
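To give a flavour of the completion side (leaving out the AcceptEx() setup, which needs to be loaded via WSAIoctl()), a worker thread's loop looks roughly like this; the per-connection CONN struct is just an illustrative layout:

#include <winsock2.h>
#include <windows.h>

/* Illustrative per-connection context: an OVERLAPPED followed by a receive buffer.
   Each connected socket would be associated with the port elsewhere via
   CreateIoCompletionPort((HANDLE)sock, iocp, (ULONG_PTR)conn, 0). */
typedef struct {
    OVERLAPPED ov;       /* must be first so the OVERLAPPED* maps back to CONN* */
    WSABUF     wbuf;
    char       buf[4096];
    SOCKET     sock;
} CONN;

/* Worker thread: dequeues completed I/O operations from the completion port. */
static DWORD WINAPI worker(LPVOID arg)
{
    HANDLE iocp = (HANDLE)arg;
    for (;;) {
        DWORD bytes = 0;
        ULONG_PTR key = 0;
        OVERLAPPED *pov = NULL;
        BOOL ok = GetQueuedCompletionStatus(iocp, &bytes, &key, &pov, INFINITE);

        if (pov == NULL)                 /* port closed or a shutdown completion */
            break;

        CONN *conn = (CONN *)pov;
        if (!ok || bytes == 0) {         /* failed operation or peer closed */
            closesocket(conn->sock);
            continue;
        }

        /* ... process conn->buf[0..bytes) here, then post the next read ... */
        DWORD flags = 0;
        conn->wbuf.buf = conn->buf;
        conn->wbuf.len = sizeof conn->buf;
        WSARecv(conn->sock, &conn->wbuf, 1, NULL, &flags, &conn->ov, NULL);
    }
    return 0;
}

The CONN layout and buffer size are placeholders; the key point is that the OVERLAPPED pointer handed back by GetQueuedCompletionStatus() lets you recover your per-connection state without needing a thread per connection.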
I've written an asynchronous client which does not use blocking sockets, so if you're interested in that approach, then take a look at my client: http://codesprout.blogspot.com/2011/04/asynchronous-http-client.html
It's an HTTP client, but I've shown very little HTTP protocol processing in there; it's all just .NET sockets. The server would work in a similar way: you can take advantage of the *Async methods such as AcceptAsync.
Under Windows, the best performance is achieved by using I/O completion ports.
This is because the list and queuing mechanism is handled in the kernel, far from the heavy user-mode overhead (which drags your code down if you dare to do the hard work yourself).
Unfortunately, Windows I/O completion ports need to allocate many threads to scale, and this quickly kills performance (as compared to Linux epoll, which can scale independently of the number of worker threads you decide to involve in the task).
Recently, I discovered http://gwan.com/, a web server which started on Windows and was then ported to Linux. Its authors describe the problem in detail on their forum.
If I use UDP sockets for interprocess communication, can I expect that all sent data is received by the other process in the same order?
I know this is not true for UDP in general.
No. I have been bitten by this before. You may wonder how it can possibly fail, but you'll run into issues when buffers of pending packets fill up, and consequently packets will be dropped. How the network subsystem drops packets is implementation-dependent and not specified anywhere.
In short, no. You shouldn't be making any assumptions about the order of data received on a UDP socket, even over localhost. It might work, it might not, and it's not guaranteed to.
No, there is no such guarantee, even with local sockets. If you want an IPC mechanism that guarantees in-order delivery, you might look into using full-duplex pipes with popen(). This opens a pipe to the child process that can be read from or written to arbitrarily. It will guarantee in-order delivery and can be used with synchronous or asynchronous I/O (select() or poll()), depending on how you want to build the application.
On Unix there are other options such as Unix domain sockets or System V message queues (some of which may be faster), but reading/writing from a pipe is dead simple and works. As a bonus, it's easy to test your server process because it is just reading and writing via standard I/O.
On Windows you could look into Named Pipes, which work somewhat differently from their Unix namesake but are used for precisely this sort of interprocess communication.
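For comparison, here is a minimal Unix-domain stream socket example that does give reliable, in-order delivery between two local processes; the /tmp/demo.sock path is just an example:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <sys/wait.h>

#define SOCK_PATH "/tmp/demo.sock"   /* example path */

int main(void)
{
    unlink(SOCK_PATH);

    int srv = socket(AF_UNIX, SOCK_STREAM, 0);
    if (srv < 0) { perror("socket"); return 1; }

    struct sockaddr_un addr = {0};
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, SOCK_PATH, sizeof addr.sun_path - 1);
    if (bind(srv, (struct sockaddr *)&addr, sizeof addr) < 0) { perror("bind"); return 1; }
    listen(srv, 1);

    if (fork() == 0) {               /* child acts as the client */
        int cli = socket(AF_UNIX, SOCK_STREAM, 0);
        if (connect(cli, (struct sockaddr *)&addr, sizeof addr) < 0) { perror("connect"); return 1; }
        for (int i = 0; i < 5; i++) {
            char msg[32];
            int n = snprintf(msg, sizeof msg, "message %d\n", i);
            write(cli, msg, (size_t)n);   /* stream socket: bytes arrive in order */
        }
        close(cli);
        return 0;
    }

    int conn = accept(srv, NULL, NULL);   /* parent acts as the server */
    char buf[256];
    ssize_t n;
    while ((n = read(conn, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    close(conn);
    close(srv);
    wait(NULL);
    unlink(SOCK_PATH);
    return 0;
}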
Loopback UDP is incredibly unreliable on many platforms; you can easily see 50%+ data loss. Various excuses have been given, to the effect that there are far better transport mechanisms to use.
There are many middleware stacks available these days to make IPC easier to use and cross platform. Have a look at something like ZeroMQ or 29 West's LBM which use the same API for intra-process, inter-process (IPC), and network communications.
The socket interface will probably not flow control the originator of the data, so you will probably see reliable transmission if you have higher level flow control but there is always the possibility that a memory crunch could still cause a dropped datagram.
Without flow control limiting kernel memory allocation for datagrams I imagine it will be just as unreliable as network UDP.