I need to start some helper processes when the web interface of my router is being used, and shut them down after some period of inactivity (to save RAM when the web interface isn't in use).
Is there any way (other than ptrace()) to know when another process (a server) has accepted a network connection?
I tried parsing /proc/net/tcp for the socket inodes listed in /proc/$(pidof httpd)/fd, but it is very unreliable: it only catches full page reloads, not navigation within the interface.
Here is the source I have written so far:
https://dl.dropboxusercontent.com/u/100376233/zyxel/nethelperd.tar.bz2
Or: is there any way to catch only the accept() syscall using ptrace(), without disturbing the traced process when it makes other syscalls?
I would recommend that you use libpcap with a filter such as tcp port X, where X is the port hosting the web interface. Whenever you receive a packet, you can reset your timer. If the timer fires before you receive a packet, you can shut down the services. You can also reduce the overhead of this by not capturing the whole packet. libpcap allows you to specify the maximum number of bytes to capture per packet.
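A minimal sketch of that approach ("br0" and port 80 are placeholders for your LAN interface and the web interface's actual port):

#include <pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    /* Snaplen of 64 bytes keeps per-packet overhead small, since we
       only care that a packet arrived, not about its payload. */
    pcap_t *pc = pcap_open_live("br0", 64, 0, 1000, errbuf);
    if (!pc) { fprintf(stderr, "pcap_open_live: %s\n", errbuf); return 1; }

    struct bpf_program filter;
    if (pcap_compile(pc, &filter, "tcp port 80", 1, PCAP_NETMASK_UNKNOWN) < 0 ||
        pcap_setfilter(pc, &filter) < 0) {
        fprintf(stderr, "filter: %s\n", pcap_geterr(pc));
        return 1;
    }

    struct pcap_pkthdr *hdr;
    const u_char *data;
    for (;;) {
        int rc = pcap_next_ex(pc, &hdr, &data);
        if (rc == 1) {
            /* Traffic seen on the web interface: start the helpers if
               needed and reset the inactivity timer here. */
        } else if (rc == 0) {
            /* Read timeout expired with no packets: if the inactivity
               timer has fired, shut the helpers down. */
        } else {
            break;  /* error */
        }
    }
    pcap_close(pc);
    return 0;
}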
Related
I would like to know what the execution pattern of a server's threads should be to implement the TCP request-response cycle of a high-performance server (e.g. handling dozens of packets with a single system call, or none, on Linux using packet mmap or some other mechanism).
Design 1) For simplicity, start two threads in main at server startup. One thread just reads packets directly from the network interface(s), such as wlan0/eth0, in a loop around poll(). Once a batch of packets has been read in a cycle, it wakes the other thread by signalling a condition variable. After waking up, the other thread (the sender) processes the packets and sends the TCP responses.
Design 2) Start only the receiver thread at the start of the main program. The receiver reads packets from the interfaces in a loop around poll(). When a batch of packets has been received, it creates a sender thread and passes the number of packets received in that cycle as a parameter. The sender thread processes the packets and sends the TCP responses.
(I think Design 2 will be easier to implement, but the question is whether there is any design or performance issue with this approach.) Since the buffer passed from the receiver to the sender must be allocated before packets are received, I know the buffer size to allocate in advance. In this pattern I also create a new thread for every batch (it returns and ends after processing its packets and sending the responses), so I would like to know what the performance cost of creating a new thread for every batch of packets from the interfaces will be.
In the first approach I never create more than two threads (or a limited number of threads), so threads can be tracked easily for logging and debugging, since I know how many were created initially. In the second approach I don't know how many threads are alive and executing concurrently at any moment.
I would appreciate any advice on how real websites like YouTube and others may have handled this in their high-performance, front-facing servers, if they followed this way of implementing them.
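For concreteness, here is a minimal sketch of the Design 1 hand-off; read_packet_batch() and process_and_respond() are placeholders standing in for the poll()/receive loop and the TCP response path, not real APIs:

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  batch_ready = PTHREAD_COND_INITIALIZER;
static int pending_packets = 0;   /* packets received in the last cycle */

/* Placeholders for the real work. */
static int  read_packet_batch(void)    { return 1; }  /* poll() + receive loop */
static void process_and_respond(int n) { (void)n;  }  /* build and send TCP responses */

/* Receiver thread: reads a batch, then hands it over to the sender. */
static void *receiver(void *arg)
{
    (void)arg;
    for (;;) {
        int n = read_packet_batch();
        pthread_mutex_lock(&lock);
        pending_packets += n;
        pthread_cond_signal(&batch_ready);   /* wake the sender */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Sender thread: sleeps until a batch is ready, then processes it. */
static void *sender(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (pending_packets == 0)
            pthread_cond_wait(&batch_ready, &lock);
        int n = pending_packets;
        pending_packets = 0;
        pthread_mutex_unlock(&lock);
        process_and_respond(n);
    }
    return NULL;
}

int main(void)
{
    pthread_t r, s;
    pthread_create(&r, NULL, receiver, NULL);
    pthread_create(&s, NULL, sender, NULL);
    pthread_join(r, NULL);
    pthread_join(s, NULL);
    return 0;
}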
First, when it comes to a 'real' website, the magic lies in having load balancers and a whole fleet of worker nodes to take the load, and you quickly exceed the boundary of a single system. For example, take a look at the following AWS reference architecture for serving web pages at scale: AWS Cloud Architecture for serving web whitepaper.
That being said, taking this one level down, it is always interesting to look at how other well-known products have solved this issue. For example, NGINX has an excellent infographic and matching blog post describing its architecture and threading.
I'm writing a server in C++ for both Windows and Unix systems.
A key feature of this server is that it must be able to receive and send network packets at any time.
Specifically, the server must be able to send data to clients not only in response to their messages, but also asynchronously, as pushes.
I'm having difficulty implementing a solution that uses the select() function in the scenario described above.
The solution I have currently implemented does not convince me at all, and I think it could be implemented with better patterns/solutions.
I currently have a dedicated thread (the selector) that performs the select(), listening for read events on the server socket (to accept new connections) and on the sockets connected to the server.
This is the main select() loop:
if((sel_res_ = select(nfds_+1, &read_FDs_, NULL, &excep_FDs_, &sel_timeout)) > 0){
    if(FD_ISSET(serv_socket, &read_FDs_)){
        //we have to handle a new connection.
        ...
        if(sel_res_ > 1){
            //in addition to the new connection, there are also messages incoming on the client sockets.
            ...
        }
    }else{
        //we have to handle incoming messages on the client sockets
        ...
    }
}
This solution works well for receiving data and responding to client requests synchronously.
However, the server must also be able to send data asynchronously, pushing packets when necessary.
To do this I currently use separate threads that call send() directly on the client sockets.
This solution does not convince me either, and I would like to centralize both receiving and sending in the selector thread.
The main difficulty is that select() is blocking by nature, and I have no control until a client sends a packet or the timeout expires.
Setting a very low timeout does not convince me: I see it as an easy way out that amounts to busy waiting, and moreover, in the worst case I would still pay the full timeout before the push packet could be sent.
I have thought of a more 'elegant' solution which I believe would work well, but only on Unix/Linux platforms.
The idea is to use an anonymous pipe and add the pipe's read descriptor to the select() read_FDs_ set.
That way, when a thread wants to push data, it writes something to the pipe, interrupting the select() and returning control to the selector, which can then arrange to send the data to the client without significant delay.
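Sketched against the loop above, the idea would look roughly like this (error handling omitted; the elisions mirror the snippet above):

int wake_pipe[2];
pipe(wake_pipe);                      /* wake_pipe[0] = read end, wake_pipe[1] = write end */
...
FD_SET(wake_pipe[0], &read_FDs_);     /* watched by select() alongside the sockets */
...
/* in a thread that wants to push data: */
write(wake_pipe[1], "x", 1);          /* interrupts the select() */
...
/* in the selector, after select() returns: */
if(FD_ISSET(wake_pipe[0], &read_FDs_)){
    char buf[16];
    read(wake_pipe[0], buf, sizeof buf);   /* drain the wake-up bytes */
    //now perform the pending send()s on the client sockets
}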
Unfortunately, I think this solution cannot be implemented on Windows, because the select() function there works only with fds that are actually sockets.
So the question is: is there some well-known solution that can be used to address this kind of scenario (on both Linux and Windows)?
You can create a self connected UDP socket, this works equally well on Windows and Linux.
Basically, you create a UDP socket, bind() it to INADDR_LOOPBACK and port 0, and connect() it to itself (with the address taken from getsockname()).
At this point, you can send yourself a single byte message (or something more specific) to wake yourself up.
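A minimal sketch of that setup (POSIX flavor; the Windows version is the same sequence of calls after WSAStartup(), with closesocket() instead of close()):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create a UDP socket connected to itself; writing one byte to it
   makes it readable, which wakes up a select() watching it. */
int make_wakeup_socket(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                     /* let the OS pick a port */
    if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0) goto fail;

    socklen_t len = sizeof addr;
    if (getsockname(s, (struct sockaddr *)&addr, &len) < 0) goto fail;
    if (connect(s, (struct sockaddr *)&addr, len) < 0) goto fail;
    return s;

fail:
    close(s);
    return -1;
}

/* To wake the selector:  send(wake_sock, "x", 1, 0);
   In the selector, once select() flags it readable:
   char buf[16]; recv(wake_sock, buf, sizeof buf, 0);  then do the pending sends. */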
I currently have a client listening for packets in its own thread. I was told to try to implement an ISR so that the packet received from the recv() call can be handled immediately, instead of waiting for that thread to get scheduled.
EDIT: this is on Windows now, but it will be ported to a DSP later.
ISRs by definition run in kernel space. Unless you are in an embedded system without memory protection, you will need to add kernel code to your project. Furthermore, to reimplement recv, it will need to handle IP and TCP or UDP as necessary to extract the data from the ethernet packets.
The overhead of rescheduling and switching to a thread is minimal, and needs to happen anyway unless the packet is handled entirely in the kernel. Most operating systems have a highest-priority thread setting, sometimes called "real-time," which causes user space code to run with minimal delay after the driver receives data. This is often used for audio/video I/O as well as networking.
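As an illustration of that last point, a sketch of raising a thread's priority on Linux (SCHED_FIFO needs root or CAP_SYS_NICE; on Windows the rough equivalent is SetThreadPriority() with THREAD_PRIORITY_TIME_CRITICAL):

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Give the packet-handling thread real-time scheduling so it runs
   with minimal delay after the driver delivers data. */
static void make_realtime(pthread_t thread)
{
    struct sched_param sp = { .sched_priority = 50 };  /* SCHED_FIFO range is 1..99 */
    int rc = pthread_setschedparam(thread, SCHED_FIFO, &sp);
    if (rc != 0)
        fprintf(stderr, "pthread_setschedparam failed: %d\n", rc);
}

For example, call make_realtime(pthread_self()) from inside the receive thread.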
I was trying to learn the usage of the SO_KEEPALIVE option in socket programming in C under Linux.
I created a server socket and used my browser to connect to it. It was successful and I was able to read the GET request, but I got stuck on the usage of SO_KEEPALIVE.
I checked this link keepalive_description#tldg.org but I could not find any example showing how to use it.
As soon as accept() returns the client connection, I set the SO_KEEPALIVE option to 1 on the client socket. Now I don't know how to check whether the client is down, how to change the time interval between probes, etc.
I mean, how will I be notified that the client is down? (Without reading from or writing to the client; I thought I would get some signal when the probes go unanswered.) And how should I program it after setting the SO_KEEPALIVE option?
Also, if the probes are sent every 3 seconds and the client goes down in between, I will not find out that the client is down and may get SIGPIPE.
Anyway, the important thing is that I want to know how to use SO_KEEPALIVE in code.
To modify the number of probes or the probe intervals, you write values to the /proc filesystem like
echo 600 > /proc/sys/net/ipv4/tcp_keepalive_time
echo 60 > /proc/sys/net/ipv4/tcp_keepalive_intvl
echo 20 > /proc/sys/net/ipv4/tcp_keepalive_probes
Note that these values are global for all keepalive-enabled sockets on the system. You can also override these settings on a per-socket basis with setsockopt(); see section 4.2 of the document you linked.
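For example, a per-socket override equivalent to the /proc values above (TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT are Linux-specific option names):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Enable keepalive on one socket and override the global timings:
   first probe after 600s of idle, then every 60s, up to 20 probes. */
int enable_keepalive(int sock)
{
    int on = 1, idle = 600, intvl = 60, cnt = 20;
    if (setsockopt(sock, SOL_SOCKET,  SO_KEEPALIVE,  &on,    sizeof on)    < 0) return -1;
    if (setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof idle)  < 0) return -1;
    if (setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof intvl) < 0) return -1;
    if (setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof cnt)   < 0) return -1;
    return 0;
}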
You can't "check" the status of the socket from userspace with keepalive. Instead, the kernel is simply more aggressive about forcing the remote end to acknowledge packets and determining whether the socket has gone bad. When you attempt to write to the socket, you will get a SIGPIPE if keepalive has determined that the remote end is down.
You'll get the same result with SO_KEEPALIVE enabled as without it: typically you'll find the socket ready, and then get an error when you read from it.
You can set the keepalive timeout on a per-socket basis under Linux (this may be a Linux-specific feature). I'd recommend this rather than changing the system-wide setting. See the man page for tcp for more info.
Finally, if your client is a web browser, it's quite likely that it will close the socket fairly quickly anyway; most browsers will only hold keepalive (HTTP 1.1) connections open for a relatively short time (30s, 1 min, etc.). Of course, if the client machine has disappeared or the network is down (which is what SO_KEEPALIVE is really useful for detecting), it won't be able to actively close the socket.
As already discussed, SO_KEEPALIVE makes the kernel more aggressive about continually verifying the connection even when you're not doing anything, but does not change or enhance the way the information is delivered to you. You'll find out when you try to actually do something (for example "write"), and you'll find out right away since the kernel is now just reporting the status of a previously set flag, rather than having to wait a few seconds (or much longer in some cases) for network activity to fail. The exact same code logic you had for handling the "other side went away unexpectedly" condition will still be used; what changes is the timing (not the method).
Virtually every "practical" sockets program in some way provides non-blocking access to the sockets during the data phase (maybe with select()/poll(), or maybe with fcntl()/O_NONBLOCK/EINPROGRESS&EWOULDBLOCK, or if your kernel supports it maybe with MSG_DONTWAIT). Assuming this is already done for other reasons, it's trivial (sometimes requiring no code at all) to in addition find out right away about a connection dropping. But if the data phase does not already somehow provide non-blocking access to the sockets, you won't find out about the connection dropping until the next time you try to do something.
(A TCP socket connection without some sort of non-blocking behaviour during the data phase is notoriously fragile, as if the wrong packet encounters a network problem it's very easy for the program to then "hang" indefinitely, and there's not a whole lot you can do about it.)
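As a sketch of the fcntl()/O_NONBLOCK variant mentioned above (standard POSIX calls):

#include <fcntl.h>

/* Put a connected socket into non-blocking mode: recv() then returns
   -1 with errno EWOULDBLOCK instead of hanging when no data is ready,
   and reports an error immediately once the connection has gone bad. */
int set_nonblocking(int sock)
{
    int flags = fcntl(sock, F_GETFL, 0);
    if (flags < 0) return -1;
    return fcntl(sock, F_SETFL, flags | O_NONBLOCK);
}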
Short answer: add
int flags = 1;
if (setsockopt(sfd, SOL_SOCKET, SO_KEEPALIVE, (void *)&flags, sizeof(flags))) {
    perror("ERROR: setsockopt(), SO_KEEPALIVE");
    exit(1);  /* exit with a failure status on error */
}
on the server side, and read() will unblock when the client goes down.
A full explanation can be found here.
I am a beginner at networking and I have a few questions.
1) How can a process execute code that is sent from a different computer on the network? Generally a process's code segment cannot be changed once it is loaded, to ensure protection. (Also, this would let me execute arbitrary code that corrupts the process's memory.)
2) Also, can a process listen on multiple ports? And can multiple processes listen on the same port, for example two https processes associated with port 80? How are the processes distinguished, and how is protection ensured?
3) Also, I would like to know how listen is implemented for sockets. Is it implemented via software interrupts?
Any good book recommendations are very much appreciated.
Thanks & Regards,
Mousey.
Q: How can a process execute code sent from another machine?
A: Generally, this is a bad idea as the security concerns are difficult to fully explore. However, this can be done by saving the network-delivered code to a separate executable and then launching this new program. This can also be done on most systems by just treating the raw bytes received as code; load the bytes into the heap (not the stack!), cast the address to a function pointer, and call it. Again though, this is almost certainly a bad idea.
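A hedged POSIX sketch of that cast-and-call variant; note that on modern systems with non-executable heaps the bytes must first be copied into a mapping explicitly marked executable (Windows does the equivalent with VirtualAlloc/VirtualProtect):

#include <string.h>
#include <sys/mman.h>

typedef void (*entry_fn)(void);

/* Copy received bytes into an executable mapping and jump into them.
   This assumes the bytes are valid machine code for the local CPU,
   which is exactly why running untrusted network input is dangerous. */
void run_blob(const unsigned char *code, size_t len)
{
    void *mem = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED)
        return;
    memcpy(mem, code, len);
    if (mprotect(mem, len, PROT_READ | PROT_EXEC) == 0)
        ((entry_fn)mem)();   /* call the received code */
    munmap(mem, len);
}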
Q: Can a process listen on multiple ports simultaneously?
A: Yes. By the way, HTTPS is port 443. HTTP is port 80.
Q: Can multiple processes listen on the same port (with the same protocol, on the same address)?
A: No. Other processes might be able to eavesdrop and also receive the packets, but they're not directly bound to the port. In general, only one process can be bound to a given protocol/port/address 3-tuple.
Q: How is blocking while listening on a socket implemented?
A: By the operating system, in its own fashion. Generally a thread is moved into the "blocking" state when it calls accept, read, or poll/select on a non-ready socket, and will not receive CPU time until some data have arrived.
1) How can a process execute code that is sent from a different computer on the network? Generally a process's code segment cannot be changed once it is loaded, to ensure protection.
This has nothing to do with networking. Once you receive the data through a socket, it's in your local memory. What you do after that is OS-specific. For example, on Windows, you can use VirtualProtect to mark pages as executable.
2) Also, can a process listen on multiple ports?
Sure, just create a different socket for each port you want to listen on. Of course, to use them simultaneously, you either need to use non-blocking sockets or run each socket in a separate thread.
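For example, one process watching two listening sockets at once with select() (ports 8080 and 8081 are arbitrary; error handling omitted for brevity):

#include <netinet/in.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create a TCP listening socket on the given port. */
static int listen_on(unsigned short port)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a;
    memset(&a, 0, sizeof a);
    a.sin_family = AF_INET;
    a.sin_addr.s_addr = htonl(INADDR_ANY);
    a.sin_port = htons(port);
    bind(s, (struct sockaddr *)&a, sizeof a);
    listen(s, 16);
    return s;
}

int main(void)
{
    int s1 = listen_on(8080), s2 = listen_on(8081);
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(s1, &rfds);
        FD_SET(s2, &rfds);
        int maxfd = s1 > s2 ? s1 : s2;
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
            break;
        if (FD_ISSET(s1, &rfds)) {
            int c = accept(s1, NULL, NULL);   /* new client on port 8080 */
            if (c >= 0) close(c);             /* ... serve it, then close ... */
        }
        if (FD_ISSET(s2, &rfds)) {
            int c = accept(s2, NULL, NULL);   /* new client on port 8081 */
            if (c >= 0) close(c);
        }
    }
    return 0;
}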
3) Also, I would like to know how listen is implemented for sockets. Is it implemented via software interrupts?
This is entirely OS-specific. listen just sets up the socket so that it can accept connections. Any connection requests that arrive after this (which probably happens somewhere in the TCP/IP driver) are put in a queue by the OS. When you later call accept, the OS pulls the first pending connection off this queue and returns a socket for it.
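Sketched (assuming server_sock is an already created and bound TCP socket), the split looks like this:

listen(server_sock, 8);   /* set up the pending-connection queue (backlog of 8) */
for (;;) {
    /* Blocks until the kernel has a completed connection queued. */
    int client = accept(server_sock, NULL, NULL);
    if (client < 0)
        break;
    /* ... exchange data on 'client', then close(client) ... */
}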