When we call shutdown() on a socket with the argument SHUT_RDWR, we disable reading and writing on the socket, but the socket itself is not destroyed. I can't understand the purpose of SHUT_RDWR. What does it give us, and why would we need a socket that can no longer be read from or written to? I expected SHUT_RDWR to work like close(), but it does not.
What SHUT_RDWR does is it closes the connection on the socket while keeping the handle for the data (the file descriptor) open.
For example, suppose a socket has been used for two-way communication and your end of the connection shuts it down with SHUT_RDWR. Then if the peer on the other end of the connection tries to write (and perhaps read as well?), it will fail because the connection has been closed.
However, the file descriptor is still open, which means that if there is any data left for you to receive/read, you can go ahead and do it, and then close the file descriptor.
In other words, shutdown() is about closing down the connection, while close() is about releasing the file descriptor and discarding any data that may still be sitting in the socket's buffers.
Advanced Programming in the UNIX Environment also has this reason:
Given that we can close a socket, why is shutdown needed? There are several reasons. First, close will deallocate the network endpoint only when the last active reference is closed. If we duplicate the socket (with dup, for example), the socket won't be deallocated until we close the last file descriptor referring to it. The shutdown function allows us to deactivate a socket independently of the number of active file descriptors referencing it.
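To make the APUE point concrete, here is a minimal sketch (the descriptor name connfd and its setup are assumptions, not from the question): with two descriptors referring to the same socket, close() on one of them does not tear down the connection, whereas shutdown() does, regardless of how many references remain.

/* connfd is assumed to be an already-connected TCP socket */
int dupfd = dup(connfd);        /* second descriptor for the same socket */

close(connfd);                  /* connection stays up: dupfd still refers
                                   to the open socket */

shutdown(dupfd, SHUT_RDWR);     /* connection is shut down immediately,
                                   no matter how many descriptors exist */
close(dupfd);                   /* now release the last descriptor */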
I have a process with 100 threads.
I know that only one thread is using a specific fd.
For example, this fd is a socket descriptor, and only one thread is using this socket with send() and receive().
How can I find out, with C, on Linux, the ID of this thread?
Is there a smarter way than attaching to each thread with ptrace and waiting until one of them is caught using the descriptor?
File descriptors are part of the process, not of any particular thread. Since a file descriptor is just a non-negative integer that every thread of the same process can use without any explicit rebinding, asking "which thread holds an fd" is not a question that applies to the Linux process/threading model.
If you really want an answer then it would be: All the threads do!
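A minimal sketch of what that means in practice (the file name and messages are made up for illustration): a descriptor opened by one thread is immediately usable by every other thread of the process.

#include <fcntl.h>
#include <pthread.h>
#include <unistd.h>

static int shared_fd;                         /* opened by the main thread */

static void *worker(void *arg)
{
    (void)arg;
    write(shared_fd, "from worker\n", 12);    /* usable here with no rebinding */
    return NULL;
}

int main(void)
{
    pthread_t tid;

    shared_fd = open("/tmp/demo.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    pthread_create(&tid, NULL, worker, NULL);
    write(shared_fd, "from main\n", 10);      /* same descriptor, same open file */
    pthread_join(tid, NULL);
    close(shared_fd);
    return 0;
}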
The listener socket, which is passed as the argument to accept() to obtain new client sockets, is in a shared memory area and is shared by all forked server processes,
yet accept() returns the same socket descriptor value in each of the different forked processes.
Does fork() also create a separate descriptor table for each process, which every forked process
then manages on its own?
Is that why they produce duplicate socket descriptor values?
I intended to use select() to detect activity on all the socket descriptors,
but because the processes all produce the same descriptor values, I couldn't make it work.
Yes, socket descriptor values (like all file descriptor values) are managed on a per-process basis.
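A rough sketch of why the values repeat (listenfd, NWORKERS, and handle_client are assumed names, not from the question): each child gets a copy of the parent's descriptor table at fork() time, so the lowest free slot, and therefore the number accept() returns, tends to be the same in every child, even though each descriptor refers to a different connection within its own process.

for (int i = 0; i < NWORKERS; i++) {
    if (fork() == 0) {                         /* child: has its own copy of the fd table */
        int connfd = accept(listenfd, NULL, NULL);
        /* connfd is often the same small integer in every child, but in each
           process it names a different connection */
        handle_client(connfd);                 /* hypothetical per-client handler */
        close(connfd);
        _exit(0);
    }
}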
I have a sample application of a SIP server listening on both TCP and UDP port 5060.
At some point in the code, I do a system("pppd file /etc/ppp/myoptions &");
After this, if I do a netstat -apn, it shows me that ports 5060 are also open for pppd!
Is there any method to avoid this? Is this standard behaviour of the system function in Linux?
Thanks,
Elison
Yes, by default, whenever you fork a process (which system() does), the child inherits all of the parent's file descriptors. If the child doesn't need those descriptors, it SHOULD close them. The way to arrange this with system() (or any other method that does a fork+exec) is to set the FD_CLOEXEC flag on every file descriptor that shouldn't be used by children of your process. This causes them to be closed automatically whenever any child execs some other program.
In general, ANY TIME your program opens ANY KIND of file descriptor that will live for an extended period of time (such as a listen socket in your example), and which should not be shared with children, you should do
fcntl(fd, F_SETFD, fcntl(fd, F_GETFD) | FD_CLOEXEC);
on the file descriptor.
As of the 2016(?) revision of POSIX.1, you can use the SOCK_CLOEXEC flag OR'd into the socket's type argument to get this behavior automatically when you create the socket:
listenfd = socket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC, 0);
bind(listenfd, ...
listen(listenfd, ...
which guarantees it will be closed properly even if some other simultaneously running thread does a system() or fork+exec call. Fortunately, this flag has been supported for a while on Linux and the BSDs (but not on OS X, unfortunately).
You should probably avoid the system() function altogether. It is inherently dangerous: it invokes the shell, which can be tampered with, and its behavior is rather non-portable, even between Unices.
What you should do is the fork()/exec() dance. It goes something like this:
if (fork() == 0) {
    /* child: close the file descriptors pppd should not inherit */
    ...
    execlp("pppd", "pppd", "file", "/etc/ppp/myoptions", (char *)NULL);
    perror("exec");
    _exit(127);
}
Yes, this is standard behavior of fork() on Linux, in terms of which system() is implemented.
The identifier returned from the socket() call is a valid file descriptor. This value is usable with file-oriented functions such as read(), write(), ioctl(), and close().
The converse, that every file descriptor is a socket, is not true. One cannot open a regular file with open() and pass that descriptor to, e.g., bind() or listen().
When you call system(), the child process inherits the same file descriptors as the parent. This is how stdin (0), stdout (1), and stderr (2) are inherited by child processes. If you arrange to open a socket on file descriptor 0, 1, or 2, the child process will inherit that socket as one of its standard I/O file descriptors.
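A hedged sketch of that last point (connfd and the use of /bin/cat are assumptions for illustration, not part of the question): duplicating a connected socket onto descriptors 0-2 before exec makes the child talk to the peer through its standard streams, inetd-style.

if (fork() == 0) {
    dup2(connfd, STDIN_FILENO);               /* fd 0 now refers to the socket */
    dup2(connfd, STDOUT_FILENO);              /* fd 1 too */
    dup2(connfd, STDERR_FILENO);              /* fd 2 too */
    close(connfd);                            /* the original is no longer needed */
    execl("/bin/cat", "cat", (char *)NULL);   /* cat now echoes whatever the peer sends */
    _exit(127);
}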
Your child process is inheriting every open file descriptor from the parent, including the socket you opened.
As others have stated, this is standard behavior that programs depend on.
When it comes to preventing it, you have a few options. The first is closing all file descriptors after the fork(), as Dave suggests. The second is the POSIX support for using fcntl() with FD_CLOEXEC to set a 'close on exec' bit on a per-fd basis.
Finally, since you mention you are running on Linux, there is a set of changes designed to let you set the bit right at the point of opening things. Naturally, this is platform dependent. An overview can be found at http://udrepper.livejournal.com/20407.html
What this means is that you can bitwise-OR the SOCK_CLOEXEC flag into the 'type' argument of your socket() call, provided you're running kernel 2.6.27 or later.
system() forks a copy of the current process and then runs the command in that child; the child inherits the parent's open descriptors, which is probably why pppd ends up holding port 5060. You can use fork()/exec() yourself, closing the unneeded descriptors in the child before the exec, while the parent stays alive.
Why is writing a closed TCP socket worse than reading one?
Why doesn't an error return value suffice?
What can I do in a signal handler that I can't do by testing the return value for EPIPE?
Back in the old days almost every signal caused a Unix program to terminate. Because inter-process communication by pipes is fundamental in Unix, SIGPIPE was intended to terminate programs which didn't handle write(2)/read(2) errors.
Suppose you have two processes communicating through a pipe. If one of them dies, its end of the pipe isn't active anymore. SIGPIPE is intended to kill the process at the other end if it keeps writing into the now-broken pipe.
As an example, consider:
cat myfile | grep find_something
If grep is killed (or exits early) in the middle, cat no longer has anyone to write to: its next write to the pipe fails, and cat is killed by a SIGPIPE signal. If no signal were sent and cat didn't check the return value of write, cat would keep reading the file and writing into a broken pipe, misbehaving in some way.
As with many other things, my guess is that it was just a design choice someone made that eventually made it into the POSIX standards and has remained to this day. That someone may have thought that trying to send data over a closed socket is a Bad Thing™ and that your program needs to be notified immediately, and since nobody ever checks error codes, what better way to notify you than to send a signal?
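As a small, hedged illustration of the alternative the question asks about (the pipe here is created only to demonstrate the failure), you can ignore SIGPIPE and rely on write() returning -1 with errno set to EPIPE instead:

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int p[2];

    pipe(p);
    close(p[0]);                          /* remove the only reader of the pipe */

    signal(SIGPIPE, SIG_IGN);             /* default disposition would terminate us */

    if (write(p[1], "hello\n", 6) == -1 && errno == EPIPE)
        fprintf(stderr, "write failed with EPIPE, as expected\n");

    return 0;
}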