I'm learning socket programming in C. I've gotten my server to create a socket successfully, but when I try to bind that socket to a port, nothing happens: no error occurs, yet the bind doesn't succeed either. It's as if the bind() function is not executing at all.
I've checked the documentation on the bind() function here, but there's no mention of why it wouldn't execute at all. I've also tried searching through this site, to no avail.
I also tried following this tutorial from start to finish but the error (or lack thereof) still occurs.
Here is my full code leading up to the problem:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include "include.h"
int main() {
// Descriptors. Used to check the status of functions such as socket, listen, bind etc.
// If a descriptor is non-negative, everything is okay; if it is equal to -1, something went wrong.
int socketDescriptor, newSocketDescriptor = 1;
// The process ID of a child process (the client) when a new one is spawned (the client connects).
pid_t childPID;
// A string to hold the commands being sent and received.
char* commandBuffer = calloc(BUFFER_SIZE, sizeof(char));
// A structure to hold information on the server address.
struct sockaddr_in serverAddress;
memset(&serverAddress, '\0', sizeof(serverAddress));
// Fill in the server address information.
// Set the address family to AF_INET, which specifies we will be using IPv4.
// htons() takes the given int and converts it to the appropriate format. Used for port numbers.
// inet_addr() takes the given string and converts it to the appropriate format. Used for IP addresses.
serverAddress.sin_family = AF_INET;
serverAddress.sin_port = htons(PORT);
serverAddress.sin_addr.s_addr = inet_addr("127.0.0.1");
// A structure to hold information on a client when a new one connects to this server.
struct sockaddr_in clientAddress;
memset(&clientAddress, '\0', sizeof(clientAddress));
// socklen_t defines the length of a socket structure. Need this for the accept() function.
socklen_t addressSize;
// Creating the socket.
// AF_INET specifies that we will be using IPv4 addressing.
// SOCK_STREAM specifies that we will be using TCP to communicate.
socketDescriptor = socket(AF_INET, SOCK_STREAM, 0);
if (socketDescriptor < 0) {
perror("ERROR CREATING SOCKET");
exit(1);
}
else
printf("Socket created successfully.\n");
// Binding to the specified port. 0 if everything is fine, -1 if there was an error.
if (bind(socketDescriptor, (struct sockaddr*) & serverAddress, sizeof(struct sockaddr_in)) < 0) {
perror("ERROR BINDNING");
exit(1);
}
else
printf("Socket bound to %s:%s.\n", serverAddress.sin_addr.s_addr, serverAddress.sin_port);
The last if statement at the bottom is where the code fails. It should either print an error or print "Socket bound to 127.0.0.1:80", but neither happens. See an example here.
I'm lost for what to do.
A server socket won't show up in a netstat listing unless you call listen after binding the socket.
Also, you're using the %s format specifier in your printf after the bind call on serverAddress.sin_addr.s_addr and serverAddress.sin_port. These are not strings but integers. Using the wrong format specifier invokes undefined behavior and is likely causing your program to crash. Using the correct format specifier such as %d or %x will fix this.
if (bind(socketDescriptor, (struct sockaddr*)&serverAddress, sizeof(struct sockaddr_in)) < 0) {
perror("ERROR BINDNING");
exit(1);
}
else
// use %x to print instead
printf("Socket bound to %x:%x.\n", serverAddress.sin_addr.s_addr, serverAddress.sin_port);
if (listen(socketDescriptor, 3) < 0) {
perror("listen failed");
} else {
printf("socket is listening\n");
}
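If you would rather print the human-readable form, you can convert the values back before printing; a minimal sketch, assuming the same serverAddress as in the question (inet_ntop() and ntohs() come from <arpa/inet.h>, which is already included):
char boundAddress[INET_ADDRSTRLEN];
// inet_ntop() writes the dotted-quad text form of the IPv4 address into the buffer.
inet_ntop(AF_INET, &serverAddress.sin_addr, boundAddress, sizeof(boundAddress));
// ntohs() converts the port back from network byte order to host byte order.
printf("Socket bound to %s:%u.\n", boundAddress, (unsigned)ntohs(serverAddress.sin_port));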
I suspect this has an easy solution I'm overlooking, probably to do with the client or how this is set up.
Anyways, I'm trying to set up a simple Echo server/client to understand the basics of socket programming. I have a virtual machine running Linux Mint, and the host is running Windows 10. The virtual machine is set up to run the server C code, and Windows will be running the client.
I started off by writing the server code:
//Echo Server for UNIX: Using socket programming in C, a client sends a string
//to this server, and the server responds with the same string sent back to the client
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <string.h>
int main()
{
char stringBuffer[50]; //string buffer for reading incoming and resending
int listener, communicator, c; //store values returned by socket system call
if((listener = socket(AF_INET, SOCK_STREAM, 0)) == -1) //creates a new socket
puts("Could not create socket");
puts("Socket Created");
struct sockaddr_in servAddr, client; //structure from <netinet/in.h> for address of server
servAddr.sin_family = AF_INET; //addressing scheme set to IP
servAddr.sin_port = htons(8888); //server listens on port 8888
servAddr.sin_addr.s_addr = inet_addr("127.0.0.1"); //symbolic constant of server IP address
//binds the socket to the address of the current host and port# the server will run on
if (bind(listener, (struct sockaddr *) &servAddr, sizeof(servAddr)) < 0){
puts("Bind failed");
return 1;
}
puts("Bind Successful");
listen(listener, 5); //listens for up to 5 connections at a time
c = sizeof(struct sockaddr_in);
if ((communicator = accept(listener, (struct sockaddr*)&client, (socklen_t*)&c ))<0)
puts("accept failed");
puts("Connection Accepted");
//wait until someone wants to connect, then whatever is sent can be read from communicator, which can then be sent back
while(1){
bzero(stringBuffer, 50); //sets buffer to 0
read(communicator, stringBuffer, 50); //reads from communicator into buffer
write(communicator, stringBuffer, strlen(stringBuffer)+1); //returns back
}
return 0;
}
After that I tested it out by opening another terminal in the guest machine, typing "telnet localhost 8888", and entering whatever strings I wanted.
This test worked so now, onto my Windows machine to create the client side of the socket programming:
#include <winsock.h>
#include <stdio.h>
#include <string.h>
#pragma comment(lib,"ws2_32.lib") //Winsock Library
int main(int argc, char *argv[])
{
WSADATA wsadata; //variable for using sockets in windows
SOCKET sock; //socket variable for network commands
char sendString[50], recieveString[50]; //variables for sending and receiving messages to/from server
//check if WSA initialises correctly
if (WSAStartup(MAKEWORD(2,2), &wsadata) != 0)
printf("Error Code: %d", WSAGetLastError());
//creates new socket and saves into sock
if ((sock = socket(AF_INET, SOCK_STREAM, 0)) == INVALID_SOCKET)
printf("Could not create socket: %d", WSAGetLastError());
printf("Socket created\n");
struct sockaddr_in servAddr;
servAddr.sin_addr.s_addr = inet_addr("127.0.0.1"); //sets the IP address to the same machine as the server
servAddr.sin_family = AF_INET; //addressing scheme set to TCP/IP
servAddr.sin_port = htons(8888); //server address is on port 8888
//connects to device with specifications from servAddr
if (connect(sock, (struct sockaddr *)&servAddr, sizeof(servAddr)) < 0) {
printf("Connection Error %d\n", WSAGetLastError());
return 1;
}
printf("Connection Accepted\n");
while(1){
fgets(sendString, 50, stdin); //uses stdin to get input to put into sendString
//sends sendString to server using sock's properties
if (send(sock, sendString, strlen(sendString) + 1, 0) < 0); {
printf("Send Failed");
return 0;
}
//reads from server into recieveString
if ((recv(sock, recieveString, 50, 0)) == SOCKET_ERROR)
printf("Recieve Failed");
printf("%s", recieveString); //prints out recieveString
}
}
Now, with the server still running, when I try out the client side, I get the response "Connection Error" (from line 35). Having looked at both Unix and WinSock examples, I'm unsure as to why the connection would be failing. I suspect it might have something to do with the Windows-to-Linux-VM setup, but I'm not sure.
---UPDATE---
Having removed the accidental semicolon and added WSAGetLastError, it's showing an error code of 10061, which translates to:
"Connection refused.
No connection could be made because the target computer actively refused it. This usually results from trying to connect to a service that is inactive on the foreign host—that is, one with no server application running."
[after the 3rd edit:]
Sorry, I just re-read your question. The important thing is here:
The virtual machine is set up to run the server C code, and Windows will be running the client.
127.0.0.1 is an address that is always only local to an IP-enabled box. So your server is listening on the interface 127.0.0.1 local to the Linux VM, and the client tries to connect to 127.0.0.1 local to the Windows box. Those two interfaces are not the same. The result is the obvious one, namely that the client does not find anything to connect to.
127.0.0.1 (the so called "IPv4 local loopback interface") can only be used for connections local to exactly one box.
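If you want the Windows client to reach the server in the Linux VM, the usual approach is to make the server listen on all interfaces (or on the VM's LAN address) and point the client at the VM's actual IP. A minimal sketch of the server side, reusing the servAddr from the question with only the address choice changed:
servAddr.sin_family = AF_INET;
servAddr.sin_port = htons(8888);
// INADDR_ANY makes the server listen on every interface of the VM,
// so a client on the Windows host can connect using the VM's real IP address.
servAddr.sin_addr.s_addr = htonl(INADDR_ANY);
The client would then pass the VM's actual address to inet_addr() instead of 127.0.0.1, and the VM's networking has to be set up (bridged, or with a forwarded port) so the host can reach it at all.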
if (connect(sock, (struct sockaddr *)&servAddr, sizeof(servAddr)) < 0); {
printf("Connection Error");
return 1;
}
This is just a trivial syntax mistake. You are entering the block unconditionally. Remove the first semicolon.
However there is a much more important point to be made. When you get an error from a system call such as connect(), you must print the error. Not just some message of your own devising. Otherwise you don't know whether you simply have a bug, or a temporary problem, or a long-lasting problem, or a permanent problem.
Change the printf() to:
printf("Connect error %s\n", WSAGetLastError());
and then don't continue as though the error didn't happen.
Note that this applies to all system calls, specifically including socket(), bind(), listen(), connect(), accept(), recv(), send(), and friends.
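As a sketch of what that discipline can look like on the Winsock side (the helper name and wording are my own, not from the question's code; it needs <stdlib.h> for exit()):
// Report the real Winsock error for a failing call, then stop.
static void die(const char *what)
{
    printf("%s failed with error %d\n", what, WSAGetLastError());
    WSACleanup();
    exit(1);
}
A call site would then read, for example, if (connect(sock, (struct sockaddr *)&servAddr, sizeof(servAddr)) == SOCKET_ERROR) die("connect");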
I'm writing a TCP server in C, and I've found that something unusual happens once the listening fd gets the "Too many open files" error: the accept call doesn't block anymore and returns -1 all the time.
I also tried closing the listening fd and re-opening and re-binding it, but that didn't seem to work.
My questions are why accept keeps returning -1 in this situation, and what am I supposed to do to stop it and make the server be able to accept new connections after any old clients closed? (The socket is, of course, able to accept correctly again once some connections are closed.)
====== UPDATE: clarification ======
The problem occurs just because the number of active clients is more than the limit of open fds, so I don't close any of the accepted fds in the sample code, just to make it reproduce more quickly.
I added a timestamp to the output each time accept returns and slowed the connect frequency down to once every 2 seconds, and I found that the "Too many open files" error in fact occurs immediately after the latest successful accept. So I think that once the maximum number of fds is reached, each call to accept returns immediately with -1. (What I had expected is that accept would still block and only return -1 at the next incoming connect. This view of accept's behavior in this situation is my own theory, not from the man page; if it's wrong, please let me know.)
So, as for my second question, I think one solution is to stop calling accept until some connection has been closed.
I've also updated the sample code. Thanks for your help.
====== Sample codes ======
Here is how I test it. First set ulimit -n to a low value (like 16) and run the server program compiled from the following C source; then use the Python script to create several connections
/* TCP server; bind :5555 */
#include <stdio.h>
#include <unistd.h>
#include <time.h>
#include <stdlib.h>
#include <string.h>
#include <netdb.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#define BUFSIZE 1024
#define PORT 5555
void error(char const* msg)
{
perror(msg);
exit(1);
}
int listen_port(int port)
{
int parentfd; /* parent socket */
struct sockaddr_in serveraddr; /* server's addr */
int optval; /* flag value for setsockopt */
parentfd = socket(AF_INET, SOCK_STREAM, 0);
if (parentfd < 0) {
error("ERROR opening socket");
}
optval = 1;
setsockopt(parentfd, SOL_SOCKET, SO_REUSEADDR,
(const void *)&optval , sizeof(int));
bzero((char *) &serveraddr, sizeof(serveraddr));
serveraddr.sin_family = AF_INET;
serveraddr.sin_addr.s_addr = htonl(INADDR_ANY);
serveraddr.sin_port = htons((unsigned short)port);
if (bind(parentfd, (struct sockaddr *) &serveraddr, sizeof(serveraddr)) < 0) {
error("ERROR on binding");
}
if (listen(parentfd, 5) < 0) {
error("ERROR on listen");
}
printf("Listen :%d\n", port);
return parentfd;
}
int main(int argc, char **argv)
{
int parentfd; /* parent socket */
int childfd; /* child socket */
int clientlen; /* byte size of client's address */
struct sockaddr_in clientaddr; /* client addr */
int accept_count; /* times of accept called */
accept_count = 0;
parentfd = listen_port(PORT);
clientlen = sizeof(clientaddr);
while (1) {
childfd = accept(parentfd, (struct sockaddr *) &clientaddr, (socklen_t*) &clientlen);
printf("accept returns ; count=%d ; time=%u ; fd=%d\n", accept_count++, (unsigned) time(NULL), childfd);
if (childfd < 0) {
perror("error on accept");
/* the following 2 lines try to close the listening fd and re-open it */
// close(parentfd);
// parentfd = listen_port(PORT);
// the following line let the program exit at the first error
error("--- error on accept");
}
}
}
The Python program to create connections
import time
import socket
def connect(host, port):
s = socket.socket()
s.connect((host, port))
return s
if __name__ == '__main__':
socks = []
try:
try:
for i in xrange(100):
socks.append(connect('127.0.0.1', 5555))
print ('connect count: ' + str(i))
time.sleep(2)
except IOError as e:
print ('error: ' + str(e))
print ('stop')
while True:
time.sleep(10)
except KeyboardInterrupt:
for s in socks:
s.close()
why accept keeps returning -1 in this situation
Because you've run out of file descriptors, just like the error message says.
what am I supposed to do to stop it and make the server be able to accept new connections after any old clients closed?
Close the clients. The problem is not accept() returning -1, it is that you aren't closing accepted sockets once you're finished with them.
Closing the listening socket isn't a solution. It's just another problem.
EDIT By 'finished with them' I mean one of several things:
They have finished with you, which is shown by recv() returning zero.
You have finished with them, e.g. after sending a final response.
When you've had an error sending or receiving to/from them other than EAGAIN/EWOULDBLOCK.
When you've had some other internal fatal error that prevents you dealing further with that client, for example receiving an unparseable request, or some other fatal application error that invalidates the connection or the session, or the entire client for that matter.
In all these cases you should close the accepted socket.
The answer of EJP is correct, but it does not tell you how to deal with the situation. What you have to do is actually do something with the sockets that accept returns. Simply calling close on them means you won't receive anything, of course, but it would deal with the resource depletion problem. What you have to do for a correct implementation is start receiving on the accepted sockets and keep receiving until you receive 0 bytes. Receiving 0 bytes is an indication that the peer is done using its side of the socket. That is your trigger to call close on the socket as well and deal with the resource problem.
You don't have to stop listening. That would stop your server from being able to process new requests and that is not the problem here.
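A minimal sketch of that handling, using the childfd and BUFSIZE names from the question's code:
char buf[BUFSIZE];
ssize_t n;
// Keep reading until the peer closes its end (read() returns 0) or an error occurs.
while ((n = read(childfd, buf, sizeof(buf))) > 0) {
    /* ... process n bytes ... */
}
// Either way, release the descriptor so the process does not run into the fd limit.
close(childfd);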
The solution I implemented here was to check the value of the new (accepted) fd, and if that value was equal to or higher than the allowed server capacity, send a "busy" message and close the new connection.
This solution is quite effective and allows you to inform your clients about the server's status.
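A rough sketch of that idea, reusing parentfd from the question and a hypothetical MAX_CLIENTS limit (the constant and the busy text are illustrative, not from the original answer):
#define MAX_CLIENTS 1000   /* hypothetical capacity limit */
int childfd = accept(parentfd, NULL, NULL);
if (childfd >= MAX_CLIENTS) {
    // The fd number is used as a crude load indicator: refuse and close.
    const char busy[] = "server busy, try again later\n";
    write(childfd, busy, sizeof(busy) - 1);
    close(childfd);
}
Whether the fd number is a good proxy for load is debatable, but it matches the approach described above.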
Everything compiles without errors or warnings. I start the program, visit localhost:8080, and the program stops - great. But when I try to run the program again, I get the "Error: unable to bind" message. Why?
Code:
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define PORT 8080
#define PROTOCOL 0
#define BACKLOG 10
int main()
{
int fd;
int connfd;
struct sockaddr_in addr; // For bind()
struct sockaddr_in cliaddr; // For accept()
socklen_t cliaddrlen = sizeof(cliaddr);
// Open a socket
fd = socket(AF_INET, SOCK_STREAM, PROTOCOL);
if (fd == -1) {
printf("Error: unable to open a socket\n");
exit(1);
}
// Create an address
//memset(&addr, 0, sizeof addr);
addr.sin_addr.s_addr = INADDR_ANY;
addr.sin_family = AF_INET;
addr.sin_port = htons(PORT);
if ((bind(fd, (struct sockaddr *)&addr, sizeof(addr))) == -1) {
printf("Error: unable to bind\n");
printf("Error code: %d\n", errno);
exit(1);
}
// List for connections
if ((listen(fd, BACKLOG)) == -1) {
printf("Error: unable to listen for connections\n");
printf("Error code: %d\n", errno);
exit(1);
}
// Accept connections
connfd = accept(fd, (struct sockaddr *) &cliaddr, &cliaddrlen);
if (connfd == -1) {
printf("Error: unable to accept connections\n");
printf("Error code: %d\n", errno);
exit(1);
}
//read(connfd, buffer, bufferlen);
//write(connfd, data, datalen);
// close(connfd);
return 0;
}
Use the SO_REUSEADDR socket option before calling bind(), in case you have old connections in TIME_WAIT or CLOSE_WAIT state.
Uses of SO_REUSEADDR?
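A minimal sketch of that, using the fd from the question's code and placed just before the bind() call:
int yes = 1;
// Allow bind() to reuse the port even if an old connection is still in TIME_WAIT.
if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes)) == -1) {
    printf("Error: unable to set SO_REUSEADDR\n");
    printf("Error code: %d\n", errno);
    exit(1);
}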
In order to find out why, you need to print the error; the most likely reason is that another program is already using the port (netstat can tell you).
Your print problem is that C format strings use %, not &. Replace the character in your print string, and it should work.
First, have a look into the following example:
Socket Server Example
Second: the reason why the second bind fails is that, after your application exited, the socket stays bound for a number of seconds or even minutes.
Check with the "netstat" command if the connection is still open.
Try putting the following code just before bind()
int opt = 1;
if (setsockopt(<Master socket FD>, SOL_SOCKET, SO_REUSEADDR, (char *)&opt, sizeof(opt)) < 0) {
    perror("setsockopt");
    exit(EXIT_FAILURE);
}
if (setsockopt(<Master socket FD>, SOL_SOCKET, SO_REUSEPORT, (char *)&opt, sizeof(opt)) < 0) {
    perror("setsockopt");
    exit(EXIT_FAILURE);
}
Reason behind socket bind error 98:
A TCP connection is identified by a 4-tuple (server IP, server port, client IP, client port).
When two sockets would end up with the same tuple, error 98 (EADDRINUSE) is raised.
When you terminate the code on the server side, you are ending the connection with the TCP client.
Now the server is the one that sends the FIN to the client and goes into the TIME_WAIT state.
In TIME_WAIT, the server keeps the connection around so it can retransmit the final ACK to the client in case the earlier one was lost.
The timeout depends on the implementation; it can be anywhere from 30 seconds to 2 minutes or more.
If you run the code again while the old connection is still in TIME_WAIT, the port is already in use. This matters because a server listens on a fixed port, which is not the case for a client.
That is why, in practice, servers usually do not send the FIN first; it is the client that sends the FIN to end the connection.
Even if the client connects again before the TIME_WAIT timeout expires, it will be able to connect, because it now uses a different ephemeral port and so the socket tuple changes.
If it is done the other way around, with the server sending the FIN first, no new bind on that port would succeed until the timeout ends.
Why is the port busy?
Because in TIME_WAIT the side that sent the FIN first must remain able to retransmit the final ACK until the timeout expires.
I am trying to write a web server that listens on both IPv4 and IPv6 addresses. However, the code I originally wrote did not work. Then I found out that the IPv6 structures work for both IPv4 and IPv6, so now I use the IPv6 structures; however, only the IPv4 addresses work. This post, why can't i bind ipv6 socket to a linklocal address, said to add server.sin6_scope_id = 5;, so I did that, but it still does not accept IPv6 telnet connections. Any help would be greatly appreciated because I am thoroughly stumped.
Thanks!
My code is below:
void initialize_server(int port, int connections, char* address)
{
struct sockaddr_in6 socket_struct;
/*Creates the socket*/
if ((sock_fd = socket(AF_INET, SOCK_STREAM, 0)) < 0)
{
syslog(LOG_ERR, "%s\n", strerror(errno));
exit(EXIT_FAILURE);
}/*Ends the socket creation*/
/*Populates the socket address structure*/
socket_struct.sin6_family = AF_INET6;
if(address == NULL)
socket_struct.sin6_addr=in6addr_any;
else
{
inet_pton(AF_INET6, "fe80::216:3eff:fec3:3c22", (void *)&socket_struct.sin6_addr.s6_addr);
}
socket_struct.sin6_port =htons(port);
socket_struct.sin6_scope_id = 0;
if (bind(sock_fd, (struct sockaddr*) &socket_struct, sizeof(socket_struct)) < 0)
{
syslog(LOG_ERR, "%s\n", strerror(errno));
exit(EXIT_FAILURE);
}//Ends the binding.
if (listen(sock_fd, connections) <0)
{
syslog(LOG_ERR, "%s\n", strerror(errno));
exit(EXIT_FAILURE);
}//Ends the listening function
}//ends the initialize server function.
Saying "server.sin6_scope_id = 5;" is arbitrary. I fought with this awhile myself and discovered you need to use the actual scope of the actual interface you want to bind on. It can be found with an obsure but useful little function.
#include <net/if.h>
server.sin6_scope_id=if_nametoindex("eth0");
Of course, hardcoding it to one particular adapter is bad, shortsighted coding. A more complete solution is to loop through all of them and match on the IP address you're binding to. The following is not perfect in that it doesn't account for quirks like non-canonical addresses or two adapters with the same IP, etc. But overall, this sample function works great and should get you started.
#include <string.h> // strcmp
#include <net/if.h> // if_nametoindex()
#include <ifaddrs.h> // getifaddrs()
#include <netdb.h> // NI_ constants
// returns 0 on error
unsigned getScopeForIp(const char *ip){
struct ifaddrs *addrs;
char ipAddress[NI_MAXHOST];
unsigned scope=0;
// walk over the list of all interface addresses
getifaddrs(&addrs);
for(struct ifaddrs *addr=addrs;addr;addr=addr->ifa_next){
if (addr->ifa_addr && addr->ifa_addr->sa_family==AF_INET6){ // only interested in ipv6 ones
getnameinfo(addr->ifa_addr,sizeof(struct sockaddr_in6),ipAddress,sizeof(ipAddress),NULL,0,NI_NUMERICHOST);
// result actually contains the interface name, so strip it
for(int i=0;ipAddress[i];i++){
if(ipAddress[i]=='%'){
ipAddress[i]='\0';
break;
}
}
// if the ip matches, convert the interface name to a scope index
if(strcmp(ipAddress,ip)==0){
scope=if_nametoindex(addr->ifa_name);
break;
}
}
}
freeifaddrs(addrs);
return scope;
}
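Hypothetical usage, assuming the socket_struct and the link-local address from the question:
// Look up the scope index of the interface that owns the address we bind to.
socket_struct.sin6_scope_id = getScopeForIp("fe80::216:3eff:fec3:3c22");
if (socket_struct.sin6_scope_id == 0)
{
    syslog(LOG_ERR, "no interface owns that link-local address\n");
    exit(EXIT_FAILURE);
}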
You're creating a socket in the AF_INET family, but then trying to bind it to an address in the AF_INET6 family. Switch to using AF_INET6 in your call to socket().
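In initialize_server() that would look something like the following (only the address family changes):
/* Create an IPv6 socket so it matches the sockaddr_in6 passed to bind(). */
if ((sock_fd = socket(AF_INET6, SOCK_STREAM, 0)) < 0)
{
    syslog(LOG_ERR, "%s\n", strerror(errno));
    exit(EXIT_FAILURE);
}
On most systems an AF_INET6 socket bound to in6addr_any will also accept IPv4 clients as IPv4-mapped addresses, unless the IPV6_V6ONLY option is set.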
Suppose the listening socket passed to accept has non-default options set on it with setsockopt. Are these options (some or all of them?) inherited by the resulting file descriptors for accepted connections?
Several of the socket options are handled at lower levels of the system, while most of them can be set using setsockopt() (reference: man setsockopt). Since you give only POSIX on Linux, in general, as your scope: accept() (reference: man accept) has a certain amount of discretion over which socket options are inherited from the listening fd and which are not.
accept() does not modify the original socket passed to it as an argument. The new socket returned by accept() does not inherit file status flags such as O_NONBLOCK and O_ASYNC from the listening socket.
So, instead of relying on the inheritance or non-inheritance of the listening socket's properties (which is bound to vary across implementations and versions), the accepted socket should be explicitly set with the desired socket options. (Best practice.)
The man pages and the implementation code on your machine are the most relevant specification for accept()'s behavior; there is no common or standard specification that holds across multiple variants of Linux.
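As a sketch of that best practice (the listenfd name and the options chosen here are illustrative, not from the question):
#include <fcntl.h>   /* for fcntl() and O_NONBLOCK */
/* After accept(), explicitly re-apply every non-default property you rely on. */
int connfd = accept(listenfd, NULL, NULL);
if (connfd >= 0) {
    int keepalive = 1;
    if (setsockopt(connfd, SOL_SOCKET, SO_KEEPALIVE, &keepalive, sizeof(keepalive)) < 0)
        perror("setsockopt(SO_KEEPALIVE)");
    /* File status flags such as O_NONBLOCK are not inherited either. */
    int flags = fcntl(connfd, F_GETFL, 0);
    if (flags >= 0)
        fcntl(connfd, F_SETFL, flags | O_NONBLOCK);
}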
No, they're not necessarily inherited. Try this sample, which sets the receive buffer size (SO_RCVBUF) on the initial socket to a non-default value and then compares the result with the inherited socket. Run this code, which listens on TCP port 12345, and then connect to it from any other program.
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
void die(const char *f)
{
printf("%s: %s\n", f, strerror(errno));
exit(1);
}
int main(void)
{
int s = socket(AF_INET, SOCK_STREAM, 0);
if(s < 0)
die("socket");
int rcvbuf;
socklen_t optlen = sizeof(rcvbuf);
if(getsockopt(s, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &optlen) < 0)
die("getsockopt (1)");
printf("initial rcvbuf: %d\n", rcvbuf);
rcvbuf *= 2;
if(setsockopt(s, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
die("setsockopt");
printf("set rcvbuf to %d\n", rcvbuf);
struct sockaddr_in sin;
memset(&sin, 0, sizeof(sin));
sin.sin_family = AF_INET;
sin.sin_port = htons(12345);
sin.sin_addr.s_addr = INADDR_ANY;
if(bind(s, (struct sockaddr *)&sin, sizeof(sin)) < 0)
die("bind");
if(listen(s, 10) < 0)
die("listen");
struct sockaddr_in client_addr;
socklen_t addr_len = sizeof(client_addr);
int s2 = accept(s, (struct sockaddr *)&client_addr, &addr_len);
if(s2 < 0)
die("accept");
printf("accepted connection\n");
optlen = sizeof(rcvbuf);
if(getsockopt(s2, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &optlen) < 0)
die("getsockopt (2)");
printf("new rcvbuf: %d\n", rcvbuf);
return 0;
}
Result on a machine running Linux 3.0.0-21-generic:
initial rcvbuf: 87380
set rcvbuf to 174760
accepted connection
new rcvbuf: 262142
Socket options are the place where things go that don't fit elsewhere, so it's expected for different socket options to have different inheritance behaviour. Whether or not a socket option is inherited is decided on a case-by-case basis.
The answer is No for POSIX conforming implementations, as I read it.
From the POSIX-2017 spec for accept():
The accept() function shall extract the first connection on the queue of pending connections, create a new socket with the same socket type protocol and address family as the specified socket, and allocate a new file descriptor for that socket.
Note that it is explicitly a "new socket", not a "full or partial copy of the socket being unqueued", so it should have no options different from the defaults for that socket type and address family. While the copy behavior may be desirable, it is left as an extension interface a platform may provide; I haven't seen any platform that implements one, however, so it could be added to the standard. It is therefore up to the application, not the interface, to use getsockopt()/setsockopt() to copy any attributes that differ from the defaults from the listening socket to the returned socket, before any use of that socket to send or receive data.
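A sketch of what that application-side copying can look like, using hypothetical listen_fd/conn_fd names and SO_RCVBUF as the example option:
/* Copy one non-default option from the listening socket to the accepted one. */
int rcvbuf;
socklen_t len = sizeof(rcvbuf);
if (getsockopt(listen_fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) == 0)
    setsockopt(conn_fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));
/* Repeat for every option that was changed on listen_fd before accept(). */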