My program establishes an HTTP server. After calling listen() and accept(), how do I read a GET request like this:
GET /path HTTP/1.1\r\n\r\n
Assuming you are using a blocking socket, you first need to know whether data is available to read; you can do that with the select() API.
accept() returns a new socket FD for the client connection, and it is on that FD that you receive the request data.
Accept code example
struct sockaddr_in client_addr;
socklen_t addr_len;
int new_fd;

addr_len = sizeof(struct sockaddr_in);
new_fd = accept(socket_fd, (struct sockaddr *)&client_addr, &addr_len);
Select and read example
fd_set read_fds;
struct timeval timeout;
char buffer[4096];
ssize_t rc;
int ret_value;

FD_ZERO(&read_fds);
FD_SET(new_fd, &read_fds);        /* watch the accepted connection */
timeout.tv_sec = 0;
timeout.tv_usec = 100;            /* 100 microseconds */
ret_value = select(new_fd + 1, &read_fds, NULL, NULL, &timeout);
if (ret_value > 0)
{
    /* data is available, read it now */
    rc = recv(new_fd, buffer, sizeof(buffer), 0);
}
else if (ret_value == 0)
{
    /* timeout expired, no data yet */
}
else if (errno != EINTR)
{
    /* real error; EINTR only means the call was interrupted and can be retried */
}
Note: if you need better performance or many concurrent connections, it is worth researching epoll.
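To come back to the original question of actually reading the GET request, here is a minimal sketch (the read_request function name, buffer sizes and includes are assumptions, not part of the original code). It keeps calling recv() until the blank line that terminates the headers ("\r\n\r\n") has arrived, then parses the request line.

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

/* Minimal sketch: read until the end-of-headers marker, then parse the
 * request line.  new_fd is the connected socket returned by accept(). */
static int read_request(int new_fd)
{
    char req[8192];
    size_t used = 0;

    while (used < sizeof(req) - 1) {
        ssize_t n = recv(new_fd, req + used, sizeof(req) - 1 - used, 0);
        if (n <= 0)                           /* error or peer closed the connection */
            return -1;
        used += (size_t)n;
        req[used] = '\0';
        if (strstr(req, "\r\n\r\n") != NULL)  /* full header block received */
            break;
    }

    /* The request line looks like: GET /path HTTP/1.1 */
    char method[8], path[1024];
    if (sscanf(req, "%7s %1023s", method, path) == 2)
        printf("method=%s path=%s\n", method, path);
    return 0;
}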
I would like to manage the sockets so that I only accept each connection once, then listen on each socket's file descriptor and intercept the messages that are sent.
After a lot of trouble, I don't understand why my code systematically goes through the FD_ISSET() check even though I put the file descriptor in my fd_set.
From what I've seen in other topics, it seems that you have to accept the connection systematically? Except that if I do that, my code goes into an infinite wait.
int main(int argc, char **argv) {
    int socket = 0;
    // My client 1 socket
    int pOneSocket = 0;
    // My client 2 socket
    int pTwoSocket = 0;
    struct sockaddr_in serv_addr;
    int test = 0;

    // Create the server socket to listen
    createSocket(&socket, 1337, &serv_addr);

    fd_set readfds;
    FD_ZERO(&readfds);
    // Add the server socket to readfds
    FD_SET(socket, &readfds);

    while (1) {
        // Wait for changes on the server socket
        test = select(1024, &readfds, NULL, NULL, NULL);
        // If the socket fd is not in readfds, then accept a new connection
        if (FD_ISSET(test, &readfds) == 0)
            accept_connection(pOneSocket ? &pTwoSocket : &pOneSocket,
                              &socket, &serv_addr, &readfds);
        // Handle the connection and retrieve commands from the file descriptor
        handleConnection(&test, &pOneSocket);
    }
    return 0;
}
In my handleConnection function, I have all the code related to retrieving commands:
void accept_connection(int *new_socket, int *socket, struct sockaddr_in *address, fd_set *readfds) {
    int addrlen = sizeof(*address);

    if ((*new_socket = accept(*socket, (struct sockaddr *) address, (socklen_t *) &addrlen)) < 0) {
        fprintf(stderr, "\n Accept failed \n");
        exit(84);
    }
    FD_SET(*new_socket, readfds);
}
Currently, my code works to accept a connection, but gets stuck on the first command sent from the client.
select() modifies the fd_set object(s) you pass to it, so each loop iteration needs to make a copy of them, not pass the original fd_sets you're using to track your connections. Something like this:
FD_SET(socket, &readfds);

while (1) {
    // Wait for changes on the sockets we want to read
    fd_set ready = readfds;
    test = select(1024, &ready, NULL, NULL, NULL);

    // If the server socket is ready, then accept a new connection
    if (FD_ISSET(socket, &ready))
        accept_connection(pOneSocket ? &pTwoSocket : &pOneSocket,
                          &socket, &serv_addr, &readfds);

    // ... check the other fds that have previously been accepted to see if they're ready
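As a rough sketch of that last step (the buf buffer and the direct read() calls are illustrative; in the original code this is where handleConnection() would be invoked for the ready client):

    // Sketch: check each previously accepted client socket; in this program
    // a value of 0 means "no client connected yet".
    char buf[512];
    if (pOneSocket != 0 && FD_ISSET(pOneSocket, &ready)) {
        ssize_t n = read(pOneSocket, buf, sizeof(buf));   // handle the command in buf
        if (n <= 0) {                                     // client went away
            close(pOneSocket);
            FD_CLR(pOneSocket, &readfds);
            pOneSocket = 0;
        }
    }
    if (pTwoSocket != 0 && FD_ISSET(pTwoSocket, &ready)) {
        ssize_t n = read(pTwoSocket, buf, sizeof(buf));
        if (n <= 0) {
            close(pTwoSocket);
            FD_CLR(pTwoSocket, &readfds);
            pTwoSocket = 0;
        }
    }
}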
I am using UNIX domain datagram sockets to send records from multiple clients to a single server in a multithreaded program. Everything is done within one process; I'm sending records from multiple threads to a single thread that acts as the server. All threads are assigned to separate cores using their affinity masks.
My problem is when I use select() to retrieve records from client sockets that have records in the socket buffer. I am using the same basic setup I used with a single client socket (and it worked in that context), but now it hangs (apparently it blocks) when I call recvfrom. That's surprising because the select() function has already identified the socket as available for reading.
int select_clientsockets(int64_t srvrfd, int64_t * claddr, int fds_array[], int fd_count, void * recvbuf){
    int fds_ready;
    int abc;
    int64_t cli_addr;

    FD_ZERO(&fdset);
    FD_SET(0, &fdset);
    socklen_t * len = (socklen_t * ) sizeof(struct sockaddr_un);
    fds_ready = select(3, &fdset, NULL, NULL, 0);
    for (int i = 0; i < fd_count; i++){
        fds_array[i] = 0;
        if (FD_ISSET(i, &fdset)) {
            fds_array[i] = 1;
            cli_addr = claddr[i];
            server_receive(srvrfd, recvbuf, 720, cli_addr);
        }
    }
    return 0;
}
This select function calls server_receive() for the clients where select() says data are available:
int64_t server_receive(int64_t sfd, void * buf, int64_t msgLen, int64_t claddr)
{
    socklen_t * len = (socklen_t * ) sizeof(struct sockaddr_un);
    int numBytes = recvfrom(sfd, buf, BUF_SIZE, 0, (struct sockaddr *) claddr, len);
    if (numBytes == -1)
        return 0;
    return numBytes;
}
The client socket address is taken from the 3-element array "claddr" (for 3 client sockets) where the corresponding position for each client socket is filled in when the socket is created. At socket creation I also call FD_SET to set the client address into the fd_set. I think I should get the client socket address from fd_set instead, BUT they're both the same pointer value so I don't know why that would make a difference. For internet domain datagram sockets we can use getpeername() but I don't know if there is an analogous function for UNIX domain sockets -- or even if that's the problem.
Thanks very much for any help with this.
UPDATE:
Client fds are added to the global fdset struct on socket creation:
int64_t * create_socket_client(struct sockaddr_un claddr, int64_t retvals[])
{
    int sfd, j;
    size_t msgLen;
    ssize_t numBytes;
    char resp[BUF_SIZE];

    retvals[0] = 0;
    retvals[1] = 0;

    sfd = socket(AF_UNIX, SOCK_DGRAM, 0);
    if (sfd == -1)
        return retvals;

    memset(&claddr, 0, sizeof(struct sockaddr_un));
    claddr.sun_family = AF_UNIX;
    snprintf(claddr.sun_path, sizeof(claddr.sun_path), "/tmp/ud_ucase_cl.%ld", (long) getpid());

    FD_SET(sfd, &fdset);
    retvals[0] = sfd;
    retvals[1] = (int64_t)&claddr;
    return retvals;
}
FD_ZERO(&fdset);
FD_SET(0, &fdset);
socklen_t * len = (socklen_t * ) sizeof(struct sockaddr_un);
fds_ready = select(3, &fdset, NULL, NULL, 0);
for (int i = 0; i < fd_count; i++){
    fds_array[i] = 0;
    if (FD_ISSET(i, &fdset)) {
Your code empties fdset and then adds only 0 to it. So when you call select() and pass it fdset, you are asking it only to check socket 0 for readiness.
You later check whether sockets 0 through fd_count - 1 are in fdset, but only 0 can possibly be set, because it's the only one you asked about.
Where is the list of sockets you want to check for readiness?
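As a generic sketch of what that looks like (fds_to_watch[] and nfds_to_watch are hypothetical names for whatever list of descriptors you actually want to monitor): the fd_set has to be rebuilt before every select() call, because select() overwrites it, and the first argument must be one more than the highest descriptor in the set.

fd_set readset;
int maxfd = -1;

FD_ZERO(&readset);
for (int i = 0; i < nfds_to_watch; i++) {
    FD_SET(fds_to_watch[i], &readset);        /* monitor this descriptor */
    if (fds_to_watch[i] > maxfd)
        maxfd = fds_to_watch[i];
}

int ready = select(maxfd + 1, &readset, NULL, NULL, NULL);
if (ready > 0) {
    for (int i = 0; i < nfds_to_watch; i++) {
        if (FD_ISSET(fds_to_watch[i], &readset)) {
            /* this descriptor has data; recvfrom() will not block here */
        }
    }
}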
I have a while(1) loop that uses recvfrom to get data that has been sent to a domain socket from another process (P2).
The while loop needs to do 2 things, firstly listen for incoming data from P2, and secondly run another function checkVoltage().
So it runs a little something like this:
while(true)
{
listenOnSocket() /*listens for 100 u seconds*/
checkVoltage();
}
My issue is this: the listenOnSocket() function uses recvfrom to check for input from another process. It spends 100 usecs listening, then times out and proceeds to run the checkVoltage() function, so it spends something like 99% of the time inside listenOnSocket(). The problem is that if P2 sends information to the socket while checkVoltage() is running, it results in an error stating: sending datagram message: No such file or directory.
Is there a way to have this loop check for any data that has been sent to the socket previously? That way if P2 sends data during the checkVoltage() function, it will not result in an error.
Thanks.
EDIT:
So the listenOnSocket() function creates a socket with the name FireControl. When I run P1 (the program that receives data from P2), the FireControl file vanishes for a split second and then reappears. If P2 sends data to P1 during this short period, it results in the error mentioned up top.
So I guess this means I should separate the creation of the socket from the recvfrom function, because during the short period while the new socket is being re-created it does not exist - if that makes sense.
I'm a dope, I should've separated them in the first place!
EDIT2: Here is listenOnSocket():
command listenOnSocket(int timeout, float utimeout) /*Returns null payload when no input is detected*/
{
    command payload;
    int sock;
    socklen_t* length;
    struct sockaddr_un name;
    char buf[1024];
    struct timeval tv;

    tv.tv_sec = timeout;
    tv.tv_usec = utimeout;

    /* Create socket from which to read. */
    sock = socket(AF_UNIX, SOCK_DGRAM, 0);
    if (sock < 0)
    {
        perror("opening datagram socket");
        payload = nullPayload;
    }

    /* Create name. */
    name.sun_family = AF_UNIX;
    strcpy(name.sun_path, NAME);
    unlink(name.sun_path);

    /* Bind the UNIX domain address to the created socket */
    if (bind(sock, (struct sockaddr *) &name, sizeof(struct sockaddr_un)))
    {
        perror("binding name to datagram socket\n");
        payload = nullPayload;
    }

    /*Socket has been created at NAME*/
    if (timeout != 0 || utimeout != 0)
    {
        setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv, sizeof(struct timeval));
    }
    else
    {
        tv.tv_sec = 0;
        tv.tv_usec = 0;
        setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv, sizeof(struct timeval));
    }

    /* Read from the socket */
    if (recvfrom(sock, &payload, sizeof(command), 0, (struct sockaddr *)&name, &length) < 0) /*Less than zero results from a timeout*/
    {
        payload = nullPayload;
    }

    unlink(NAME);
    return payload;
}
and here is the loop that calls it:
while (1)
{
    buffer = getADCValue();
    checkVoltage();
    temp = listenOnSocket(0, 100); /*Look for a new command*/
    doStuffWithTempIfItHasChanged();
}
}
I guess this means I should separate the creation of the socket from the recvfrom function, because during the short period while the new socket is being re-created it does not exist
That is correct. If you open and close the socket every time in your listenOnSocket() function, (a) you will lose any queued datagrams that you didn't read, and (b) sends attempted while the socket is closed will, of course, fail: there is nothing for them to send to.
Once you've bound the socket, the datagrams will accumulate in a buffer and can be read later using recvfrom. That said, if the buffer overflows, messages may be discarded.
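A minimal sketch of that split is below; NAME, command and nullPayload come from the question, while setupSocket()/pollSocket() are hypothetical helper names. The socket is created and bound once, so the socket file never disappears between receives.

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/un.h>

/* Create and bind the datagram socket once, at program start. */
static int setupSocket(void)
{
    struct sockaddr_un name;
    int sock = socket(AF_UNIX, SOCK_DGRAM, 0);
    if (sock < 0)
        return -1;

    memset(&name, 0, sizeof(name));
    name.sun_family = AF_UNIX;
    strcpy(name.sun_path, NAME);
    unlink(name.sun_path);                      /* remove any stale socket file */
    if (bind(sock, (struct sockaddr *)&name, sizeof(name)) < 0) {
        close(sock);
        return -1;
    }

    struct timeval tv = { 0, 100 };             /* short receive timeout */
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
    return sock;
}

/* Called from the while(1) loop; only receives, never re-creates the socket. */
static command pollSocket(int sock)
{
    command payload;
    if (recvfrom(sock, &payload, sizeof(payload), 0, NULL, NULL) < 0)
        return nullPayload;                     /* timeout or error */
    return payload;
}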
I'm trying to proxy HTTP requests to another HTTP server. The hostname and port number of the upstream HTTP server, respectively, are server_proxy_hostname and server_proxy_port.
The first step is to do a DNS lookup of the server_proxy_hostname.
Secondly, I create a network socket and connect it to the IP address I got from DNS.
Last step: wait for new data on both sockets. When data arrives, I immediately read it into a buffer and then write it to the other socket. This maintains two-way communication between the HTTP client and the upstream HTTP server.
If either socket is closed, I close the other one.
The problem right now is that it is not working. (It times out)
I believe that the way I'm getting my IP addresses is correct, so the problem has to be in either my while loop (it never terminates?) or the part where I called connect(). I tried adding error termination for read() and select() but that didn't work either.
void handle_proxy_request(int fd) {
    char * targetHostName = server_proxy_hostname;
    int targetPort = server_proxy_port;

    struct hostent *info;
    info = gethostbyname(targetHostName);
    struct in_addr ** ipAddresslist;
    ipAddresslist = (struct in_addr **) (info -> h_addr_list);
    struct in_addr * ipAddress = ipAddresslist[0];
    printf("ip address is %s\n", inet_ntoa(*ipAddress));

    /*ip for in_addr struct*/
    unsigned long ip = inet_addr(inet_ntoa(*ipAddress));
    struct in_addr addressIp = {ip};
    struct sockaddr_in address = {PF_INET, htons(targetPort), addressIp};

    int socket_num = socket(PF_INET, SOCK_STREAM, 0);
    connect(socket_num, (struct sockaddr *)&address, sizeof(address));

    /*portion for select()*/
    char buf[10000];
    int nfds = (fd > socket_num) ? fd : socket_num;
    fd_set readSet;
    fd_set writeSet;

    while (1) {
        FD_ZERO(&readSet);
        FD_ZERO(&writeSet);
        FD_SET(fd, &readSet);
        FD_SET(socket_num, &readSet);
        FD_SET(fd, &writeSet);
        FD_SET(socket_num, &writeSet);

        int selectReturn = select(nfds, &readSet, &writeSet, NULL, NULL);
        if (selectReturn == -1){
            break;
        }
        if (FD_ISSET(fd, &readSet)){
            int readStat = read(fd, buf, sizeof(buf));
            int status = write(socket_num, buf, sizeof(buf));
            if (status == -1 || readStat == -1){
                close(socket_num);
                close(fd);
                break;
            }
            /*memset(buf, 0, sizeof(buf));*/
        }
        if (FD_ISSET(socket_num, &readSet)){
            int readStat2 = read(socket_num, buf, sizeof(buf));
            int status2 = write(fd, buf, sizeof(buf));
            if (status2 == -1 || readStat2 == -1){
                close(socket_num);
                close(fd);
                break;
            }
        }
    }
}
int socket_num = socket(PF_INET, SOCK_STREAM, 0);
Unchecked. Check this for errors.
connect(socket_num, (struct sockaddr *)&address, sizeof(address));
Ditto.
FD_SET(fd, &writeSet);
FD_SET(socket_num, &writeSet);
Remove. This is poor practice. Sockets are almost always ready to write, so you shouldn't use the writeSet unless you have previously encountered a case where a socket wasn't ready to write, i.e. write() returned -1 with errno == EAGAIN/EWOULDBLOCK.
int readStat = read(fd, buf, sizeof(buf));
int status = write(socket_num, buf, sizeof(buf));
That should be
int status = write(socket_num, buf, readStat);
in both socket cases.
and it should be preceded by tests for readStat == 0, indicating end of stream, and readStat == -1, indicating an error, which you should trace.
You can't get a timeout in this code, as you haven't set any.
There's a wrinkle. If you get end of stream reading a socket you should shutdown the other socket for output. If you get end of stream on a socket you've already shutdown for output, close them both. This correctly propagates FINs in both directions at the correct times.
When you get an error from any system call, you must immediately call perror() or log it with the result of strerror(), before you call any other system call.
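Putting those points together, one direction of the forwarding loop might look roughly like this (a sketch only; the variable names follow the question, and the fd_eof/socket_eof flags for tracking shutdown state are assumptions):

if (FD_ISSET(fd, &readSet)) {
    ssize_t readStat = read(fd, buf, sizeof(buf));
    if (readStat == -1) {
        perror("read from client");             /* log errno immediately */
        close(fd);
        close(socket_num);
        break;
    }
    if (readStat == 0) {                        /* end of stream from the client */
        shutdown(socket_num, SHUT_WR);          /* propagate the FIN upstream */
        fd_eof = 1;
        if (socket_eof) {                       /* both directions finished */
            close(fd);
            close(socket_num);
            break;
        }
    } else {
        ssize_t status = write(socket_num, buf, (size_t)readStat);
        if (status == -1) {
            perror("write to server");
            close(fd);
            close(socket_num);
            break;
        }
    }
}

The branch for data arriving on socket_num is the mirror image, with fd and socket_num swapped.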
I have a problem with a server socket under Linux. For some reason unknown to me the server socket vanishes and I get a Bad file descriptor error in the select call that waits for an incoming connection. This problem always occurs when I close an unrelated socket connection in a different thread. This happens on an embedded Linux with a 2.6.36 kernel.
Does anyone know why this would happen? Is it normal that a server socket can simply vanish resulting in Bad file descriptor?
edit:
The other socket code implements a VNC Server and runs in a completely different thread. The only thing special in that other code is the use of setjmp/longjmp but that should not be a problem.
The code that create the server socket is the following:
int server_socket = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
struct sockaddr_in saddr;
memset(&saddr, 0, sizeof(saddr));
saddr.sin_family = AF_INET;
saddr.sin_addr.s_addr = htonl(INADDR_ANY);
saddr.sin_port = htons(1234);
const int optionval = 1;
setsockopt(server_socket, SOL_SOCKET, SO_REUSEADDR, &optionval, sizeof(optionval));
if (bind(server_socket, (struct sockaddr *) &saddr, sizeof(saddr)) < 0) {
    perror("bind");
    return 0;
}
if (listen(server_socket, 1) < 0) {
    perror("listen");
    return 0;
}
I wait for an incoming connection using the code below:
static int WaitForConnection(int server_socket, struct timeval *timeout)
{
    fd_set read_fds;
    FD_ZERO(&read_fds);
    int max_sd = server_socket;
    FD_SET(server_socket, &read_fds);

    // This select will result in 'EBADFD' in the error case.
    // Even though the server socket was not closed with 'close'.
    int res = select(max_sd + 1, &read_fds, NULL, NULL, timeout);
    if (res > 0) {
        struct sockaddr_in caddr;
        socklen_t clen = sizeof(caddr);
        return accept(server_socket, (struct sockaddr *) &caddr, &clen);
    }
    return -1;
}
edit:
When the problem occurs I currently simply restart the server, but I don't understand why the server socket fd should suddenly become an invalid file descriptor:
int error = 0;
socklen_t len = sizeof (error);
int retval = getsockopt (server_socket, SOL_SOCKET, SO_ERROR, &error, &len );
if (retval < 0) {
    close(server_socket);
    goto server_start;
}
Sockets (file descriptors) usually suffer from the same management issues as raw pointers in C. Whenever you close a socket, do not forget to assign -1 to the variable that keeps the descriptor value:
close(socket);
socket = -1;
just as you would with a C pointer:
free(buffer);
buffer = NULL;
If you forget to do this you can later close the socket twice, just as you might free() memory twice if it were a pointer.
The other issue might be related to a fact that people usually forget: file descriptors in a UNIX environment start from 0. If somewhere in the code you have
struct FooData {
    int foo;
    int socket;
    ...
};

// Either
struct FooData my_data_1 = {0};
// Or
struct FooData my_data_2;
memset(&my_data_2, 0, sizeof(my_data_2));
In both cases my_data_1 and my_data_2 hold a valid descriptor value (0) in their socket field. Later, some piece of code responsible for freeing the FooData structure may blindly close() this descriptor, which happens to be your server's listening socket (0).
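A minimal sketch of the defensive pattern this implies (the foo_init/foo_cleanup helpers are illustrative; the field names mirror the FooData example above):

#include <unistd.h>

struct FooData {
    int foo;
    int socket;
};

void foo_init(struct FooData *d)
{
    d->foo = 0;
    d->socket = -1;              /* -1 means "no descriptor" */
}

void foo_cleanup(struct FooData *d)
{
    if (d->socket != -1) {
        close(d->socket);
        d->socket = -1;          /* prevents closing it twice */
    }
}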
1- close your socket:
close(sockfd);
2- clear your socket file descriptor from select set:
FD_CLR(sockfd,&master); //opposite of FD_SET
You don't distinguish the two error cases in your code; either select() or accept() can fail. My guess is that you just have a timeout and that select() returns 0.
- print retval and errno in an else branch
- investigate the return value of accept() separately
- ensure that errno is reset to 0 before each of the system calls
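A sketch of the WaitForConnection() function from the question with those diagnostics added (the logging style is just an example):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Sketch: distinguish a timeout, a select() error and an accept() error. */
static int WaitForConnection(int server_socket, struct timeval *timeout)
{
    fd_set read_fds;
    FD_ZERO(&read_fds);
    FD_SET(server_socket, &read_fds);

    errno = 0;
    int res = select(server_socket + 1, &read_fds, NULL, NULL, timeout);
    if (res > 0) {
        struct sockaddr_in caddr;
        socklen_t clen = sizeof(caddr);
        errno = 0;
        int client = accept(server_socket, (struct sockaddr *) &caddr, &clen);
        if (client < 0)
            fprintf(stderr, "accept failed: %s\n", strerror(errno));
        return client;
    } else if (res == 0) {
        fprintf(stderr, "select timed out\n");   /* not an error */
    } else {
        fprintf(stderr, "select failed: %s\n", strerror(errno));
    }
    return -1;
}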
In Linux, once you create a connection and it gets closed, you have to wait for some time before making a new connection, because the socket doesn't release the port number as soon as you close it.
Or, if you reuse the socket, the bad file descriptor error won't come.