I have a small problem: I need to let two clients (which perform different functions) communicate through my concurrent server.
I discovered that I can solve this using select(), but when I try to implement it the code gives me a segmentation fault. Could someone kindly help me?
I should point out that before, with a single client, everything worked perfectly; now, in trying to add select(), I've broken things a bit.
I need to fix this: can a concurrent server be made with select()?
Can you tell me where I'm going wrong in this code?
int main (int argc , char *argv[])
{
int list_fd,conn_fd;
int i,j;
struct sockaddr_in serv_add,client;
char buffer [1024];
socklen_t len;
time_t timeval;
char fd_open[FD_SETSIZE];
pid_t pid;
int logging = 1;
char swi;
fd_set fset;
int max_fd = 0;
int waiting = 0;
int compat = 0;
sqlite3 *db;
sqlite3_open("Prova.db", &db);
start2();
start3();
printf("ServerREP Avviato \n");
if ( ( list_fd = socket(AF_INET, SOCK_STREAM, 0) ) < 0 ) {
perror("socket");
exit(1);
}
if (setsockopt(list_fd, SOL_SOCKET, SO_REUSEADDR, &(int){ 1 }, sizeof(int)) < 0)
perror("setsockopt(SO_REUSEADDR) failed");
memset((void *)&serv_add, 0, sizeof(serv_add)); /* clear server address */
serv_add.sin_family = AF_INET;
serv_add.sin_port = htons(SERVERS_PORT2);
serv_add.sin_addr.s_addr = inet_addr(SERVERS_IP2);
if ( bind(list_fd, (struct sockaddr *) &serv_add, sizeof(serv_add)) < 0 ) {
perror("bind");
exit(1);
}
if ( listen(list_fd, 1024) < 0 ) {
perror("listen");
exit(1);
}
/* initialize all needed variables */
memset(fd_open, 0, FD_SETSIZE); /* clear array of open files */
max_fd = list_fd; /* maximum now is listening socket */
fd_open[max_fd] = 1;
//max_fd = max(conn_fd, sockMED);
while (1) {
FD_ZERO(&fset);
FD_SET(conn_fd, &fset);
FD_SET(sockMED, &fset);
len = sizeof(client);
if(select(max_fd + 1, &fset, NULL, NULL, NULL) < 0){exit(1);}
if(FD_ISSET(conn_fd, &fset))
{
if ( (conn_fd = accept(list_fd, (struct sockaddr *)&client, &len)) <0 )
perror("accept error");
exit(-1);
}
/* fork to handle connection */
if ( (pid = fork()) < 0 ){
perror("fork error");
exit(-1);
}
if (pid == 0) { /* child */
close(list_fd);
close(sockMED);
Menu_2(db,conn_fd);
close(conn_fd);
exit(0);
} else { /* parent */
close(conn_fd);
}
if(FD_ISSET(sockMED, &fset))
MenuMED(db,sockMED);
FD_CLR(conn_fd, &fset);
FD_CLR(sockMED, &fset);
}
sqlite3_close(db);
exit(0);
}
I cannot understand how you are trying to use select() here, nor why you want to use both fork() (to let a child handle the accepted connection socket) and select().
Common designs are:
multi-processing server:
The parent process sets up the listening socket and loops waiting for incoming connections with accept. It then forks a child to process each newly accepted connection and simply waits for the next one.
multi-threaded server:
A variant of the previous one. The master thread starts a new thread to process the newly accepted connection instead of forking a new process.
asynchronous server:
The server sets up an fd_set to know which sockets require processing. Initially, only the listening socket is set. Then the main loop is (in pseudocode):
loop on select
    if the listening socket is present in the read-ready sockets, accept the pending connection and add it to the `fd_set`, then return to the loop
    if another socket is present in the read-ready sockets
        read from it
        if it is a zero read (closed by peer), close the socket and remove it from the `fd_set`
        else process the request and return to the loop
The hard part here is that if processing takes a long time, the whole process is blocked, and if processing involves sending a lot of data, you will have to use select for the sending part too...
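For reference, here is a minimal sketch of that asynchronous loop in C. It assumes a listening socket list_fd that is already bound and listening (as in the question); the function name serve_loop and the request-processing comment are placeholders, and most error handling is omitted.
#include <stdio.h>
#include <unistd.h>
#include <sys/select.h>
#include <sys/socket.h>
/* Sketch: event loop for the asynchronous design described above.
   list_fd is a socket that is already bound and listening. */
static void serve_loop(int list_fd)
{
    fd_set active_set, read_set;
    int max_fd = list_fd;
    FD_ZERO(&active_set);
    FD_SET(list_fd, &active_set);        /* start with only the listening socket */
    for (;;) {
        read_set = active_set;           /* select() modifies the set it is given */
        if (select(max_fd + 1, &read_set, NULL, NULL, NULL) < 0) {
            perror("select");
            return;
        }
        for (int fd = 0; fd <= max_fd; fd++) {
            if (!FD_ISSET(fd, &read_set))
                continue;
            if (fd == list_fd) {                     /* pending connection */
                int conn = accept(list_fd, NULL, NULL);
                if (conn >= 0) {
                    FD_SET(conn, &active_set);
                    if (conn > max_fd)
                        max_fd = conn;
                }
            } else {                                 /* data or EOF from a client */
                char buf[1024];
                ssize_t n = read(fd, buf, sizeof(buf));
                if (n <= 0) {                        /* zero read: peer closed */
                    close(fd);
                    FD_CLR(fd, &active_set);
                } else {
                    /* process the request held in buf[0..n) here */
                }
            }
        }
    }
}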
Related
I'm initializing a daemon in C on Debian:
/**
* Initializes the daemon so that mcu.serial would listen in the background
*/
void init_daemon()
{
pid_t process_id = 0;
pid_t sid = 0;
// Create child process
process_id = fork();
// Indication of fork() failure
if (process_id < 0) {
printf("Fork failed!\n");
logger("Fork failed", LOG_LEVEL_ERROR);
exit(1);
}
// PARENT PROCESS. Need to kill it.
if (process_id > 0) {
printf("process_id of child process %i\n", process_id);
exit(0);
}
//unmask the file mode
umask(0);
//set new session
sid = setsid();
if(sid < 0) {
printf("could not set new session");
logger("could not set new session", LOG_LEVEL_ERROR);
exit(1);
}
// Close stdin, stdout and stderr
close(STDIN_FILENO);
close(STDOUT_FILENO);
close(STDERR_FILENO);
}
The main daemon runs in the background and monitors a serial port to communicate with a microcontroller - it reads peripherals (such as button presses) and passes information to it. The main functional loop is
int main(int argc, char *argv[])
{
// We need the port to listen to commands writing
if (argc < 2) {
fprintf(stderr,"ERROR, no port provided\n");
logger("ERROR, no port provided", LOG_LEVEL_ERROR);
exit(1);
}
int portno = atoi(argv[1]);
// Initialize serial port
init_serial();
// Initialize server for listening to socket
init_server(portno);
// Initialize daemon and run the process in the background
init_daemon();
// Timeout for reading socket
fd_set setSerial, setSocket;
struct timeval timeout;
timeout.tv_sec = 0;
timeout.tv_usec = 10000;
char bufferWrite[BUFFER_WRITE_SIZE];
char bufferRead[BUFFER_READ_SIZE];
int n;
int sleep;
int newsockfd;
while (1)
{
// Reset parameters
bzero(bufferWrite, BUFFER_WRITE_SIZE);
bzero(bufferRead, BUFFER_WRITE_SIZE);
FD_ZERO(&setSerial);
FD_SET(fserial, &setSerial);
FD_ZERO(&setSocket);
FD_SET(sockfd, &setSocket);
// Start listening to socket for commands
listen(sockfd,5);
clilen = sizeof(cli_addr);
// Wait for command but timeout
n = select(sockfd + 1, &setSocket, NULL, NULL, &timeout);
if (n == -1) {
// Error. Handled below
}
// This is for READING button
else if (n == 0) {
// This timeout is okay
// This allows us to read the button press as well
// Now read the response, but timeout if nothing returned
n = select(fserial + 1, &setSerial, NULL, NULL, &timeout);
if (n == -1) {
// Error. Handled below
} else if (n == 0) {
// timeout
// This is an okay timeout; i.e. nothing has happened
} else {
n = read(fserial, bufferRead, sizeof bufferRead);
if (n > 0) {
logger(bufferRead, LOG_LEVEL_INFO);
if (strcmp(stripNewLine(bufferRead), "ev b2") == 0) {
//logger("Shutting down now", LOG_LEVEL_INFO);
system("shutdown -h now");
}
} else {
logger("Could not read button press", LOG_LEVEL_WARN);
}
}
}
// This is for WRITING COMMANDS
else {
// Now read the command
newsockfd = accept(sockfd, (struct sockaddr *) &cli_addr, &clilen);
if (newsockfd < 0 || n < 0) logger("Could not accept socket port", LOG_LEVEL_ERROR);
// Now read the command
n = read(newsockfd, bufferWrite, BUFFER_WRITE_SIZE);
if (n < 0) {
logger("Could not read command from socket port", LOG_LEVEL_ERROR);
} else {
//logger(bufferWrite, LOG_LEVEL_INFO);
}
// Write the command to the serial
write(fserial, bufferWrite, strlen(bufferWrite));
sleep = 200 * strlen(bufferWrite) - timeout.tv_usec; // Sleep 200uS/byte
if (sleep > 0) usleep(sleep);
// Now read the response, but timeout if nothing returned
n = select(fserial + 1, &setSerial, NULL, NULL, &timeout);
if (n == -1) {
// Error. Handled below
} else if (n == 0) {
// timeout
sprintf(bufferRead, "err\r\n");
logger("Did not receive response from MCU", LOG_LEVEL_WARN);
} else {
n = read(fserial, bufferRead, sizeof bufferRead);
}
// Error reading from the socket
if (n < 0) {
logger("Could not read response from serial port", LOG_LEVEL_ERROR);
} else {
//logger(bufferRead, LOG_LEVEL_INFO);
}
// Send MCU response to client
n = write(newsockfd, bufferRead, strlen(bufferRead));
if (n < 0) logger("Could not write confirmation to socket port", LOG_LEVEL_ERROR);
}
close(newsockfd);
}
close(sockfd);
return 0;
}
But the CPU usage is always at 100%. Why is that? What can I do?
EDIT
I commented out the entire while loop and made the main function as simple as:
int main(int argc, char *argv[])
{
init_daemon();
while(1) {
// All commented out
}
return 0;
}
And I'm still getting 100% CPU usage.
You need to set timeout to the wanted value on every iteration: the struct gets modified on Linux, so your loop is probably not pausing except for the first time, i.e. select() is only blocking the very first time.
Try printing tv_sec and tv_usec after select() and you will see they are modified to reflect how much time was left when select() returned.
Move this part
timeout.tv_sec = 0;
timeout.tv_usec = 10000;
inside the loop, before the select() call, and it should work as you expect. You can move many declarations inside the loop too; that would make your code easier to maintain, and you could, for example, move the loop body into a function in the future.
This is from the Linux manual page select(2):
On Linux, select() modifies timeout to reflect the amount of time not slept; most other implementations do not do this. (POSIX.1-2001 permits either behavior.) This causes problems both when Linux code which reads timeout is ported to other operating systems, and when code is ported to Linux that reuses a struct timeval for multiple select()s in a loop without reinitializing it. Consider timeout to be undefined after select() returns.
I think the bold part of the quote is the important one.
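Applied to the loop in the question, a minimal sketch of the fix (only the first select() is shown; variable names are taken from the question, and the same re-initialization applies to setSerial and the later select() calls):
while (1)
{
    // Reset parameters on EVERY iteration: Linux select() overwrites both
    // the fd_set and the struct timeval.
    FD_ZERO(&setSocket);
    FD_SET(sockfd, &setSocket);
    timeout.tv_sec = 0;
    timeout.tv_usec = 10000;
    n = select(sockfd + 1, &setSocket, NULL, NULL, &timeout);
    // ... handle n == -1, n == 0 (timeout) and n > 0 exactly as before ...
}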
I'm new to socket programming and I've been introduced to the select() system call. My question is, let's say I'm writing a server in C (which I am attempting to do) and I want to use the select() call in my implementation for practice. I'm trying to write a server that receives information from a client, so my approach is to use select(), followed by read(), and just output the information.
According to the documentation I've read, select() returns the number of file descriptors in the input set which are ready for I/O. My question is, how do I know which file descriptors in the original set are the ones that are ready for I/O? I can't seem to find this in my searches or in the examples I've looked at.
Let's say my code looks like the below:
int main() {
/* Create socket/server variables */
int select_value;
int main_socket;
int maxfd;
struct sockaddr_in address;
fd_set allset;
/* Bind the socket to a port */
main_socket = socket(AF_INET, SOCK_STREAM, 0);
if (main_socket < 0) {
perror("socket()");
exit(1);
}
Connect(main_socket, (struct sockaddr *)&address, sizeof(address));
/* Add the socket to the list of fds to be monitored */
FD_ZERO(&allset);
FD_SET(main_socket, &allset);
fd_set read_ready = allset;
fd_set write_ready = allset;
while (1) {
/* Listen for a connection */
/* Accept a connection */
select_value = Select(maxfd+1, &read_ready, &write_ready, NULL, NULL);
if (select_value == -1) {
perror("select()");
exit(1);
}
else if(select_value > 0) {
/* How to access i/o ready file descriptors
now that we know there are some available? */
}
}
}
One can do this using the FD_ISSET macro that is part of <sys/select.h>.
When your select() unblocks and a file descriptor is ready, you can test all of your file descriptors with the FD_ISSET macro in a simple loop. This translates to the following sample:
for (i = 0; i < FD_SETSIZE; ++i) {
    if (FD_ISSET(i, &read_fd_set)) {
        if (i == bound_socket) {
            // A new client is waiting to be accepted
            new = accept(bound_socket, (struct sockaddr *) &clientname, &size);
            // ...
            FD_SET(new, &active_fd_set);
        }
        else {
            // There is something to be read on the file descriptor.
            data = read_from_client_on(i);
        }
    }
}
Of course, this is just a sample which lacks any error handling; you should handle errors in your application.
I want to make a simple chat application for Unix.
I have created one server which supports multiple clients. Whenever a new client connects to the server, a new process is created using fork(). The problem is that all the child processes share the same stdin on the server, so in order to send a message to the 2nd client, the 1st child process has to terminate first. To resolve this I would like to run each child process in a new terminal.
This could be achieved by putting the code for the child process in a separate file and executing it with something like xterm -e sh -c (I have not tried this though).
What I really want is not to have two files: just to fire up a new terminal and run the rest of the code in it.
int say(int socket)
{
char *s;
fscanf(stdin,"%79s",s);
int result=send(socket,s,strlen(s),0);
return result;
}
int main()
{
int listener_d;
struct sockaddr_in name;
listener_d=socket(PF_INET,SOCK_STREAM,0);
name.sin_family=PF_INET;
name.sin_port=(in_port_t)htons(30000);
name.sin_addr.s_addr=htonl(INADDR_ANY);
int c = bind(listener_d,(struct sockaddr *)&name,sizeof(name)); //Bind
if(c== -1)
{
printf("\nCan't bind to socket\n");
}
if(listen(listener_d,10) == -1) // Listen
{
printf("\nCan't listen\n");
}
puts("\nWait for connection\n");
while(1)
{
struct sockaddr_storage client_addr;
unsigned int address_size = sizeof(client_addr);
int connect_d = accept(listener_d,
(struct sockaddr*)&client_addr,&address_size); //Accept
if(connect_d== -1)
{
printf("\nCan't open secondary socket\n");
}
if(!fork())
{
close(listener_d);
char *msg = "welcome Sweetone\n";
if(send(connect_d,msg,strlen(msg),0))
{
printf("send");
}
int k=0;
while(k<5)
{
say(connect_d);
++k;
}
close(connect_d);
exit(0);
}
close(connect_d);
}
close(listener_d);
return 0;
}
I think the message passing between your client and servers is a bit unusual. In this simple "just test how it works" scenario it is more common to have the clients send messages to the server. As an example I could mention a simple echo service, which mirrors everything a client sends back to the client. Is this design forced by some requirement?
Critique aside, I have two separate changes that could make your current design work. Both involve changing how input is read in the subservers.
Alternative 1:
Instead of reading from stdin, create a named pipe (see man 3 mkfifo), e.g. /tmp/childpipe"pid_of_subserver_here". You could create the pipe in say() and open it for reading. Then use echo (man echo) to write to the pipe: echo "My message" > /tmp/childpipe"NNNN". Before exiting the child, remember to remove the pipe with unlink(). A rough sketch of this idea is shown below.
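Here is that sketch, assuming say() keeps its int socket parameter from the question and the FIFO name is built from the child's PID (the path, permissions and buffer size are illustrative only):
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/socket.h>
/* say() variant that reads one message from a per-child FIFO
   instead of the shared stdin */
int say(int socket)
{
    char path[64];
    char buf[80] = {0};
    snprintf(path, sizeof(path), "/tmp/childpipe%d", (int)getpid());
    mkfifo(path, 0600);             /* a real program should check for EEXIST */
    int fd = open(path, O_RDONLY);  /* blocks until someone writes to the FIFO */
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    close(fd);
    if (n <= 0)
        return -1;
    return send(socket, buf, n, 0);
}
You would then send a message to a particular child from another shell with echo "My message" > /tmp/childpipeNNNN, and call unlink(path) before the child exits.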
Alternative 2:
Create an unnamed pipe between the server and each subserver. This makes the code much messier, but avoids creating named pipes and using echo. Example code is included below. It has insufficient error handling (like most example code) and does not handle a disconnecting client properly.
Example usage: 1) start the server ./a.out; 2) connect a client in an external window (e.g. nc localhost 30000); 3) write to client 1 by typing "1Hello client one"; 4) connect a second client in a third window, etc.; 5) write to the second client by typing "2Hello second client".
#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <stdio.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>
enum max_childeren{
MAX_CHILDEREN = 50
};
int say(int socket)
{
char buf[513] = {0};
fgets(buf, sizeof(buf), stdin);
int result=send(socket, buf, strlen(buf),0);
return result;
}
int main()
{
int listener_d;
struct sockaddr_in name;
listener_d=socket(PF_INET,SOCK_STREAM,0);
name.sin_family=PF_INET;
name.sin_port=(in_port_t)htons(30000);
name.sin_addr.s_addr=htonl(INADDR_ANY);
int on = 1;
if (setsockopt(listener_d, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on)) < 0){
perror("setsockopt()");
}
int c = bind(listener_d,(struct sockaddr *)&name,sizeof(name)); //Bind
if(c== -1)
{
printf("\nCan't bind to socket\n");
}
if(listen(listener_d,10) == -1) // Listen
{
printf("\nCan't listen\n");
}
// Edited here
int number_of_childeren = 0;
int pipes[2] = {0};
int child_pipe_write_ends[MAX_CHILDEREN] = {0};
fd_set select_fds;
FD_ZERO(&select_fds);
puts("\nWait for connection\n");
while(1)
{
struct sockaddr_storage client_addr;
unsigned int address_size = sizeof(client_addr);
// Edited here, to multiplex IO
FD_SET(listener_d, &select_fds);
FD_SET(STDIN_FILENO, &select_fds);
int maxfd = listener_d + 1;
int create_new_child = 0;
int connect_d = -1; // moved here
select(maxfd, &select_fds, NULL, NULL, NULL);
if (FD_ISSET(listener_d, &select_fds)){
connect_d = accept(listener_d,
(struct sockaddr*)&client_addr,&address_size); //Accept
if(connect_d== -1)
{
printf("\nCan't open secondary socket\n");
exit(EXIT_FAILURE);
}
create_new_child = 1;
}
char buf[512] ={0};
char *endptr = NULL;
if (FD_ISSET(STDIN_FILENO, &select_fds)){
fgets(buf, sizeof(buf), stdin);
long int child_num = strtol(buf, &endptr, 10);
if (child_num > 0 && child_num <= number_of_childeren) {
write(child_pipe_write_ends[child_num - 1], endptr, strnlen(buf, sizeof(buf)) - (endptr - buf));
}
else {
printf("Skipping invalid input: %s\n", buf);
}
}
if (create_new_child != 1)
continue;
number_of_childeren++; // Edited here
int error = pipe(pipes);
if (error != 0){
//handle errors
perror("pipe():");
exit(EXIT_FAILURE);
}
child_pipe_write_ends[number_of_childeren - 1] = pipes[1];
if(!fork())
{
error = dup2(pipes[0], STDIN_FILENO);
if (error < 0){ // could also test != STDIN_FILENO but thats confusing
//handle errors
perror("dup2");
exit(EXIT_FAILURE);
}
close(pipes[0]);
close(listener_d);
char *msg = "welcome Sweetone\n";
if(send(connect_d,msg,strlen(msg),0))
{
printf("send\n");
}
int k=0;
while(k<5)
{
say(connect_d);
++k;
}
close(connect_d);
exit(0);
}
close(connect_d);
close(pipes[0]);
}
close(listener_d);
return 0;
}
The code needs refactoring into functions; it is too long. I tried to make as few changes as possible, so I left the restructuring as an exercise.
fscanf(stdin,"%79s",s);
Why? Is this a TCP chat? You have a socket for each client, and if you want to "say" something then you should use the client for that. That is the usual logic.
The server usually sends only service messages. That is the usual logic too.
But if you want a new terminal, then you can try to use the exec family of functions from unistd.h. A rough sketch is shown below.
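For example, a hedged sketch of spawning each client's interactive part in its own terminal window with fork() + execlp(). This assumes xterm is available and that the per-client code lives in a hypothetical helper program ./chat_child (so it still uses a second file, which the question hoped to avoid); the connected socket survives exec(), so its descriptor number can be passed as an argument.
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
/* inside the accept loop, instead of running the chat code in the child: */
char fdarg[16];
snprintf(fdarg, sizeof(fdarg), "%d", connect_d);   /* connect_d from the question */
pid_t pid = fork();
if (pid == 0) {
    /* "./chat_child" is a hypothetical program that reads commands from its
       own terminal window and writes them to the socket named by fdarg */
    execlp("xterm", "xterm", "-e", "./chat_child", fdarg, (char *)NULL);
    perror("execlp");   /* reached only if exec fails */
    _exit(1);
}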
I'm writing a client server application and I'm using poll to multiplex between several client sockets and stdin, where I can insert commands (example: stop the server). I believe the structure (the "logic") of my code is correct, however it's not behaving the way I expect it to:
struct pollfd pfd[NSERVER]; //defined as 10
pfd[0].fd = fileno(stdin);
pfd[0].events = POLLIN;
pfd[1].fd = socktfd; //server bind, listen socket
pfd[1].events = POLLIN;
struct sockaddr_storage remoteaddr; // client address
socklen_t addrlen;
char remoteIP[INET6_ADDRSTRLEN];
addrlen = sizeof remoteaddr;
char buf[1024]; // buffer
int pos=2;
while(poll(pfd,1,0) >= 0)
{
if(pfd[0].revents & POLLIN) { //stdin
//process input and perform command
}
if(pfd[1].revents & POLLIN) {
/* new connection */
int connsockfd = accept(socktfd, (struct sockaddr *)&remoteaddr,&addrlen);
pfd[pos].fd=connsockfd;
}
int i=2;
//Loop through the fd in pfd for events
while (i<=NSERVER)
{
if (pfd[i].revents & POLLIN) {
int c=recv(pfd[i].fd, buf, sizeof buf, 0);
if(c<=0) {
if (c==0)
{
/* Client closed socket */
close(pfd[i].fd);
}
}else
{//Client sent some data
c=send(pfd[i].fd,sbuff,z,0);
if (c<=0)
{
Error;
}
free(sbuff);
}
}
i++;
}
}
I've removed some code inside the recv and send to make the code easier to read.
It misbehaves: it just hangs, and doesn't accept connections or react to input from stdin.
Note: I would prefer to use poll over select, so please don't point to select :-).
Thanks in advance for any assistance.
You should set every pfd[i].fd = -1, so the unused entries get ignored by poll().
poll(pfd, 1, 0) is wrong and should at least be poll(pfd, 2, 0), or even poll(pfd, NSERVER, 0).
while(i<=NSERVER) should be while(i<NSERVER).
Your program probably hangs because you loop through the pfd array, which is not initialized and contains random values for .fd and .revents, so it tries to send() or recv() on some random FD, which might block. Do if(pfd[i].fd < 0) {i++; continue;} in the i<NSERVER loop.
You also don't set pfd[pos].events = POLLIN on newly accepted sockets. Don't set POLLOUT unless you have something to send, because it will trigger almost every time. A sketch with these fixes applied is shown below.
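Putting those fixes together, a minimal sketch (the recv()/send() handling from the question is elided, and a blocking timeout of -1 is used instead of 0 so the loop sleeps until a descriptor is ready, which is an assumption about the intended behaviour):
struct pollfd pfd[NSERVER];
for (int i = 0; i < NSERVER; i++) {
    pfd[i].fd = -1;                  /* unused entries are ignored by poll() */
    pfd[i].events = POLLIN;
}
pfd[0].fd = fileno(stdin);
pfd[1].fd = socktfd;
while (poll(pfd, NSERVER, -1) >= 0) {
    if (pfd[0].revents & POLLIN) {
        /* process input from stdin and perform the command */
    }
    if (pfd[1].revents & POLLIN) {   /* new connection */
        int connsockfd = accept(socktfd, (struct sockaddr *)&remoteaddr, &addrlen);
        for (int i = 2; i < NSERVER; i++) {
            if (pfd[i].fd < 0) {     /* first free slot */
                pfd[i].fd = connsockfd;
                pfd[i].events = POLLIN;
                break;
            }
        }
    }
    for (int i = 2; i < NSERVER; i++) {
        if (pfd[i].fd < 0 || !(pfd[i].revents & POLLIN))
            continue;
        /* recv()/send() handling from the question goes here; on a
           zero-byte recv(), close(pfd[i].fd) and set pfd[i].fd = -1
           so the slot can be reused */
    }
}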
I have two nodes communicating over a socket. Each node has a read thread and a write thread to communicate with the other. Below is the code for the read thread. The communication works fine between the two nodes with that code, but I am trying to add a select() call in this thread and that is giving me problems (the code for select is in the comments; I just uncomment it to add the functionality). The problem is that one node does not receive messages and only hits the timeout, while the other node gets the messages but never times out. That problem is not there (both nodes send and receive messages) without the select, i.e. keeping the /* */ comments.
Can anyone point out what the problem might be? Thanks.
void *Read_Thread(void *arg_passed)
{
int numbytes;
unsigned char *buf;
buf = (unsigned char *)malloc(MAXDATASIZE);
/*
fd_set master;
int fdmax;
FD_ZERO(&master);
*/
struct RWThread_args_template *my_args = (struct RWThread_args_template *)arg_passed;
/*
FD_SET(my_args->new_fd, &master);
struct timeval tv;
tv.tv_sec = 2;
tv.tv_usec = 0;
int s_rv = 0;
fdmax = my_args->new_fd;
*/
while(1)
{
/*
s_rv = -1;
if((s_rv = select(fdmax+1, &master, NULL, NULL, &tv)) == -1)
{
perror("select");
exit(1);
}
if(s_rv == 0)
{
printf("Read: Timed out\n");
continue;
}
else
{
printf("Read: Received msg\n");
}
*/
if( (numbytes = recv(my_args->new_fd, buf, MAXDATASIZE-1, 0)) == -1 )
{
perror("recv");
exit(1);
}
buf[numbytes] = '\0';
printf("Read: received '%s'\n", buf);
}
pthread_exit(NULL);
}
You must set up master and tv before each call to select(), within the loop. They are both modified by the select() call.
In particular, if select() returned 0, then master will now be empty.
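A minimal sketch of the corrected loop, re-arming both inside the while (1) (my_args->new_fd and the recv() handling are taken from the question):
while (1)
{
    fd_set master;
    struct timeval tv;
    int s_rv;
    /* re-arm both on every iteration: select() modifies them */
    FD_ZERO(&master);
    FD_SET(my_args->new_fd, &master);
    tv.tv_sec = 2;
    tv.tv_usec = 0;
    if ((s_rv = select(my_args->new_fd + 1, &master, NULL, NULL, &tv)) == -1)
    {
        perror("select");
        exit(1);
    }
    if (s_rv == 0)
    {
        printf("Read: Timed out\n");
        continue;
    }
    /* descriptor is readable: recv() into buf and print, as in the original code */
}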