I am new to socket programming, and I want to read a sequence of integers in the client program, send the array of those integers to the server program, and do some calculations there. But how do I do that? Does the array I send with write() have to be char *? Maybe I should read a line from stdin, strip out everything that isn't a number, send it to the server, and then take each number separately? But how do I do that? Here is my code:
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>
int main() {
int sockfd, answer=1;
struct sockaddr_un serv_addr;
if ((sockfd = socket(AF_UNIX, SOCK_STREAM, 0)) == -1) {
perror("ERROR opening socket");
exit(1);
}
serv_addr.sun_family = AF_UNIX;
strcpy(serv_addr.sun_path, "askisi3");
if (connect(sockfd, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) == -1) {
perror("ERROR connecting");
exit(1);
}
do{
printf("Enter a sequence of integers.\n");
//code here...
printf("Type 0 for exit or any number to continue.\n");
scanf("%d", &answer);
} while (answer != 0);
return 0;
}
We are not going to write the key code for you, but in answer to your specific questions:
Does the array I send with write() have to be char *?
The first thing to understand is that from the perspective of the communication channel itself, there are no arrays and no pointers, only a stream of bytes.
The second thing to understand is that the value of a pointer itself is meaningful only to one process, so sending that is useless. You may, however, want to send some or all of the data to which a given pointer points. In fact, that's precisely what the write() function does -- it sends some number of the bytes to which the provided pointer points.
The third thing to understand is that the details of what you should send depend on some kind of agreement between the communicating parties about what will be sent and in what form it will be sent. This is called an application-layer "protocol" (not to be confused with a network protocol such as TCP). Since you are writing both client and server, you get to choose that protocol.
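Purely as an illustration of what such a protocol might look like (not the key code itself), one possibility is for the client to send a 32-bit count followed by that many values, all converted with htonl(). The send_ints name and the simplifying assumption that each write() transfers the full amount are mine, not part of the question:
/* Illustrative sketch only: one possible application-layer protocol.
 * The client sends a 32-bit count, then `count` values, all in network
 * byte order. Short writes and most error handling are ignored here. */
#include <stdint.h>
#include <unistd.h>
#include <arpa/inet.h>   /* htonl */

static int send_ints(int sockfd, const int32_t *values, uint32_t count)
{
    uint32_t n = htonl(count);
    if (write(sockfd, &n, sizeof n) != sizeof n)
        return -1;
    for (uint32_t i = 0; i < count; i++) {
        int32_t v = (int32_t)htonl((uint32_t)values[i]);
        if (write(sockfd, &v, sizeof v) != sizeof v)
            return -1;
    }
    return 0;
}
The server side of the same protocol would read the count first, then read that many values and convert them back with ntohl().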
Maybe read a line from stdin, strip out everything that isn't a number, send it to the server, and then take each number separately?
That would be a viable alternative.
But how do I do that?
That is too broad a question for this venue.
Related
I'm required to make a 'height sensing subsystem' that reads the data sent from a moonlander over UDP. The client is already set up for me: it is a 64-bit Linux executable run with ./simulator. So I need to write the UDP server on Linux that talks to the client.
The client sends readings from many subsystems in the moonlander, but I only need to read one of them: the laser altimeter reading, which corresponds to the type 0xaa01. There are other types such as 0xaa## and 0xff##, but I assume those correspond to different subsystems of the moonlander. Each packet from ./simulator carries this type, which I need to decode to find out whether it is the laser altimeter; then I need to decode the values and convert them into a distance so I can tell when the moonlander has touched down. I need to read the time first, which is a 4-byte unsigned 32-bit integer, and then the laser altimeter reading, which is 3 unsigned 16-bit integers corresponding to 3 different measurements (there are 3 sensors on the altimeter; max height 1000 m; convert by dividing the raw value by 65.535, i.e. UINT16_MAX / 1000, then multiplying by 100 to get centimetres). I then need to take those readings, convert them into a height, and acknowledge that we've landed once we're within 40 cm of the ground.
How do I read the data from the ./simulator executable? The problem is that when I run my ./receiver program, it stops at the recvfrom() call in the code below. In the instructions they tell me to use port 12778, and the bind succeeds, but I'm not receiving anything.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdbool.h>
// Create a UDP datagram socket
int main() {
int fd = socket(AF_INET, SOCK_DGRAM, 0);
if (fd < 0)
{
perror("Can't connect");
exit(EXIT_FAILURE);
}
struct sockaddr_in addr;
memset(&addr, 0, sizeof(addr));
addr.sin_family = AF_INET; // use IPv4
addr.sin_addr.s_addr = INADDR_ANY; // bind to all interfaces
addr.sin_port = htons(12778); // the port we want to bind
// Bind to the port specified above
if (bind(fd, (const struct sockaddr *)&addr, sizeof(addr)) < 0)
{
perror("cant connect");
exit(EXIT_FAILURE);
}
printf("here");
// Listen for data on our port (this is blocking)
char buffer[4096];
int n = recvfrom(fd, buffer, 4096, MSG_WAITALL, NULL, NULL);
printf("Recieved!");
}
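For reference, a rough sketch of the decoding described above could look like the following. The exact packet layout (a 2-byte type, then the 4-byte time, then the three 16-bit readings, all in network byte order) and the decode_altimeter name are assumptions on my part, not something the assignment text confirms:
/* Hypothetical decoder for the packet layout described above.
 * ASSUMPTIONS: 2-byte type (0xaa01 = laser altimeter), then a 4-byte
 * unsigned time, then three unsigned 16-bit readings, all big-endian. */
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>   /* ntohs, ntohl */

static int decode_altimeter(const char *buf, int len,
                            uint32_t *time_out, double heights_cm[3])
{
    if (len < 12)
        return -1;                       /* too short for this layout */
    uint16_t type;
    memcpy(&type, buf, 2);
    if (ntohs(type) != 0xaa01)
        return -1;                       /* not a laser-altimeter packet */
    uint32_t t;
    memcpy(&t, buf + 2, 4);
    *time_out = ntohl(t);
    for (int i = 0; i < 3; i++) {
        uint16_t raw;
        memcpy(&raw, buf + 6 + 2 * i, 2);
        /* raw / 65.535 gives metres (UINT16_MAX maps to 1000 m),
         * then * 100 converts to centimetres */
        heights_cm[i] = (double)ntohs(raw) / 65.535 * 100.0;
    }
    return 0;
}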
In a simple program where I'm trying to send command-line input from the client to the server, I keep getting a "Broken pipe" error on the server side. I send a string to the server and the server returns the string in lower case to the client.
Server:
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <stdio.h>
#include<string.h>
#include <ctype.h>
#include <unistd.h>
int main()
{
char str[100];
int listen_fd, comm_fd;
struct sockaddr_in servaddr;
listen_fd = socket(AF_INET, SOCK_STREAM, 0);
bzero( &servaddr, sizeof(servaddr));
servaddr.sin_family = AF_INET;
servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
servaddr.sin_port = htons(37892);
bind(listen_fd, (struct sockaddr *) &servaddr, sizeof(servaddr));
listen(listen_fd, 10);
comm_fd = accept(listen_fd, (struct sockaddr*) NULL, NULL);
while(1){
bzero( str, 100);
read(comm_fd,str,100);
for(int i = 0; i < strlen(str); i++){
str[i] = tolower(str[i]);
}
printf("Echoing back - %s",str);
write(comm_fd, str, strlen(str)+1);
}
}
Client
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <stdio.h>
#include<string.h>
#include<ctype.h>
#include <unistd.h>
int main(int argc,char **argv)
{
int sockfd,n;
char sendline[100];
char recvline[100];
struct sockaddr_in servaddr;
sockfd=socket(AF_INET,SOCK_STREAM,0);
bzero(&servaddr,sizeof servaddr);
servaddr.sin_family=AF_INET;
servaddr.sin_port=htons(37892);
inet_pton(AF_INET,"127.0.0.1",&(servaddr.sin_addr));
connect(sockfd,(struct sockaddr *)&servaddr,sizeof(servaddr));
if(argc==1) printf("\nNo arguments");
if (1){
{
bzero( sendline, 100);
bzero( recvline, 100);
strcpy(sendline, argv[1]);
write(sockfd,sendline,strlen(sendline)+1);
read(sockfd,recvline,100);
printf("%s",recvline);
}
}
}
The problem I found is that once the client is done sending its string, the command-line argument does not work like fgets(), where the loop would wait for more user input. If I change the if(1) on the client's side to a while(1), it will obviously run an infinite loop, as no new input is being added.
The dilemma is: how can I keep the server side running so it continuously returns the string to the client, while processing single requests from the command line on the client's side?
Your program has two problems:
1) read() works differently than you think:
Normally read() will read up to a certain number of bytes from some file or stream (e.g. socket).
Because read() does not distinguish between different types of bytes (e.g. letters, the end-of-line marker or even the NUL byte) read() will not work like fgets() (reading line-wise).
read() is also allowed to "split" the data: If you do a write(..."Hello\n"...) on the client the server may receive "Hel" the first time you call read() and the next time it receives "lo\n".
And of course read() can concatenate data: Call write(..."Hello\n"...) and write(..."World\n"...) on the client and one single read() call may receive "Hello\nWorld\n".
And of course both effects may appear at the same time and you have to call read() three times receiving "Hel", "lo\nWo" and "rld\n".
TTYs (= the console (keyboard) and serial ports) have a special feature (which may be switched off) that makes the read() call behave like fgets(). However, only TTYs have such a feature!
In the case of sockets, read() will wait for at least one byte to be received and return the (positive) number of bytes received as long as the connection is alive. A return value of zero means the peer has closed the connection; a negative value indicates an error.
You have to use a while loop that processes data until the connection has been dropped.
You'll have to check whether the data received by read() contains the NUL byte to detect the "end" of the data (assuming "your" data is terminated by a NUL byte).
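A rough sketch of such a receive loop, assuming each message is NUL-terminated as in the client above (the read_message name and the fixed-size buffer are mine):
/* Sketch: accumulate bytes until a NUL terminator arrives.
 * Returns the message length (excluding the NUL), 0 on orderly close,
 * -1 on error. Any bytes following the NUL are ignored in this sketch. */
#include <string.h>
#include <unistd.h>

static ssize_t read_message(int fd, char *buf, size_t cap)
{
    size_t used = 0;
    while (used < cap) {
        ssize_t n = read(fd, buf + used, cap - used);
        if (n <= 0)
            return n;                    /* 0 = peer closed, -1 = error */
        used += (size_t)n;
        char *nul = memchr(buf, '\0', used);
        if (nul != NULL)
            return nul - buf;            /* complete message received */
    }
    return -1;                           /* message too large for buffer */
}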
2) As soon as the client drops the connection the handle returned by accept() is useless.
You should close that handle to save memory and file descriptors (there is a limit on how many file descriptors you can have open at one time).
Then you have to call accept() again to wait for the client to establish a new connection.
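Put together, a sketch of that accept-read-close cycle might look like this (the serve_forever name is mine, and short writes are ignored for brevity):
/* Sketch: serve clients one at a time on an already-listening socket,
 * accepting a new connection after each client disconnects. */
#include <ctype.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

static void serve_forever(int listen_fd)
{
    for (;;) {
        int comm_fd = accept(listen_fd, NULL, NULL);
        if (comm_fd < 0)
            continue;                          /* accept failed, try again */
        char buf[100];
        ssize_t n;
        while ((n = read(comm_fd, buf, sizeof buf)) > 0) {
            for (ssize_t i = 0; i < n; i++)    /* lower-case what was read */
                buf[i] = (char)tolower((unsigned char)buf[i]);
            write(comm_fd, buf, (size_t)n);    /* echo it back */
        }
        close(comm_fd);                        /* n <= 0: client is gone */
    }
}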
Your client sends one request and reads one response.
It then exits without closing the socket.
Your server runs in a loop reading requests and sending responses.
Your server ignores end of stream.
Little or none of this code is error-checked.
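For comparison, a minimal sketch of a client that checks its calls and closes the socket when done, assuming the same hard-coded 127.0.0.1:37892 as in the question (still a sketch, not a drop-in replacement):
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <text>\n", argv[0]);
        return 1;
    }
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in servaddr;
    memset(&servaddr, 0, sizeof servaddr);
    servaddr.sin_family = AF_INET;
    servaddr.sin_port = htons(37892);
    inet_pton(AF_INET, "127.0.0.1", &servaddr.sin_addr);

    if (connect(fd, (struct sockaddr *)&servaddr, sizeof servaddr) < 0) {
        perror("connect");
        return 1;
    }
    /* send the argument including its terminating NUL */
    if (write(fd, argv[1], strlen(argv[1]) + 1) < 0) {
        perror("write");
        return 1;
    }
    char reply[100];
    ssize_t n = read(fd, reply, sizeof reply - 1);
    if (n < 0) { perror("read"); return 1; }
    reply[n] = '\0';
    printf("%s\n", reply);
    close(fd);          /* let the server see end of stream */
    return 0;
}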
I'm working on a server/client application in C.
I'm trying to allow the server to accept new clients and receive data at (almost) the same time.
I send data two times: the first time I send a login, and it works.
The second time I send some string data, and it looks as if the client sends it again and again, but I've checked and it is sent only once.
Can someone please help me?
I use gcc -Wall -pedantic to compile them.
Here is the client code. An argument is needed; it can be any text.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <sys/un.h>
#include <sys/socket.h>
#define PATH "soquette"
#define BACKLOG 2
#define TAILLE_BUFFER 256
#define TIME_SLEEP 10
int main(int argc,char ** argv){
if(argc == 2){
struct sockaddr_un addr;
int serveur_socket;
ssize_t taille_lue;
char buffer[TAILLE_BUFFER];
char * buffer2;
if((serveur_socket = socket(PF_UNIX,SOCK_STREAM,0))<0){
perror("socket \n");
exit(EXIT_FAILURE);
}
memset(&addr,0,sizeof(struct sockaddr_un));
addr.sun_family = AF_UNIX;
strncpy(addr.sun_path,PATH,sizeof(addr.sun_path)-1);
if(connect(serveur_socket,(struct sockaddr *)&addr,sizeof(struct sockaddr_un))<0){
perror("connect \n");
exit(EXIT_FAILURE);
}
printf("pseudo %s \n",argv[1]);
if(write(serveur_socket,argv[1],strlen(argv[1])*sizeof(char))<0){
perror("1 st write \n");
exit(EXIT_FAILURE);
}
sleep(5);
taille_lue = read(STDIN_FILENO,buffer,TAILLE_BUFFER);
buffer2 = malloc(sizeof(int) + taille_lue * sizeof(char));
sprintf(buffer2,"%ld",taille_lue);
strcat(buffer2,buffer);
if(write(serveur_socket,buffer2,sizeof(buffer2))<0){
perror("write \n");
exit(EXIT_FAILURE);
}
printf("message envoyé %s \n",buffer2);
free(buffer2);
exit(EXIT_SUCCESS);
}
else{
printf("bad arguments number \n");
exit(EXIT_SUCCESS);
}
exit(EXIT_SUCCESS);
}
And here is the server side.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <sys/un.h>
#include <sys/socket.h>
#include <signal.h>
#include <fcntl.h>
#define PATH "soquette"
#define NB_MAX_CONNECTION 10
#define TAILLE_BUFFER 256
#define NB_BOUCLE 10
#define TIME_WAIT 10
int socket_server;
void signal_handler(){
printf("signal handler \n");
if(close(socket_server)==-1){
perror("close \n");
}
if(unlink(PATH)==-1){
perror("unlink \n");
}
exit(EXIT_SUCCESS);
}
int main(){
int i,retval,j,fd_max,new_fd;
ssize_t taille_recue;
struct sockaddr_un addr;
char buffer[TAILLE_BUFFER];
struct timeval tv;
fd_set rfds,active_fd_set;
if(signal(SIGINT,signal_handler)==SIG_ERR){
perror("signal \n");
}
tv.tv_sec=TIME_WAIT;
tv.tv_usec=0;
FD_ZERO(&rfds);
FD_ZERO(&active_fd_set);
printf("server launch \n");
if((socket_server = socket(PF_UNIX,SOCK_STREAM,0))<0){
perror("socket \n");
exit(EXIT_FAILURE);
}
memset(&addr,0,sizeof(struct sockaddr_un));
addr.sun_family = PF_UNIX;
strncpy(addr.sun_path,PATH,sizeof(addr.sun_path)-1);
if((bind(socket_server,(struct sockaddr *)&addr,sizeof(struct sockaddr_un))==-1)){
perror("bind \n");
exit(EXIT_FAILURE);
}
if(listen(socket_server,NB_MAX_CONNECTION)==-1){
perror("listen \n");
exit(EXIT_FAILURE);
}
FD_SET(socket_server,&active_fd_set);
fd_max = socket_server;
for(i=0;i<NB_BOUCLE;i++){
FD_ZERO(&rfds);
rfds = active_fd_set;
printf("tour number %d \n",i);
if((retval = select(fd_max+1,&rfds,NULL,NULL,&tv))<0){
perror("select \n");
}
for(j=0;j<=fd_max;j++){
if(FD_ISSET(j,&rfds)){
if(j == socket_server){
if((new_fd = accept(socket_server,NULL,NULL))<0){
perror("accept \n");
signal_handler();
exit(EXIT_FAILURE);
}
printf("new client \n");
FD_SET(new_fd,&active_fd_set);
if(read(new_fd,buffer,TAILLE_BUFFER)<0){
perror("read 1\n");
}
else{
printf("read from buffer %s \n",buffer);
fd_max = new_fd;
}
}
else{
printf("client already in the list \n");
if((taille_recue = read(j,buffer,sizeof(int)))<0){
if(taille_recue == 0){
close(j);
FD_CLR(j,&rfds);
}
else{
signal_handler();
perror("read server 2 \n");
exit(EXIT_FAILURE);
}
}
else{
printf("read from buffer %s \n",buffer);
FD_CLR(j,&rfds);
}
}
}
}
}
printf("fermeture du serveur \n");
close(socket_server);
unlink(PATH);
exit(EXIT_SUCCESS);
}
Here is the client output
./client 1
pseudo 1
salut
message envoyé 6salut
/0
and here is the server output
MacBook-Pro-de-Kevin:tp10 kevin$ ./server
server launch
tour number 0
new client
read from buffer 1
tour number 1
client already in the list
read from buffer 6sal
tour number 2
client already in the list
read from buffer ut
/
tour number 3
client already in the list
read from buffer ut
/
tour number 4
client already in the list
read from buffer ut
/
tour number 5
client already in the list
read from buffer ut
/
tour number 6
client already in the list
read from buffer ut
/
tour number 7
client already in the list
read from buffer ut
/
tour number 8
client already in the list
read from buffer ut
/
tour number 9
client already in the list
read from buffer ut
/
fermeture du serveur
The server does not handle connected sockets correctly
In the first place, when it accepts a new connection, the server immediately tries to read data from the socket. There may be no data available at that point, so the read can block. Although that does not explain the problem you asked about, it conflicts with your objective.
The server assumes that the fd of any newly-accepted connection must be the maximum fd in the set. Although it is not impacting you yet, that assumption is not safe. File descriptors are freed up and made available for reuse when they are closed.
The server does not update fd_max when it closes a connection. However, although this may result in subsequent select() calls not conforming strictly to that function's specification, it probably does not cause any actual misbehavior.
The server and client do not handle I/O correctly
You appear to assume that the client's write() calls always write the full number of bytes specified to them, and that up to all bytes written will be read by the server's next read(). These assumptions are not safe in general, though you do have a good chance of them being met for Unix-domain sockets. In general, for both read() and write() you must consider the return value not only to spot errors / end-of-file, but also to ensure that all expected bytes are written / read. You must be prepared to loop in order to transfer all needed bytes.
In a select()-based scenario, the looping described in the previous point needs to be via the select() loop, else you likely introduce blocking. You may therefore need to do per-connection accounting of how many more bytes you expect to read / write at any given time. Indeed, unless your server does nothing but shuffle bytes from sources to sinks as fast as it can, it very likely will need to maintain some per-connection state.
It is also odd that, for an established connection, you attempt to read only the number of bytes in an int on any given read, when in fact more bytes than that may be available and the buffer can accommodate more. That's here:
if((taille_recue = read(j,buffer,sizeof(int)))<0){
Now consider the above quoted line carefully: only when read() returns a negative value is the if block executed. In particular, that block is not executed when read() returns 0 to indicate end-of-file, but it is in that block, not the else block, where you test for the end-of-file condition. This is what causes the behavior you asked about. An open file positioned at EOF is always ready to read, but you mishandle the EOF signal from the read(), treating it as if data had been read instead of recognizing it for what it is.
Additionally, if you want to print the buffer contents via printf() and a %s field descriptor then you must be certain either to insert a null character ('\0') into the buffer after the valid data, or to use a maximum field width that limits the output to the number of valid bytes in the buffer.
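Putting those points together, the handling of an already-connected socket might look roughly like this (the handle_client name and the 256-byte buffer are mine; updating fd_max is still left out):
/* Sketch: correct handling of a readable, already-connected socket fd.
 * Reads as much as the buffer allows, treats 0 as end-of-file, and
 * NUL-terminates before printing with %s. */
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

static void handle_client(int fd, fd_set *active_fd_set)
{
    char buffer[256];
    ssize_t n = read(fd, buffer, sizeof buffer - 1);
    if (n < 0) {
        perror("read");
    } else if (n == 0) {
        close(fd);                     /* peer closed the connection */
        FD_CLR(fd, active_fd_set);     /* stop select()ing on it */
    } else {
        buffer[n] = '\0';
        printf("read from buffer %s\n", buffer);
    }
}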
I'm writing a TCP server in C and find that something unusual happens once the listening fd gets a "Too many open files" error: the accept call doesn't block anymore and returns -1 all the time.
I also tried closing the listening fd and re-opening, re-binding it, but didn't seem to work.
My questions are: why does accept keep returning -1 in this situation, and what am I supposed to do to stop it and make the server able to accept new connections after old clients have closed? (The socket is, of course, able to accept correctly again once some connections are closed.)
====== UPDATE: clarification ======
The problem occurs just because the number of active clients is more than the limit of open fds, so I don't close any of the accepted fds in the sample code, just to make it reproduce more quickly.
I added a timestamp to the output each time accept returns and slowed the connect frequency down to once every 2 seconds. I then found that the "Too many open files" error in fact occurs immediately after the latest successful accept. So I think that when the maximum number of fds is reached, each call to accept returns immediately with -1. (What I had thought was that accept would still block and only return -1 at the next incoming connect. This description of accept's behaviour in this situation is my own theory, not from the man page; if it's wrong, please let me know.)
So, as to my second question, I think one solution is to stop calling accept until some connection has been closed.
Also update the sample codes. Thanks for your help.
====== Sample codes ======
Here is how I test it. First set ulimit -n to a low value (like 16) and run the server program compiled from the following C source; then use the Python script below to create several connections.
/* TCP server; bind :5555 */
#include <stdio.h>
#include <unistd.h>
#include <time.h>
#include <stdlib.h>
#include <string.h>
#include <netdb.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#define BUFSIZE 1024
#define PORT 5555
void error(char const* msg)
{
perror(msg);
exit(1);
}
int listen_port(int port)
{
int parentfd; /* parent socket */
struct sockaddr_in serveraddr; /* server's addr */
int optval; /* flag value for setsockopt */
parentfd = socket(AF_INET, SOCK_STREAM, 0);
if (parentfd < 0) {
error("ERROR opening socket");
}
optval = 1;
setsockopt(parentfd, SOL_SOCKET, SO_REUSEADDR,
(const void *)&optval , sizeof(int));
bzero((char *) &serveraddr, sizeof(serveraddr));
serveraddr.sin_family = AF_INET;
serveraddr.sin_addr.s_addr = htonl(INADDR_ANY);
serveraddr.sin_port = htons((unsigned short)port);
if (bind(parentfd, (struct sockaddr *) &serveraddr, sizeof(serveraddr)) < 0) {
error("ERROR on binding");
}
if (listen(parentfd, 5) < 0) {
error("ERROR on listen");
}
printf("Listen :%d\n", port);
return parentfd;
}
int main(int argc, char **argv)
{
int parentfd; /* parent socket */
int childfd; /* child socket */
int clientlen; /* byte size of client's address */
struct sockaddr_in clientaddr; /* client addr */
int accept_count; /* times of accept called */
accept_count = 0;
parentfd = listen_port(PORT);
clientlen = sizeof(clientaddr);
while (1) {
childfd = accept(parentfd, (struct sockaddr *) &clientaddr, (socklen_t*) &clientlen);
printf("accept returns ; count=%d ; time=%u ; fd=%d\n", accept_count++, (unsigned) time(NULL), childfd);
if (childfd < 0) {
perror("error on accept");
/* the following 2 lines try to close the listening fd and re-open it */
// close(parentfd);
// parentfd = listen_port(PORT);
// the following line let the program exit at the first error
error("--- error on accept");
}
}
}
The Python program to create connections
import time
import socket
def connect(host, port):
s = socket.socket()
s.connect((host, port))
return s
if __name__ == '__main__':
socks = []
try:
try:
for i in xrange(100):
socks.append(connect('127.0.0.1', 5555))
print ('connect count: ' + str(i))
time.sleep(2)
except IOError as e:
print ('error: ' + str(e))
print ('stop')
while True:
time.sleep(10)
except KeyboardInterrupt:
for s in socks:
s.close()
why accept keeps returning -1 in this situation
Because you've run out of file descriptors, just like the error message says.
what am I supposed to do to stop it and make the server be able to accept new connections after any old clients closed?
Close the clients. The problem is not accept() returning -1, it is that you aren't closing accepted sockets once you're finished with them.
Closing the listening socket isn't a solution. It's just another problem.
EDIT By 'finished with them' I mean one of several things:
They have finished with you, which is shown by recv() returning zero.
You have finished with them, e.g. after sending a final response.
When you've had an error sending or receiving to/from them other than EAGAIN/EWOULDBLOCK.
When you've had some other internal fatal error that prevents you dealing further with that client, for example receiving an unparseable request, or some other fatal application error that invalidates the connection or the session, or the entire client for that matter.
In all these cases you should close the accepted socket.
EJP's answer is correct, but it does not tell you how to deal with the situation. What you have to do is actually do something with the sockets you get back from accept. Simply calling close on them means you won't receive anything, of course, but it would deal with the resource-depletion problem. For a correct implementation, start receiving on the accepted sockets and keep receiving until you receive 0 bytes. Receiving 0 bytes is the indication that the peer is done using its side of the socket; that is your trigger to call close on the socket as well and deal with the resource problem.
You don't have to stop listening. That would stop your server from being able to process new requests, and that is not the problem here.
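A minimal sketch of that pattern, assuming you simply discard whatever the client sends (the drain_client name is mine):
/* Sketch: read from an accepted socket until the peer closes it,
 * then close our side so the descriptor can be reused. */
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

static void drain_client(int client_fd)
{
    char buf[1024];
    ssize_t n;
    while ((n = recv(client_fd, buf, sizeof buf, 0)) > 0) {
        /* ... do something useful with the n bytes received ... */
    }
    close(client_fd);   /* n == 0: peer finished; n < 0: error; close either way */
}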
The solution I implemented here was to look at the value of the new (accepted) fd: if that value is equal to or higher than the allowed server capacity, a "busy" message is sent and the new connection is closed.
This solution is quite effective and allows you to inform your clients about the server's status.
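A sketch of that idea; MAX_CLIENTS is a placeholder for whatever capacity your server allows, and comparing the fd value against it is only a rough proxy for the number of open connections:
#include <sys/socket.h>
#include <unistd.h>

#define MAX_CLIENTS 16   /* placeholder capacity, not from the question */

/* Sketch: reject a newly accepted connection when the server is "full". */
static int accept_or_reject(int listen_fd)
{
    int fd = accept(listen_fd, NULL, NULL);
    if (fd < 0)
        return -1;
    if (fd >= MAX_CLIENTS) {
        static const char busy[] = "busy, try again later\n";
        send(fd, busy, sizeof busy - 1, 0);  /* tell the client why */
        close(fd);                           /* free the descriptor at once */
        return -1;
    }
    return fd;   /* accepted and within capacity */
}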
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
int main()
{
int sock;
struct sockaddr sock_name = {AF_UNIX, "Fred"};
socklen_t len=sizeof(struct sockaddr)+5;
if( (sock=socket(AF_UNIX,SOCK_STREAM,0)) ==-1)
{
printf("error creating socket");
return -1;
}
if( bind(sock,&sock_name,len) != 0 )
{
printf("socket bind error");
return -1;
}
close(sock);
return 0;
}
After the first run, this program keeps reporting a binding error. If I change the name in the sockaddr, it works again, but after changing it back to "Fred" (in this case) the error returns. Is something being stored somewhere that I didn't clear? Why does this happen and how can I fix it?
I think I have found the problem: after the first run there is a file named "Fred" in the current directory. I removed the file and my program worked again. Why does bind create a file in the current directory?
When used with Unix domain sockets, bind(2) will create a special file at the specified path. This file identifies the socket in much the same way a host and port identify a TCP or UDP socket. Just like you can't call bind twice to associate two different sockets with a given host and port*, you can't associate more than one Unix domain socket with a given path at the same time.
But why doesn't the file disappear when you call close(2)? After all, closing a TCP socket makes the host and port it was bound to available for other sockets.**
That's a good question, and the short answer is, it just doesn't.
So it's customary (at least in example code) to call unlink(2) prior to binding. The Unix domain socket section of Beej's IPC guide has a nice example of this.
*With versions of the Linux kernel >= 3.9, this isn't exactly true.
**After TIME_WAIT or immediately if you use the SO_REUSEADDR socket option.
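In code, that customary sequence looks roughly like this (a sketch; the make_unix_socket name is mine, and ignoring the unlink(2) result is a deliberate simplification):
/* Sketch: remove any stale socket file, then bind. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int make_unix_socket(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof addr.sun_path - 1);

    unlink(path);   /* ignore the result: the file may simply not exist yet */
    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) != 0) {
        perror("bind");
        close(fd);
        return -1;
    }
    return fd;
}
A caller would then do something like int s = make_unix_socket("Fred"); and unlink the path again when shutting down.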
EDIT
You said this is your teacher's code, but I suggest that you replace your printf calls with perror:
if( bind(sock,&sock_name,len) != 0 )
{
perror("socket bind error");
return -1;
}
...which will print out a human-readable representation of the real problem encountered by bind(2):
$ ./your-example-executable
$ ./your-example-executable
socket bind error: Address already in use
Programming doesn't have to be so inscrutable!
When you successfully bind a Unix domain socket to a path, the socket file stays in the filesystem until it is removed (even if your program terminates).
It appears that the question code is not cleaning up the socket in the event of an error (such as the failure of bind()).
Two processes cannot generally bind the same socket path.
Each time the code is executed, it is a new process attempting to bind the same path, which still exists from the previous run.
The code needs a better scheme to handle errors.
This is how I would do it:
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#define MY_FALSE (0)
#define MY_TRUE (-1)
int main()
{
int rCode=0;
int sock = (-1);
char *socketFile = "Fred";
/* sa_data must be initialized with a string literal, not a pointer */
struct sockaddr sock_name = {AF_UNIX, "Fred"};
socklen_t len=sizeof(struct sockaddr)+5;
int bound = MY_FALSE;
if((sock=socket(AF_UNIX,SOCK_STREAM,0)) ==-1)
{
printf("error creating socket");
rCode=(-1);
goto CLEANUP;
}
if( bind(sock,&sock_name,len) != 0 )
{
printf("socket bind error");
rCode=(-1);
goto CLEANUP;
}
bound=MY_TRUE;
This single 'cleanup' area can be used to free allocated memory, close sockets & files, etc.
CLEANUP:
if((-1) != sock)
close(sock);
if(bound)
unlink(socketFile);
return rCode;
}