How to let a socket server accept only once [duplicate]

I have a really big headache.
If I have many socket clients and only one socket server, how can I make the server accept only the first connect() it receives and fail all the other connect() calls from the other clients? I'm using C on Linux, by the way.
I'm really struggling with this. In other words, how can I make a client recognize "Hey, this port on this IP is busy, maybe I need to try another one."?
My teacher told me that I would get a timeout error if I simply let a client connect to a server while another client had recently connected to the same IP and port. But I just cannot find that error anywhere. The connect() calls in both clients return 0.
Maybe I misunderstood my teacher, but since the program doesn't crash and no function returns a negative number, where is the error, and how can I find it?
Thanks a lot.

Try using a value of 0 or 1 for backlog in your listen() call.
I haven't tried it, but the man page claims that extra connections will get an ECONNREFUSED error.
Immediately after calling accept() to get your actual socket to the first client, close the listening socket. Otherwise another client can queue up.
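A minimal sketch of that approach, combining both suggestions (the port number and the error handling are illustrative): listen with the smallest possible backlog, accept a single client, then close the listener so that later connect() attempts from other clients should fail, typically with ECONNREFUSED.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    if (listener < 0) { perror("socket"); exit(1); }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(12345);               /* example port */

    if (bind(listener, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        exit(1);
    }

    /* Ask for the smallest possible queue; the kernel may still allow
       one pending connection. */
    if (listen(listener, 0) < 0) { perror("listen"); exit(1); }

    int client = accept(listener, NULL, NULL);
    if (client < 0) { perror("accept"); exit(1); }

    /* Close the listener right away so no other client can queue up;
       further connect() attempts to this port should now be refused. */
    close(listener);

    /* ... talk to the single accepted client on 'client' ... */
    close(client);
    return 0;
}

On the client side, a refused connection shows up as connect() returning -1 with errno set to ECONNREFUSED, which is the signal to "try another one".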

TCP Sockets in C: Does the recv() function trigger sending the ACK?

I'm working with TCP sockets in C but still don't really understand "how far" the delivery of data is ensured.
My main problem is that in my case the server sometimes sends a message to the client and expects an answer shortly after. If the client doesn't answer in time, the server closes the connection.
When reading through the man pages of the recv() function in C, I found the MSG_PEEK flag, which lets me look/peek into the stream without actually reading the data.
But does the server even care whether I read from the stream at all?
Let's say the server "pushes" a series of messages into the stream and a client should receive them.
As long as the client doesn't call recv(), those messages will stay in the stream, right?
I know that ACK messages are sent when data is received, but is the ACK sent when I call the recv() function, or is it already sent when the message has successfully reached its destination and could (emphasising could) be received by the client if it chose to call recv()?
My hope is to trick the server into thinking the message hasn't been completely delivered yet, because the client has not called recv() yet. That way the client could already evaluate the message using the MSG_PEEK flag and make sure it always answers in time.
Of course I know the timeout behaviour of my server depends on its implementation. My question basically is whether peeking makes the server think the message hasn't reached its destination yet, or whether the server won't even care, and when the ACK is sent when using recv().
I read the man pages on recv() and the Wikipedia article on TCP but couldn't really figure out how recv() takes part in the process. I found some similar questions on SO but no answer to my question.
TL;DR
Does the recv() function trigger sending the ACK?
No, not on any regular OS. Possibly on an embedded platform with an inefficient network stack. But it's almost certainly the wrong problem anyway.
Your question about finessing the details of ACK delivery is a whole can of worms. It's an implementation detail, which means it is highly platform-specific. For example, you may be able to modify the delayed-ACK timer on some TCP stacks, but that might be a global kernel parameter if it even exists.
However, it's all irrelevant to your actual question. There's almost no chance the server is looking at when the packet was received, because it would need its own TCP stack to even guess that, and it still wouldn't be reliable (TCP retransmission can keep backing off and retrying for minutes). The server is looking at when it sent the data, and you can't affect that.
The closest you could get is if the server uses blocking writes and is single-threaded and you fill the receive window with un-acked data. But that will probably delay the server noticing you're late rather than actually deceiving it.
Just make your processing fast enough to avoid a timeout instead of trying to lie with TCP.
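For what it's worth, here is a minimal sketch of what MSG_PEEK does on an already connected socket fd (the function name and buffer size are mine, not from the question). The kernel has normally acknowledged the data long before either call, so peeking does not change what the server's TCP stack sees.

#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Peek at pending data on a connected TCP socket without consuming it,
   then read it for real. Both calls operate on data the kernel has
   already received (and, in general, already ACKed). */
ssize_t peek_then_read(int fd, char *buf, size_t len)
{
    ssize_t peeked = recv(fd, buf, len, MSG_PEEK);   /* data stays queued */
    if (peeked <= 0)
        return peeked;

    printf("peeked %zd bytes\n", peeked);

    /* A second recv() without MSG_PEEK consumes the same bytes. */
    return recv(fd, buf, len, 0);
}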

Debugging a Socket Program

I am currently working on a file server in C.
When the client requests a file from the server, it writes to a socket. The server then writes back the data with a header on it. The client reads the header and then reads the actual data. When the client is being debugged, the server terminates the connection before the client has a chance to read the data.
To address this problem, I put in code to write a byte of 0 to the server when the client is done. The server has a final read of the socket looking for that byte, but when the client is running under the debugger, it does not wait for that read on the server.
The socket is created with the following call on the server:
int socketId = socket(AF_INET, SOCK_STREAM, 0);
What should I do?
There are many challenges with writing client-server code. In this case you are also writing a protocol, though you may not realize it. Your protocol needs to be defined in a way that makes it clear what is expected from each side of the communication, and the scenarios are non-trivial.
Here are some related questions:
(java) basic java socket programming problem
(c) Socket Programming Problem
(c) Socket Programming -- recv() is not receiving data correctly
What if the file contains a byte of 0?
You don't need this. Just close the socket. If the peer receives a clean close, it must have already received the entire file.
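A rough sketch of that idea on the client side, assuming a hypothetical 4-byte length header of your own design (read_full and receive_file are illustrative names): the end of the transfer is detected by recv() returning 0 when the server closes the socket cleanly, so no sentinel byte is needed and a 0 byte inside the file causes no trouble.

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>

/* Read exactly 'len' bytes, handling short reads; returns 0 on success,
   -1 on error or premature EOF. */
static int read_full(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)
            return -1;              /* error, or the peer closed early */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Hypothetical client-side receive: 4-byte length header, then payload. */
int receive_file(int fd, FILE *out)
{
    uint32_t netlen;
    if (read_full(fd, &netlen, sizeof(netlen)) < 0)
        return -1;
    uint32_t remaining = ntohl(netlen);

    char chunk[4096];
    while (remaining > 0) {
        size_t want = remaining < sizeof(chunk) ? remaining : sizeof(chunk);
        if (read_full(fd, chunk, want) < 0)
            return -1;
        fwrite(chunk, 1, want, out);
        remaining -= (uint32_t)want;
    }

    /* After the payload, a clean close from the server shows up here
       as recv() returning 0. */
    char extra;
    return recv(fd, &extra, 1, 0) == 0 ? 0 : -1;
}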
It sounds like you have no error checking in your unposted code.
We found the problem yesterday. The client was writing more bytes than the server was reading due to the fact that a variable was declared of the wrong type. Thanks for the responses.
Bob

How to write a function to disconnect client from server in C? [duplicate]

I tried server-client TCP code in C, and I want to add some functionality to disconnect a client from the server. I searched Google and found the function shutdown(), but I don't understand how to use it.
To disconnect a client from the server, just close the fd linked to this client.
To disconnect from a server on the client side, just close the client socket.
This is a quick way to disconnect, but don't forget to send a last exit message when you are leaving a server / disconnecting a client.
There is no need for shutdown() here. (Except if you share the fd between processes, but without more information we cannot be more precise.)
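A minimal sketch on the server side, assuming an application-level goodbye message of your own design (the "BYE" string is illustrative); the part that actually disconnects the client is simply close() on its fd. shutdown() would only matter in special cases, e.g. when the descriptor is shared after fork().

#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Tell the client we are done, then drop the connection. */
void disconnect_client(int client_fd)
{
    const char bye[] = "BYE\n";               /* application-level exit message */
    send(client_fd, bye, strlen(bye), 0);     /* best effort; ignore errors here */
    close(client_fd);                         /* this is what actually disconnects */
}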

running program a second time returns EADDRINUSE for bind [duplicate]

I have a simple server application I'm writing for Linux and it works decently the first time I run it, but for some reason it's not releasing the port on exit. It seems like I have to wait for some kind of timeout before I can rerun the application to get the port. Otherwise I get an EADDRINUSE error on the bind call.
I feel like I must be doing something stupid, but I have been banging my head against the problem for a while and haven't figured it out, so if someone can point me in the right direction that'd be great. I've tried closing the bound and accepted sockets many times, and at different points, but no luck.
Take a look at these questions and answers:
difference between "address in use" with bind() in Windows and on Linux - errno=98
Closing a listening TCP socket in C
Releasing bound ports on process exit
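The fix usually suggested for this is to set SO_REUSEADDR on the listening socket before bind(), so a socket left in TIME_WAIT by the previous run does not block the new one. A minimal sketch (the port parameter and backlog value are illustrative):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create a listening socket that can rebind to a port still in TIME_WAIT. */
int make_listener(uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    int yes = 1;
    /* Allow bind() to succeed even if the previous run's socket is in TIME_WAIT. */
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 || listen(fd, 16) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}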

Using SO_REUSEADDR - What happens to previously open socket?

In network programming on Unix, I have always set the SO_REUSEADDR option on the socket the server uses to listen for connections. This basically says that another socket can be opened on the same port on the machine. This is useful when recovering from a crash where the socket was not properly closed: the app can be restarted and it will simply open another socket on the same port and continue listening.
My question is, what happens to the old socket? Without a doubt, all data/connections will still be received on the old socket. Does it get closed automatically by the OS?
A socket is considered closed when the program that was using it dies. That much is handled by the OS, and the OS will refuse to accept any further communication from the dead conversation. However, if the socket was closed unexpectedly, the computer on the other end might not know that the conversation is over, and may still be attempting to communicate.
That is why there is, designed into the TCP spec, a waiting period before that same port number can be reused. Because in theory, however unlikely, it may be possible for a packet from the old conversation to arrive with the appropriate IP address, port numbers, and sequence numbers such that the receiving server mistakenly inserts it into the wrong TCP stream.
The SO_REUSEADDR option overrides that behavior, allowing you to reuse the port immediately. Effectively, you're saying: "I understand the risks and would like to use the port anyway."
Yes, the OS automatically closes the previous socket when the old process ends. The reason you can't normally listen on the same port right away is that the socket, though closed, remains in the TIME_WAIT state for a period of 2MSL (generally a few minutes). The OS automatically transitions the old socket out of this state when the timeout expires.
