I'm creating a TFTP-like program, but instead of stop-and-wait, I'm trying to use a go-back-N approach. I'm not exactly sure how to go about this, as I have very little socket programming experience.
My client sends all of the data with sendto, and I'm currently not calling recvfrom at all, because it would block until a response arrives and I don't want to wait. I want to check whether there was a response, but if not, keep sending data.
Can someone point me in the right direction? Please let me know if more information is needed, I'm having trouble elaborating.
Thanks!
Create a non-blocking socket and use select() (or poll() or whatever other mechanism you have at hand) to wait for both writability and readability of the socket. Then respond appropriately to each state independently when it arises.
I've never done this with UDP, but I see no reason it shouldn't work (a quick Google search seems to confirm that).
I know that there are many questions like this here on SO. I've read through most of the similar questions and cannot find an answer for my case.
I use kqueue for a server/client socket echo application. The program uses the BSD socket API exclusively. The program is a work in progress. Right now I'm at the point of getting EOF from the socket.
My setup is as follows:
Start the server, which waits for connections and accepts one socket.
Start the client, which connects.
No user data is sent by this time. Close the client with SIGINT.
The server's kqueue gets the EOF flag with no errors.
The read system call returns zero with no errors.
The problem is that I get no indication that the connection was fully closed. I cannot determine whether I have to shut down the read end or close the socket completely. I get no indication of EOF on the write end, and that is expected, since I did not register for the write event (no data has been sent yet).
How to properly tell, if the socket was fully closed?
Update
I know that what follows may belong in another post, but I think this update is tightly connected with the question, and the question will benefit as a whole.
To the point: since I get a read EOF but not a write EOF (the socket is closed before any data comes in or out), can I somehow query the socket for its state?
What I've learned from other network-related questions here on SO is that the network stack may receive some packets on a socket, like FIN or RST. It would be a sure win for me to simply query the socket state in this particular case.
As a second option, would it help to register a one-time write event after I get the read EOF, just to get a write EOF? Would the write EOF event trigger?
I know I will get a write error eventually. But until that time, the socket will be dead weight.
It would be very convenient to have a getsockopt option for the write-end close, or at least to have an event queued for a read-endpoint shutdown after read returns EOF.
I could not find any such getsockopt options, and I'm not sure about queueing a write event. The source code for kevent, and the network stack in general, is too tough for me.
That is why I ask.
If read or recv returns 0, that means the other end closed the connection. It's at least a half-close for writing (from the other peer), which means there's nothing more to be received from that connection.
Unless the protocol specifies that it's only a half-close and that you can continue to send data, it's generally best to simply do a full closing of the connection from your side.
I need your help to solve this problem.
I have to create a multi-threaded client-server program on Unix, based on AF_UNIX
sockets, that must handle up to a few thousand simultaneous connections and must also do different things based on the type of signal received, such as shutting down when the server receives a SIGINT.
My plan is this: initially disable SIGINT and the other signals in the main thread's sigmask; then start a dispatching thread that waits on select() for I/O requests (I know that's really inefficient), accepts new connections, and reads exactly sizeof(request) bytes, where request is a well-known structure; also create a thread that handles the received signals, the only one that re-enables them, using sigwait(); and finally start the other server threads to do the real work.
I have these questions:
I would like select() to return even if the dispatcher thread is stuck in it. I've read about a self-pipe trick for this, but I think I've implemented it wrong, because even when the signal-handling thread writes to the pipe whose read end is in select's read set, select() won't return. How can I make select() return?
I've read something about epoll(), which is said to handle many simultaneous connections efficiently. Should I use it, and if so, how? I can't figure it out just by reading man epoll, and my textbook doesn't even mention it.
Are there any good practices for handling system failures? I check almost every system call's return value so I can handle errors, free memory, and so on, but my code keeps growing a lot, often repeating the same operations. How could I write a cleanup function that frees memory before terminating with abort()?
Anyway, thanks a lot in advance for your help; this platform is really amazing, and when I get more expert, I'll pay the community back by giving my help!
(Sorry for my English, but it's not my mother language)
I'm developing an instant messaging application.
This is the situation which I need help:
A routine in my code reads the message the user has entered with fgets().
Now I need to wake up a thread that has a routine to send the message to the socket, etc. I'm not really sure how to do this.
If I use a mutex, my first thread might have to wait, and I don't want it to ever wait, hence I don't want to use one.
Similarly, I can't use a condition variable.
Please tell me how to achieve this.
Duck's point about not overthinking it is a good one.
Another way you could go is to use a pipe. Your console handling thread writes a message to the pipe, and the network thread does a blocking read from the pipe.
What you might end up with is the network thread doing a select() on both the console pipe and the network socket. Then it would wake up and do things when it either had something to send, or something to receive from the network. Snazzy!
I wanted to create a simple chat application with no common server to connect to and route the data. However, I don't know how to do it without taking turns, which is odd for a chat program.
I figured I could do multithreading, but the information I found so far was only about threading for handling client requests (to get around the client queue). I have never tried multithreading before, and I don't know if it's the only way. I also thought of doing something event-driven, but I couldn't get ncurses to work in VS (it linked and compiled successfully, but something seems to be wrong in the library itself).
So basically: how do I make a chat program that doesn't take turns? After all, calling recv() just blocks until it receives something, so during that time I can't call any stdin functions.
Use an event loop.
1) Did anything happen?
2) If so, handle it.
3) If not, wait for something to happen or for a certain amount of time.
4) Go to step 1.
Now, you just have to make everything that can happen (such as data being received on the socket) an event that you can wait for in step 3. For sockets, you do that with WSAEventSelect. You can wait for the events with WSAWaitForMultipleEvents.
Alternatively, you can arrange for Winsock to send your program a Windows message whenever data is received on a socket, using WSAAsyncSelect.
Before you call recv, check whether data is available. You can use select or poll to do that.
See select reference and maybe winsock FAQ.
A couple of days ago I had to investigate a problem where my application was showing abnormally high CPU usage when it was (apparently) in an idle state. I tracked the problem down to a loop which was meant to block on a recvfrom call while the socket had been set to O_NONBLOCK-ing, resulting in a spin lock. There were two ways of solving the problem: set the socket to blocking, or poll for available data on the socket using poll or select. I chose the former as it was simpler. But I am wondering why anyone would create a non-blocking socket and then poll on it separately. Doesn't a blocking socket do the same? What are the use cases where one would use a non-blocking socket and poll combination? Are there any advantages to it in general cases?
Using poll() or select() with a non-blocking file descriptor gives you two advantages:
You can set a timeout to block for;
You can wait for any of a set of file descriptors to become useable.
If you only have a single file descriptor (socket) to wait for, and you don't mind waiting indefinitely on it, then yes; you can just use a blocking call.
The second advantage is really the killer use case for select() and friends. It means that you can handle multiple socket connections, as well as standard input and standard output and possibly file I/O, all with a single thread of control.
I'm posting here because, although the question is old, it came up in my Google search somehow and has definitely not been answered properly.
The accepted answer merely highlights two advantages of using non-blocking sockets but does not really go into detail or answer the actual question.
NOTE: Unfortunately, most online "tutorials" or code snippets only feature blocking-socket code, so knowledge of non-blocking sockets is less widespread.
As to when you would use one compared to the other: in general, blocking sockets are only used in online code snippets. In all (good) production applications, non-blocking sockets are used. I'm not being ignorant; if you know of an implementation that uses blocking sockets (and sure, that's quite possible in combination with threads), or, to be more specific, one that uses blocking sockets in a single thread, please do let me know.
Now I can give you a very easy-to-understand example, and there are many others out there. Let's take a game server. Games advance in ticks, regular intervals at which the game state progresses whether or not the players provide input (mouse/keyboard) to change it. Now, when sockets come into play in multiplayer games: if you were to use blocking sockets, the game state would not advance unless the players were sending updates, so if they had internet problems, the game state would never update consistently and propagate changes to all players. You would have a rather choppy experience.
Using non-blocking sockets, you can run the game server in a single thread, updating the game state as well as the sockets with, let's say, a 50 ms timeout interval; socket data is read from connected users only when they actually send something, and is then fed into the server simulation, processed, and fed into the game-state calculation for the next tick.
resulting in a spin lock.
That condition is normally called a tight loop (busy-waiting), rather than a spin lock.
There were two ways of solving the problem: set the socket to blocking or poll for available data on the socket using poll or select. I chose the former as it was simpler.
Are you sure that other code parts do not already use poll() (or select()) and expect the socket to be in non-blocking mode?
Otherwise, yes, switching to blocking mode is the simplest solution.
The most backward-compatible solution would have been to call poll() to wait for the socket to become readable before calling recvfrom(). That way, the other parts of the code would keep working precisely as before.
But I am wondering why any one would create a non-blocking socket and then poll on it separately. Doesn't a blocking socket do the same?
For the case of recvfrom() no major difference is known to me.
What are the uses cases where one would use a non-blocking socket and poll combination? Are there any advantages to it in general cases?
Could be a simple coding mistake. Or somebody might have thought that recv'ing in a tight loop would somehow increase performance.
It is generally better to make sockets non-blocking, because select/poll can report a socket as readable even when a subsequent read would block (for example, when a datagram arrives but is discarded because of a checksum error), so there may be no data to read even after readiness was reported. So make the socket non-blocking, wait for data availability through poll, then read. I think this is the main advantage.