Socket Server with unknown number of receives in loop - c

Thank you for reading. I'm currently implementing both the server and the client for a socket application in C on Linux. I currently have a working "chat" system where both the server and the client can send unique messages, and the other end receives each message with the correct length.
example output:
Server side
You:Hello!
client:hi, how are you?
You: fine thanks.
client: blabla
...And the client side would look as follows:
server: Hello!
you:hi,how are you?
etc etc.
My question is, is there any way for the client/server to be able to send multiple messages before the other replies?
I currently have an endless while loop that waits for a receive and then proceeds to send, and this repeats until the connection is lost. Using this method I can only send one message before I am forced to wait for a receive. I'm not sure of the correct implementation as I'm still quite new to both sockets and C! Thanks :)

Yes, it is possible.
Restructure the main body of your code so that it does not block waiting for data on the socket; it should read the socket only when data is already available on it. This is possible by using the select function. After the select call, read the socket to display any received messages, and send the user's messages to the other peer when they are ready on input.
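A minimal sketch of that approach, assuming `sockfd` is an already-connected TCP socket (the buffer size and the "peer:" prompt are placeholders):

```c
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

/* Watch both stdin and the socket; service whichever is readable,
 * so either side can send any number of messages at any time. */
void chat_loop(int sockfd)
{
    char buf[512];
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(STDIN_FILENO, &rfds);   /* user input */
        FD_SET(sockfd, &rfds);         /* peer messages */

        int maxfd = sockfd > STDIN_FILENO ? sockfd : STDIN_FILENO;
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0) {
            perror("select");
            break;
        }
        if (FD_ISSET(sockfd, &rfds)) {          /* peer sent something */
            ssize_t n = recv(sockfd, buf, sizeof buf - 1, 0);
            if (n <= 0)                          /* closed or error */
                break;
            buf[n] = '\0';
            printf("peer: %s", buf);
        }
        if (FD_ISSET(STDIN_FILENO, &rfds)) {    /* user typed a line */
            if (!fgets(buf, sizeof buf, stdin))
                break;
            send(sockfd, buf, strlen(buf), 0);
        }
    }
}
```

Because select blocks until *either* descriptor has data, neither side ever has to wait for its "turn".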

A generic solution: you must use threading, and I'd propose to run the receiving part in a separate thread.
Hence, you first code the main thread to only manage sending, just as if the application couldn't receive at all. Apparently you have an edit field somewhere (and a message loop somehow). Each time the user presses Enter, you Send from within the edit field's callback function.
Then you code a separate thread that calls (and blocks on) Receive(). Each time Receive "slips on" (i.e. data came in), you do something with the data and then jump back to the Receive entry point. This goes on until you terminate the socket, or by other means decide not to jump back to the Receive entry point.
The only situation where the two threads "touch" each other is when they both want to write text content to the same chat window. Both shall do it immediately as the transmission happens, but potentially both may try to access the chat window at exactly the same moment, causing a crash. Hence you must apply a locking mechanism here: the one that first tries to access the chat window "gets it", while the locking mechanism keeps the other one on hold until the first releases the lock. Then the second one can do its job. The locking is, after all, only a matter of microseconds.
These are immediate actions, independent of each other. You don't need to queue multiple messages; each one gets processed "as it happens".
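A sketch of that two-thread layout using POSIX threads, with stdout standing in for the chat window (the names `show_line` and `receiver` are made up for illustration):

```c
#include <pthread.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

static pthread_mutex_t chat_lock = PTHREAD_MUTEX_INITIALIZER;

/* The "chat window": only one thread may draw at a time. */
static void show_line(const char *who, const char *text)
{
    pthread_mutex_lock(&chat_lock);
    printf("%s: %s", who, text);
    pthread_mutex_unlock(&chat_lock);
}

/* Receiving thread: block in recv(), display each message as it
 * arrives, and exit when the connection is gone. */
static void *receiver(void *arg)
{
    int sockfd = *(int *)arg;
    char buf[512];
    ssize_t n;
    while ((n = recv(sockfd, buf, sizeof buf - 1, 0)) > 0) {
        buf[n] = '\0';
        show_line("peer", buf);
    }
    return NULL;   /* recv returned 0 (closed) or -1 (error) */
}
```

The main thread keeps calling `show_line("you", ...)` and `send()` from its own loop or callback; the mutex is the only point where the two threads coordinate.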


How to determine lost connection with kqueue?

I know that there are many questions here on SO themed like this. I've read through most of the similar questions and cannot find an answer for my case.
I use kqueue for a server/client socket echo application. The program uses the BSD socket API exclusively. The program is a work in progress. I am now at the point of getting EOF from the socket.
My setup follows.
Start server, that waits for connections, and accepts one socket.
Start client that connects.
No user data sent by this time. Close the client with SIGINT.
Server kqueue gets EOF flag with no errors.
read system call returns zero with no errors.
The problem is that I get no indication that the connection was fully closed. I cannot determine whether I have to shut down the read end or completely close the socket. I get no indication of EOF on the write end, and that is expected, since I did not register for the write event (no data has been sent so far).
How to properly tell, if the socket was fully closed?
Update
I know that what follows may belong to other post. I think that this update is tightly connected with the question, and the question will benefit as a whole.
To the point: since I get a read EOF but not a write EOF (the socket is closed before any data comes in or out), can I somehow query the socket for its state?
What I learned from other network-related questions here on SO is that the network stack may receive certain packets on a socket, like FIN or RST. It would be a sure win for me to just get the socket state in this particular case.
As a second option, would it help to add a one-time write event after I get a read EOF, just to get a write EOF? Would the write EOF event trigger?
I know I will get a write error eventually. But, until that time, the socket is dead weight.
It would be very convenient to have a getsockopt option for the write-end close, or at least to get an event queued for read-endpoint shutdown after read returns EOF.
I did not find any such getsockopt options, and I am not sure about queueing a write event. The source code for kevent, and the network stack in general, is too tough for me.
That is why I ask.
If read or recv returns 0 then that means the other end closed the connection. It's at least a half-close for writing (from the other peer), which means there's nothing more to be received from that connection.
Unless the protocol specifies that it's only a half-close and that you can continue to send data, it's generally best to simply do a full closing of the connection from your side.
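A sketch of that rule (the function name is illustrative): read until `recv` returns 0, then fully close your side.

```c
#include <sys/socket.h>
#include <unistd.h>

/* Reads until the peer closes; returns the number of bytes received
 * before EOF.  recv() == 0 is the orderly-shutdown signal, and the
 * simplest correct reaction is a full close() of our side. */
ssize_t drain_until_eof(int sockfd)
{
    char buf[256];
    ssize_t total = 0, n;
    while ((n = recv(sockfd, buf, sizeof buf, 0)) > 0)
        total += n;
    if (n == 0)          /* peer closed (at least half-closed) */
        close(sockfd);
    return total;        /* on n < 0 the caller should check errno */
}
```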

Do both recv() and send() winsock

I wanted to create a simple chat application with no common server for the clients to connect to and route their data. However, I don't know how to do it without taking turns, which is odd for a chat program.
I figured I could do multithreading, but the information I found so far was just about threading with regard to client requests (to get around the client queue thing). I absolutely haven't tried multithreading before. I also don't know if it's the only way. I also thought of doing something event-driven, but I couldn't get ncurses to work in VS (it linked and compiled successfully, but there seems to be something wrong in the library itself).
So basically, how do I make a chat program that doesn't take turns? After all, calling recv() just blocks until it receives something, so during that time I can't call any stdin functions.
Use an event loop.
1) Did anything happen?
2) If so, handle it.
3) If not, wait for something to happen or for a certain amount of time.
4) Go to step 1.
Now, you just have to make everything that can happen (such as data being received on the socket) an event that you can wait for in step 3. For sockets, you do that with WSAEventSelect. You can wait for the events with WSAWaitForMultipleEvents.
Alternatively, you can arrange to have Winsock send your program a Windows message whenever data is received on a socket with WSAAsyncSelect.
Before you call recv, check if data is available. You can use select or poll for that.
See select reference and maybe winsock FAQ.

C: Sockets without stop and wait

I'm creating a tftp-like program but instead of stop and wait, I'm trying to use a go-back-n approach. I'm not exactly sure how to go about this as I have very little socket programming experience.
I have my client sending all of the data with sendto, and am currently just not calling recvfrom because it will wait until I get a response, but I don't want it to wait. I want to check if there was a response, but if not, keep sending data.
Can someone point me in the right direction? Please let me know if more information is needed, I'm having trouble elaborating.
Thanks!
Create a non-blocking socket and use select() (or poll() or whatever other mechanism you have at hand) to wait for both writability and readability of the socket. Then respond appropriately to each state independently when it arises.
I've never done this with UDP, but I see no reason it shouldn't work (a quick Google seems to confirm that).

Sockets & Data Persistence

This is potentially a newbie question, but if I open a socket and write some data to it, then exit the subroutine so the socket goes out of scope, and then try to read the data from another program at a later time, will the data still be there, or does it die when the original declarations go out of scope?
Thanks,
N.
Further information:
I am trying to rewrite 2 programs that use files as the interface to communicate. The general flow is:
Main Process : Write Data.
Main Process : Spawn secondary process(es) onto other nodes in a cluster
Main Process : Wait until Secondary Process finished.
Secondary Process : Read Data (written by main)
Secondary Process : Write Data
Secondary Process : exit
Main Process : Read data.
So I essentially want to replace the Write/Read/Write/Read of files with sockets (which should be much faster!)
For TCP sockets you need a bi-directional connection opened before sending data, so the question is irrelevant if you don't have a receiving side.
For UDP, if no one is listening on the socket at the time you're sending data, no one will receive it unless you manage to open a listening program fast enough for the data to be still traveling inside the networking drivers. But don't count on it, because the 'localhost loopback' inside the driver shouldn't take more than a few microseconds to deliver the data.
P.S. Perhaps you can get a more suitable answer if you describe your exact situation in more detail. What are you trying to achieve?
Regarding your "further information": you can't do this with sockets by simply replacing the files with sockets and keeping the current scheme. However, you can change the scheme by first spawning the child processes and only then sending them the data via sockets. When the children finish, they return an answer to the parent via a socket, and exit.
There's an inefficiency here in a sense, because you have to send the same data to each child separately (unless you can use multicasting).
I'm not sure sockets will be much faster than files for you, but they will certainly be safer for more complex schemes and will also allow distribution among machines that don't share a file system.
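Under that reworked scheme, a single-child sketch on one machine could look like this; `socketpair` plus `fork` stands in for the cross-node TCP connections you'd use on a cluster, and all names are illustrative:

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Main process: spawn the secondary process first, then write the
 * data, then wait for and read its answer.  Returns the number of
 * answer bytes received, or -1 on failure. */
int run_exchange(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return -1;

    pid_t pid = fork();
    if (pid == 0) {                        /* secondary process */
        close(sv[0]);
        char buf[64];
        ssize_t n = recv(sv[1], buf, sizeof buf, 0);  /* read data    */
        if (n > 0)
            send(sv[1], buf, (size_t)n, 0);           /* write answer */
        _exit(0);
    }

    close(sv[1]);                          /* main process */
    const char *data = "work item";
    send(sv[0], data, strlen(data), 0);    /* write data */

    char answer[64];
    ssize_t n = recv(sv[0], answer, sizeof answer, 0); /* read answer */
    waitpid(pid, NULL, 0);                 /* wait for child to finish */
    close(sv[0]);
    return (int)n;
}
```

The key ordering change from the file-based flow: the channel and the child must both exist *before* the data is written, because sockets don't persist data for a reader that attaches later.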
When using a plain TCP socket, if there isn't another endpoint connected at the time you write the data, the data will be lost. The only way you could actually write the data without first having connected to the other endpoint would be to use UDP, in which case the data would simply be discarded by the receiving system if no matching endpoint is available.
If you want to have asynchronous delivery you will need to use a message passing system that allows delayed delivery. In this case, the receiver of the message is actually a system process that stores the message until a client requests it. The actual communication takes place between a client on one system and the system process on the other, with the client on the other system obtaining the data locally. You can read more about message passing and its variants at http://en.wikipedia.org/wiki/Message_passing.

dbus: flush connection?

When I do a "dbus_connection_close", do I need to flush the message queue?
In other words, do I need to continue with "dbus_connection_read_write_dispatch" until I receive the "disconnected" indication or is it safe to stop dispatching?
Updated: I need to close the connection to DBus in a clean manner. From reading the documentation, all the clean-up must be done prior to "unreferencing" the connection and this process isn't very well documented IMO.
After some more digging, it appears that there are two types of connection: shared and private.
The shared connection must not be closed, only unreferenced. Furthermore, it does not appear that the connection must be flushed and dispatched unless the outgoing messages must be delivered.
In my case, I just needed to end the communication over DBus as soon as possible without trying to salvage any outgoing messages.
Thus the short answer is: NO - no flushing / no dispatching needs to be done prior to dbus_connection_unref.
Looking at the documentation for dbus_connection_close(), the only thing that may be invoked is the dispatch status function to indicate that the connection has been closed.
So, ordering here is something you probably want to pay attention to, i.e. getting notified of a closed/dropped connection before dealing with things left in the message queue.
Looking at the source of the function, it looks like the only thing it's going to do is return on failure, i.e. an invalid connection / NULL pointer. Otherwise, it seems to just hang up.
This means yes, you probably should flush the message queue prior to hanging up.
Disclaimer: I've only had to talk to dbus a few times, I'm not by any means an authority on it.
