How to send data at any moment (simultaneously) on Windows? - C

I want to write a named-pipe client for Windows that can send data at any time, even while the client is receiving data. The MSDN example only shows sending data and then receiving a reply, and that serial operation is not what I want. Because the data I transfer between client and server is not big (that is to say, the I/O operations should not be time-consuming), I did not use OVERLAPPED in the client.
The code that I modified from the MSDN client example is below: the main thread keeps reading data, and a child thread keeps sending data to the server. However, while debugging, the server blocks when reading data.
std::thread t([&] {
    cbToWrite = (lstrlen(lpvMessage) + 1) * sizeof(TCHAR);
    _tprintf(TEXT("Sending %d byte message: \"%s\"\n"), cbToWrite, lpvMessage);
    fSuccess = WriteFile(
        hPipe,       // pipe handle
        lpvMessage,  // message
        cbToWrite,   // message length
        &cbWritten,  // bytes written
        NULL);       // not overlapped
    if (!fSuccess)
    {
        _tprintf(TEXT("WriteFile to pipe failed. GLE=%d\n"), GetLastError());
        return -1;
    }
    printf("\nMessage sent to server, receiving reply as follows:\n");
});
while (1) // main thread always reading
{
    do
    {
        // Read from the pipe.
        fSuccess = ReadFile(
            hPipe,                   // pipe handle
            chBuf,                   // buffer to receive reply
            BUFSIZE * sizeof(TCHAR), // size of buffer
            &cbRead,                 // number of bytes read
            NULL);                   // not overlapped
        if (!fSuccess && GetLastError() != ERROR_MORE_DATA)
            break;
        _tprintf(TEXT("\"%s\"\n"), chBuf);
    } while (!fSuccess); // repeat loop if ERROR_MORE_DATA
    if (!fSuccess)
    {
        _tprintf(TEXT("ReadFile from pipe failed. GLE=%d\n"), GetLastError());
        return -1;
    }
}
t.join();
I hope someone can correct that code to get it working, or tell me the standard practice or give suggestions.
Thanks so much!

From the CreateFile documentation about FILE_FLAG_OVERLAPPED:
If this flag is specified, the file can be used for simultaneous
read and write operations.
If this flag is not specified, then I/O operations are serialized
"I/O operations are serialized" means that a new I/O request waits until the previous one completes. So even using multiple threads here does not help if you do not use FILE_FLAG_OVERLAPPED. For example, you can begin a read operation from one thread and wait until data arrives; if you then call write on this file from another thread, the write will wait inside the I/O subsystem until your read completes. Even a mere query of the file name (via GetFileInformationByHandleEx with FileNameInfo) will be serialized and wait until your read completes.
So the only option for simultaneous I/O operations (not only read/write, but all of them) is to use FILE_FLAG_OVERLAPPED when you create the file.
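For illustration, a minimal sketch of the client with FILE_FLAG_OVERLAPPED (this is not the poster's code: the pipe name, message, and buffer size are placeholders, and most error handling is omitted). Each operation gets its own OVERLAPPED structure and event, so the read and the write can be in flight at the same time:

#include <windows.h>
#include <tchar.h>
#include <stdio.h>

int _tmain(void)
{
    TCHAR chBuf[512];
    const TCHAR *lpvMessage = TEXT("hello");
    DWORD cbToWrite = (lstrlen(lpvMessage) + 1) * sizeof(TCHAR);

    // Open the client end with FILE_FLAG_OVERLAPPED so that reads and
    // writes on this handle are not serialized against each other.
    HANDLE hPipe = CreateFile(TEXT("\\\\.\\pipe\\mynamedpipe"),
        GENERIC_READ | GENERIC_WRITE, 0, NULL,
        OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    if (hPipe == INVALID_HANDLE_VALUE)
        return 1;

    // One OVERLAPPED (with its own event) per outstanding operation.
    OVERLAPPED ovRead = {0}, ovWrite = {0};
    ovRead.hEvent  = CreateEvent(NULL, TRUE, FALSE, NULL); // manual-reset
    ovWrite.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

    // Start a read; it typically returns FALSE immediately with
    // GetLastError() == ERROR_IO_PENDING.
    ReadFile(hPipe, chBuf, sizeof(chBuf), NULL, &ovRead);

    // The write can be issued right away; it need not wait for the read.
    WriteFile(hPipe, lpvMessage, cbToWrite, NULL, &ovWrite);

    // Harvest each result when it completes (TRUE = block until done).
    DWORD cbRead = 0, cbWritten = 0;
    GetOverlappedResult(hPipe, &ovWrite, &cbWritten, TRUE);
    GetOverlappedResult(hPipe, &ovRead, &cbRead, TRUE);
    _tprintf(TEXT("wrote %lu bytes, read %lu bytes\n"), cbWritten, cbRead);

    CloseHandle(ovRead.hEvent);
    CloseHandle(ovWrite.hEvent);
    CloseHandle(hPipe);
    return 0;
}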

Related

SSL_read blocks indefinitely

I am trying to read data off an OpenSSL-linked socket using SSL_read. I perform OpenSSL operations in client mode, sending commands to and receiving data from a real-world server. I use two threads: one thread handles all OpenSSL operations like connect, write, and close, and I perform SSL_read in a separate thread. I can read data properly when I issue SSL_read once.
But I ran into problems when I tried to perform multiple connect, write, close sequences. Ideally I should terminate the thread performing SSL_read in response to close, because the next connect gets a new SSL pointer and we do not want to perform a read on the old one. But the problem is that once I call SSL_read, I am stuck until there is data available in the SSL buffer. It stays blocked on the SSL pointer even when I have closed the SSL connection in the other thread.
while (1) {
    memset(sbuf, 0, sizeof(uint8_t) * TLS_READ_RCVBUF_MAX_LEN);
    read_data_len = SSL_read(con, sbuf, TLS_READ_RCVBUF_MAX_LEN);
    switch (SSL_get_error(con, read_data_len)) {
    case SSL_ERROR_NONE:
        ...
    }
}
I tried all possible solutions to the problem, but none works. Mostly I tried to find an indication that there might be data in the SSL buffer, but nothing returns a proper indication.
I tried:
- Doing SSL_pending first to know whether there is data in the SSL buffer. But this always returns zero.
- Doing select on the OpenSSL socket to see whether it returns a value bigger than zero. But it always returns zero.
- Making the socket non-blocking and trying the select, but it doesn't seem to work. I am not sure I got the code right.
An example of where I used select with a blocking socket follows. But select always returns zero.
while (1) {
    // (setup of readfds/width with FD_ZERO/FD_SET on each iteration is elided here)
    // The use of select here is to time out while waiting
    // for data to read on SSL. The timeout is set to 1 second.
    i = select(width, &readfds, NULL, NULL, &tv);
    if (i < 0) {
        // Select error. Take appropriate action for this error.
    }
    // Check if there is data to be read.
    if (i > 0) {
        if (FD_ISSET(SSL_get_fd(con), &readfds)) {
            // TODO: We have data in the SSL buffer. But are we
            // sure that the data is from the read buffer? If not,
            // SSL_read can be stuck indefinitely.
            // Maybe we can do SSL_read(con, sbuf, 0) followed
            // by SSL_pending to find out?
            memset(sbuf, 0, sizeof(uint8_t) * TLS_READ_RCVBUF_MAX_LEN);
            read_data_len = SSL_read(con, sbuf, TLS_READ_RCVBUF_MAX_LEN);
            error = SSL_get_error(con, read_data_len);
            switch (error) {
                ...
            }
        }
    }
}
So as you can see, I have tried a number of ways to get the thread performing SSL_read to terminate in response to close, but I didn't get it to work as I expected. Did anybody get SSL_read to work properly in this situation? Is a non-blocking socket the only solution to my problem? For a blocking socket, how do you solve the problem of quitting SSL_read if you never get a response to a command? Can you give an example of a working solution for read with a non-blocking socket?
I can point you to a working example of a non-blocking client socket with SSL: https://github.com/darrenjs/openssl_examples
It uses non-blocking sockets with standard Linux I/O (based on a poll event loop). Raw data is read from the socket and then fed into SSL memory BIOs, which perform the decryption.
The approach I used was single-threaded: a single thread performs the connect, write, and read. This means there cannot be any problems associated with one thread closing a socket while another thread is trying to use it. Also, as noted by the SSL FAQ, "an SSL connection cannot be used concurrently by multiple threads" (https://www.openssl.org/docs/faq.html#PROG1), so a single-threaded approach avoids problems with concurrent SSL write and read.
The challenge with the single-threaded approach is that you then need some kind of synchronized queue and signalling mechanism for submitting and holding pending outbound data (e.g., the commands you want to send from client to server), and you need the socket event loop to detect when data is pending for write and pull it from the queue. For that I would look at a standard std::list with a std::mutex, and either pipe2 or eventfd for signalling the event loop.
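A minimal sketch of that queue-plus-signal idea in C, using eventfd (assumptions: Linux; a toy single-slot buffer stands in for a real list, and the SSL read/write paths are left as comments):

#include <sys/eventfd.h>
#include <pthread.h>
#include <poll.h>
#include <unistd.h>
#include <stdint.h>
#include <string.h>

static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static char q_buf[4096];     /* toy single-slot outbound queue */
static size_t q_len = 0;
static int wake_fd;          /* eventfd that wakes the event loop */

void io_init(void)
{
    wake_fd = eventfd(0, 0);
}

/* Called by any thread that wants data sent to the server. */
void enqueue_outbound(const char *data, size_t len)
{
    pthread_mutex_lock(&q_lock);
    memcpy(q_buf, data, len);          /* real code would append to a list */
    q_len = len;
    pthread_mutex_unlock(&q_lock);
    uint64_t one = 1;
    write(wake_fd, &one, sizeof one);  /* wake the poll() loop */
}

/* The single I/O thread: owns the socket and all SSL calls. */
void event_loop(int ssl_sock_fd)
{
    struct pollfd fds[2] = {
        { ssl_sock_fd, POLLIN, 0 },    /* raw TLS bytes from the server */
        { wake_fd,     POLLIN, 0 },    /* "outbound data pending" signal */
    };
    for (;;) {
        if (poll(fds, 2, -1) < 0)
            break;
        if (fds[1].revents & POLLIN) {
            uint64_t n;
            read(wake_fd, &n, sizeof n);   /* drain the wakeup counter */
            pthread_mutex_lock(&q_lock);
            /* ... hand q_buf/q_len to the SSL write path here ... */
            q_len = 0;
            pthread_mutex_unlock(&q_lock);
        }
        if (fds[0].revents & POLLIN) {
            /* ... read raw bytes and feed them into the SSL memory BIO ... */
        }
    }
}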
OpenSSL calls recv(), which in turn obeys the socket's receive timeout, and by default that timeout is infinite. You can change the timeout like this:
void socket_timeout_receive_set(SOCKET handle, DWORD milliseconds)
{
    if (handle == SOCKET_HANDLE_NULL)
        return;
    // On Windows, SO_RCVTIMEO takes the timeout as a DWORD in milliseconds;
    // POSIX systems use a struct timeval instead.
    setsockopt(handle, SOL_SOCKET, SO_RCVTIMEO, (char *)&milliseconds, sizeof(milliseconds));
}
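A hypothetical call site (assuming sock is an already-connected SOCKET):

socket_timeout_receive_set(sock, 5000); // recv(), and hence SSL_read(), gives up after 5 seconds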
Unfortunately, SSL_get_error() returns SSL_ERROR_SYSCALL in this case, which it also returns in other situations, so it is not easy to determine that the call timed out. But this function will help you determine whether the connection is lost:
bool socket_dropped(SOCKET handle)
{
    // Special thanks: "Detecting and terminating aborted TCP/IP connections" by Vinayak Gadkari
    if (handle == SOCKET_HANDLE_NULL)
        return true;

    // create a socket set containing just this socket
    fd_set socket_set;
    FD_ZERO(&socket_set);
    FD_SET(handle, &socket_set);

    // if the connection is unreadable, it is not dropped (strange but true)
    static struct timeval timeout = { 0, 0 };
    int count = select(0, &socket_set, NULL, NULL, &timeout);
    if (count <= 0) {
        // problem: count==0 on a connection that was cut off ungracefully, presumably by a busy router
        // for connections that are open for a long time but may not talk much, call keepalive_set()
        return false;
    }
    if (!FD_ISSET(handle, &socket_set)) // creates a dependency on __WSAFDIsSet()
        return false;

    // peek at the next character; recv() returns 0 if the connection was dropped
    char dummy;
    count = recv(handle, &dummy, 1, MSG_PEEK);
    if (count > 0)
        return false;
    if (count == 0)
        return true;
    int err = WSAGetLastError();
    return err == WSAECONNRESET || err == WSAECONNABORTED || err == WSAENETRESET || err == WSAEINVAL;
}

One socket descriptor always blocked on write. Select not working?

Hello, I have a server program and a client program. The server program works fine: I can telnet to the server and read and write in any order (like a chat room) without any issue. However, I am now working on my client program, and when I use select and check whether the socket descriptor is set for read or write, it always goes to write and then blocks. Messages do not get through until the client sends some data.
How can I fix this on my client end so I can read and write in any order?
while (quit != 1)
{
    FD_ZERO(&read_fds);
    FD_ZERO(&write_fds);
    FD_SET(client_fd, &read_fds);
    FD_SET(client_fd, &write_fds);
    if (select(client_fd + 1, &read_fds, &write_fds, NULL, NULL) == -1)
    {
        perror("Error on Select");
        exit(2);
    }
    if (FD_ISSET(client_fd, &read_fds))
    {
        char newBuffer[100] = {'\0'};
        int bytesRead = read(client_fd, newBuffer, sizeof(newBuffer));
        printf("%s", newBuffer);
    }
    if (FD_ISSET(client_fd, &write_fds))
    {
        quit = transmit(handle, buffer, client_fd);
    }
}
Here is the code for the transmit function:
int transmit(char* handle, char* buffer, int client_fd)
{
    int n;
    printf("%s", handle);
    fgets(buffer, 500, stdin);
    if (!strchr(buffer, '\n'))
    {
        while (fgetc(stdin) != '\n');
    }
    if (strcmp(buffer, "\\quit\n") == 0)
    {
        close(client_fd);
        return 1;
    }
    n = write(client_fd, buffer, strlen(buffer));
    if (n < 0)
    {
        error("ERROR writing to socket");
    }
    memset(buffer, 0, 501);
}
I think you are misinterpreting the use of the writefds parameter of select(): only set the bit when you want to write data to the socket. In other words, if there is no data, do not set the bit.
Setting the bit will check whether there is room for writing, and if so, the bit will remain on. Assuming you are not pumping megabytes of data, there will always be room, so right now you always call transmit(), which waits for input from the command line with fgets(), blocking the rest of the program. You have to monitor both the client socket and stdin to keep the program running.
So: check for READ readiness on stdin (use STDIN_FILENO to get its file descriptor), always check for READ on client_fd, and just write() your data to client_fd when the amount of data is small (if you need to write larger chunks, consider non-blocking sockets); see the sketch after this answer.
BTW, you forget to return a proper value at the end of transmit().
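As a minimal sketch of that structure (assumptions: POSIX, and writes small enough that a briefly blocking write() is acceptable; the line handling from transmit() is reduced to a bare fgets()):

#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <unistd.h>

void chat_loop(int client_fd)
{
    char line[512];
    for (;;) {
        fd_set read_fds;
        FD_ZERO(&read_fds);
        FD_SET(STDIN_FILENO, &read_fds);   // user input
        FD_SET(client_fd, &read_fds);      // messages from the server
        int maxfd = client_fd > STDIN_FILENO ? client_fd : STDIN_FILENO;
        if (select(maxfd + 1, &read_fds, NULL, NULL, NULL) == -1)
            break;
        if (FD_ISSET(client_fd, &read_fds)) {
            char buf[512];
            ssize_t n = read(client_fd, buf, sizeof(buf));
            if (n <= 0)
                break;                     // server closed or error
            fwrite(buf, 1, (size_t)n, stdout);
        }
        if (FD_ISSET(STDIN_FILENO, &read_fds)) {
            if (!fgets(line, sizeof(line), stdin))
                break;                     // EOF on stdin
            write(client_fd, line, strlen(line));  // small write; no write_fds needed
        }
    }
}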
Sockets are almost always writable, except when the socket send buffer is full, which indicates that you are sending faster than the receiver is receiving.
So your transmit() function is entered on every pass through the loop, and it reads from stdin, which blocks until you type something; hence nothing happens.
You should only select on writability when a prior send() has returned EWOULDBLOCK/EAGAIN. Otherwise you should just send, when you have something to send.
I would throw this code away and use two or three threads in blocking mode.
select is used to check whether a socket has become ready to read or write. If it blocks for read, that indicates there is no data to read. If it blocks for write, that indicates the TCP buffer is likely full and the remote end has to read some data before the socket will accept more. Since select blocks until one of the socket descriptors is ready, you also need to use a timeout in select to avoid waiting for a long time.
In your specific case, if the remote/receiving end keeps reading data from the socket, then select will not block for the write on the sending end. Otherwise the TCP buffer becomes full on the sender side and select will block. The other answers also point out the importance of handling EAGAIN or EWOULDBLOCK.
Sample flow, sketched in C for a non-blocking socket (fd, p, bytesleft, and timeout_ms are assumed to be set up by the caller):
while (bytesleft > 0) {
    nbytes = write(fd, p, bytesleft);
    if (nbytes > 0) {
        bytesleft -= nbytes;
        p += nbytes;
    } else if (nbytes < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        // wait for the socket to become ready for writing again
        struct pollfd pfd = { fd, POLLOUT, 0 };
        int rc = poll(&pfd, 1, timeout_ms);
        if (rc == 0) {
            // handle the timeout error (e.g. the remote end did not
            // read the data within the expected time interval)
            break;
        }
        if (rc < 0)
            break; // poll error
    } else {
        break; // write error
    }
}
The code should also handle error conditions and the case of read/write returning 0 (note that read/recv returning 0 indicates that the remote end closed the socket).

Named pipe messages corrupted (Win32, C)

I have some pipe communication code where the received bytes do not match the sent bytes.
There is a loop in which CallNamedPipe is called to send messages to the server.
Only the first message is received intact; all the rest arrive partially filled with the 0xCD byte.
It seems that when I free the memory after sending, it is still being read by the server thread.
MSDN says that CallNamedPipe() is a complete message sequence: open the pipe, send the bytes, and close the pipe.
So this seems strange to me. I must mention that this code is built with VC++ 6.0, a very old compiler. The code runs on Windows 7; maybe I need to use compatibility mode? Both client and server executables run on the same physical system, not remotely. The client uses CreateProcess() at startup to start the server. The messages are sent much later on, so race conditions should not matter, I hope.
Thanks for any advice.
============ Client side (pseudocode): ============
for (iPiece = 0; iPiece < nPieces; ++iPiece)
{
    buffer = malloc(2048);
    // copy some data bytes into buffer (1..2048 bytes)
    // log first 32 DWORDs from the message about to be sent
    if (!CallNamedPipe(name, buffer, nBytes, ..., 1000))
    {
        // diagnostics: call to GetLastError(), etc.
    }
    free(buffer);
}
============ Server side (pseudocode): ============
DWORD __stdcall ServerThreadProc (PVOID p)
{
    UINT cbMaxMsg = 0x10000; // 64K for a pipe message
    PVOID buffer = malloc(cbMaxMsg);
    HANDLE hPipe;
    BOOL fAbort = 0;
    hPipe = CreateNamedPipe(name, PIPE_ACCESS_DUPLEX,
        PIPE_WAIT | PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE,
        PIPE_UNLIMITED_INSTANCES, cbMaxMsg, cbMaxMsg, 1000, NULL);
    while (fAbort == 0) // one of the pipe messages sets fAbort=1, so the thread can return
    {
        if (ConnectNamedPipe(hPipe, NULL))
        {
            DWORD bytesLoaded = 0;
            ReadFile(hPipe, buffer, cbMaxMsg, &bytesLoaded, NULL);
            if (bytesLoaded)
            {
                // log first 32 DWORDs from the received message
                // process the pipe message (switch/case)
                // data may be written back to the client after processing
                FlushFileBuffers(hPipe);
            }
            DisconnectNamedPipe(hPipe);
        }
        else
        {
            // diagnostics, GetLastError(), etc.
        }
    }
    free(buffer);
    CloseHandle(hPipe);
    return 0;
}
Using PIPE_TYPE_MESSAGE makes the pipe a message pipe, which gives message-like behavior to the server: if you read with a buffer larger than the standard message size, you will get complete messages.
0xCD is a standard fill pattern; the Microsoft debug CRT uses it to mark newly allocated (uninitialized) heap memory.
So far, we are reading 64K of data and writing 2K of data. It looks like CallNamedPipe doesn't return until the data is accepted (there is a timeout, set to 1000 ms). The behavior of these systems is that once the kernel has the data buffer, the client code is not allowed to change that memory.
I would say the most likely case is that the buffer is not being filled correctly, and that the amount of data arriving in the server is consistent with the messages in the pipe.
You have not provided enough data to verify this.
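For reference, a minimal sketch of a complete CallNamedPipe call (not the poster's code; the function name and reply-buffer size are placeholders). CallNamedPipe is synchronous: it opens the pipe, writes the request, reads the reply, and closes the pipe before returning, so freeing the buffer after it returns is safe:

#include <windows.h>
#include <stdio.h>

BOOL send_piece(LPCSTR name, void *data, DWORD nBytes)
{
    char reply[2048];
    DWORD cbReply = 0;
    BOOL ok = CallNamedPipeA(name,
                             data, nBytes,          // exact byte count of this piece
                             reply, sizeof(reply),  // where the server's answer goes
                             &cbReply,              // bytes actually read back
                             1000);                 // same 1000 ms timeout as the question
    if (!ok)
        printf("CallNamedPipe failed, GLE=%lu\n", GetLastError());
    return ok;
}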

how is select() alerted to an fd becoming "ready"?

I don't know why I'm having a hard time finding this, but I'm looking at some Linux code where we use select() to wait on a file descriptor to report that it's ready. From the man page of select:
select() and pselect() allow a program to monitor multiple file descriptors,
waiting until one or more of the file descriptors become "ready" for some
class of I/O operation
So, that's great... I call select on some descriptor, give it a timeout value, and start waiting for the indication to go. How does the file descriptor (or the owner of the descriptor) report that it's "ready" so that the select() call returns?
It reports that it's ready by returning.
select waits for events that are typically outside your program's control. In essence, by calling select, your program says "I have nothing to do until ..., please suspend my process".
The condition you specify is a set of events, any of which will wake you up.
For example, if you are downloading something, your loop would have to wait on new data to arrive, a timeout to occur if the transfer is stuck, or the user to interrupt, which is precisely what select does.
When you have multiple downloads, data arriving on any of the connections triggers activity in your program (you need to write the data to disk), so you'd give a list of all download connections to select in the list of file descriptors to watch for "read".
When you upload data to somewhere at the same time, you again use select to see whether the connection currently accepts data. If the other side is on dialup, it will acknowledge data only slowly, so your local send buffer is always full, and any attempt to write more data would block until buffer space is available, or fail. By passing the file descriptor we are sending to to select as a "write" descriptor, we get notified as soon as buffer space is available for sending.
The general idea is that your program becomes event-driven, i.e. it reacts to external events from a common message loop rather than performing sequential operations. You tell the kernel "this is the set of events for which I want to do something", and the kernel gives you back a set of events that have occurred. It is fairly common for two events to occur simultaneously; for example, a TCP acknowledgement might be included in a data packet, which can make the same fd both readable (data is available) and writable (acknowledged data has been removed from the send buffer), so you should be prepared to handle all of the events before calling select again.
One of the finer points is that select basically gives you a promise that one invocation of read or write will not block, without making any guarantee about the call itself. For example, if one byte of buffer space is available, you can attempt to write 10 bytes, and the kernel will come back and say "I have written 1 byte", so you should be prepared to handle this case as well. A typical approach is to have a buffer "data to be written to this fd", and as long as it is non-empty, the fd is added to the write set, and the "writeable" event is handled by attempting to write all the data currently in the buffer. If the buffer is empty afterwards, fine, if not, just wait on "writeable" again.
The "exceptional" set is seldom used -- it is used for protocols that have out-of-band data where it is possible for the data transfer to block, while other data needs to go through. If your program cannot currently accept data from a "readable" file descriptor (for example, you are downloading, and the disk is full), you do not want to include the descriptor in the "readable" set, because you cannot handle the event and select would immediately return if invoked again. If the receiver includes the fd in the "exceptional" set, and the sender asks its IP stack to send a packet with "urgent" data, the receiver is then woken up, and can decide to discard the unhandled data and resynchronize with the sender. The telnet protocol uses this, for example, for Ctrl-C handling. Unless you are designing a protocol that requires such a feature, you can easily leave this out with no harm.
Obligatory code example:
#include <sys/types.h>
#include <sys/select.h>
#include <unistd.h>
#include <stdbool.h>
static inline int max(int lhs, int rhs) {
    if(lhs > rhs)
        return lhs;
    else
        return rhs;
}

void copy(int from, int to) {
    char buffer[10];
    int readp = 0;
    int writep = 0;
    bool eof = false;
    for(;;) {
        fd_set readfds, writefds;
        FD_ZERO(&readfds);
        FD_ZERO(&writefds);
        int ravail, wavail;
        if(readp < writep) {
            ravail = writep - readp - 1;
            wavail = sizeof buffer - writep;
        }
        else {
            ravail = sizeof buffer - readp;
            wavail = readp - writep;
        }
        if(!eof && ravail)
            FD_SET(from, &readfds);
        if(wavail)
            FD_SET(to, &writefds);
        else if(eof)
            break;
        int rc = select(max(from, to) + 1, &readfds, &writefds, NULL, NULL);
        if(rc == -1)
            break;
        if(FD_ISSET(from, &readfds))
        {
            ssize_t nread = read(from, &buffer[readp], ravail);
            if(nread < 1)
                eof = true;            // end of file or read error
            else
                readp = readp + nread;
        }
        if(FD_ISSET(to, &writefds))
        {
            ssize_t nwritten = write(to, &buffer[writep], wavail);
            if(nwritten < 1)
                break;
            writep = writep + nwritten;
        }
        if(readp == sizeof buffer && writep != 0)
            readp = 0;
        if(writep == sizeof buffer)
            writep = 0;
    }
}
We attempt to read if we have buffer space available and there was no end-of-file or error on the read side, and we attempt to write if we have data in the buffer; if end-of-file is reached and the buffer is empty, then we are done.
This code clearly behaves suboptimally (it is example code), but you should be able to see that it is acceptable for the kernel to do less than we asked for, on both reads and writes; in that case we just go back and say "whenever you're ready", and we never read or write without asking whether it will block.
From the same man page:
On exit, the sets are modified in place to indicate which file descriptors actually changed status.
So use FD_ISSET() on the sets passed to select to determine which FDs have become ready.

open syscall on fifo not blocking?

I'm creating a fairly big project as homework, where I need to write a server program that listens on 2 FIFOs to which clients write.
Everything works, but one thing is making me angry: whenever I perform an operation, which is composed of several writes/reads between client and server, and then close the FIFOs on the client, it looks like the server "thinks" there is still someone keeping those FIFOs open.
Because of this, the server tries to read 64 bytes after each operation and obviously fails (reading 0 bytes). This happens only once per operation; it doesn't keep trying to read 64 bytes.
It doesn't cause any problem for the clients, but it's really strange, and I hate this type of bug.
I think it's a problem connected to open/close and to the fact that the clients use a lock.
Note: the flags used in the open operations are specified in the pseudocode below.
Server behaviour:
Open Fifo(1) for READING (O_RDONLY)
Open Fifo(2) for WRITING (O_WRONLY)
Do some operations
Close Fifo(1)
Close Fifo(2)
Client behaviour:
Set a lock on Fifo(1) (waiting if there is already one)
Set a lock on Fifo(2) (same as before)
Open Fifo(1) for WRITING (O_WRONLY)
Open Fifo(2) for READING (O_RDONLY)
Do some operations
Close Fifo(1)
Close Fifo(2)
Get lock from Fifo(1)
Get lock from Fifo(2)
I can't post the code directly, except for the functions used for networking, because the project is quite big and I don't use the syscalls directly. Here they are:
int Network_Open(const char* path, int oflag)
{
    return open(path, oflag);
}

ssize_t Network_IO(int fifo, NetworkOpCodes opcode, void* data, size_t dataSize)
{
    ssize_t retsize = 0;
    errno = 0;
    if (dataSize == 0) return 0;
    while ((retsize = (opcode == NetworkOpCode_Write ? write(fifo, data, dataSize)
                                                     : read(fifo, data, dataSize))) < 0)
    {
        if (errno != EINTR) break;
    }
    return retsize;
}

Boolean Network_Send(int fifo, const void* data, size_t dataSize)
{
    return ((ssize_t)dataSize) == Network_IO(fifo, NetworkOpCode_Write, (void*)data, dataSize);
}

Boolean Network_Receive(int fifo, void* data, size_t dataSize)
{
    return ((ssize_t)dataSize) == Network_IO(fifo, NetworkOpCode_Read, data, dataSize);
}

Boolean Network_Close(int fifo)
{
    if (fifo >= 0)
        return close(fifo) == 0;
    return 0; // invalid descriptor
}
Any help will be appreciated, thanks.
EDIT 1:
Client output: http://pastie.org/2523854
Server output (strace): http://pastie.org/2523858
Zero bytes returned from a (blocking) read() indicates end of file, i.e., that the other end has closed the FIFO. Read the man page for read.
A zero-byte result from read() means the other process has finished. Your server must now close the original file descriptor and reopen the FIFO to serve the next client; the blocking operations will resume once you start working with the new file descriptor.
That's the way it is supposed to work.
AFAIK, after you get the zero bytes, further attempts to read from that file descriptor will also return 0 bytes, in perpetuity (or until you close the file descriptor). Even if another process opens the FIFO, the original file descriptor will continue to indicate EOF (and the other client process will hang waiting for a server process to open the FIFO for reading).
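To illustrate the close-and-reopen cycle described above, a minimal sketch of the serving loop (the FIFO path is a placeholder, and the 64-byte buffer mirrors the size mentioned in the question):

#include <fcntl.h>
#include <unistd.h>

void serve_forever(const char *fifo_path)
{
    for (;;) {
        // open() blocks here until some client opens the FIFO for writing
        int fd = open(fifo_path, O_RDONLY);
        if (fd < 0)
            return;
        char buf[64];
        ssize_t n;
        while ((n = read(fd, buf, sizeof(buf))) > 0) {
            // ... handle n bytes of request data ...
        }
        // n == 0: the client closed its end. This descriptor will report
        // EOF forever, so close it and reopen the FIFO for the next client.
        close(fd);
    }
}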
