I'm trying to communicate with a device via a virtual COM port (a USB-to-serial adapter, PL2303) on Windows 10. The device is an Eltek RC250 datalogger.
I have already installed an older PL2303 driver. Device Manager recognizes the device without any errors, and sending and receiving data between the device and the official software works properly.
My problem is that after ReadFile is executed the program does nothing. I think ReadFile is waiting for more input from the device and is therefore stuck in that function.
Trying it on a Windows 7 system leads to the same issue.
The message that I write to the device is a valid message.
The following code shows the communication:
hComm = CreateFile("COM3", //port name
GENERIC_READ | GENERIC_WRITE, //Read/Write
0, // No Sharing
NULL, // No Security
OPEN_EXISTING,// Open existing port only
0, // Non Overlapped I/O
NULL); // Null for Comm Devices /* establish connection to serial port */
if (hComm == INVALID_HANDLE_VALUE)
printf("Error in opening serial port");
else
printf("opening serial port successfully");
nNumberOfBytesToWrite = sizeof(message);
resW = WriteFile(
hComm,
message,
nNumberOfBytesToWrite,
&lpNumberOfBytesWritten,
NULL);
do
{
printf("\nread");
resR = ReadFile(
hComm,
&answer,
sizeof(lpNumberOfBytesRead),
&lpNumberOfBytesRead,
NULL);
SerialBuffer[i] = answer;
i++;
}
while (lpNumberOfBytesRead > 0);
return 0;
Please help me; I have no clue what the problem might be.
Thomas
In the ReadFile() call, the third parameter should be sizeof(answer) (or possibly just 1, since it appears to be a single byte), but certainly not sizeof(lpNumberOfBytesRead). It is blocking while waiting for 4 bytes (the size of a DWORD) when presumably answer is a single byte.
Also, if you have not explicitly set a comm timeout, you have no idea how long ReadFile() will wait before returning 0 to exit the loop. If the timeout is indefinite, it will never exit the loop.
There are other potential issues in this call, but without seeing how the parameters are declared it is not possible to say.
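For example, a minimal sketch of setting explicit comm timeouts and reading a single byte; the timeout values are arbitrary, and the declarations of answer and lpNumberOfBytesRead shown here are assumptions, since they are not in the question:
COMMTIMEOUTS timeouts = { 0 };
timeouts.ReadIntervalTimeout        = 50;   // max ms allowed between two received bytes
timeouts.ReadTotalTimeoutConstant   = 500;  // base ms added to every ReadFile call
timeouts.ReadTotalTimeoutMultiplier = 10;   // extra ms per requested byte
if (!SetCommTimeouts(hComm, &timeouts))
    printf("SetCommTimeouts failed\n");

unsigned char answer = 0;                   // one byte per read
DWORD lpNumberOfBytesRead = 0;
BOOL resR = ReadFile(hComm, &answer, sizeof(answer),
                     &lpNumberOfBytesRead, NULL);
With timeouts set, ReadFile() returns with zero bytes read once the device stops sending, so the while (lpNumberOfBytesRead > 0) loop can actually exit.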
We have a set of USB devices which we monitor using an RPi. The monitoring code polls the devices directly through the hidraw interface about once a second. The protocol uses 64-byte packets to send commands and receive data, and all responses are at most 64 bytes long.
The same scheme works fine under Windows using the Windows HID driver. On Linux, however, we use hidraw and find that the device interface gets jammed after a short time, resulting in unsuccessful write()s to the device.
After a lot of investigation I came across a recommendation to follow the communication between a host and a hidraw device by running this in a terminal:
sudo cat /dev/hidraw0
As it turns out, running this command outputs 4-8 bytes of unreadable characters to the terminal on every write(), and unexpectedly it also clears the jam for hidraw0. All subsequent write()s and read()s to that device work flawlessly.
If that device is disconnected and then reconnected, the jam condition returns shortly thereafter. I have single-stepped the code and verified that the "junk" is output during the execution of the write().
I tried adding fsync() calls before and after the write() in the hope of clearing the buffers and avoiding this issue, but that did not help. The code for the write() and subsequent read() is standard, as follows:
#include <errno.h>   /* errno comes from here; do not declare it yourself */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define USB_PACKET 64
#define USB_WRDELAY 10 //ms

int fd, res;
char packet[USB_PACKET];

/* Open the device with non-blocking reads. */
fd = open("/dev/hidraw0", O_RDWR|O_NONBLOCK);
if (fd < 0) {
    perror("Unable to open device");
    return 0; // failure
}

memset(packet, 0x0, sizeof(packet));
packet[0] = 0x34; // command code - request for USB device status bytes

fsync(fd);
res = write(fd, packet, sizeof(packet));
fsync(fd);
if (res < 0) {
    printf("Error: %d in USB write()\n", errno);
    close(fd);
    return 0; // failure
} else {
    usleep(1000*USB_WRDELAY); // delay gives OS and device time to respond
    res = read(fd, packet, sizeof(packet));
    if (res < 0) {
        printf("Error: %d in USB read()\n", errno);
        close(fd);
        return 0; // failure
    } else {
        // good read, packet holds the response data
        // process the device data
        close(fd);
        return 1; // OK
    }
}
return 0; // failure
This is a sample of the gibberish we read on the terminal running the cat command for each executed write():
4n��#/5 �
I do not understand where this junk comes from or how to get rid of it. I tried several things that did not work out, such as adding a read() with a timeout before the write(), hoping it was data left over from a previous incomplete read().
I also tried writing a smaller buffer, since I only need to send a 2-byte command, and adding a delay between the open() and the write().
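For reference, a sketch of what such a timed drain read could look like with poll(); drain_hidraw is a made-up name and the 50 ms timeout is arbitrary:
#include <poll.h>
#include <unistd.h>

/* Sketch only: discard anything already queued on the hidraw fd,
   giving up after a short poll() timeout. */
static void drain_hidraw(int fd)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    char junk[USB_PACKET];

    while (poll(&pfd, 1, 50) > 0 && (pfd.revents & POLLIN)) {
        if (read(fd, junk, sizeof(junk)) <= 0)
            break;
    }
}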
Unfortunately, using cat in the terminal interferes with the hot plug/unplug detection of the USB devices, so it is not a solution we can use in deployment.
I'd appreciate any words of wisdom on this.
I'm creating a C library that manages many peripherals of my embedded device. The OS is a Linux distro built with Yocto. I'm trying to write functions that connect my device to a well-known Wi-Fi router using netlink (via the libnl API). With the help of this community, I've developed a function that can scan for the routers in the area. Does anyone know how to use libnl to connect my device to a Wi-Fi router?
I've developed the following code, which tries to connect to an AP called "Validator_Test" (which has no authentication password). The software returns no error, but my device remains disconnected from the AP. Does anyone know what is wrong with my code? Unfortunately, I have not found any example or documentation for this operation.
static int ap_conn() {
struct nl_msg *msg = nlmsg_alloc();
int if_index = if_nametoindex("wlan0"); // Use this wireless interface for scanning.
// Open socket to kernel.
struct nl_sock *socket = nl_socket_alloc(); // Allocate new netlink socket in memory.
genl_connect(socket); // Create file descriptor and bind socket.
int driver_id = genl_ctrl_resolve(socket, "nl80211"); // Find the nl80211 driver ID.
genlmsg_put(msg, 0, 0, driver_id, 0, (NLM_F_REQUEST | NLM_F_ACK), NL80211_CMD_CONNECT, 0);
nla_put_u32(msg, NL80211_ATTR_IFINDEX, if_index); // Add message attribute, which interface to use.
nla_put(msg, NL80211_ATTR_SSID, strlen("Validator_Test"), "Validator_Test");
nla_put(msg, NL80211_ATTR_MAC, strlen("00:1e:42:21:e4:e9"), "00:1e:42:21:e4:e9");
int ret = nl_send_auto_complete(socket, msg); // Send the message.
printf("NL80211_CMD_CONNECT sent %d bytes to the kernel.\n", ret);
ret = nl_recvmsgs_default(socket); // Retrieve the kernel's answer. callback_dump() prints SSIDs to stdout.
nlmsg_free(msg);
if (ret < 0) {
printf("ERROR: nl_recvmsgs_default() returned %d (%s).\n", ret, nl_geterror(-ret));
return ret;
}
nla_put_failure:
return -ENOSPC;
}
Thanks to all of you!
Thanks for the code.
Based on your code, I modified it and ran the test here; it works.
The source code is at:
https://github.com/neojou/nl80211/blob/master/test_connect/src/test_connect_nl80211.c
Some suggestions for this:
Make sure the test environment is correct.
Before testing the code, you can try using iw to do the test first.
iw is an open-source tool which also uses netlink.
You can type "sudo iw wlan0 connect Validator_Test"
and then use iwconfig to see whether it is connected.
(This assumes there is no security set on the AP, as you said.)
There are two differences between your source code and mine:
(1) There is no need to set NL80211_ATTR_MAC.
(2) The return value of ret = nl_recvmsgs_default(socket) should be handled:
I'm not sure whether the return value of your ap_conn() is checked anywhere, but it seems better to return 0 from ap_conn() when nl_recvmsgs_default() returns 0.
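Put together, a minimal sketch of the connect call with those two changes applied; it mirrors the structure of the question's code rather than the linked file, assumes the same headers, and omits error handling for brevity:
static int ap_conn(void)
{
    struct nl_sock *socket = nl_socket_alloc();          // allocate netlink socket
    genl_connect(socket);                                // bind it to generic netlink
    int driver_id = genl_ctrl_resolve(socket, "nl80211");
    int if_index = if_nametoindex("wlan0");

    struct nl_msg *msg = nlmsg_alloc();
    genlmsg_put(msg, 0, 0, driver_id, 0, NLM_F_REQUEST | NLM_F_ACK,
                NL80211_CMD_CONNECT, 0);
    nla_put_u32(msg, NL80211_ATTR_IFINDEX, if_index);
    nla_put(msg, NL80211_ATTR_SSID, strlen("Validator_Test"), "Validator_Test");
    /* note: no NL80211_ATTR_MAC attribute here */

    nl_send_auto_complete(socket, msg);                  // send the request
    int ret = nl_recvmsgs_default(socket);               // wait for the kernel's ACK/error
    nlmsg_free(msg);
    nl_socket_free(socket);

    return (ret < 0) ? ret : 0;                          // 0 on success
}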
I am trying to read data off an OpenSSL-linked socket using SSL_read. I perform the OpenSSL operations in client mode, sending commands to and receiving data from a real-world server. I use two threads: one thread handles all the OpenSSL operations like connect, write, and close, and I perform SSL_read in a separate thread. I am able to read data properly when I issue SSL_read once.
But I ran into problems when I tried to perform multiple connect, write, close sequences. Ideally I should terminate the thread performing SSL_read in response to close, because the next connect yields a new SSL pointer and I do not want to perform a read on the old one. The problem is that SSL_read blocks until data is available in the SSL buffer. It stays blocked on the SSL pointer even when I have closed the SSL connection in the other thread.
while(1) {
memset(sbuf, 0, sizeof(uint8_t) * TLS_READ_RCVBUF_MAX_LEN);
read_data_len = SSL_read(con, sbuf, TLS_READ_RCVBUF_MAX_LEN);
switch (SSL_get_error(con, read_data_len)) {
case SSL_ERROR_NONE:
.
.
.
}
I tried all possible solutions to the problem but none worked. Mostly I looked for an indication that there might be data in the SSL buffer, but nothing returned a proper indication.
I tried:
- Calling SSL_pending first to find out whether there is data in the SSL buffer, but this always returns zero.
- Doing a select on the underlying socket to see if it returns a value bigger than zero, but it always returns zero.
- Making the socket non-blocking and trying the select, but that doesn't seem to work either; I am not sure I got the code right.
An example of where I used select with the blocking socket follows; select always returns zero.
while(1) {
// The use of Select here is to timeout
// while waiting for data to read on SSL.
// The timeout is set to 1 second
i = select(width, &readfds, NULL,
NULL, &tv);
if (i < 0) {
// Select Error. Take appropriate action for this error
}
// Check if there is data to be read
if (i > 0) {
if (FD_ISSET(SSL_get_fd(con), &readfds)) {
// TODO: We have data in the SSL buffer. But are we
// sure that the data is from read buffer? If not,
// SSL_read can be stuck indefinitely.
// Maybe we can do SSL_read(con, sbuf, 0) followed
// by SSL_pending to find out?
memset(sbuf, 0, sizeof(uint8_t) * TLS_READ_RCVBUF_MAX_LEN);
read_data_len = SSL_read(con, sbuf, TLS_READ_RCVBUF_MAX_LEN);
error = SSL_get_error(con, read_data_len);
switch (error) {
.
.
}
So as you can see, I have tried a number of ways to get the thread performing SSL_read to terminate in response to close, but I didn't get it to work as expected. Did anybody get SSL_read to work properly in this situation? Is a non-blocking socket the only solution to my problem? With a blocking socket, how do you solve the problem of quitting SSL_read if you never get a response to a command? Can you give an example of a working solution with a non-blocking socket and read?
I can point you to a working example of a non-blocking client socket with SSL: https://github.com/darrenjs/openssl_examples
It uses non-blocking sockets with standard Linux I/O (based on a poll event loop). Raw data is read from the socket and then fed into SSL memory BIOs, which perform the decryption.
The approach I used was single-threaded. A single thread performs the connect, write, and read, so there cannot be any problems caused by one thread closing a socket while another thread is trying to use it. Also, as noted by the OpenSSL FAQ, "an SSL connection cannot be used concurrently by multiple threads" (https://www.openssl.org/docs/faq.html#PROG1), so the single-threaded approach avoids problems with concurrent SSL write and read.
The challenge with the single-threaded approach is that you then need to create some kind of synchronized queue and signalling mechanism for submitting and holding the data pending for outbound transfer (e.g., the commands that you want to send from client to server), and get the socket event loop to detect when there is data pending for write and pull it from the queue. For that I would look at standard std::list, std::mutex, etc., and either pipe2 or eventfd for signalling the event loop.
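To make the queue-plus-signalling idea concrete, here is a minimal sketch in C using eventfd to wake a poll()-based event loop when another thread has queued outbound data; the queue itself and the SSL calls are omitted, and the names (wake_fd, notify_event_loop, event_loop) are placeholders rather than part of the linked example:
#include <poll.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

int wake_fd;  /* created once with eventfd(0, EFD_NONBLOCK), shared with producer threads */

/* producer thread: call after pushing a command onto the mutex-protected queue */
void notify_event_loop(void)
{
    uint64_t one = 1;
    write(wake_fd, &one, sizeof(one));    /* wakes the poll() below */
}

/* event loop thread: waits on both the TLS socket and the wakeup fd */
void event_loop(int ssl_sock)
{
    for (;;) {
        struct pollfd fds[2] = {
            { .fd = ssl_sock, .events = POLLIN },
            { .fd = wake_fd,  .events = POLLIN },
        };
        if (poll(fds, 2, -1) < 0)
            break;

        if (fds[1].revents & POLLIN) {
            uint64_t n;
            read(wake_fd, &n, sizeof(n)); /* clear the eventfd counter */
            /* pop pending commands from the queue and write them out */
        }
        if (fds[0].revents & POLLIN) {
            /* socket readable: read raw bytes, feed the SSL memory BIO, then SSL_read() */
        }
    }
}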
OpenSSL calls recv(), which in turn obeys the socket's timeout, and by default that timeout is infinite. You can change the timeout like this:
void socket_timeout_receive_set(SOCKET handle, dword milliseconds)
{
    if(handle==SOCKET_HANDLE_NULL)
        return;

    // Winsock expects the SO_RCVTIMEO value as a DWORD in milliseconds
    // (on POSIX systems it would be a struct timeval instead)
    dword timeout = milliseconds;
    setsockopt(handle, SOL_SOCKET, SO_RCVTIMEO, (char *)&timeout, sizeof(timeout));
}
Unfortunately, SSL_get_error() returns SSL_ERROR_SYSCALL in this case, which it also returns in other situations, so it's not easy to determine that it timed out. But this function will help you determine whether the connection has been lost:
bool socket_dropped(SOCKET handle)
{
// Special thanks: "Detecting and terminating aborted TCP/IP connections" by Vinayak Gadkari
if(handle==SOCKET_HANDLE_NULL)
return true;
// create a socket set containing just this socket
fd_set socket_set;
FD_ZERO(&socket_set);
FD_SET(handle, &socket_set);
// if the connection is unreadable, it is not dropped (strange but true)
static struct timeval timeout = { 0, 0 };
int count = select(0, &socket_set, NULL, NULL, &timeout);
if(count <= 0) {
// problem: count==0 on a connection that was cut off ungracefully, presumably by a busy router
// for connections that are open for a long time but may not talk much, call keepalive_set()
return false;
}
if(!FD_ISSET(handle, &socket_set)) // creates a dependency on __WSAFDIsSet()
return false;
// peek at the next character
// recv() returns 0 if the connection was dropped
char dummy;
count = recv(handle, &dummy, 1, MSG_PEEK);
if(count > 0)
    return false;
if(count==0)
    return true;

// recv() failed: check the specific Winsock error code
int error = WSAGetLastError();
return error==WSAECONNRESET || error==WSAECONNABORTED ||
       error==WSAENETRESET  || error==WSAEINVAL;
}
I have written a proxy which also duplicates traffic. I am trying to duplicate network traffic to a replica server which should receive all the inputs and also process all the requests. However, only the responses from the main server are visible to the client. The high-level workflow is as follows:
Thread 1. Take input from client forward it to a pipe in non-blocking way, and to the server
Thread 2. Read from server and send to client
Thread 3. Read from pipe and forward to replica server
Thread 4. Read from replica server and drop
The code is available in this gist: https://gist.github.com/nipunarora/679d49e81086b5a75195ec35ced646de
The test seems to work for smaller data and transactions, but I get the following error when working with iperf and larger data sets:
Buffer overflow? : Resource temporarily unavailable
The specific part of the code where the problem stems from:
void forward_data_asynch(int source_sock, int destination_sock) {
char buffer[BUF_SIZE];
int n;
//put in error condition for -1, currently the socket is shutdown
while ((n = recv(source_sock, buffer, BUF_SIZE, 0)) > 0)// read data from input socket
{
send(destination_sock, buffer, n, 0); // send data to output socket
if( write(pfds[1],buffer,n) < 0 )//send data to pipe
{
//fprintf(stats_file,"buffer_overflow \n");
//printf("format string" ,a0,a1);
//int_timeofday();
perror("Buffer overflow? ");
}
//DEBUG_PRINT("Data sent to pipe %s \n", buffer);
}
shutdown(destination_sock, SHUT_RDWR); // stop other processes from using socket
close(destination_sock);
shutdown(source_sock, SHUT_RDWR); // stop other processes from using socket
close(source_sock);
}
The reading process is as follows:
void forward_data_pipe(int destination_sock) {
char buffer[BUF_SIZE];
int n;
sleep(10);
//put in error condition for -1, currently the socket is shutdown
while ((n = read(pfds[0], buffer, BUF_SIZE)) > 0)// read data from pipe socket
{
//sleep(1);
//DEBUG_PRINT("Data received in pipe %s \n", buffer);
send(destination_sock, buffer, n, 0); // send data to output socket
}
shutdown(destination_sock, SHUT_RDWR); // stop other processes from using socket
close(destination_sock);
}
Please note, the pipe's file descriptors have been made non-blocking using the following function:
/** Make file descriptor non blocking */
int setNonblocking(int fd)
{
int flags;
/* If they have O_NONBLOCK, use the Posix way to do it */
#if defined(O_NONBLOCK)
/* Fixme: O_NONBLOCK is defined but broken on SunOS 4.1.x and AIX 3.2.5. */
if (-1 == (flags = fcntl(fd, F_GETFL, 0)))
flags = 0;
return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
#else
/* Otherwise, use the old way of doing it */
flags = 1;
return ioctl(fd, FIONBIO, &flags);
#endif
}
Could anyone help identify what the reason for the error might be, and how to fix it?
The problem in your case is that data is written too fast to the descriptor (the pipe) that has been set to non-blocking mode; when its buffer is full, the write fails with EAGAIN ("Resource temporarily unavailable"). You have several options:
Accept the fact that data may be lost. If you do not want to delay the processing on the main server, this is your only option.
Don't set the descriptor to non-blocking mode. The default, blocking mode, seems like a better fit for your application if you don't want data to be lost. However, this also means that the system may be slowed down.
Use poll(), select(), kqueue(), epoll(), /dev/poll or similar to wait until the descriptor has enough buffer space available (a minimal sketch follows below). However, when using this, you should consider why you set it to non-blocking mode in the first place if you nevertheless want to block on it; this also slows the system down.
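For the third option, a minimal sketch of waiting for buffer space before each write; it assumes pfds[1] is the non-blocking pipe write end from the question, and write_all_poll is a made-up helper name:
#include <errno.h>
#include <poll.h>
#include <unistd.h>

/* Write all of buf to a non-blocking fd, waiting with poll() whenever
   its buffer is full (EAGAIN/EWOULDBLOCK). */
static ssize_t write_all_poll(int fd, const char *buf, size_t len)
{
    size_t off = 0;
    while (off < len) {
        ssize_t n = write(fd, buf + off, len - off);
        if (n > 0) {
            off += (size_t)n;
        } else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            struct pollfd pfd = { .fd = fd, .events = POLLOUT };
            poll(&pfd, 1, -1);   /* block until there is room again */
        } else {
            return -1;           /* real error */
        }
    }
    return (ssize_t)off;
}
Note that calling something like this in place of the raw write to pfds[1] reintroduces blocking on the proxy path whenever the replica falls behind, which is exactly the trade-off described above.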
What I need:
A USB-based communication channel from an embedded target (running Linux) to a host (running Windows).
What I have:
A tty device based on the community drivers (u_serial.c, f_acm.c) and a userspace program using the open/close/write/read calls.
The problem:
The userspace daemon successfully opens the tty device and reads from it even if the USB cable is not connected. If the USB cable is connected while this happens, all USB enumeration is on hold until I cancel the program (cancel the read(), that is).
The same behavior occurs when the tty is only open()ed, without read().
What I saw:
The open() function of my device is defined like the u_serial.c open().
It looks something like this:
static int gs_open(struct tty_struct *tty, struct file *file)
{
/*
*definitions
*/
/*
*basic error checks
*/
//check if the tty device is already open/not opened/currently being opened and deal with each
//if not open:
//<real code starts:>
/* Do the "real open" */
spin_lock_irq(&port->port_lock);
/* allocate circular buffer on first open */
if (port->port_write_buf.buf_buf == NULL) {
spin_unlock_irq(&port->port_lock);
status = gs_buf_alloc(&port->port_write_buf, WRITE_BUF_SIZE);
spin_lock_irq(&port->port_lock);
if (status) {
pr_debug("gs_open: ttyGS%d (%p,%p) no buffer\n",
port->port_num, tty, file);
port->openclose = false;
goto exit_unlock_port;
}
}
tty->driver_data = port;
port->port_tty = tty;
port->open_count = 1;
port->openclose = false;
/* if connected, start the I/O stream */
if (port->port_usb) {
struct gserial *gser = port->port_usb;
pr_debug("gs_open: start ttyGS%d\n", port->port_num);
gs_start_io(port);
if (gser->connect)
gser->connect(gser);
}
pr_debug("gs_open: ttyGS%d (%p,%p)\n", port->port_num, tty, file);
status = 0;
exit_unlock_port:
spin_unlock_irq(&port->port_lock);
return status;
}
Some unrelated code was removed; see the source here.
Note that even if USB is not connected (port->port_usb is NULL), we continue and return 0, which means everything is OK.
Also note that if USB is not connected, we do not execute gs_start_io(), which does the actual read from the device and only works while USB is connected.
So one question here is: if the kernel is built so that open() returns no error while the USB cable is not connected, why does everything stall while I hold an open descriptor to the device?
I would assume that it would either fail to open the device in the first place, or ignore the situation and continue the USB connection normally.
Also, if this is the correct behavior, how can I deal with it and know whether the cable is disconnected?
A similar discussion was held here some years ago, with no real conclusions.
And a final thought: the issue could be handled if the userspace program were notified when USB is connected, but this doesn't mean the current behavior is right.
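One possible way to get such a notification, sketched below under the assumption that your kernel exposes the UDC state in sysfs (the controller name ci_hdrc.0 is a placeholder; check /sys/class/udc/ on your target): the daemon could poll this attribute and delay open()ing ttyGS0 until the gadget reports "configured".
#include <stdio.h>
#include <string.h>

/* Sketch only: returns 1 if the gadget reports "configured", 0 if not,
 * -1 if the sysfs attribute is not available on this kernel. */
static int gadget_is_configured(void)
{
    char state[32] = {0};
    FILE *f = fopen("/sys/class/udc/ci_hdrc.0/state", "r"); /* placeholder controller name */

    if (!f)
        return -1;
    if (!fgets(state, sizeof(state), f)) {
        fclose(f);
        return -1;
    }
    fclose(f);
    return strncmp(state, "configured", 10) == 0;
}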
Thanks.