I'm trying to get the size of a chunk of data sent via NetLink (Linux Kernel).
I have tried with size = nlh->nlmsg_len - NLMSG_HDRLEN, but that isn't returning the correct size.
What's the correct way to get the message's data size?
Why do you believe nlh->nlmsg_len - NLMSG_HDRLEN is not returning the payload size (the message size without the header) for the message you are looking at? If nlmsg_len contains, for example, the value 16 (which is what NLMSG_HDRLEN should be), then the payload of this message is empty. Your buffer may contain more messages waiting to be read.
As a side note, I recommend you use libmnl for Netlink message parsing and construction.
Related
I have a string that was sent over a network and arrived at my server as a Buffer. It is formatted according to my own custom protocol (in theory; I haven't implemented it yet). I want to use the first n bytes for a string that will identify the protocol.
I have done:
data.toString('utf8');
on the whole buffer but that just gives me the whole packet as a string which is not what I want to achieve.
When the message is received, how do I convert a subset of the bytes into a string?
Thanks in advance
The Buffer.toString() method accepts start and end parameters, which you can use to slice out just the subset you want for your substring. Depending on your implementation, this may be faster than allocating a new intermediary Buffer as you suggested in your answer.
Check out Node's Buffer.toString() method for more information.
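For instance (the 4-byte "MYP1" protocol id here is made up, just to have something concrete), the start/end form of toString() decodes only the prefix and leaves the rest of the packet untouched:

```javascript
// Sketch: pull a fixed-width protocol identifier off the front of a Buffer.
// The 4-byte "MYP1" prefix is a made-up example protocol id.
const data = Buffer.concat([Buffer.from('MYP1'), Buffer.from([0x01, 0x02, 0x03])]);

// toString(encoding, start, end) decodes only the bytes in [start, end)
const protocolId = data.toString('utf8', 0, 4);
console.log(protocolId); // -> MYP1

// the rest of the packet is still available as raw bytes
const payload = data.subarray(4);
console.log(payload.length); // -> 3
```

No intermediate Buffer is allocated for the string; subarray() likewise returns a view over the same memory rather than a copy.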
Found out how.
You have to copy the amount of bytes you want into another buffer by calling the method copy on the original buffer i.e:
sourceBuffer.copy(targetBuffer, targetStartIndex, sourceStartIndex, sourceEndIndex)
This copies the required data into targetBuffer, on which you can then call toString() or any other method to convert the bytes into your desired data type.
I know I'm a bit late to this, but why not just something like:
buffer.subarray(0, bytes).toString();
I am communicating with a battery that sends data using the CAN bus protocol (J1939). I use a PIC 18F26K83. My goal is to display the remaining state of charge on a display. My current idea is to collect many samples and display the most frequent value (for example, if 60 of 100 samples read 99% and 40 read 1%, I display 99%). However, it does not look like a reliable solution because I do not know how much garbage data I receive. Please note that I cannot use error-detection algorithms such as checksums because I have no access to the microcontroller in the battery; I can only work on the receiver side (the display).
Edit: I am aware of the CRC in CAN bus, but it does not always seem to work, since I sometimes still get garbage.
Yes, you can rely on the CRC, because the CRC is also calculated on the receiver side by the communication controller. That is how a CRC error is detected in the first place. To elaborate:
the battery sends a complete message; the message picks up interference on the physical layer; the receiver (your PIC) receives the message and calculates the CRC over it; the result does not match the CRC field included in the message;
the PIC's communication controller increments its receive error counter (REC) by 1 and does not ACK the message to the battery.
You will detect every type of CAN message error on the receiver side, except a bit error, which is also irrelevant since it results in an incomplete message.
Basically, you should not be able to receive corrupted (garbage) CAN message content from which to deduce the battery level: a corrupted frame is simply discarded before it arrives at the application layer of your PIC.
According to this site:
http://pedrowa.weba.sk/docs/ApiDoc/apidoc_ap_get_client_block.html
This function:
ap_get_client_block(request_rec *r, char *buffer, int bufsiz);
Reads a chunk of POST data coming in from the network when a user requests a webpage.
So far, I have the following core code that reads the data:
while (len_read > 0) {
    len_read = ap_get_client_block(r, argsbuffer, length);
    if ((rpos + len_read) > length) {
        rsize = length - rpos;
    } else {
        rsize = len_read;
    }
    memcpy((char *)*rbuf + rpos, (char *)argsbuffer, (size_t)rsize);
    rpos += rsize;
}
argsbuffer is a character array of length bytes, rbuf is a valid pointer, and the rest of the variables are of the apr_off_t data type.
If I changed this code to this:
while (len_read > 0) {
    len_read = ap_get_client_block(r, argsbuffer, length);
    if ((rpos + len_read) > length) {
        rsize = length - rpos;
    } else {
        rsize = len_read;
    }
    memcpy((char *)*rbuf + rpos, (char *)argsbuffer, (size_t)rsize);
    rpos += rsize;
    if (rpos > wantedlength) {
        len_read = 0;
    }
}
would I be able to close the stream some way and maintain processing speed without corrupt data coming in?
I already called ap_setup_client_block(r, REQUEST_CHUNKED_ERROR) and made sure ap_should_client_block(r) returned true before running the first loop above. That is, in a sense, like opening a file, and ap_get_client_block(r, argsbuffer, length) is like reading from it. Is there some ap_ command equivalent to close?
What I want to avoid is corrupt data.
The incoming data comes in various sizes, and I only want to capture a certain piece of it without looping through the entire payload every time. That's why I posted this question.
For example: If I wanted to look for "A=123" as the input data within the first fixed 15 bytes and the first set of data is something like:
S=1&T=2&U=35238952958&V=3468348634683963869&W=fjfdslhjsdflhjsldjhsljsdlkj
Then I want the program to examine only:
S=1&T=2&U=35238
I'm tempted to use the second block of code. The first block works but goes through everything.
Anyone have any idea? I want this to run on a live server, as I am improving a security detection system. If anyone knows any other functionality I should add to or remove from my code, let me know. I want to optimize for speed.
I'm using XenDesktop 5.6 (server) and Citrix Receiver 3.6 (client). I've used the Virtual Channel SDK to create a channel between server and client and pass C-style structures back and forth, using the examples found here. I can easily pass simple numeric types (USHORT, etc.) between client and server just by setting the appropriate structure field (e.g. g_pMixHd->dwRetVal = 1) but I cannot do the same with string types (LPBYTES, PSZ, PUCHAR). I have tried allocating memory on client and/or server, updating the structure's length field and other approaches but nothing seems to work.
All I want to do is have my client assign a simple ANSI/ASCII string in the receiving structure and have it passed back to the server. Has anybody done this? Can you help?
Without seeing more details, I'll guess that this is probably because you're using pointer-based strings.
Let's say you have a C structure with a string member. That member should not be a pointer to a string; it should be a byte array, and you should copy the string into the byte array before sending the packet.
First stackoverflow question! I've searched...I promise. I haven't found any answers to my predicament. I have...a severely aggravating problem to say the least. To make a very long story short, I am developing the infrastructure for a game where mobile applications (an Android app and an iOS app) communicate with a server using sockets to send data to a database.

The back end server script (which I call BES, or Back End Server) is several thousand lines of code long. Essentially, it has a main method that accepts incoming connections to a socket and forks them off, and a method that reads the input from the socket and determines what to do with it. Most of the code lies in the methods that send and receive data from the database and send it back to the mobile apps.

All of them work fine, except for the newest method I have added. This method grabs a large amount of data from the database, encodes it as a JSON object, and sends it back to the mobile app, which decodes the JSON object and does what it needs to do. My problem is that this data is very large and most of the time does not make it across the socket in one write. Thus, I added one additional write to the socket that informs the app of the size of the JSON object it is about to receive. However, after this write happens, the next write sends empty data to the mobile app.
The odd thing is, when I remove this first write that sends the size of the JSON object, the actual sending of the JSON object works fine. It's just very unreliable and I have to hope that it sends it all in one read. To add more oddity to the situation, when I make the size of the data that the second write sends a huge number, the iOS app will read it properly, but it will have the data in the middle of an otherwise empty array.
What in the world is going on? Any insight is greatly appreciated! Below is just a basic snippet of my two write commands on the server side.
Keep in mind that EVERYWHERE else in this script the reads and writes work fine; this is the only place where I do two write operations back to back.
The server script runs on an Ubuntu server in native C using Berkeley sockets, and the iOS side uses a wrapper class called AsyncSocket.
int n;
//outputMessage contains a string that tells the mobile app how long the next message
//(returnData) will be
n = write(sock, outputMessage, sizeof(outputMessage));
if(n < 0)
//error handling is here
//returnData is a JSON encoded string (well, char[] to be exact, this is native-C)
n = write(sock, returnData, sizeof(returnData));
if(n < 0)
//error handling is here
The mobile app makes two read calls and gets outputMessage just fine, but returnData is always just a bunch of empty data, unless I replace sizeof(returnData) with some hugely large number, in which case the iOS app receives the data in the middle of an otherwise empty data object (an NSData object, to be exact). It may also be important to note that the method I use on the iOS side in my AsyncSocket class reads data up to the length that it receives from the first write call. So if I tell it to read, say, 10000 bytes, it will create an NSData object of that size and use it as the buffer when reading from the socket.
Any help is greatly, GREATLY appreciated. Thanks in advance everyone!
It's just very unreliable and I have to hope that it sends it all in one read.
The key to successful programming with TCP is that there is no concept of a TCP "packet" or "block" of data at the application level. The application only sees a stream of bytes, with no boundaries. When you call write() on the sending end with some data, the TCP layer may choose to slice and dice your data in any way it sees fit, including coalescing multiple blocks together.
You might write 10 bytes two times and read 5 then 15 bytes. Or maybe your receiver will see 20 bytes all at once. What you cannot do is just "hope" that some chunks of bytes you send will arrive at the other end in the same chunks.
What might be happening in your specific situation is that the two back-to-back writes are being coalesced into one, and your reading logic simply can't handle that.
Thanks for all of the feedback! I incorporated everyone's answers into the solution. I created a method that writes an iovec struct to the socket using writev instead of write. The wrapper class I'm using on the iOS side, AsyncSocket (which is fantastic, by the way; check it out here: AsyncSocket Google Code Repo), handles the data sent this way just fine, apparently behind the scenes, as it required no additional effort on my part to read all of the data correctly. The AsyncSocket class now does not call my delegate method didReadData until it receives all of the data specified in the iovec struct.
Again, thank you all! This helped greatly. Literally overnight I got responses for an issue I've been up against for a week now. I look forward to becoming more involved in the stackoverflow community!
Sample code for solution:
//returnData is the JSON encoded string I am returning
//sock is my predefined socket descriptor
struct iovec iov[1];
int iovcnt = 0;

iov[0].iov_base = returnData;
iov[0].iov_len = strlen(returnData);
iovcnt = sizeof(iov) / sizeof(struct iovec);

n = writev(sock, iov, iovcnt);
if (n < 0)
    //error handling here
while (n < (ssize_t)iov[0].iov_len)
    //writev returns the number of bytes written, not iovec entries, so compare
    //against the total byte count; rebuild the iovec with the remaining data
    //from returnData (from position n to the end of the string)
You should really define a function write_complete that completely writes a buffer to a socket. Check the return value of write: it may also be a positive number smaller than the size of the buffer, in which case you need to write the remaining part of the buffer again.
Oh, and using sizeof is error-prone, too: if returnData is a pointer, sizeof(returnData) is the size of the pointer, not of the data. In your write_complete function you should therefore print the given size and compare it to what you expect.
Ideally on the server you want to write the header (the size) and the data atomically, I'd do that using the scatter/gather calls writev() also if there is any chance multiple threads can write to the same socket concurrently you may want to use a mutex in the write call.
writev() will also write all the data before returning (if you are using blocking I/O).
On the client you may need a state machine that reads the length of the buffer and then sits in a loop reading until all the data has been received, as large buffers will be fragmented and arrive in variously sized chunks.