How do you manage the end of a message in a protocol? I use msgpack-c, and the only solution I have found is to send the header before the payload (separately).
Send the header to the client:
// header
{
"message_type": "hello",
"payload_size": 10
}
The client receives the header, unpacks it, allocates a buffer of "payload_size", then receives data from the stream; once the buffer is full, the message is complete.
I want to send the header and body together:
{
"header": { "message_type":"hello", "payload_size": 10},
"payload": {...} // can come in multiple frame
}
My problem is that I don't know whether it is possible to partially unpack the header to learn the payload size before the full message has been received (split if larger than 4096 KB due to a libevent restriction).
How would you do that? I am open to all solutions.
C++
Using unpack() function
You can use the offset parameter of the unpack() function.
See https://github.com/msgpack/msgpack-c/wiki/v2_0_cpp_unpacker#client-controls-a-buffer
Here is a code example:
#include <iostream>
#include <msgpack.hpp>

int main() {
    msgpack::sbuffer buf;
    msgpack::pack(buf, std::make_tuple("first message", 123, 56.78));
    msgpack::pack(buf, std::make_tuple("second message", 42));

    std::size_t off = 0; // cursor of buf
    {
        // unpack() starts parsing from off (0)
        auto oh = msgpack::unpack(buf.data(), buf.size(), off);
        // off is updated to 25. 25 is the MessagePack-formatted byte size
        // of ["first message",123,56.78]
        // (I use JSON notation but the actual format is MessagePack)
        std::cout << "off:" << off << std::endl;
        std::cout << oh.get() << std::endl;
    }
    {
        // unpack() starts parsing from off (25)
        auto oh = msgpack::unpack(buf.data(), buf.size(), off);
        // off is updated to 42.
        // 42 - 25 = 17. 17 is the MessagePack-formatted byte size
        // of ["second message",42]
        // (I use JSON notation but the actual format is MessagePack)
        std::cout << "off:" << off << std::endl;
        std::cout << oh.get() << std::endl;
    }
}
Output is
off:25
["first message",123,56.78]
off:42
["second message",42]
msgpack-c's unpack() manages the position in the buffer internally.
You don't need to pass payload_size.
In addition, you can mix non-MessagePack data into the buffer.
+--------------------+-----------------------------+--------------------+
| MessagePackBytes1 | Any format user knows size | MessagePackBytes2 |
+--------------------+-----------------------------+--------------------+
Let's say the user knows the data layout: MessagePackBytes1 (unknown size), data in any format (known size), then MessagePackBytes2 (unknown size).
#include <iostream>
#include <msgpack.hpp>

int main() {
    msgpack::sbuffer buf;
    msgpack::pack(buf, std::make_tuple("first message", 123, 56.78));
    std::string non_mp = "non mp format data";
    buf.write(non_mp.data(), non_mp.size());
    msgpack::pack(buf, std::make_tuple("second message", 42));

    std::size_t off = 0; // cursor of buf
    {
        auto oh = msgpack::unpack(buf.data(), buf.size(), off);
        std::cout << "off:" << off << std::endl;
        std::cout << oh.get() << std::endl;
    }
    {
        std::string extracted{buf.data() + off, non_mp.size()};
        std::cout << extracted << std::endl;
        off += non_mp.size();
    }
    {
        auto oh = msgpack::unpack(buf.data(), buf.size(), off);
        std::cout << "off:" << off << std::endl;
        std::cout << oh.get() << std::endl;
    }
}
Output is
off:25
["first message",123,56.78]
non mp format data
off:60
["second message",42]
Using unpacker
It is a little more advanced, but it might fit streaming use cases.
https://github.com/msgpack/msgpack-c/wiki/v2_0_cpp_unpacker#msgpack-controls-a-buffer
Here is an example of unpacking MessagePack data from a continuous stream received in scattered chunks:
https://github.com/msgpack/msgpack-c/blob/700167995927f0348fb08ae2579440c1bc135480/example/boost/asio_send_recv.cpp#L41-L64
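If you cannot open that example, here is a minimal sketch of the same idea, with the chunk sizes chosen arbitrarily: feed whatever bytes arrive into a msgpack::unpacker and it hands you complete objects as soon as they are available, keeping any partial bytes internally, so no payload_size field is needed.

#include <cstring>
#include <iostream>
#include <msgpack.hpp>

// Feed whatever bytes arrived (any size) into the unpacker and print every
// complete object; partial objects are kept until more bytes come in.
void feed(msgpack::unpacker& unp, const char* data, std::size_t len) {
    unp.reserve_buffer(len);               // make room for the new bytes
    std::memcpy(unp.buffer(), data, len);  // copy the received chunk in
    unp.buffer_consumed(len);              // tell the unpacker how much we wrote

    msgpack::object_handle oh;
    while (unp.next(oh)) {                 // true once per complete object
        std::cout << oh.get() << std::endl;
    }
}

int main() {
    msgpack::sbuffer buf;
    msgpack::pack(buf, std::make_tuple("hello", 10));

    // Simulate a message arriving split across two reads.
    msgpack::unpacker unp;
    std::size_t half = buf.size() / 2;
    feed(unp, buf.data(), half);                      // prints nothing yet
    feed(unp, buf.data() + half, buf.size() - half);  // prints ["hello",10]
}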
C
The C version is basically similar to the C++ one.
Using unpack() function
The C version has a similar unpack function.
Here is the prototype:
msgpack_unpack_return
msgpack_unpack_next(msgpack_unpacked* result,
const char* data, size_t len, size_t* off);
You can pass off as the offset, similar to the C++ version. C doesn't have references, so you need to pass the address of off using &off.
See https://github.com/msgpack/msgpack-c/wiki/v2_0_c_overview#using-unpack-function
If you want to know the size of an individual variable-length field such as a string, you can access the size member of the unpacked object.
For example:
typedef struct {
    uint32_t size;
    struct msgpack_object* ptr;
} msgpack_object_array;

typedef struct {
    uint32_t size;
    const char* ptr;
} msgpack_object_str;

typedef struct {
    uint32_t size;
    const char* ptr;
} msgpack_object_bin;

typedef struct {
    int8_t type;
    uint32_t size;
    const char* ptr;
} msgpack_object_ext;
See https://github.com/msgpack/msgpack-c/wiki/v2_0_c_overview#object
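For completeness, here is a minimal, self-contained sketch of that C API (the packing part exists only so that msgpack_unpack_next() has something to parse); it walks the buffer via &off and reads the size member of a string field:

#include <stdio.h>
#include <msgpack.h>

int main(void) {
    /* Pack a string and an int into one buffer, back to back. */
    msgpack_sbuffer buf;
    msgpack_sbuffer_init(&buf);
    msgpack_packer pk;
    msgpack_packer_init(&pk, &buf, msgpack_sbuffer_write);
    msgpack_pack_str(&pk, 5);
    msgpack_pack_str_body(&pk, "hello", 5);
    msgpack_pack_int(&pk, 42);

    /* Unpack the objects one by one; off advances past each object. */
    msgpack_unpacked result;
    msgpack_unpacked_init(&result);
    size_t off = 0;
    while (msgpack_unpack_next(&result, buf.data, buf.size, &off) == MSGPACK_UNPACK_SUCCESS) {
        msgpack_object obj = result.data;
        if (obj.type == MSGPACK_OBJECT_STR) {
            /* size gives the byte length of this string field. */
            printf("str of %u bytes: %.*s\n",
                   (unsigned) obj.via.str.size, (int) obj.via.str.size, obj.via.str.ptr);
        } else {
            msgpack_object_print(stdout, obj);
            printf("\n");
        }
        printf("off is now %zu\n", off);
    }
    msgpack_unpacked_destroy(&result);
    msgpack_sbuffer_destroy(&buf);
    return 0;
}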
Using unpacker
See https://github.com/msgpack/msgpack-c/wiki/v2_0_c_overview#using-unpacker
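Here is a rough sketch of the C streaming unpacker, assuming chunk/chunk_len hold whatever was just received from the socket (the packing in main() is only there to make the example self-contained):

#include <stdio.h>
#include <string.h>
#include <msgpack.h>

/* Feed one received chunk into the unpacker and print every complete object. */
void feed(msgpack_unpacker* unp, const char* chunk, size_t chunk_len) {
    /* Make room, copy the bytes in, and tell the unpacker how much we wrote. */
    if (!msgpack_unpacker_reserve_buffer(unp, chunk_len)) return;
    memcpy(msgpack_unpacker_buffer(unp), chunk, chunk_len);
    msgpack_unpacker_buffer_consumed(unp, chunk_len);

    msgpack_unpacked result;
    msgpack_unpacked_init(&result);
    while (msgpack_unpacker_next(unp, &result) == MSGPACK_UNPACK_SUCCESS) {
        msgpack_object_print(stdout, result.data);
        printf("\n");
    }
    msgpack_unpacked_destroy(&result);
}

int main(void) {
    msgpack_unpacker unp;
    msgpack_unpacker_init(&unp, MSGPACK_UNPACKER_INIT_BUFFER_SIZE);

    /* Pack something, then feed it in two halves to simulate scattered receives. */
    msgpack_sbuffer buf;
    msgpack_sbuffer_init(&buf);
    msgpack_packer pk;
    msgpack_packer_init(&pk, &buf, msgpack_sbuffer_write);
    msgpack_pack_int(&pk, 12345);

    size_t half = buf.size / 2;
    feed(&unp, buf.data, half);                    /* nothing printed yet */
    feed(&unp, buf.data + half, buf.size - half);  /* prints 12345 */

    msgpack_sbuffer_destroy(&buf);
    msgpack_unpacker_destroy(&unp);
    return 0;
}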
Related
I am trying to read continuous data from a remote device, and I have static variables declared for receiving the data and sending an ACK. payload[0] and payload[1] hold the sequence number of the data I am getting from the remote device.
The problem I have is with the variable fragment_num. After it reaches 0x0c it resets back to 0.
It's a FreeRTOS application. Are there any obvious reasons for a static variable to reset to 0, or is there a problem with my code? Thanks.
#define INTIAL_FRAGMENT 0x00

static uint8_t length;
static uint8_t fragment_num;
uint8_t image[128];

download()
{
    if (((payload[1] << 8) | (payload[0])) == INTIAL_FRAGMENT)
    {
        memset(image, 0, 128);
        memcpy(image, payload, (len));
        info_download();
        length = len;
        fragment_num += 1;
    }

    if (((payload[1] << 8) | (payload[0])) == fragment_num)
    {
        memcpy((image + length + 1), payload, (len));
        length += len;
        fragment_num++;
        info_download();
    }
}
The problem is likely a buffer overflow.
static uint8_t fragment_num ;
uint8_t image[128];
The compiler may have laid out fragment_num right after image in memory. If length or len is incorrect then memcpy() could write past the end of image and overwrite the value of fragment_num.
memcpy((image + length + 1), payload, (len));
I believe you want image + length instead of image + length + 1 here; adding one skips a byte.
You should probably also verify len before memcpy() to make sure it doesn't overflow, e.g.:
if (len > 128)
    return -1;
memcpy(image, payload, len);

if (length + len > 128)
    return -1;
memcpy(image + length, payload, len);
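Putting the two fixes together, the second branch of download() might look something like this (a sketch only, assuming as in your code that payload and len come from the receive path and that image stays 128 bytes):

if (((payload[1] << 8) | (payload[0])) == fragment_num)
{
    /* Refuse fragments that would run past the end of image[] and clobber
       whatever the compiler placed after it (for example fragment_num). */
    if (length + len > sizeof image)
        return;   /* or signal an error to the caller */

    memcpy(image + length, payload, len);   /* image + length, not image + length + 1 */
    length += len;
    fragment_num++;
    info_download();
}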
I am using the ESP-IDF SDK to develop a small project that reads sensor data over UART. I am following the data sheet provided by the manufacturer to parse the bytes and calculate the values of the different parameters, but the output on the serial monitor is not correct, and every time I get a different, wrong output.
Code:
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "esp_system.h"
#include "esp_log.h"
#include "driver/uart.h"
#include "string.h"
#include "driver/gpio.h"
static const int RX_BUF_SIZE = 1024;
#define TXD_PIN (GPIO_NUM_4)
#define RXD_PIN (GPIO_NUM_5)
#define DELAY_IN_MS(t) (((portTickType)t*configTICK_RATE_HZ)/(portTickType)1000)
void init(void) {
const uart_config_t uart_config = {
.baud_rate = 4800,
.data_bits = UART_DATA_8_BITS,
.parity = UART_PARITY_DISABLE,
.stop_bits = UART_STOP_BITS_1,
.flow_ctrl = UART_HW_FLOWCTRL_DISABLE,
.source_clk = UART_SCLK_APB,
};
uart_driver_install(UART_NUM_1, RX_BUF_SIZE * 2, 129, 0, NULL, 0);
uart_param_config(UART_NUM_1, &uart_config);
uart_set_pin(UART_NUM_1, TXD_PIN, RXD_PIN, UART_PIN_NO_CHANGE, UART_PIN_NO_CHANGE);
}
static void rx_task(void *arg)
{
static const char *RX_TASK_TAG = "RX_TASK";
esp_log_level_set(RX_TASK_TAG, ESP_LOG_INFO);
uint8_t data[7] = {0};
uint8_t PR=0,spo2=0,temprature=0;
while (1) {
const int rxBytes = uart_read_bytes(UART_NUM_1, data, 7, 1);
if (rxBytes == 7) {
printf("The rxbytes %d and %s\n",rxBytes,data);
PR = (data[3] & 0x7F) + ((data[2] & 0x40)<<1);
spo2 = (data[4] & 0x7F);
temprature = (data[5] & 0x7F);
printf("PR is %d , Spo2 is %d , temperature is %d \n",PR,spo2,temprature);
ESP_LOGI(RX_TASK_TAG, "Read %d bytes: '%s'", rxBytes, data);
ESP_LOG_BUFFER_HEXDUMP(RX_TASK_TAG, data, rxBytes, ESP_LOG_INFO);
memset(data,0,7);
}
}
}
void app_main(void)
{
init();
xTaskCreate(rx_task, "uart_rx_task", 1024*2, NULL, configMAX_PRIORITIES, NULL);
}
The output of the program on the serial monitor is not reproduced here; as noted above, it is wrong and changes on every run.
The data sheet provided by the manufacturer is:
https://drive.google.com/file/d/1lPATxeXXreVZkg9Ufg9BnyCrl4EsbJAj/view?usp=sharing
Correct me if I have misinterpreted the data format when calculating the values.
Assembling messages from an unreliable channel (serial) means you can't really rely on them always arriving exactly the way you expect, so you have to take precautions so that you don't end up with junk.
The code assumes that it will always receive these 7-byte messages in 7-byte chunks, and it doesn't always work that way. Line noise or timeouts could cause a proper message to be received in multiple chunks (say, 4 bytes then 3 bytes), or it could cause bytes to be lost.
To see if this is part of the problem, add logging on every read, not just on the ones that you expect:
static void rx_task(void *arg)
{
    ...
    while (1) {
        const int rxBytes = uart_read_bytes(UART_NUM_1, data, 7, 1);

        // Log ALL reads, not just the ones you expect
        ESP_LOGI(RX_TASK_TAG, "Read %d bytes: '%s'", rxBytes, data);
        ESP_LOG_BUFFER_HEXDUMP(RX_TASK_TAG, data, rxBytes, ESP_LOG_INFO);

        if (rxBytes == 7) {
            ///
        }
    }
}
This will probably confirm my hunch.
In any case, you can't rely on fixed-size reads alone, because once the stream gets out of sync it will never recover. This means you have to build in your own protections.
The data sheet for the sensor says that the first byte of every 7-byte message has the high bit set, which is perfect for resynchronization: ignore everything until you see that start byte, then read 6 more bytes, and you have a full message.
So you end up needing two buffers: one for the message you're assembling, and one for doing raw I/O from the sensor, copying to the real message buffer as you verify the sync.
A quick-and-dirty method would look like this:
static void rx_task(void *arg)
{
    static const char *RX_TASK_TAG = "RX_TASK";
    esp_log_level_set(RX_TASK_TAG, ESP_LOG_INFO);

    // sensor message we're trying to build
    uint8_t message[7] = {0};
    uint8_t *msgnext = message;

    while (1) {
        uint8_t inbuf[7];
        const int rxBytes = uart_read_bytes(UART_NUM_1, inbuf, sizeof inbuf, 1);

        ESP_LOGI(RX_TASK_TAG, "Read %d bytes: '%s'", rxBytes, inbuf);
        ESP_LOG_BUFFER_HEXDUMP(RX_TASK_TAG, inbuf, rxBytes, ESP_LOG_INFO);

        // error/timeout? do something?
        if (rxBytes <= 0) continue;

        for (int i = 0; i < rxBytes; i++)
        {
            const uint8_t b = inbuf[i];

            if (b & 0x80)
            {
                // First byte of a message, reset the buffer
                msgnext = message;
                *msgnext++ = b;
            }
            else if (msgnext == message)
            {
                // not synced yet, ignore this byte
                continue;
            }
            else
            {
                *msgnext++ = b;

                if ((msgnext - message) == sizeof message)
                {
                    // WE FOUND A FULL MESSAGE
                    uint8_t PR          = (message[3] & 0x7F) + ((message[2] & 0x40) << 1);
                    uint8_t spo2        = (message[4] & 0x7F);
                    uint8_t temperature = (message[5] & 0x7F);

                    printf("PR is %d, Spo2 is %d, temperature is %d\n",
                           PR, spo2, temperature);

                    msgnext = message; // reset to empty the buffer
                }
            }
        }
    }
}
The idea is that the raw I/O is done into inbuf; the loop looks for the sync byte (the one with the high bit set), and that tells it to start copying data into the real sensor buffer, message. Once it has collected 7 bytes, it prints the result and resets the buffer.
And even if you already have a few bytes of valid message data, if another SYNC byte comes in, it assumes the previous message was corrupted, throws it away, and starts a fresh buffer.
You can add more here, such as support for timeouts, or detecting/logging when a partial message is discarded, but in no case can you avoid this data-framing layer.
Also, the I/O buffer inbuf doesn't have to be the same size as the message, and it might even make sense to read from the UART in one-byte chunks; on a general-purpose multitasking operating system I probably wouldn't do this, but in the ESP environment it might make sense - dunno. That would simplify the looping somewhat, as the sketch below shows.
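For illustration, with one-byte reads the body of the loop might shrink to something like this (same message/msgnext variables as above; the timeout value is an arbitrary choice):

uint8_t b;
while (1) {
    if (uart_read_bytes(UART_NUM_1, &b, 1, pdMS_TO_TICKS(100)) != 1)
        continue;                      // timeout or error: just try again

    if (b & 0x80)
        msgnext = message;             // sync byte: start a fresh message
    else if (msgnext == message)
        continue;                      // not synced yet, discard the byte

    *msgnext++ = b;
    if (msgnext - message == sizeof message) {
        // full 7-byte message in message[]: parse it exactly as before
        msgnext = message;
    }
}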
EDIT Looking at your actual data dumps, it's clear that your messages are not framed properly: even though you have 7 bytes, the SYNC byte (with the high bit set) shows up somewhere in the middle, and not in the same place each time. This is clearly a framing issue.
Is it possible to determine the file type from the magic number of a file?
If I've understood correctly, the magic number can have different sizes; maybe a reference dictionary or something like a library could help me?
Is it possible to determine the file type from the magic number of a file?
Yes, you can, because each file format has a different magic number.
For example, FF D8 for .jpg files.
See here: Magic Numbers in Files
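As a rough, do-it-yourself illustration (the table below knows only a handful of formats; real detectors such as the libmagic example further down know hundreds), you can read the first few bytes of a file and compare them against known signatures:

#include <cstdio>
#include <cstring>
#include <string>

// A few well-known signatures; a real table would be much larger.
struct Signature { const char *type; const unsigned char *magic; size_t len; };

static const unsigned char JPG[] = {0xFF, 0xD8, 0xFF};
static const unsigned char PNG[] = {0x89, 'P', 'N', 'G', 0x0D, 0x0A, 0x1A, 0x0A};
static const unsigned char PDF[] = {'%', 'P', 'D', 'F'};
static const Signature kSignatures[] = {
    {"jpeg", JPG, sizeof JPG},
    {"png",  PNG, sizeof PNG},
    {"pdf",  PDF, sizeof PDF},
};

std::string detect(const char *path) {
    unsigned char head[16] = {0};
    FILE *f = std::fopen(path, "rb");
    if (!f) return "unreadable";
    size_t n = std::fread(head, 1, sizeof head, f);   // read the first bytes
    std::fclose(f);

    for (const Signature &s : kSignatures)
        if (n >= s.len && std::memcmp(head, s.magic, s.len) == 0)
            return s.type;
    return "unknown";
}

int main(int argc, char **argv) {
    if (argc > 1)
        std::printf("%s: %s\n", argv[1], detect(argv[1]).c_str());
}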
The file command on Linux does precisely that. Study its internals to see how it identifies files by their magic numbers (signature bytes). The complete source code is available at darwinsys.com/file.
The following two lists are among the most comprehensive collections of file types and their magic numbers:
- File Signature Table
- Linux Magic Numbers
JmimeMagic is a Java library for this purpose.
Use libmagic (apt-get install libmagic-dev on Ubuntu systems).
The example below uses the default magic database to query the file passed on the command line (essentially an implementation of the file command). See man libmagic for more details and functions.
#include <iostream>
#include <magic.h>
#include <cassert>

int main(int argc, char **argv) {
    if (argc == 1) {
        std::cerr << "Usage " << argv[0] << " [filename]" << std::endl;
        return -1;
    }
    const char *fname = argv[1];

    magic_t cookie = magic_open(0);
    assert(cookie != nullptr);

    int rc = magic_load(cookie, nullptr);   // load the default magic database
    assert(rc == 0);

    auto f = magic_file(cookie, fname);
    if (f == nullptr) {
        std::cerr << magic_error(cookie) << std::endl;
    } else {
        std::cout << fname << ' ' << f << std::endl;
    }
    magic_close(cookie);                    // release the magic cookie
}
I was wondering if it was possible to save out a vector of cv::KeyPoints using the CvFileStorage class or the cv::FileStorage class. Also is it the same process to read it back in?
Thanks.
I am not sure what you really expect:
The code I provide is simply an example to show how FileStorage works in the OpenCV C++ bindings. It writes each keypoint to the file separately, named by its position in the vector it was stored in (prefixed with a letter, since FileStorage node names must start with a letter).
It also assumes that when you read them back, you know how many keypoints you want to read; if you don't, the code gets a little more complex. You'll find a way (for instance, read from the FileStorage and test what it gives you: if it gives you nothing, there are no more points to read). It's just an idea; you'll have to find a solution, but maybe this piece of code will be enough for you.
I should also mention that I use an ostringstream to put the integer into a string, which in turn determines where the keypoint is written in the *.yml file.
//TO WRITE
vector<KeyPoint> myKpVec;
FileStorage fs(filename, FileStorage::WRITE);
for (size_t i = 0; i < myKpVec.size(); ++i) {
    ostringstream oss;
    oss << "kp" << i;            // node names must start with a letter
    fs << oss.str() << myKpVec[i];
}
fs.release();

//TO READ
vector<KeyPoint> myKpVec;
FileStorage fs(filename, FileStorage::READ);
KeyPoint aKeypoint;
for (size_t i = 0; i < nbKeypoints; ++i) {   // nbKeypoints: the number of keypoints you know were written
    ostringstream oss;
    oss << "kp" << i;
    fs[oss.str()] >> aKeypoint;
    myKpVec.push_back(aKeypoint);
}
fs.release();
Julien,
const char* key = "keypoints";
FileStorage f;                  // opened with FileStorage::WRITE or FileStorage::READ as appropriate
vector<KeyPoint> keypoints;

//writing
write(f, key, keypoints);

//reading
read(f[key], keypoints);
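Expanded into a minimal, self-contained sketch (the file name and keypoint values are placeholders; reasonably recent OpenCV versions provide these read/write overloads for std::vector<cv::KeyPoint>):

#include <opencv2/core.hpp>
#include <iostream>

int main() {
    std::vector<cv::KeyPoint> keypoints;
    keypoints.emplace_back(10.0f, 20.0f, 3.0f);   // x, y, size
    keypoints.emplace_back(42.0f, 7.0f, 5.0f);

    // Write the whole vector under one node.
    {
        cv::FileStorage fs("keypoints.yml", cv::FileStorage::WRITE);
        cv::write(fs, "keypoints", keypoints);
    }

    // Read it back the same way.
    std::vector<cv::KeyPoint> loaded;
    {
        cv::FileStorage fs("keypoints.yml", cv::FileStorage::READ);
        cv::read(fs["keypoints"], loaded);
    }

    std::cout << "read back " << loaded.size() << " keypoints" << std::endl;
}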
#include <opencv2/opencv.hpp>
#include <iostream>
#include <sstream>

using namespace cv;
using namespace std;

int main() {
    string filename = "data.xml";
    FileStorage fs(filename, FileStorage::WRITE);
    vector<Mat> vecMat;
    Mat A(3, 3, CV_32F, Scalar(5));
    Mat B(3, 3, CV_32F, Scalar(6));
    Mat C(3, 3, CV_32F, Scalar(7));
    vecMat.push_back(A);
    vecMat.push_back(B);
    vecMat.push_back(C);
    for (size_t i = 0; i < vecMat.size(); i++) {
        stringstream ss;
        ss << i;
        string str = "x" + ss.str();
        fs << str << vecMat[i];
    }
    fs.release();

    vector<Mat> matVecRead;
    FileStorage fr(filename, FileStorage::READ);
    Mat aMat;
    int countlabel = 0;
    while (1) {
        stringstream ss;
        ss << countlabel;
        string str = "x" + ss.str();
        cout << str << endl;
        if (fr[str].isNone()) {      // no node with this name: we have read them all
            break;
        }
        fr[str] >> aMat;
        matVecRead.push_back(aMat.clone());
        countlabel++;
    }
    fr.release();

    for (unsigned j = 0; j < matVecRead.size(); j++) {
        cout << matVecRead[j] << endl;
    }
}
Put a letter, e.g. 'x', in front of the numbering, as the OpenCV XML format specifies that an XML key must start with a letter.
This is code to save a vector<Mat>, written with Visual Studio 2010; I think it will also work for vector<KeyPoint>.
I am trying to send data between a client and a server; the data looks like:
typedef struct Message
{
    int id;
    int message_length;
    char* message_str;
} message;
I am trying to write and read this message between a client and server, constantly updating the elements in this struct. I have heard writev may do the trick. I want to send a message to the server and then have the server pull out the elements and use them as conditionals to execute the proper method.
Assuming you want to do the serialization yourself and not use Google Protocol Buffers or some library to handle it for you, I'd suggest writing a pair of functions like this:
// Serializes (msg) into a flat array of bytes, and returns the number of bytes written.
// Note that (outBuf) must be big enough to hold any Message you might have, or there will
// be a buffer overrun! Modifying this function to check for that problem and
// error out instead is left as an exercise for the reader.
int SerializeMessage(const struct Message & msg, char * outBuf)
{
    char * outPtr = outBuf;

    int32_t sendID = htonl(msg.id);   // htonl will make sure it gets sent in big-endian form
    memcpy(outPtr, &sendID, sizeof(sendID));
    outPtr += sizeof(sendID);

    int32_t sendLen = htonl(msg.message_length);
    memcpy(outPtr, &sendLen, sizeof(sendLen));
    outPtr += sizeof(sendLen);

    memcpy(outPtr, msg.message_str, msg.message_length);   // I'm assuming message_length=strlen(message_str)+1 here
    outPtr += msg.message_length;

    return (outPtr - outBuf);
}
// Deserializes a flat array of bytes back into a Message object. Returns 0 on success, or -1 on failure.
int DeserializeMessage(const char * inBuf, int numBytes, struct Message & msg)
{
    const char * inPtr = inBuf;

    if (numBytes < (int) sizeof(int32_t)) return -1;   // buffer was too short!
    int32_t recvID = ntohl(*((int32_t *)inPtr));
    inPtr    += sizeof(int32_t);
    numBytes -= sizeof(int32_t);
    msg.id = recvID;

    if (numBytes < (int) sizeof(int32_t)) return -1;   // buffer was too short!
    int32_t recvLen = ntohl(*((int32_t *)inPtr));
    inPtr    += sizeof(int32_t);
    numBytes -= sizeof(int32_t);

    // Sanity check, just in case something got munged; we don't want to allocate a giant array
    if (recvLen < 0 || recvLen > 1024) return -1;
    msg.message_length = recvLen;

    if (numBytes < msg.message_length) return -1;      // buffer was too short!
    msg.message_str = new char[msg.message_length];
    memcpy(msg.message_str, inPtr, msg.message_length);
    return 0;
}
With these functions, you are now able to convert a Message into a flat char array and back at will. So now all you have to do is send the char array over the TCP connection, receive it at the far end, and then call DeserializeMessage() there to turn the array back into a Message struct.
One wrinkle is that your char arrays will be variable-length (because the string can have different lengths), so the receiver needs an easy way to know how many bytes to read before calling DeserializeMessage() on the array.
An easy way to handle that is to always send a 4-byte integer first, before sending the char array. That integer should be the size of the upcoming array, in bytes. (Be sure to convert it to big-endian via htonl() before sending it, and convert it back to native byte order on the receiver via ntohl() before using it.)
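For illustration, here is a rough sketch of that length-prefix framing over a connected TCP socket, reusing SerializeMessage()/DeserializeMessage() from above. The helpers send_all()/recv_all() and the wrappers SendMessage()/RecvMessage() are names I've made up for this sketch, and the size cap mirrors the sanity check in DeserializeMessage():

#include <cstdint>
#include <cstring>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>

// send()/recv() may transfer fewer bytes than requested, so loop until done.
static bool send_all(int fd, const char *buf, size_t n)
{
    while (n > 0) {
        ssize_t r = send(fd, buf, n, 0);
        if (r <= 0) return false;
        buf += r;  n -= (size_t) r;
    }
    return true;
}

static bool recv_all(int fd, char *buf, size_t n)
{
    while (n > 0) {
        ssize_t r = recv(fd, buf, n, 0);
        if (r <= 0) return false;
        buf += r;  n -= (size_t) r;
    }
    return true;
}

enum { kMaxFlattenedSize = 1024 + 2 * sizeof(int32_t) };   // id + length + capped string

bool SendMessage(int fd, const struct Message & msg)
{
    char outBuf[kMaxFlattenedSize];
    int numBytes = SerializeMessage(msg, outBuf);

    uint32_t lenPrefix = htonl((uint32_t) numBytes);        // 4-byte length prefix, big-endian
    return send_all(fd, (const char *) &lenPrefix, sizeof(lenPrefix))
        && send_all(fd, outBuf, (size_t) numBytes);
}

bool RecvMessage(int fd, struct Message & msg)
{
    uint32_t lenPrefix = 0;
    if (!recv_all(fd, (char *) &lenPrefix, sizeof(lenPrefix))) return false;

    uint32_t numBytes = ntohl(lenPrefix);                   // back to native byte order
    if (numBytes > kMaxFlattenedSize) return false;         // don't trust the wire blindly

    char inBuf[kMaxFlattenedSize];
    if (!recv_all(fd, inBuf, numBytes)) return false;

    return DeserializeMessage(inBuf, (int) numBytes, msg) == 0;
}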
Okay, I'll take a stab at this. I'm going to assume that you have a "message" object on the sending side, and what you want to do is somehow send it across to another machine and reconstruct the data there so you can do some computation on it. The part that you may not be clear on is how to encode the data for communications and then decode it on the receiving side to recover the information. The simplistic approach of just writing the bytes contained in a "message" object (i.e. write(fd, msg, sizeof(*msg)), where "msg" is a pointer to an object of type "message") won't work, because you would end up sending the value of a virtual address in the memory of one machine to a different machine, and there's not much you can do with that on the receiving end. So the problem is to design a way to pass two integers and a character string, bundled up in a way that lets you fish them back out on the other end. There are, of course, many ways to do this. Does this describe what you are trying to do?
You can send structs over a socket, but you have to serialize them before sending; in the example below the struct is serialized with Boost.Serialization.
Here is a sample program:
#include <iostream>
#include <unistd.h>
#include <cstring>
#include <sstream>
#include <boost/archive/text_oarchive.hpp>
#include <boost/archive/text_iarchive.hpp>
#include <boost/serialization/string.hpp>   // needed to serialize std::string

using namespace std;

typedef struct {
public:
    int id;
    int message_length;
    string message_str;

private:
    friend class boost::serialization::access;
    template <typename Archive>
    void serialize(Archive &ar, const unsigned int vern)
    {
        ar & id;
        ar & message_length;
        ar & message_str;
    }
} Message;

int main()
{
    Message newMsg;
    newMsg.id = 7;
    newMsg.message_length = 14;
    newMsg.message_str = "Hi ya Whats up";

    std::stringstream strData;
    boost::archive::text_oarchive oa(strData);
    oa << newMsg;

    // Keep a copy of the serialized text; taking c_str() of a temporary string would dangle.
    std::string serObj = strData.str();
    cout << "Serialized Data ::: " << serObj << " Len ::: " << serObj.size() << "\n";

    /* Send serObj through the socket */

    /* recv serObj from the socket & deserialize it */
    std::stringstream rcvdObj(serObj);
    Message deserObj;
    boost::archive::text_iarchive ia(rcvdObj);
    ia >> deserObj;

    cout << "id ::: " << deserObj.id << "\n";
    cout << "len ::: " << deserObj.message_length << "\n";
    cout << "str ::: " << deserObj.message_str << "\n";
}
You can compile the program with:
g++ -o serial boost.cpp /usr/local/lib/libboost_serialization.a
You must have the static library libboost_serialization.a built on your machine.
Keeping the sockets blocking is fine, but you will have to devise a way to frame and read these serialized structs out of the receive buffer.