Documentation for pb_ostream_from_buffer says
After writing, you can check stream.bytes_written to find out how much
valid data there is in the buffer. This should be passed as the
message length on decoding side.
So ideally, when I send the serialized data I need to also send the bytes_written as a parameter separate from the buffer.
The problem is that my interface only allows me to send one variable: the buffer.
QUESTION
How do I force the struct to always be serialized with no size optimizations, so that bufsize in
pb_istream_from_buffer(const pb_byte_t *buf, size_t bufsize)
can be a constant (i.e. the macro that specifies the maximum size) instead of needing to pass stream.bytes_written?
According to the Protocol Buffers encoding specification there are variable-size types (like int32, int64, string, etc.) and fixed-size types (like fixed32, fixed64, double, etc.). This variable-size encoding is more than just an optimization; it is part of the design and specification. So disabling this "optimization" by means of Protocol Buffers is only possible if your data consists exclusively of fixed-length types and contains no repeated fields (unless the number of repetitions is fixed). I presume that this is not the case, since you're asking this question. So the short answer is no, it's not possible by means of the library, because it would violate the encoding specification.
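To see why, here is a minimal sketch (an illustration, not nanopb's API) of the base-128 varint scheme the specification mandates for int32/int64 fields; the number of bytes emitted depends on the value being encoded:
#include <stdint.h>
#include <stddef.h>
/* Encode value as a varint: 7 payload bits per byte, continuation bit
 * (0x80) set on every byte except the last. Returns the number of bytes
 * written (1..5), which is why the encoded size varies with the value. */
static size_t varint_encode(uint32_t value, uint8_t out[5])
{
size_t n = 0;
while (value >= 0x80) {
out[n++] = (uint8_t)((value & 0x7f) | 0x80);
value >>= 7;
}
out[n++] = (uint8_t)value;
return n; /* e.g. 1 encodes as 0x01 (1 byte), 300 as 0xAC 0x02 (2 bytes) */
}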
But in my opinion the desired effect can easily be achieved by encoding the size into the buffer, with little CPU and RAM overhead. I presume you know the maximum size of the message generated by nanopb; we denote it by MAX_MSG_SIZE and call this message the payload message. Suppose that MAX_MSG_SIZE can be represented by some integer type, which we denote by wrapped_size_t (e.g. uint16_t).
The idea is simple:
allocate the buffer slightly larger than MAX_MSG_SIZE;
write the payload message generated by nanopb at some offset into the allocated buffer;
use this offset to encode the size of the payload message at the beginning of the buffer;
transmit the whole buffer having the fixed size equal to MAX_MSG_SIZE + sizeof(wrapped_size_t) to the receiver;
upon reception decode the size of the payload message and pass both the decoded size and the payload message to pb_istream_from_buffer.
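Schematically, the transmitted buffer then looks like this (the names refer to the code below):
/* |<- sizeof(wrapped_size_t) ->|<-------- MAX_MSG_SIZE -------->|
 * [ payload size               | payload written by pb_encode   ]
 *   ^ buffer                     ^ payload_buffer(buffer)
 * Total transmitted size: FIXED_MSG_SIZE */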
I attach the code to illustrate the idea. I used an example from the nanopb repository:
#include <stdio.h>
#include <inttypes.h>
#include <string.h>
#include <pb_encode.h>
#include <pb_decode.h>
#include "simple.pb.h"
//#define COMMON_ENDIANNESS
#ifdef COMMON_ENDIANNESS
#define encode_size encode_size_ce
#define decode_size decode_size_ce
#else
#define encode_size encode_size_le
#define decode_size decode_size_le
#endif
typedef uint16_t wrapped_size_t;
/* Maximum encoded message size (as reported by stream.bytes_written) */
const size_t MAX_MSG_SIZE = 11;
/* Size of the field storing the actual size of the message
* (as returned by bytes_written) */
const size_t SIZE_FIELD = sizeof(wrapped_size_t);
/* Fixed wrapped message size */
const size_t FIXED_MSG_SIZE = MAX_MSG_SIZE + sizeof(wrapped_size_t);
void print_usage(char *prog);
/* Get the address of the payload buffer from the transmitted buffer */
uint8_t* payload_buffer(uint8_t *buffer);
/* Encode the payload size into the transmitted buffer (common endianness) */
void encode_size_ce(uint8_t *buffer, size_t size);
/* Decode the payload size from the transmitted buffer (common endianness) */
wrapped_size_t decode_size_ce(uint8_t *buffer);
/* Encode the payload size into the transmitted buffer (little endian) */
void encode_size_le(uint8_t *buffer, size_t size);
/* Decode the payload size from the transmitted buffer (little endian) */
size_t decode_size_le(uint8_t *buffer);
int main(int argc, char* argv[])
{
/* This is the buffer where we will store our message. */
uint8_t buffer[FIXED_MSG_SIZE];
bool status;
if(argc > 2 || (argc == 2 && (!strcmp(argv[1], "-h") || !strcmp(argv[1], "--help"))))
{
print_usage(argv[0]);
return 1;
}
/* Encode our message */
{
/* Allocate space on the stack to store the message data.
*
* Nanopb generates simple struct definitions for all the messages.
* - check out the contents of simple.pb.h!
* It is a good idea to always initialize your structures
* so that you do not have garbage data from RAM in there.
*/
SimpleMessage message = SimpleMessage_init_zero;
/* Create a stream that will write to our buffer. */
pb_ostream_t stream = pb_ostream_from_buffer(payload_buffer(buffer),
MAX_MSG_SIZE);
if(argc > 1)
sscanf(argv[1], "%" SCNd32, &message.lucky_number);
else
{
printf("Input lucky number: ");
scanf("%" SCNd32, &message.lucky_number);
}
/* Encode the payload message */
status = pb_encode(&stream, SimpleMessage_fields, &message);
/* Then just check for any errors.. */
if (!status)
{
printf("Encoding failed: %s\n", PB_GET_ERROR(&stream));
return 1;
}
/* Wrap the payload, i.e. add the size to the buffer */
encode_size(buffer, stream.bytes_written);
}
/* Now we could transmit the message over network, store it in a file, etc.
* Note, the transmitted message has a fixed length equal to FIXED_MSG_SIZE
* and is stored in buffer
*/
/* But for the sake of simplicity we will just decode it immediately. */
{
/* Allocate space for the decoded message. */
SimpleMessage message = SimpleMessage_init_zero;
/* Create a stream that reads from the buffer. */
pb_istream_t stream = pb_istream_from_buffer(payload_buffer(buffer),
decode_size(buffer));
/* Now we are ready to decode the message. */
status = pb_decode(&stream, SimpleMessage_fields, &message);
/* Check for errors... */
if (!status)
{
printf("Decoding failed: %s\n", PB_GET_ERROR(&stream));
return 1;
}
/* Print the data contained in the message. */
printf("Your lucky number was %d; payload length was %d.\n",
(int)message.lucky_number, (int)decode_size(buffer));
}
return 0;
}
void print_usage(char *prog)
{
printf("usage: %s [<lucky_number>]\n", prog);
}
uint8_t* payload_buffer(uint8_t *buffer)
{
return buffer + SIZE_FIELD;
}
void encode_size_ce(uint8_t *buffer, size_t size)
{
*(wrapped_size_t*)buffer = size;
}
wrapped_size_t decode_size_ce(uint8_t *buffer)
{
return *(wrapped_size_t*)buffer;
}
void encode_size_le(uint8_t *buffer, size_t size)
{
int i;
for(i = 0; i < sizeof(wrapped_size_t); ++i)
{
buffer[i] = size;
size >>= 8;
}
}
size_t decode_size_le(uint8_t *buffer)
{
int i;
size_t ret = 0;
for(i = sizeof(wrapped_size_t) - 1; i >= 0; --i)
ret = buffer[i] + (ret << 8);
return ret;
}
UPD Ok, if, for some reason, you still wish to stick to the original GPB encoding, there's another option available: fill the unused part of the buffer (i.e. the part after the last byte written by nanopb) with valid data that the decoder will ignore. For instance, you can reserve a field number which doesn't mark any field in your *.proto file but is used to mark the data to be discarded by the GPB decoder. Let's denote this reserved field number RESERVED_FIELD_NUMBER. Reserved field numbers normally serve backward compatibility, but you can use one for this purpose as well. Let's call this filling-in of the buffer with dummy data sealing (perhaps there's a better term). This method also requires that at least 2 free bytes remain in the buffer after pb_encode.
So the idea of sealing is even simpler:
calculate how many buffer bytes are left unfilled after pb_encode;
mark the rest of the buffer as array of bytes with RESERVED_FIELD_NUMBER.
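As a worked example (with RESERVED_FIELD_NUMBER = 15, as in the code below), sealing 10 leftover bytes produces:
/* 0x7A 0x08 xx xx xx xx xx xx xx xx
 *
 * 0x7A = (15 << 3) | 2 : tag byte, field number 15, wire type 2 (length-delimited)
 * 0x08                 : payload length 8 (10 bytes minus the tag and length bytes)
 *
 * A GPB decoder parses this as an unknown field and skips it. */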
I attach the updated code; the key function is bool seal_buffer(uint8_t *buffer, size_t size). Call it after pb_encode to seal the buffer and you're done. Currently it has a limitation of sealing no more than 2 ** 28 + 4 bytes, but it could easily be extended to overcome this.
#include <stdio.h>
#include <assert.h>
#include <string.h>
#include <inttypes.h>
#include <pb_encode.h>
#include <pb_decode.h>
#include "simple.pb.h"
/* Reserved field_number shouldn't be used for field numbering. We use it
* to mark the data which will be ignored upon reception by GPB parser.
* This number should be 1 to 15 to fit into a single byte. */
const uint8_t RESERVED_FIELD_NUMBER = 15;
/* Maximum encoded payload size (as reported by stream.bytes_written) */
const size_t MAX_MSG_SIZE = 200;
/* Size of the transmitted message (reserve 2 bytes for minimal sealing) */
const size_t FIXED_MSG_SIZE = MAX_MSG_SIZE + 2;
void print_usage(char *prog);
/* Sealing the buffer means filling it in with data which is valid
* in the sense that a GPB parser accepts it as valid but ignores it */
bool seal_buffer(uint8_t *buffer, size_t size);
int main(int argc, char* argv[])
{
/* This is the buffer where we will store our message. */
uint8_t buffer[FIXED_MSG_SIZE];
bool status;
if(argc > 2 || (argc == 2 && (!strcmp(argv[1], "-h") || !strcmp(argv[1], "--help"))))
{
print_usage(argv[0]);
return 1;
}
/* Encode our message */
{
/* Allocate space on the stack to store the message data.
*
* Nanopb generates simple struct definitions for all the messages.
* - check out the contents of simple.pb.h!
* It is a good idea to always initialize your structures
* so that you do not have garbage data from RAM in there.
*/
SimpleMessage message = SimpleMessage_init_zero;
/* Create a stream that will write to our buffer. */
pb_ostream_t stream = pb_ostream_from_buffer(buffer, sizeof(buffer));
if(argc > 1)
sscanf(argv[1], "%" SCNd32, &message.lucky_number);
else
{
printf("Input lucky number: ");
scanf("%" SCNd32, &message.lucky_number);
}
/* Now we are ready to encode the message! */
status = pb_encode(&stream, SimpleMessage_fields, &message);
/* Then just check for any errors.. */
if (!status)
{
fprintf(stderr, "Encoding failed: %s\n", PB_GET_ERROR(&stream));
return 1;
}
/* Now the main part - making the buffer fixed-size */
assert(stream.bytes_written + 2 <= FIXED_MSG_SIZE);
if(!seal_buffer(buffer + stream.bytes_written,
FIXED_MSG_SIZE - stream.bytes_written))
{
fprintf(stderr, "Failed sealing the buffer "
"(filling in with valid but ignored data)\n");
return 1;
}
}
/* Now we could transmit the message over network, store it in a file or
* wrap it to a pigeon's leg.
*/
/* But because we are lazy, we will just decode it immediately. */
{
/* Allocate space for the decoded message. */
SimpleMessage message = SimpleMessage_init_zero;
/* Create a stream that reads from the buffer. */
pb_istream_t stream = pb_istream_from_buffer(buffer, FIXED_MSG_SIZE);
/* Now we are ready to decode the message. */
status = pb_decode(&stream, SimpleMessage_fields, &message);
/* Check for errors... */
if (!status)
{
fprintf(stderr, "Decoding failed: %s\n", PB_GET_ERROR(&stream));
return 1;
}
/* Print the data contained in the message. */
printf("Your lucky number was %d.\n", (int)message.lucky_number);
}
return 0;
}
void print_usage(char *prog)
{
printf("usage: %s [<lucky_number>]\n", prog);
}
bool seal_buffer(uint8_t *buffer, size_t size)
{
size_t i;
if(size < 2)
{
fprintf(stderr, "Cannot seal the buffer, at least 2 bytes are needed\n");
return false;
}
/* Note: checking size - 5 would underflow for size < 5; compare this way instead */
if(size > ((size_t)1 << 28) + 4)
{
fprintf(stderr, "Representing a size exceeding 2 ** 28 + 4, "
"although it's not difficult, is not yet implemented\n");
return false;
}
buffer[0] = (RESERVED_FIELD_NUMBER << 3) + 2; /* wire type 2, length-delimited */
/* encode the size */
if(size - 2 < 1<<7)
buffer[1] = size - 2;
else
{
/* Size is too large to fit into 7 bits (1 byte).
* For simplicity we represent the remaining size by 4 bytes (28 bits).
* Note that 1 byte is used for encoding field_number and wire_type,
* plus 4 bytes for the size encoding, therefore the "remaining size"
* is equal to (size - 5)
*/
size -= 5;
for(i = 0; i < 4; ++i)
{
buffer[i + 1] = i < 3? (size & 0x7f) | 0x80: size & 0x7f;
size >>= 7;
}
}
return true;
}
I have a function that initializes a struct that contains nested structs and arrays.
While initializing the struct I have multiple calls to calloc.
Refer to the code below:
typedef struct
{
int length;
uint8_t *buffer;
} buffer_a;
typedef struct
{
int length;
uint8_t *buffer;
int *second_buffer_size;
uint8_t **second_buffer;
} buffer_b;
typedef struct
{
int max_length;
buffer_a *buffer_in;
buffer_b *buffer_out;
} state_struct;
state_struct *init(int size, int elements) {
size_t struct_size = sizeof(state_struct);
state_struct *s = (state_struct*) calloc(struct_size, struct_size);
log("Building state with length %d", size);
s->max_length = size;
size_t buffer_in_size = s->max_length * sizeof(buffer_a);
s->buffer_in = (buffer_a*) calloc(buffer_in_size, buffer_in_size);
size_t buffer_out_size = s->max_length * sizeof(buffer_b);
s->buffer_out = (buffer_b*) calloc(buffer_out_size, buffer_out_size);
log("Allocated memory for both buffers structs");
for (int i = 0; i < s->max_length; ++i) {
size_t buf_size = elements * sizeof(uint8_t);
s->buffer_in[i].buffer = (uint8_t*) calloc(buf_size, buf_size);
s->buffer_in[i].length = -1;
log(s, "Allocated memory for in buffer");
s->buffer_out[i].buffer = (uint8_t*) calloc(buf_size, buf_size);
s->buffer_out[i].length = -1;
log(s, "Allocated memory for out buffer");
size_t inner_size = elements * elements * sizeof(uint8_t);
size_t inner_second_buffer_size = elements * sizeof(int);
s->buffer_out[i].second_buffer = (uint8_t**) calloc(inner_size, inner_size);
s->buffer_out[i].second_buffer_size = (int*) calloc(inner_second_buffer_size, inner_second_buffer_size);
log(s, "Allocated memory for inner buffer");
}
return s;
}
Logs just before the for loop are printed but the program crashes and the first log statement inside the loop does not get printed out.
Why is this happening?
So this may not be an answer to your question, but here goes:
When I ran this code (on Ubuntu, gcc 7.4) with all the log calls replaced by printf, it finished successfully. I suspect the problem is in the way you use the log function. You say it works up until the first log call inside the loop. You didn't specify what log does, or whether it is a function or just a macro wrapper for printf, but you call it differently inside the loop: the first parameter is a state_struct * rather than a format string.
Also, the way you call calloc is semantically incorrect. calloc(nmemb, size) allocates nmemb elements of size bytes each; the first parameter should be the number of elements (presumably 1 for the struct itself), not the size repeated twice.
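For illustration, here is a minimal sketch of what the corrected allocations could look like (reusing buffer_a, buffer_b and state_struct exactly as defined in the question; error checks elided for brevity). Note that second_buffer is declared uint8_t **, so presumably it should hold elements row pointers, each row allocated separately, rather than one flat byte block:
#include <stdlib.h>
#include <stdint.h>
/* Types buffer_a, buffer_b, state_struct as defined in the question. */
state_struct *init(int size, int elements)
{
/* calloc(nmemb, elem_size): nmemb elements of elem_size bytes, zeroed. */
state_struct *s = calloc(1, sizeof *s);
s->max_length = size;
s->buffer_in = calloc(s->max_length, sizeof *s->buffer_in);
s->buffer_out = calloc(s->max_length, sizeof *s->buffer_out);
for (int i = 0; i < s->max_length; ++i) {
s->buffer_in[i].buffer = calloc(elements, sizeof(uint8_t));
s->buffer_in[i].length = -1;
s->buffer_out[i].buffer = calloc(elements, sizeof(uint8_t));
s->buffer_out[i].length = -1;
/* One pointer per row, then one row of bytes per pointer. */
s->buffer_out[i].second_buffer = calloc(elements, sizeof(uint8_t *));
s->buffer_out[i].second_buffer_size = calloc(elements, sizeof(int));
for (int j = 0; j < elements; ++j)
s->buffer_out[i].second_buffer[j] = calloc(elements, sizeof(uint8_t));
}
return s;
}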
I am working with a Xilinx Ethernetlite (lwIP) design. I am able to transfer data from the KC705 board to a PC (the Hercules terminal) over Ethernet only if buf = 32, but my actual buffer size is 1024. How do I increase the buffer size from 32 to 1024?
I cannot tell whether the mistake is in the code or in Hercules. To read the (integer) values in Hercules I use this function.
Initially, Hercules sends a Hello command to the board and the board accepts that request. After that, the board outputs the data (integer values) to Hercules.
C code for itoa
char* itoa(int val, int base)
{
static char buf[32] = {0}; //buf size
int i = 30;
for(; val && i ; --i, val /= base)
buf[i] = "0123456789abcdef"[val % base];
return &buf[i+1];
}
Modified code
#define DAQ_FIFO_DEPTH 128
int transfer_data()
{
return 0;
}
err_t tcp_write_u32_string(struct tcp_pcb *pcb, unsigned char prefix, u32_t value)
{
unsigned char buf[11]; /* enough room for prefix and value. */
err_t result;
u16_t len;
unsigned char *p = buf + sizeof buf;
do {
/* ASCII encoding: '0' = 48, '1' = 49, ..., '9' = 57. */
*(--p) = 48 + (value % 10u);
value /= 10;
} while (value);
if (prefix)
*(--p) = prefix;
len = buf + sizeof buf - p;
if (tcp_sndbuf(pcb) < len)
{
result = tcp_output(pcb);
if (result != ERR_OK)
return result;
}
return tcp_write(pcb, p, len, TCP_WRITE_FLAG_COPY | TCP_WRITE_FLAG_MORE);
}
err_t send_list(struct tcp_pcb *pcb, const u32_t data[], u16_t len)
{
static const char newline[2] = { 13, 10 }; /* ASCII \r\n */
err_t result;
if (len > 0) {
u16_t i;
result = tcp_write_u32_string(pcb, 0, data[0]);
if (result != ERR_OK)
return result;
for (i = 1; i < len; i++)
{
/* ASCII comma is code 44. (Use 32 for space, or 9 for tab.) */
result = tcp_write_u32_string(pcb, 44, data[i]);
if (result != ERR_OK)
return result;
}
}
result = tcp_write(pcb, newline, 2, 0);
if (result)
return result;
return tcp_output(pcb);
}
int application_connection(void *arg, struct tcp_pcb *conn, err_t err)
{
struct netif *netif = arg; /* Because of tcp_arg(, netif). */
u32_t data[DAQ_FIFO_DEPTH];
u32_t i, n;
if (err != ERR_OK) {
tcp_abort(conn);
return ERR_ABRT;
}
err = daq_setup();
if (err != ERR_OK)
{
tcp_abort(conn);
return ERR_ABRT;
}
while (1)
{
xemacif_input(netif);
tcp_tmr();
tcp_output(conn);
n = daq_acquire(data, DAQ_FIFO_DEPTH);
if (n > DAQ_FIFO_DEPTH)
break;
if (tcp_write(conn, data, n * sizeof data[0], TCP_WRITE_FLAG_COPY) != ERR_OK)
break;
}
// daq_close();
/* Close the TCP connection. */
if (tcp_close(conn) == ERR_OK)
return ERR_OK;
/* Close failed. Abort it, then. */
tcp_abort(conn);
return ERR_ABRT;
}
int application_main(struct netif *netif, unsigned int port)
{
struct tcp_pcb *pcb;
err_t err;
pcb = tcp_new();
if (!pcb) {
/* Out of memory error */
return -1;
}
err = tcp_bind(pcb, IP_ADDR_ANY, port);
if (err != ERR_OK) {
/* TCP error */
return -1;
}
pcb = tcp_listen_with_backlog(pcb, 1);
if (!pcb) {
/* Out of memory. */
return -1;
}
tcp_arg(pcb, netif);
tcp_accept(pcb, application_connection);
while (1)
xemacif_input(netif);
}
Hercules output: (screenshot not reproduced here)
So, this is a continuation from the discussion at Xilinx forums?
The itoa() function converts an unsigned integer (stored in an int) into the first 30 or so characters in the buffer buf.
The recv_callback() function makes little to no sense.
The call to aurora_rx_main() is documented as a "FUNCTION CALL", which is rather less than helpful (because we have no idea what it does), and even its return value is completely ignored.
The first for loop dumps the contents of the first 100 u32's in DestinationBuffer[] for debugging purposes, so that code is unrelated to the task at hand. However, we don't know who or what filled DestinationBuffer. It might or might not have been filled by the aurora_rx_main() call; we're not told either way.
(The tcp_*() functions seem to follow the API described in lwIP Wiki at Wikia.)
If the p parameter is NULL, then tcp_close(tcpb) is called, followed by a tcp_recv(tcpb, NULL) call. This makes the least sense of all: why try to receive anything (and why the NULL parameter) after a close?
The next part is similarly baffling. It looks like the if test checks if the TCP send buffer is over 1024 bytes in size. If not, the p buffer is freed. Otherwise, the for loop tries to convert each u32 in DestinationBuffer to a string, write that string to the TCP buffer; however, rather than the proper api flags, it uses the constant 1, and does not even check if appending to the TCP send buffer works.
In summary, this looks like a pile of copy-pasted code that does nothing sensible. Increasing the buffer size in the itoa function is not only unnecessary (a u32, even when converted through an int, always fits in 11 characters including a possible minus sign, or 12 with the terminating nul byte), but completely unrelated to the problem it is supposed to fix.
The root problem is that the code is horrible. Modifying it is like putting car body filler over a piece of old chewing gum, in an effort to "fix" it. The proper fix is to rip out that junk code altogether, and use something better instead.
Edit: The OP states that they're a new programmer, so the comments above should be taken as a direct, honest opinion of the code shown, and not about OP themselves. Let's see if we can help OP produce better code.
First, the itoa() function shown is silly. Assuming the intent is really to send back the u32_ts in the DestinationBuffer as decimal strings, it is much better to implement a helper function for doing the conversion. Since the value is to be preceded with a comma (or some other separator), we can add that trivially as well. Since it will be sent using tcp_write(), we'll combine the functionality:
err_t tcp_write_u32_string(struct tcp_pcb *pcb,
unsigned char prefix, /* 0 for none */
u32_t value)
{
/* Because 0 <= u32_t <= 4294967295, the value itself is at most 10 digits long. */
unsigned char buf[11]; /* enough room for prefix and value. */
err_t result;
u16_t len;
unsigned char *p = buf + sizeof buf;
/* Construct the value first, from right to left. */
do {
/* ASCII encoding: '0' = 48, '1' = 49, ..., '9' = 57. */
*(--p) = 48 + (value % 10u);
value /= 10;
} while (value);
/* Prepend the prefix, if any. */
if (prefix)
*(--p) = prefix;
/* Calculate the length of this part. */
len = buf + sizeof buf - p;
/* If the TCP buffer does not have enough free space, flush it. */
if (tcp_sndbuf(pcb) < len) {
result = tcp_output(pcb);
if (result != ERR_OK)
return result;
}
/* Append the buffer to the TCP send buffer.
We also assume the packet is not done yet. */
return tcp_write(pcb, p, len, TCP_WRITE_FLAG_COPY | TCP_WRITE_FLAG_MORE);
}
so that to send len u32_ts from a specified array as decimal strings, with a newline at the end, you could use
err_t send_list(struct tcp_pcb *pcb,
const u32_t data[],
u16_t len)
{
static const char newline[2] = { 13, 10 }; /* ASCII \r\n */
err_t result;
if (len > 0) {
u16_t i;
/* The first number has no prefix. */
result = tcp_write_u32_string(pcb, 0, data[0]);
if (result != ERR_OK)
return result;
/* The following numbers have a comma prefix. */
for (i = 1; i < len; i++) {
/* ASCII comma is code 44. (Use 32 for space, or 9 for tab.) */
result = tcp_write_u32_string(pcb, 44, data[i]);
if (result != ERR_OK)
return result;
}
}
/* We add a final newline.
Note that this one can be referenced,
and it does complete what we wanted to send thus far. */
result = tcp_write(pcb, newline, 2, 0);
if (result)
return result;
/* and flush the buffer, so the packet gets sent right now. */
return tcp_output(pcb);
}
Now, I haven't written C for Xilinx or used the lwIP stack at all, so the above code is written blind. Yet, I'm pretty confident it works (barring any typos or thinkos; if you find any, report them in a comment, and I'll verify and fix).
Only the newline buffer is declared static: it is passed to tcp_write() without TCP_WRITE_FLAG_COPY, so it must remain valid after the function returns. buf, by contrast, is written with TCP_WRITE_FLAG_COPY, so it can safely live on the stack.
Because TCP is a stream protocol, it is not necessary to fit each response into a single packet. Other than the 11-character buffer (for each number and its prefix character) and the 2-character newline buffer, the only large buffer you need is the TCP send buffer, which is on the order of the maximum transmission unit or maximum segment size (I'm not sure how lwIP sizes it internally), typically between 536 and 1518 bytes.
The two above functions try to split packets between numbers, but that's just because it's easier than to try and fill each packet exactly. If the next (comma and) value fit in the buffer, then it is added to the buffer; otherwise the buffer is flushed first, and then the next (comma and) value added to the buffer.
On the recipient side, you should obtain a nice, readable stream using e.g. netcat. (I have no idea if Hercules is an application, or just the name of your local machine.) Because TCP is a stream protocol, the recipient cannot (reliably) detect where the packet boundaries were (unlike with, say, UDP datagrams). In practice, a TCP connection is just two streams of data, each going one way, and the split into packets is a protocol detail application programmers don't normally need to worry about. Because lwIP is such a low-level library, a little bit of care needs to be taken, but as the code above shows, it's really not that much.
In a comment, OP explained that they are not very experienced, and that the overall intent is to have the device accept a TCP connection, and stream data (samples acquired by a separate acquisition board) as unsigned 32-bit integers via the connection.
Because I would love to have one of those FPGA boards (I have several tasks I could see if I could offload to an FPGA), but no resources to get one, I shall try to outline the entire application here. Do note that the only information I have to go on is the 2018 version of the Xilinx OS and Libraries Document Collection (UG643) (PDF). It looks like OP wants to use the raw API, for high performance.
Converting the samples to text is silly, especially if high performance is desired. We should just use raw binary, and whatever endianness the KC705 uses. (I didn't see it at a quick glance from the documentation, but I suspect it is little-endian).
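If the receiver should not have to know the board's byte order, the samples could optionally be normalized to network byte order (big-endian) before writing; a minimal sketch, assuming lwIP's htonl() is available (declared in lwip/def.h in current lwIP):
/* Optional: convert a batch of samples to big-endian in place before
 * tcp_write(), making the wire format independent of the board's
 * native byte order. */
static void samples_to_network_order(u32_t *data, u32_t n)
{
u32_t i;
for (i = 0; i < n; i++)
data[i] = htonl(data[i]);
}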
According to the documentation, the raw API main() is something similar to following:
int main(void)
{
/* MAC address. Use an unique one. */
unsigned char mac[6] = { 0x00, 0x0A, 0x35, 0x00, 0x01, 0x02 };
static struct netif server_netif; /* the interface structure must actually exist */
struct netif *netif = &server_netif;
ip_addr_t ipaddr, netmask, gateway;
/* Define IP address, netmask, and gateway. */
IP4_ADDR(&ipaddr, 192, 168, 1, 1);
IP4_ADDR(&netmask, 255, 255, 255, 0);
IP4_ADDR(&gateway, 0, 0, 0, 0);
/* Initialize lwIP networking stack. */
lwip_init();
/* Add this networking interface, and make it the default one */
if (!xemac_add(netif, &ipaddr, &netmask, &gateway, mac, EMAC_BASEADDR)) {
printf("Error adding network interface\n\r");
return -1;
}
netif_set_default(netif);
platform_enable_interrupts();
/* Bring the network interface up (activate it) */
netif_set_up(netif);
/* Our application listens on port 7. */
return application_main(netif, 7);
}
In the documentation examples, rather than return application_main(netif, 7);, you'll see a call to start_application(), and then an infinite loop that regularly calls xemacif_input(netif) instead. It just means that our application_main() must call xemacif_input(netif) regularly, to be able to receive data. (The lwIP documentation says that we should also call sys_check_timeouts() or tcp_tmr() at regular intervals.)
Note that I've omitted error reporting printfs, and that rather than recovering from errors gracefully, this will just return (from main()); I'm not certain whether this causes the KC705 to restart or what.
int application_main(struct netif *netif, unsigned int port)
{
struct tcp_pcb *pcb;
err_t err;
pcb = tcp_new();
if (!pcb) {
/* Out of memory error */
return -1;
}
/* Listen for incoming connections on the specified port. */
err = tcp_bind(pcb, IP_ADDR_ANY, port);
if (err != ERR_OK) {
/* TCP error */
return -1;
}
pcb = tcp_listen_with_backlog(pcb, 1);
if (!pcb) {
/* Out of memory. */
return -1;
}
/* The accept callback function gets the network interface
structure as the extra parameter. */
tcp_arg(pcb, netif);
/* For each incoming connection, call application_connection(). */
tcp_accept(pcb, application_connection);
/* In the mean time, process incoming data. */
while (1)
xemacif_input(netif);
}
For each TCP connection to the port, we get a call to application_connection(). This is the function that sets up the data acquisition board, and transfers the data for as long as the recipient wants it.
/* How many DAQ samples to process in each batch.
* Should be around the DAQ FIFO depth or so, I think. */
#define DAQ_FIFO_DEPTH 128
err_t application_connection(void *arg, struct tcp_pcb *conn, err_t err)
{
struct netif *netif = arg; /* Because of tcp_arg(, netif). */
u32_t data[DAQ_FIFO_DEPTH];
u32_t i, n;
/* Drop the connection if there was an error. */
if (err != ERR_OK) {
tcp_abort(conn);
return ERR_ABRT;
}
/* Setup the data aquisition. */
err = daq_setup();
if (err != ERR_OK) {
tcp_abort(conn);
return ERR_ABRT;
}
/* Data acquisition to TCP loop. */
while (1) {
/* Keep the networking stack running. */
xemacif_input(netif);
tcp_tmr();
/* Tell the networking stack to output what it can. */
tcp_output(conn);
/* Acquire up to DAQ_FIFO_DEPTH samples. */
n = daq_acquire(data, DAQ_FIFO_DEPTH);
if (n > DAQ_FIFO_DEPTH)
break;
/* Write data as-is to the tcp buffer. */
if (tcp_write(conn, data, n * sizeof data[0], TCP_WRITE_FLAG_COPY) != ERR_OK)
break;
}
/* Stop data acquisition. */
daq_close();
/* Close the TCP connection. */
if (tcp_close(conn) == ERR_OK)
return ERR_OK;
/* Close failed. Abort it, then. */
tcp_abort(conn);
return ERR_ABRT;
}
There are three more functions to implement: daq_setup(), which should set up the data acquisition and FIFOs; daq_acquire(u32_t *data, u32_t count), which stores up to count samples to data[] and returns the actual number of samples stored -- it would be best if it just drained the FIFO, rather than waited for new samples to arrive -- and finally daq_close(), which stops the data acquisition.
I believe they should be something like this:
XLlFifo daq_fifo;
err_t daq_setup(void)
{
XLlFifo_Config *config = NULL;
config = XLlFifo_LookupConfig(DAQ_FIFO_ID);
if (!config)
return ERR_RTE;
if (XLlFifo_CfgInitialize(&daq_fifo, config, config->BaseAddress) != XST_SUCCESS)
return ERR_RTE;
return ERR_OK;
}
u32_t daq_acquire(u32_t *data, u32_t max)
{
u32_t len, have;
have = XLlFifo_iRxGetLen(&daq_fifo);
if (have < 1)
return 0;
else
if (have < max)
max = have;
for (len = 0; len < max; len++)
data[len] = XLlFifo_RxGetWord(&daq_fifo);
return len;
}
err_t daq_close(void)
{
/* How to stop the FIFO? Do we need to? */
return ERR_OK;
}
That's about it.
Here is my problem: I am a beginner with the ptrace function and I would like to recover the raw contents of a structure from the traced process.
For example, the command strace -e trace=fstat ls prints
a line like: fstat(3, {st_mode=..., st_size=...})
and I would like to retrieve the contents of that structure (st_mode and st_size) myself.
I tried this, but to no avail:
int buffer(unsigned long long addr, pid_t child, size_t size, void *buffer)
{
size_t byte = 0;
size_t data;
unsigned long tmp;
while (byte < size) {
tmp = ptrace(PTRACE_PEEKDATA, child, addr + byte);
if ((size - byte) / sizeof(tmp))
data = sizeof(tmp);
else
data = size % sizeof(tmp);
memcpy((void *)(buffer + byte), &tmp, data);
byte += data;
}
}
and I call it like this:
struct stat stat_i;
buffer(addr, pid, sizeof(stat_i), &stat_i);
printf("%lu", stat_i.st_size); /* prints a bogus value :/ */
Thanks!
From the man page,
PTRACE_PEEKTEXT, PTRACE_PEEKDATA
Read a word at the address addr in the tracee's memory,
returning the word as the result of the ptrace() call. Linux
does not have separate text and data address spaces, so these
two requests are currently equivalent. (data is ignored; but
see NOTES.)
Thus tmp holds the actual word that was read.
Your error checks are wrong: because the returned word may legitimately be -1, you should set errno = 0 before the call and then check whether it has changed. If it has, you've got an error. If it hasn't, you can be sure that tmp holds the word from the remote process.
Try something like this:
#include <errno.h>
#include <sys/ptrace.h>
#include <sys/types.h>
int buffer(unsigned long long addr, pid_t child, size_t size, void *buffer)
{
size_t byte = 0;
long tmp;
long *buffer_int = (long *)buffer;
/* support for word-aligned sizes only */
if (size % sizeof(long) != 0)
return -1;
while (byte < size) {
errno = 0;
tmp = ptrace(PTRACE_PEEKDATA, child, addr + byte, 0);
if (errno)
return -1;
buffer_int[byte / sizeof(long)] = tmp;
byte += sizeof(long);
}
return 0;
}
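A hypothetical usage sketch (my assumptions: x86-64, and the tracee is stopped at fstat's syscall-exit-stop, so the struct stat pointer, which is fstat's second argument and therefore lives in rsi, has already been filled in):
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/stat.h>
#include <sys/types.h>
/* Uses buffer() from above. On x86-64/glibc, sizeof(struct stat) is a
 * multiple of sizeof(long), so buffer()'s alignment check passes. */
void dump_fstat_result(pid_t child)
{
struct user_regs_struct regs;
struct stat stat_i;
if (ptrace(PTRACE_GETREGS, child, 0, &regs) == -1)
return;
/* Second syscall argument (the struct stat *) is in rsi on x86-64. */
if (buffer(regs.rsi, child, sizeof stat_i, &stat_i) == 0)
printf("st_mode=%o st_size=%ld\n",
(unsigned int)stat_i.st_mode, (long)stat_i.st_size);
}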
How can I read data from a packet in C and convert it into a structure? I mean, there's a structure like
|=======================================================================
|01234567|01234567012345670123456701234567|0123456701234567|............
|  type  |             length             |     MSG HDR    |    data
into a struct like
struct msg {
char type;
size_t length;
int hdr;
struct data * data;
};
Is the following code fine?
bool parse_packet(char * packet, size_t packet_len, struct msg * result) {
if(packet_len < 5) return false;
result->type = *packet++;
result->length = ntohl(*(int*)packet);
packet+=4;
if(result->length + 4 + 5 > packet_len)
return false;
if(result->length < 2)
return false;
result->hdr = ntohs(*(short*)packet);
packet+=2;
return parse_data(result, packet);
}
It's usually good practice to check that packet and result are non-null.
Why are you checking that packet_len < 5 when the header is 7 bytes? Why not just ensure that the packet is at least 7 bytes and get it over with? Or is hdr not present for some type?
I'm not sure what you're trying to achieve with
if(result->length + 4 + 5 > packet_len)
result->hdr = ntohs(*(short*)packet);
packet+=2;
If the declared message length plus nine is greater than the received message length, you read another two bytes from the message. Then regardless of the length of the data, you add two to the pointer and try to parse something out of it. What if packet_len is 5 and result->length is 4294967295? You're going to read off the end of your buffer, just like in Heartbleed. You need to always verify that your reads are in bounds, and never trust the size declared in the packet.
You have a completely standard situation. There's nothing deep or surprising here.
Start with a specification of the wire format. You can use pseudo-code or actual C types for that, but the implication is that the data is packed into bytes on the wire:
struct Message // wire format, pseudo code
{
uint8_t type;
uint32_t length; // big-endian on the wire
uint8_t header[2];
uint8_t data[length];
};
Now start parsing:
// parses a Message from (buf, size)
// precondition: "buf" points to "size" bytes of data; "msg" points to Message
// returns true on success
// msg->data is malloc()ed and contains the data on success
bool parse_message(unsigned char * buf, size_t size, Message * msg)
{
if (size < 7) { return false; }
// parse length
uint32_t n;
memcpy(&n, buf + 1, 4);
n = ntohl(n); // convert big-endian (wire) to native
if (n > SIZE_MAX - 7)
{
// this is an implementation limit!
return false;
}
if (size != 7 + n) { return false; }
// copy data
unsigned char * p = malloc(n);
if (!p) { return false; }
memcpy(p, buf + 7, n);
// populate result
msg->type = buf[0];
msg->length = n;
msg->header[0] = buf[5];
msg->header[1] = buf[6];
msg->data = p;
return true;
}
An alternative way of parsing the length is like this, directly (the casts make the shifts happen in an unsigned 32-bit type; note the length occupies bytes 1 through 4):
uint32_t n = ((uint32_t)buf[1] << 24) | ((uint32_t)buf[2] << 16) | ((uint32_t)buf[3] << 8) | (uint32_t)buf[4];
This code assumes that buf contains exactly one message. If you're taking messages off of a stream, you need to modify the code (namely the if (size != 7 + n)) to check if there is at least as much data available as required, and return the amount of consumed data, too, so the caller can advance their stream position accordingly. (The caller could in this case compute the amount of data that was parsed as msg->length + 7, but relying on that is not scalable.)
Note: As #user points out, if your size_t is not wider than uint32_t, then this implementation will wrongly reject very large messages. Specifically, messages for which it is not true that 7 + n > n will be rejected. I included a dynamic check for this (unlikely) condition.
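For the stream case, here is a sketch of such a modified entry point (reusing Message and parse_message() from above, plus <string.h>, <stdint.h> and ntohl(); my convention: it returns the number of bytes consumed, 0 when more data is needed, and (size_t)-1 on a malformed message):
size_t parse_message_stream(unsigned char *buf, size_t size, Message *msg)
{
uint32_t n;
if (size < 7)
return 0; /* header incomplete: wait for more data */
memcpy(&n, buf + 1, 4);
n = ntohl(n);
if (n > SIZE_MAX - 7)
return (size_t)-1; /* same implementation limit as above */
if (size < 7 + (size_t)n)
return 0; /* body incomplete: wait for more data */
if (!parse_message(buf, 7 + (size_t)n, msg))
return (size_t)-1; /* malformed message */
return 7 + (size_t)n; /* caller advances its stream position by this much */
}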