I have set up a Kaa server and developed an application with the SDK, but the application doesn't send event messages.
The application should send the license plate of each car entering the parking lot to the server, and also send an event to another application (the receiver app).
The application sends data to the server but doesn't send events.
What's the problem?
This is my code:
static void callback(void *context)
{
    kaa_string_t plate;
    kaa_user_log_record_t *log_record = kaa_logging_data_collection_create();
    plate.data = "some license plate";
    log_record->plate = &plate;
    kaa_logging_add_record(kaa_client_get_context(context)->log_collector, log_record, NULL);
    printf("%s uploaded\n", plate.data);

    kaa_plate_detection_event_plate_event_t *plate_event = kaa_plate_detection_event_plate_event_create();
    plate_event->plate = &plate;
    kaa_error_t error_code = kaa_event_manager_send_kaa_plate_detection_event_plate_event(
        kaa_client_get_context(context)->event_manager, plate_event, NULL);
    //plate_event->destroy(plate_event);
    printf("%s event sent\n", plate.data);
}
Problem Description
At the beginning of callback() you define kaa_string_t plate; this means its memory is allocated on the stack.
Later in the same scope, you create plate_event, which will be used as the argument to the event you want to send.
Before sending the event, you assign plate_event->plate = &plate. That means plate_event->plate now points to an address on the stack.
Then (according to what you wrote in the comments) you send the event using an asynchronous function. The thread executing this function does not wait for the message to actually be sent - that's what asynchronous means. Something else (probably a different thread, depending on the implementation of the send function) takes care of the sending. Therefore, it is not guaranteed that the message is sent before the next lines of code are executed.
In your case, it is likely that the scope of callback() ends before the message is sent. At that point the memory of that scope, including kaa_string_t plate, is automatically freed and becomes invalid. When the asynchronous send finally executes, it relies on invalid memory, because plate_event->plate points to memory that has already been freed.
Possible solution
Instead of allocating kaa_string_t plate on the stack, allocate it on the heap (malloc). Then the memory stays valid until you free it yourself, once you are sure the message has been sent.
Something like this:
kaa_string_t *plate = malloc(sizeof(kaa_string_t));
plate->data = strdup("some license plate"); /* heap copy of the string itself, not just the struct */
...
// Now it's safe to do this:
plate_event->plate = plate;
// Sending event code
...
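Note: if your generated SDK provides it (I believe the Kaa C SDK ships a helper for exactly this, but check your headers - the name below is an assumption), the heap copy can be done in one step:

kaa_string_t *plate = kaa_string_copy_create("some license plate"); /* assumed Kaa SDK helper: allocates the struct and duplicates the string on the heap */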
I've been trying to mount a custom protocol over the TCP module on the NodeMCU platform. However, the protocol I'm trying to embed inside the TCP data segment is binary, not ASCII-based (like HTTP, for example), so it sometimes contains a NUL char (byte 0x00). This terminates the C string inside the TCP module implementation, causing the rest of the message inside the packet to get lost.
-- server listens on 80; if data is received, print it to the console and send "hello world" back to the caller
-- 30 s timeout for an inactive client
sv = net.createServer(net.TCP, 30)

function receiver(sck, data)
    print(data)
    sck:close()
end

if sv then
    sv:listen(80, function(conn)
        conn:on("receive", receiver)
        conn:send("hello world")
    end)
end
This is a simple example in which, as you can see, receiver is a callback function that prints the data from the TCP segment retrieved by the listener.
How can this be fixed? Is there a way to circumvent this using the NodeMCU library? Or do I have to implement another TCP module, or modify the current one's implementation to support arrays or tables as a return value instead of strings?
Any suggestion is appreciated.
The data you receive in the callback should not be truncated. You can check this for yourself by altering the code as follows:
function receiver(sck, data)
    print("Len: " .. #data)
    print(data)
    sck:close()
end
You will observe that, while the data is indeed only printed up to the first zero byte (by the print() function), the whole payload is present in the Lua string data, and you can process it properly with 8-bit-safe (and zero-byte-safe) methods.
While it should be easy to modify the print() function to also be zero-byte-safe, I do not consider this a bug, since print() is meant for text. If you want to write binary data to serial, use uart.write(), i.e.
uart.write(0, data)
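For completeness, here is a small sketch of zero-byte-safe processing inside the receive callback. Lua strings carry an explicit length, so #data and string.byte() see every byte, including embedded 0x00; handle_byte() is a hypothetical stand-in for your binary protocol parser:

function receiver(sck, data)
    -- iterate over the raw bytes; embedded zero bytes are preserved
    for i = 1, #data do
        local b = data:byte(i)  -- numeric value 0..255
        handle_byte(b)          -- hypothetical: feed your protocol parser
    end
    sck:close()
end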
I have a piece of MPI C code which looks something like the following:
for (i = 0; i < NTask; i++)
{
    got_initial_bit_of_data[i] = 0;
    if (need_to_communicate_with_i) /* pseudocode condition */
        MPI_Isend(&bit_of_pre_data_for_i, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &pre_requests[i]);
}
while (1)
{
    MPI_Testsome(NTask, pre_requests, &ndone, idxs, MPI_STATUSES_IGNORE);
    if (ndone)
    {
        for (i = 0; i < ndone; i++)
        {
            MPI_Isend(&the_main_block_of_data_for_i, size_of_block, MPI_BYTE, idxs[i], 1, MPI_COMM_WORLD, &main_requests[idxs[i]]);
        }
    }
    // Other stuff that doesn't matter
    MPI_Iprobe(MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &flag, &status);
    if (!flag)
    {
        MPI_Iprobe(MPI_ANY_SOURCE, 1, MPI_COMM_WORLD, &flag, &status);
    }
    if (flag)
    {
        // Receiving the initial little bit of data
        if (status.MPI_TAG == 0)
        {
            // Location 1
            got_initial_bit_of_data[status.MPI_SOURCE] = 1;
            MPI_Recv(&useful_location, 1, MPI_INT, status.MPI_SOURCE, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
        // Receiving the main bit of data
        else if (status.MPI_TAG == 1)
        {
            // Location 2
            if (got_initial_bit_of_data[status.MPI_SOURCE] != 1)
                ; // Something has gone horribly wrong...
            // Receive the main bit of data here...
        }
    }
}
Obviously I've omitted lots of details, because the full code is several hundred lines long. If something I've done looks a bit odd, it is probably because of something in the omitted code.
The idea is that at the start each processor sends an "announcement" message to those processors it wants to talk to. When it detects that those processors have received this message (that is, when MPI_Testsome indicates the "announcement" MPI_Isend is complete), it should send a big chunk of data.
From the point of view of a processor receiving data, it should first receive the announcement message at location 1, which will cause MPI_Testsome to indicate that the Isend is complete and send the big chunk of data. The receiving processor should then receive the main block of data at location 2. Following this logic, it should be impossible to reach location 2 with got_initial_bit_of_data[status.MPI_SOURCE] being 0, but this is precisely what does happen very occasionally and I'd like to work out why.
Either I've got the logic of the code wrong, or there's some subtlety of IProbe and Testsome that I'm missing.
I'm also exiting and re-entering this entire block of code, with different processors moving in and out at different points in time, but only once all their Isends have been processed (as determined by Testsome saying they're complete).
If the above explanation doesn't make any sense, what I want to know is this: are there any circumstances under which Testsome can claim that an Isend is complete without the matching receive having completed (or even started)? Is a processor making a call to Iprobe enough to cause Testsome to consider a request completed, for instance?
All that MPI_Testsome guarantees is that the buffer you were using in the MPI_Isend is no longer needed by MPI. That can happen long before the matching receive is posted: for small messages, implementations typically copy the data into an internal ("eager") buffer, at which point the send request completes. If you want to guarantee that the recipient has started the receive, use the synchronous form, MPI_Issend.
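A minimal sketch of that change, using the names from the question. MPI_Issend has the same signature as MPI_Isend, but its request only completes once the matching receive has begun, so Testsome can no longer report completion before the receiver has seen the announcement:

/* synchronous-mode announcement: the request completes only after
   the matching receive has started on the other side */
MPI_Issend(&bit_of_pre_data_for_i, 1, MPI_INT, partner, 0,
           MPI_COMM_WORLD, &pre_requests[i]);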
I am looking to write a socket program based on libev. I noticed that several examples, such as https://github.com/coolaj86/libev-examples/blob/master/src/unix-echo-server.c, use callbacks set up with ev_io_init. For example:
main() {
    ......
    ev_io_init(&client.io, client_cb, client.fd, EV_READ|EV_WRITE);
    ev_io_start(EV_A_ &client.io);
}

static void client_cb(EV_P_ ev_io *w, int revents)
{
    if (revents & EV_READ)
    {
        ....
    }
    else if (revents & EV_WRITE)
    {
        ......
    }
}
My question is about the expected behaviour. Say, for example, that everything I read during EV_READ is stored in a linked list, and I have to send everything I receive back out on another socket. If I keep getting a free flow of packets to read, will I ever get a chance to enter the EV_WRITE branch? Will it alternate between EV_READ and EV_WRITE? In other words, when does EV_WRITE get unblocked, or do I need to block EV_READ for EV_WRITE to be called? Can someone help me understand this?
I think you should keep the write callback separate from the read callback:

main() {
    ev_io_init(&read.io, read_cb, client.fd, EV_READ);
    ev_io_init(&write.io, write_cb, client.fd, EV_WRITE);
    ev_io_start(EV_A_ &read.io);
    ev_io_start(EV_A_ &write.io);
}
This is my solution.
To answer shortly: if you always check for one type of event first and then have an else if for the other, you risk starvation. In general I would check for both, unless the protocol makes it impossible for both to be active at the same time; see the sketch below.
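A minimal sketch of what I mean, assuming read_buffer() and write_buffer() are your own (hypothetical) helpers for moving data between the sockets and your linked list:

static void client_cb(EV_P_ ev_io *w, int revents)
{
    if (revents & EV_READ)
        read_buffer(w->fd);    /* hypothetical helper: drain the socket into your list */
    if (revents & EV_WRITE)    /* plain 'if', not 'else if': both branches can run per wakeup */
        write_buffer(w->fd);   /* hypothetical helper: flush queued data to the socket */
}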
Here is a more iffy answer:
The link in your question does not actually contain a code structure like the one in your question. The client https://github.com/coolaj86/libev-examples/blob/master/src/unix-echo-client.c does have a similar callback, though. You will notice that it disables write events once it has written:
// once the data is sent, stop notifications that
// data can be sent until there is actually more
// data to send
ev_io_stop(EV_A_ &send_w);
ev_io_set(&send_w, remote_fd, EV_READ);
ev_io_start(EV_A_ &send_w);
That looks like an attempt to avoid starvation of the pipe READ event branch. Even though I'm not very familiar with libev, the github examples you linked to do not seem very robust. E.g. static void stdin_cb (EV_P_ ev_io *w, int revents) does not use the return value of getline() to detect EOF. Also, the send() and recv() socket operations' return values are not inspected for how much was actually read or written (though on local named pipe streams the amounts will most likely match the amounts requested). If this were later changed to a TCP-based connection, checking the amounts would be vital.
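For completeness, the reverse transition would be a sketch like the following, re-arming the write watcher once new data has been queued (send_w and remote_fd as in the snippet above; note that libev requires stopping a watcher before calling ev_io_set on it):

// more data has been queued: start watching for writability again
ev_io_stop(EV_A_ &send_w);
ev_io_set(&send_w, remote_fd, EV_READ | EV_WRITE);
ev_io_start(EV_A_ &send_w);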
I know I'm not supposed to access a control from a thread that didn't create it, but I tried it anyway and got a really peculiar error. I did it in assembly, which everybody hates reading, so here's the equivalent code in C:
/* Function that returns the number of characters after startOfItem
   that are not some delimiter. startOfItem[maxSize] is guaranteed to
   be at a valid address and to be a delimiter. The function is defined
   elsewhere and works perfectly. */
unsigned int getSizeOfItem(char* startOfItem, unsigned int maxSize);

/* This function runs in a worker thread. It has an exception handler that is
   omitted here for clarity, and because it is never run anyway. */
void itemizeAndAddToListbox(HWND hListbox, char* strings, unsigned int size) {
    while (size) {
        unsigned int sizeOfItem = getSizeOfItem(strings, size);
        strings[sizeOfItem] = 0; // overwrite the delimiting character with null
        SendMessage(hListbox, LB_ADDSTRING, (WPARAM) 0, (LPARAM) strings);
        /* Passing a pointer to a different thread is a no-no, but SendMessage
           does not return until the message is processed, so no disposal issues
           are possible. And I happen to know that all addresses from *strings
           to strings[sizeOfItem] remain valid. */
        strings += sizeOfItem + 1;
        size -= sizeOfItem + 1;
    }
}
Believe it or not, this works perfectly from a thread that did not create hListbox until the very last item, at which point the listbox causes an access violation by reading strings[size+1]. It throws the exception in the UI thread (the one that created the listbox), ignoring the worker thread's exception handler, and SendMessage() inappropriately returns 0 instead of the listbox error code.
I made this work by sending user-defined messages from the worker thread to the UI thread's window, which in turn sends the LB_ADDSTRING message with the very same parameters to the very same listbox, and that works perfectly. The exception just hasn't happened yet when the message is sent from the UI thread, but that is such an arbitrary difference that I'm nervous about the "properly working" code as well. Does anybody know why the listbox is accessing memory beyond the null-terminated end of the string in the first place, and what I can do to prevent it?
Since SendMessage() serializes the call onto the receiving thread, I would expect the exception to happen on the UI thread, because that is the one adding the string.
MSDN on SendMessage:
'The return value specifies the result of the message processing; it depends on the message sent.' This is not the listbox error code: the error from the exception will not be put into the return value unless the message handler puts it there.
What happens to 'size' at the end if you call in with one string of size 1? Since 'size' is unsigned, won't 'size -= sizeOfItem+1' wrap around to the equivalent of -1, i.e. not false, so the loop keeps going past the end of the buffer?
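If that is the case, a guard like this sketch avoids the wrap-around (same names as your code; only the exit test is new):

while (size) {
    unsigned int sizeOfItem = getSizeOfItem(strings, size);
    strings[sizeOfItem] = 0;
    SendMessage(hListbox, LB_ADDSTRING, (WPARAM) 0, (LPARAM) strings);
    if (sizeOfItem + 1 >= size)
        break;                 /* last item: subtracting sizeOfItem+1 would wrap 'size' */
    strings += sizeOfItem + 1;
    size -= sizeOfItem + 1;
}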
Rgds,
Martin
I'm building a client using the dns-sd API from Bonjour. I noticed that there is a flag called kDNSServiceFlagsShareConnection that is used to share the connection of one DNSServiceRef.
Apple's site says:
For efficiency, clients that perform many concurrent operations may want to use a single Unix Domain Socket connection with the background daemon, instead of having a separate connection for each independent operation. To use this mode, clients first call DNSServiceCreateConnection(&MainRef) to initialize the main DNSServiceRef. For each subsequent operation that is to share that same connection, the client copies the MainRef, and then passes the address of that copy, setting the ShareConnection flag to tell the library that this DNSServiceRef is not a typical uninitialized DNSServiceRef; it's a copy of an existing DNSServiceRef whose connection information should be reused.
There is even an example that shows how to use the flag. The problem I'm having is that when I run the program, it just sits there waiting for something whenever I call a function with the flag. Here is the code:
DNSServiceErrorType error;
DNSServiceRef MainRef, BrowseRef;
error = DNSServiceCreateConnection(&MainRef);
BrowseRef = MainRef;
//I'm omitting when I check for errors
error = DNSServiceBrowse(&MainRef, kDNSServiceFlagsShareConnection, 0, "_http._tcp", "local", browse_reply, NULL);
// After this call the program stays waiting for I don't know what
//I'm omitting when I check for errors
error = DNSServiceBrowse(&BrowseRef, kDNSServiceFlagsShareConnection, 0, "_http._tcp", "local", browse_reply, NULL);
//I'm omitting when I check for errors
DNSServiceRefDeallocate(BrowseRef); // Terminate the browse operation
DNSServiceRefDeallocate(MainRef); // Terminate the shared connection
Any ideas, thoughts, or suggestions?
Since there are conflicting answers, I dug up the source - annotations by me.
// If sharing...
if (flags & kDNSServiceFlagsShareConnection)
{
// There must be something to share (can't use this on the first call)
if (!*ref)
{
return kDNSServiceErr_BadParam;
}
// Ref must look valid (specifically, ref->fd)
if (!DNSServiceRefValid(*ref) ||
// Most operations cannot be shared.
((*ref)->op != connection_request &&
(*ref)->op != connection_delegate_request) ||
// When sharing, pass the ref from the original call.
(*ref)->primary)
{
return kDNSServiceErr_BadReference;
}
The primary field is explained elsewhere:
// When using kDNSServiceFlagsShareConnection, there is one primary _DNSServiceOp_t, and zero or more subordinates
// For the primary, the 'next' field points to the first subordinate, and its 'next' field points to the next, and so on.
// For the primary, the 'primary' field is NULL; for subordinates the 'primary' field points back to the associated primary
The problem with the question's code is that DNSServiceBrowse maps to ref->op == browse_request, which fails the check above with kDNSServiceErr_BadReference.
It looks like kDNSServiceFlagsShareConnection is half-implemented: I've also seen cases in which it works. This source was found by tracing back a case where it didn't.
Service references for browsing and resolving may unfortunately not be shared. See the comments in the Bonjour documentation for the kDNSServiceFlagsShareConnection flag. Since you only browse twice, I would just give them separate service refs instead.
So both DNSServiceBrowse() and DNSServiceResolve() require an unallocated service ref as their first parameter.
I can't explain why your program chokes, though. The first DNSServiceBrowse() call in your example should return immediately with an error code.
Although this is an old question, the answer should help people looking around for answers now.
The answer by vidtige is incorrect: the ref may be shared for any operation, provided you pass the kDNSServiceFlagsShareConnection flag along with the arguments. Sample below:
m_dnsrefsearch = m_dnsservice;
DNSServiceErrorType mdnserr = DNSServiceBrowse(&m_dnsrefsearch, kDNSServiceFlagsShareConnection, 0,
                                               "_workstation._tcp", NULL,
                                               DNSServiceBrowseReplyCallback, NULL);
Reference - http://osxr.org/android/source/external/mdnsresponder/mDNSShared/dns_sd.h#0267
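Putting the pieces together, here is a minimal sketch of the documented sharing pattern, using the names from the question (error handling omitted). The key points are that each shared operation gets a copy of the primary ref, and that results are driven through the primary connection with DNSServiceProcessResult():

DNSServiceRef MainRef, BrowseRef;
DNSServiceErrorType error = DNSServiceCreateConnection(&MainRef);

BrowseRef = MainRef;                  // copy the primary ref first
error = DNSServiceBrowse(&BrowseRef,  // pass the address of the copy, not &MainRef
                         kDNSServiceFlagsShareConnection, 0,
                         "_http._tcp", "local", browse_reply, NULL);

// All shared operations are serviced through the primary connection's
// socket; this blocks until a result arrives, then fires the callbacks.
DNSServiceProcessResult(MainRef);

DNSServiceRefDeallocate(BrowseRef);   // terminate the browse operation
DNSServiceRefDeallocate(MainRef);     // terminate the shared connection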