I've been trying to mount a custom protocol over the TCP module on the NodeMCU platform. However, the protocol I embed inside the TCP data segment is binary, not ASCII-based (like HTTP, for example), so it sometimes contains a NUL character (byte 0x00). That byte terminates the C string inside the TCP module implementation, causing the part of the message after it to be lost.
-- server listens on 80; if data is received, print the data to the console and send "hello world" back to the caller
-- 30 s timeout for an inactive client
sv = net.createServer(net.TCP, 30)
function receiver(sck, data)
  print(data)
  sck:close()
end
if sv then
  sv:listen(80, function(conn)
    conn:on("receive", receiver)
    conn:send("hello world")
  end)
end
This is a simple example in which, as you can see, receiver is a callback function that prints the data from the TCP segment retrieved by the listener.
How can this be fixed? Is there a way to circumvent this using the NodeMCU library, or do I have to implement another TCP module, or modify the current implementation to support arrays or tables as a return value instead of strings?
Any suggestion is appreciated.
The data you receive in the callback should not be truncated. You can check this for yourself by altering the code as follows:
function receiver(sck, data)
  print("Len: " .. #data)
  print(data)
  sck:close()
end
You will observe that, while the data is indeed only printed up to the first zero byte (by the print() function), the whole payload is present in the Lua string data, and you can process it properly with 8-bit-safe (and zero-byte-safe) methods.
While it should be easy to modify the print() function to also be zero-byte-safe, I do not consider this a bug, since print() is meant for text. If you want to write binary data to the serial port, use uart.write(), i.e.
uart.write(0, data)
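If you want to convince yourself that the zero byte and everything after it really arrive, a hex dump of the received string is a handy check. This is just an illustrative sketch using standard Lua string functions, all of which are zero-byte-safe:

function receiver(sck, data)
  print("Len: " .. #data)
  -- build a hex representation byte by byte;
  -- string.byte()/string.format() do not care about embedded zero bytes
  local hex = {}
  for i = 1, #data do
    hex[#hex + 1] = string.format("%02X", data:byte(i))
  end
  print(table.concat(hex, " "))
  sck:close()
end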
I am doing some work on worm-attack detection in RPL. In RPL, communication between clients may span multiple hops, with packets passing through many nodes.
However, only the receiver gets a tcpip_event on reception of the packet. The nodes that the route passes through do not get this event. Is there any way to detect the packet on the intermediate nodes?
You cannot get a notification or callback when a packet is forwarded. However, you can get a callback when a packet is received or sent by the lower layers.
In Contiki, use the function rime_sniffer_add for that. Check apps/powertrace/powertrace.c for an example.
In Contiki-NG the function has been renamed to netstack_sniffer_add.
Usage example:
Declare the sniffer like this, in the global scope:
RIME_SNIFFER(packet_sniffer, input_packet, output_packet);
Then add the sniffer from your code, once, at the start of the application execution:
rime_sniffer_add(&packet_sniffer);
The functions input_packet and output_packet are callbacks defined by you and can be used to examine the packets, for example like this:
static void
input_packet(void)
{
  int rssi = (int)packetbuf_attr(PACKETBUF_ATTR_RSSI);
  printf("received a packet with RSSI=%d\n", rssi);
}
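Putting the pieces together, a minimal sketch for classic Contiki could look like the following. The process name and the body of output_packet are purely illustrative, the include paths and callback signatures vary slightly between Contiki versions (see apps/powertrace/powertrace.c for the reference usage), and in Contiki-NG you would register the sniffer with netstack_sniffer_add() instead:

#include "contiki.h"
#include "net/rime/rime.h"   /* RIME_SNIFFER(), rime_sniffer_add() */
#include "net/packetbuf.h"
#include <stdio.h>

static void
input_packet(void)
{
  /* called whenever the lower layers receive a packet at this node */
  printf("sniffed incoming packet, RSSI=%d\n",
         (int)packetbuf_attr(PACKETBUF_ATTR_RSSI));
}

static void
output_packet(int mac_status)
{
  /* called whenever the lower layers send a packet from this node */
  printf("sniffed outgoing packet, MAC status %d\n", mac_status);
}

RIME_SNIFFER(packet_sniffer, input_packet, output_packet);

PROCESS(sniffer_process, "Packet sniffer example");
AUTOSTART_PROCESSES(&sniffer_process);

PROCESS_THREAD(sniffer_process, ev, data)
{
  PROCESS_BEGIN();
  /* register the sniffer once, at application start */
  rime_sniffer_add(&packet_sniffer);
  PROCESS_END();
}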
My Flink app is designed to process IoT data from sensors.
Sensors send data through gateways. This is what the sample data looks like:
case class Data(sensorId: String, value: Float, gatewayId: String, timestamp: Long)
Data from the same sensor can come from different gateways.
If a gateway is disconnected from the network, I receive a special event about this (case class GatewayEvents(gatewayId: String, event: String, timestamp: Long)) and use a broadcast stream, which is connected to the main data stream from the sensors.
A sensor may stop sending data in two cases:
it is broken
the gateway is disconnected from the network (I will receive a GatewayEvents("gwId","disconnected",1617979694) message in the broadcast stream)
If I receive a message that some gateway was disconnected from the network and the sensors that sent data through it stop sending data (for example, within 1 minute), I need to create a special event.
My half-finished implementation looks like this:
case class Data(sensorId: String, value: Float, gatewayId: String)
case class GatewayEvents(gatewayId: String, event: String, timestamp: Long)

val sensorData: DataStream[Data] = ...
val gwData: DataStream[GatewayEvents] = ...

val gatewayBroadcastStateDescriptor =
  new MapStateDescriptor[String, GatewayEvents]("gatewayEvents", classOf[String], classOf[GatewayEvents])

val broadcastGatewayEventsStream = gwData.broadcast(gatewayBroadcastStateDescriptor)

val events = sensorData
  .keyBy(_.sensorId)
  .connect(broadcastGatewayEventsStream)
  .process(...)
I can't work out how to implement this process function. Any ideas? I think session windows could help me, but I can't figure out how best to do it.
The simplest idea would be to use timers here, I think. Basically, you could implement a KeyedCoProcessFunction so that when it receives a gateway-disconnected message it registers a (processing-time) timer to fire after the desired time. If any message arrives for the sensor, you simply delete the registered timer so that it won't fire. Inside the onTimer function you can simply emit the desired event, because if the timer fires it means that no value arrived within that timespan.
One thing to note here is that if you keyBy(_.sensorId), the event would be generated for every sensor whose data was received through this gateway. If you want to emit only one event per gateway, you can simply change the partitioning to keyBy(_.gatewayId).
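A minimal sketch of that idea, assuming you key both streams by gatewayId and connect them directly (rather than broadcasting the gateway events), since per-key timers need a keyed context on the side that registers them. GatewayDownFunction, GatewayAlert, the 60-second silence interval and all other names are illustrative, not part of the original code:

import org.apache.flink.api.common.state.{ValueState, ValueStateDescriptor}
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.co.KeyedCoProcessFunction
import org.apache.flink.util.Collector

case class GatewayAlert(gatewayId: String, message: String)

// wiring: sensorData.keyBy(_.gatewayId)
//           .connect(gwData.keyBy(_.gatewayId))
//           .process(new GatewayDownFunction(60 * 1000L))
class GatewayDownFunction(silenceMillis: Long)
    extends KeyedCoProcessFunction[String, Data, GatewayEvents, GatewayAlert] {

  // timestamp of the currently registered timer for this gateway, if any
  private var pendingTimer: ValueState[java.lang.Long] = _

  override def open(parameters: Configuration): Unit = {
    pendingTimer = getRuntimeContext.getState(
      new ValueStateDescriptor[java.lang.Long]("pendingTimer", classOf[java.lang.Long]))
  }

  // sensor data arrived through this gateway: cancel any pending alert
  override def processElement1(
      value: Data,
      ctx: KeyedCoProcessFunction[String, Data, GatewayEvents, GatewayAlert]#Context,
      out: Collector[GatewayAlert]): Unit = {
    val t = pendingTimer.value()
    if (t != null) {
      ctx.timerService().deleteProcessingTimeTimer(t)
      pendingTimer.clear()
    }
  }

  // gateway event arrived: on "disconnected", start the countdown
  override def processElement2(
      event: GatewayEvents,
      ctx: KeyedCoProcessFunction[String, Data, GatewayEvents, GatewayAlert]#Context,
      out: Collector[GatewayAlert]): Unit = {
    if (event.event == "disconnected") {
      val fireAt = ctx.timerService().currentProcessingTime() + silenceMillis
      ctx.timerService().registerProcessingTimeTimer(fireAt)
      pendingTimer.update(fireAt)
    }
  }

  // timer fired: no sensor data arrived within the interval after the disconnect
  override def onTimer(
      timestamp: Long,
      ctx: KeyedCoProcessFunction[String, Data, GatewayEvents, GatewayAlert]#OnTimerContext,
      out: Collector[GatewayAlert]): Unit = {
    out.collect(GatewayAlert(ctx.getCurrentKey, "gateway disconnected and its sensors are silent"))
    pendingTimer.clear()
  }
}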
I have an issue with an if statement that does not pass once my GATT connection is made.
Context
I have a BLE system using an nRF52840-DK board programmed in C. I also have a mobile application which communicates with this board via a GATT connection. I have a single service with a single characteristic. I write to this characteristic from my mobile application and, from this, do some processing. At the moment I can send over a timestamp and begin storing data. However, I then need to send data back to my mobile device over this connection.
So what I have is a command sent from the phone to ask for some data. This should then send data back to the phone by changing the characteristic value.
Before I can change the value I need to check whether the command has been issued. However, due to the priorities and constraints of the device, I need to do this processing in the main function, not in the BLE interrupt where I have done my timestamping. This is because the data I will eventually be transmitting will be large.
My issue, however, is this: I receive the command to send some data back to the phone and update a global int value (changed from 0 to 1). Then, in my main loop, I test this value and, if it is 1, write to the terminal and change the value back. I would then use this point in the code to run a function that sends the data.
But this statement does not pass.
This is my main loop code:
if(GATT_CONNECTED == false) //This works!
{
  //Do some functions here
}
else if(GATT_CONNECTED == true) // GATT_CONNECTED = true
{
  NRF_LOG_INFO("Test1 passed"); //Testing variable; this does not print
  if(main_test == 1)
  {
    NRF_LOG_INFO("Test2 passed"); //This does not print either, irrespective of the value
    main_test = 0; //False
  }
  idle_state_handle();
}
I don't know whether the issue is the way I have defined my variable, or interrupt priorities, or something like that. But when my GATT connection is made, the (GATT_CONNECTED == true) branch does not seem to execute.
My variable is defined in another file, where my GATT connection is handled. The GATT_CONNECTED variable is handled in main. My main_test variable is defined in another .c file as int main_test = 0; and declared in the header as extern int main_test;.
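For reference, the layout described above looks roughly like this; the file names here are only illustrative, not my actual project files:

/* shared.h (illustrative name): visible to both main.c and the BLE handling file */
extern int main_test;

/* ble_handling.c (illustrative name): where the GATT connection is handled */
#include "shared.h"
int main_test = 0;   /* set to 1 from the BLE write handler when the command arrives */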
I know the GATT_CONNECTED variable works, as I have code that only runs when my GATT is not connected. I have omitted it for simplicity.
Any ideas?
Thanks
PS: Hope you are all keeping well and safe.
Edit
Added code for simplicity
main.c
bool GATT_CONNECTED = false;

int main(void)
{
  Init_Routine();
  while(1)
  {
    Process_data(); //This runs if the gatt is not connected (if statement inside)
    if(GATT_CONNECTED == true) //This does not evaluate true when the gatt is connected
    {
      NRF_LOG_INFO("check gatt connected passed"); //Testing variable.
      nrf_gpio_pin_set(LED_4); //Turn an LED on; LED 4 does not light
    }
    idle_state_handle();
  }
}
I am trying to write a low-ish-level audio writer with the AudioFile & ExtAudioFile APIs. I am creating a new audio file with AudioFileInitializeWithCallbacks, but it appears that this needs the read & get-size callbacks implemented. Why can't it just accept a single write callback and trust that the data has been written successfully?
What if I am writing to a stream that I cannot seek into, such as a CD or a network socket?
Surely it should just continually push data to the write callback, and it is my responsibility to write this data where needed, returning an error code if the operation didn't succeed.
The docs for AudioFile_SetSizeProc and AudioFile_WriteProc appear to be incorrect, as they both talk about read operations: "inPosition An offset into the data from which to read.", "@result The callback should return the size of the data.".
At the moment I have got past this by writing only to a file, but I get a kExtAudioFileError_InvalidOperationOrder after the first write procedure. What does this mean? There are no comments in the docs about it.
Any pointers or help would be much appreciated.
Apple's documentation is wrong here. Check the header file AudioFile.h:
/*!
    @typedef    AudioFile_SetSizeProc
    @abstract   A callback for setting the size of the file data. used with AudioFileOpenWithCallbacks or AudioFileInitializeWithCallbacks.
    @discussion a function that will be called when AudioFile needs to set the size of the file data. This size is for all of the
                data in the file, not just the audio data. This will only be called if the file is written to.
    @param      inClientData    A pointer to the client data as set in the inClientData parameter to AudioFileXXXWithCallbacks.
    @result     The callback should return the size of the data.
*/
typedef OSStatus (*AudioFile_SetSizeProc)(
                                void *      inClientData,
                                SInt64      inSize);
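You still have to provide all four callbacks, presumably because AudioFile seeks back to update the header while writing (note that the SetSizeProc above is only called "if the file is written to"). A minimal sketch of what this can look like with an in-memory backing store is below; MemFile and all the other names are my own illustration, error handling is trimmed, and the format descriptor must of course describe your real data:

#include <AudioToolbox/AudioToolbox.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative in-memory "file" so all four callbacks can be satisfied. */
typedef struct {
    char  *bytes;
    SInt64 size;
} MemFile;

static OSStatus memRead(void *inClientData, SInt64 inPosition, UInt32 requestCount,
                        void *buffer, UInt32 *actualCount)
{
    MemFile *f = (MemFile *)inClientData;
    if (inPosition >= f->size) { *actualCount = 0; return noErr; }
    UInt32 n = (UInt32)((inPosition + requestCount > f->size) ? (f->size - inPosition)
                                                              : requestCount);
    memcpy(buffer, f->bytes + inPosition, n);
    *actualCount = n;
    return noErr;
}

static OSStatus memWrite(void *inClientData, SInt64 inPosition, UInt32 requestCount,
                         const void *buffer, UInt32 *actualCount)
{
    MemFile *f = (MemFile *)inClientData;
    if (inPosition + requestCount > f->size) {          /* grow the buffer as needed */
        f->bytes = realloc(f->bytes, (size_t)(inPosition + requestCount));
        f->size  = inPosition + requestCount;
    }
    memcpy(f->bytes + inPosition, buffer, requestCount);
    *actualCount = requestCount;
    return noErr;
}

static SInt64 memGetSize(void *inClientData)
{
    return ((MemFile *)inClientData)->size;
}

static OSStatus memSetSize(void *inClientData, SInt64 inSize)
{
    MemFile *f = (MemFile *)inClientData;
    f->bytes = realloc(f->bytes, (size_t)inSize);
    f->size  = inSize;
    return noErr;
}

/* fmt must describe the actual data format you intend to write */
static OSStatus createInMemoryAudioFile(AudioStreamBasicDescription *fmt, MemFile *mem,
                                        AudioFileID *outFile)
{
    return AudioFileInitializeWithCallbacks(mem, memRead, memWrite,
                                            memGetSize, memSetSize,
                                            kAudioFileAIFFType, fmt, 0, outFile);
}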
I'm building a client using the dns-sd API from Bonjour. I notice that there is a flag called kDNSServiceFlagsShareConnection that is used to share the connection of one DNSServiceRef.
Apple's site says:
For efficiency, clients that perform many concurrent operations may want to use a single Unix Domain Socket connection with the background daemon, instead of having a separate connection for each independent operation. To use this mode, clients first call DNSServiceCreateConnection(&MainRef) to initialize the main DNSServiceRef. For each subsequent operation that is to share that same connection, the client copies the MainRef, and then passes the address of that copy, setting the ShareConnection flag to tell the library that this DNSServiceRef is not a typical uninitialized DNSServiceRef; it's a copy of an existing DNSServiceRef whose connection information should be reused.
There is even an example that shows how to use the flag. The problem I'm having is that when I run the program, it just sits there waiting for something whenever I call a function with the flag. Here is the code:
DNSServiceErrorType error;
DNSServiceRef MainRef, BrowseRef;
error = DNSServiceCreateConnection(&MainRef);
BrowseRef = MainRef;
//I'm omitting when I check for errors
error = DNSServiceBrowse(&MainRef, kDNSServiceFlagsShareConnection, 0, "_http._tcp", "local", browse_reply, NULL);
// After this call the program stays waiting for I don't know what
//I'm omitting when I check for errors
error = DNSServiceBrowse(&BrowseRef, kDNSServiceFlagsShareConnection, 0, "_http._tcp", "local", browse_reply, NULL);
//I'm omitting when I check for errors
DNSServiceRefDeallocate(BrowseRef); // Terminate the browse operation
DNSServiceRefDeallocate(MainRef); // Terminate the shared connection
Any ideas, thoughts, or suggestions?
Since there are conflicting answers, I dug up the source - annotations by me:
// If sharing...
if (flags & kDNSServiceFlagsShareConnection)
{
    // There must be something to share (can't use this on the first call)
    if (!*ref)
    {
        return kDNSServiceErr_BadParam;
    }

    // Ref must look valid (specifically, ref->fd)
    if (!DNSServiceRefValid(*ref) ||
        // Most operations cannot be shared.
        ((*ref)->op != connection_request &&
         (*ref)->op != connection_delegate_request) ||
        // When sharing, pass the ref from the original call.
        (*ref)->primary)
    {
        return kDNSServiceErr_BadReference;
    }
The primary field is explained elsewhere:
// When using kDNSServiceFlagsShareConnection, there is one primary _DNSServiceOp_t, and zero or more subordinates
// For the primary, the 'next' field points to the first subordinate, and its 'next' field points to the next, and so on.
// For the primary, the 'primary' field is NULL; for subordinates the 'primary' field points back to the associated primary
The problem with the question's code is that DNSServiceBrowse maps to ref->op == browse_request, which causes a kDNSServiceErr_BadReference.
It looks like kDNSServiceFlagsShareConnection is half-implemented, because I've also seen cases in which it works - this source was found by tracing back a case where it didn't work.
Service references for browsing and resolving may unfortunately not be shared. See the comments in the Bonjour documentation for the kDNSServiceFlagsShareConnection flag. Since you only browse twice, I would just let them have separate service refs instead.
So both DNSServiceBrowse() and DNSServiceResolve() require an unallocated service ref as their first parameter.
I can't explain why your program chokes, though. The first DNSServiceBrowse() call in your example should return immediately with an error code.
Although this is an old question, it should help people looking around for answers now.
The answer by vidtige is incorrect; the ref may be shared for any operation, provided you pass the kDNSServiceFlagsShareConnection flag along with the arguments. Sample below:
m_dnsrefsearch = m_dnsservice;
DNSServiceErrorType mdnserr = DNSServiceBrowse(&m_dnsrefsearch, kDNSServiceFlagsShareConnection, 0,
                                               "_workstation._tcp", NULL,
                                               DNSServiceBrowseReplyCallback, NULL);
Reference - http://osxr.org/android/source/external/mdnsresponder/mDNSShared/dns_sd.h#0267
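For completeness, the end-to-end calling pattern that the Apple documentation quoted above describes looks roughly like this. It is only a sketch: error handling is minimal, browse_reply is a callback you provide, and given the conflicting findings above about whether browsing can really be shared, treat it purely as an illustration of the pattern:

#include <dns_sd.h>
#include <stdio.h>
#include <sys/select.h>

/* Minimal browse callback, just so there is something to dispatch to. */
static void browse_reply(DNSServiceRef ref, DNSServiceFlags flags, uint32_t ifIndex,
                         DNSServiceErrorType err, const char *name,
                         const char *type, const char *domain, void *context)
{
    printf("browse: %s.%s%s (flags=0x%x, err=%d)\n", name, type, domain, flags, err);
}

int main(void)
{
    DNSServiceRef MainRef = NULL;
    if (DNSServiceCreateConnection(&MainRef) != kDNSServiceErr_NoError) return 1;

    /* Copy the primary ref and pass the address of the copy with the share flag.
       After the call, BrowseRef identifies the subordinate browse operation. */
    DNSServiceRef BrowseRef = MainRef;
    if (DNSServiceBrowse(&BrowseRef, kDNSServiceFlagsShareConnection, 0,
                         "_http._tcp", "local", browse_reply, NULL) != kDNSServiceErr_NoError)
        return 1;

    /* All shared operations are multiplexed over the primary ref's single socket:
       wait for it to become readable, then let the library invoke the callbacks. */
    int fd = DNSServiceRefSockFD(MainRef);
    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(fd, &readfds);
        if (select(fd + 1, &readfds, NULL, NULL, NULL) > 0)
            DNSServiceProcessResult(MainRef);   /* dispatches browse_reply */
    }

    /* Not reached in this sketch: deallocate subordinates first, then the connection. */
    DNSServiceRefDeallocate(BrowseRef);
    DNSServiceRefDeallocate(MainRef);
    return 0;
}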