I am doing some work on worm-attack detection in RPL. In RPL, communication between clients may span multiple hops, with packets passing through many nodes.
However, only the receiver gets a tcpip_event on reception of the packet; the nodes that the route passes through do not get this event. Is there any way to detect the packet on the intermediate nodes?
You cannot get a notification or callback when a packet is forwarded. However, you can get a callback when a packet is received or sent by the lower layers.
In Contiki, use the function rime_sniffer_add for that. Check apps/powertrace/powertrace.c for an example.
In Contiki-NG the function has been renamed to netstack_sniffer_add.
Usage example:
Declare the sniffer like this, in the global scope:
RIME_SNIFFER(packet_sniffer, input_packet, output_packet);
Then add the sniffer from your code, once, at the start of the application execution:
rime_sniffer_add(&packet_sniffer);
The functions input_packet and output_packet are callbacks defined by you and can be used to examine the packets; for example, like this:
static void
input_packet(void)
{
  int rssi = (int)packetbuf_attr(PACKETBUF_ATTR_RSSI);
  printf("received a packet with RSSI=%d\n", rssi);
}
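For completeness, here is a fuller sketch of the whole setup, assuming a Contiki 3.0-style tree (the process name sniffer_process is illustrative):

#include "contiki.h"
#include "net/rime/rime.h" /* Contiki 3.0 path; older trees use "net/rime.h" */
#include <stdio.h>

static void
input_packet(void)
{
  /* called for every packet the node receives, including packets it
     only forwards; inspect attributes or payload via the packetbuf API */
  printf("in: %d bytes\n", (int)packetbuf_datalen());
}

static void
output_packet(int mac_status)
{
  /* called for every packet the node transmits, including forwarded
     ones; mac_status is the MAC layer's transmission result */
  printf("out: %d bytes, status %d\n", (int)packetbuf_datalen(), mac_status);
}

RIME_SNIFFER(packet_sniffer, input_packet, output_packet);

PROCESS(sniffer_process, "Packet sniffer");
AUTOSTART_PROCESSES(&sniffer_process);

PROCESS_THREAD(sniffer_process, ev, data)
{
  PROCESS_BEGIN();
  rime_sniffer_add(&packet_sniffer); /* register once, at startup */
  PROCESS_WAIT_EVENT_UNTIL(0);       /* keep the process alive */
  PROCESS_END();
}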
Is it possible to run the C socket library in non-blocking mode, and trigger a function on receiving any data? The function will then evaluate the received data and decide the flow of control in the program.
It would be very helpful if you could mention some references.
Yes, that is possible using the libevent library.
First, create an event base:
struct event_base *ev_base = event_base_new();
Then create an event for anything received on the socket, and add it:
struct event *read_ev = event_new(ev_base, socket_fd, EV_READ|EV_PERSIST, callback_function_ptr, callback_function_arg);
event_add(read_ev, NULL);
At the end of the function, dispatch events:
event_base_dispatch(ev_base);
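Putting it together, a self-contained sketch under the question's assumptions (socket_fd is already connected and non-blocking; the callback name read_cb and the buffer size are illustrative):

#include <event2/event.h>
#include <sys/socket.h> /* assumes a POSIX socket */
#include <stdio.h>

/* called by libevent whenever socket_fd becomes readable */
static void
read_cb(evutil_socket_t fd, short what, void *arg)
{
    char buf[1024];
    ssize_t n = recv(fd, buf, sizeof(buf), 0);
    if (n <= 0) {
        /* peer closed the connection or an error occurred: stop the loop */
        event_base_loopbreak((struct event_base *)arg);
        return;
    }
    /* evaluate the received data here and decide the flow of control */
    printf("got %zd bytes\n", n);
}

void
run_event_loop(evutil_socket_t socket_fd)
{
    struct event_base *ev_base = event_base_new();
    struct event *read_ev = event_new(ev_base, socket_fd,
                                      EV_READ | EV_PERSIST,
                                      read_cb, ev_base);
    event_add(read_ev, NULL);     /* NULL timeout: wait indefinitely */
    event_base_dispatch(ev_base); /* blocks, running the event loop */
    event_free(read_ev);
    event_base_free(ev_base);
}

For references, the official documentation at libevent.org covers this API in detail.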
I've been trying to mount a custom protocol on top of the TCP module on the NodeMCU platform. However, the protocol I am trying to embed inside the TCP data segment is binary, not ASCII-based (like HTTP, for example), so it sometimes contains a NUL character (byte 0x00), which terminates the C string inside the TCP module implementation and causes the rest of the message inside the packet to be lost.
-- server listens on 80; if data is received, print it to the console
-- and send "hello world" back to the caller
-- 30 s timeout for an inactive client
sv = net.createServer(net.TCP, 30)

function receiver(sck, data)
  print(data)
  sck:close()
end

if sv then
  sv:listen(80, function(conn)
    conn:on("receive", receiver)
    conn:send("hello world")
  end)
end
This is a simple example in which, as you can see, receiver is a callback function that prints the data from the TCP segment retrieved by the listener.
How can this be fixed? Is there a way to circumvent this using the NodeMCU library, or do I have to implement another TCP module (or modify the current one) to support arrays or tables as a return value instead of strings?
Any suggestion is appreciated.
The data you receive in the callback should not be truncated. You can check this for yourself by altering the code as follows:
function receiver(sck, data)
  print("Len: " .. #data)
  print(data)
  sck:close()
end
You will observe that, while the data is indeed only printed up to the first zero byte (by the print() function), the whole payload is present in the Lua string data, and you can process it properly with 8-bit-safe (and zero-byte-safe) methods.
While it would be easy to modify the print() function to also be zero-byte-safe, I do not consider this a bug, since the print function is meant for text. If you want to write binary data to serial, use uart.write(), e.g.:
uart.write(0, data)
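For example, you can walk the raw bytes with the zero-byte-safe string functions (a small sketch; the hex dump is just for illustration):

function receiver(sck, data)
  -- #data counts all bytes, including embedded zero bytes
  for i = 1, #data do
    print(string.format("%02X", data:byte(i)))
  end
  sck:close()
end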
Say that I have two programs, a client and a server; that I am running both on the same computer (so transfers are extremely fast); that the client socket's receive buffer is empty; and that the server will not send any data to the client unless the client tells it to do so.
Now in the client, I call WSASend() and then after it I call WSARecv():
WSASend(...); // tell the server to send me some data
WSARecv(...);
So in the code above, WSASend() is telling the server to send some data to the client (for example: the string "hello").
Now, after some time, two completion packets will be placed on the completion port:

1. The first completion packet is for WSASend(), telling me that the data has been placed in the client socket's send buffer.
2. The second completion packet is for WSARecv(), telling me that the data has been placed in the buffer that I passed to WSARecv() when I called it.
Now my question is: is it possible for the completion packet for WSARecv() to be placed on the completion port before the completion packet for WSASend(), so that when I call GetQueuedCompletionStatus() I get the completion packet for WSARecv() first?
You must never assume any ordering of the completion packets you receive; you need knowledge, independent of that ordering, of which operation completed.
Define a structure that inherits from OVERLAPPED and place all data related to the operation in it, including a tag that describes the type of operation. When you extract the OVERLAPPED pointer from the IOCP, cast it to this structure and you will know whether it is for a recv or a send, a connect or a disconnect. For example:
class IO_IRP : public OVERLAPPED
{
    //...
    DWORD m_opCode; // 'recv', 'send', 'dsct', 'cnct'

    IO_IRP(DWORD opCode, ...) : m_opCode(opCode) {}

    VOID IOCompletionRoutine(DWORD dwErrorCode, DWORD dwNumberOfBytesTransfered)
    {
        // dispatch on m_opCode (e.g. switch (m_opCode)) to the handler
        // for this particular operation, then free the IRP
        m_pObj->IOCompletionRoutine(m_packet, m_opCode, dwErrorCode,
                                    dwNumberOfBytesTransfered, Pointer);
        delete this;
    }

    static VOID CALLBACK _IOCompletionRoutine(DWORD dwErrorCode,
                                              DWORD dwNumberOfBytesTransfered,
                                              LPOVERLAPPED lpOverlapped)
    {
        // recover the derived object from the raw OVERLAPPED pointer
        static_cast<IO_IRP*>(lpOverlapped)->IOCompletionRoutine(
            dwErrorCode, dwNumberOfBytesTransfered);
    }
};

// recv
if (IO_IRP* Irp = new IO_IRP('recv', ..))
{
    WSARecv(..., Irp);
    ...
}
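The same idea in a plain C sketch, with the completion loop spelled out (the names PER_IO_DATA and OP_RECV/OP_SEND are illustrative; error handling trimmed):

#include <winsock2.h>
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

enum { OP_RECV = 1, OP_SEND = 2 };

typedef struct PER_IO_DATA {
    OVERLAPPED ov;     /* must be the first member so the cast below is valid */
    int        opCode; /* OP_RECV or OP_SEND: tags which operation this is */
    WSABUF     wsaBuf;
    char       buffer[4096];
} PER_IO_DATA;

/* the opCode tag, not the dequeue order, tells us which operation finished */
void completion_loop(HANDLE iocp)
{
    DWORD bytes;
    ULONG_PTR key;
    OVERLAPPED *pov;

    while (GetQueuedCompletionStatus(iocp, &bytes, &key, &pov, INFINITE)) {
        PER_IO_DATA *io = (PER_IO_DATA *)pov; /* ov is the first member */
        switch (io->opCode) {
        case OP_RECV:
            printf("recv completed: %lu bytes\n", bytes);
            break;
        case OP_SEND:
            printf("send completed: %lu bytes\n", bytes);
            break;
        }
        free(io);
    }
}

Each WSARecv()/WSASend() call would be issued with &io->ov of a freshly allocated PER_IO_DATA as its lpOverlapped argument.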
I am coding communication between two DSPs over SPI. The starting code is quite simple: DSP-1 sends and DSP-2 receives (of course, DSP-1 also receives, but I don't care about that so far; vice versa for DSP-2).
That works fine: one thousand 16-bit values were sent and received correctly.
However, when I add a random delay on the DSP-1 (master) side, DSP-2 begins to lose some data. This confuses me, because I didn't change anything on the DSP-2 side for receiving, and I am polling quite often.
So, any idea why a delay on the sender's side might affect the receiver? (I double-checked that DSP-1 sends the correct sequence.)
I am also thinking of converting to an interrupt mechanism; would that solve this kind of issue altogether? A sketch of what I have in mind follows the polling code below.
My DSP-2 polling code is:
for(;;) // main receive loop
{
    spi_xmit(data); // for sending; not a concern so far

    while(SpiaRegs.SPIFFRX.bit.RXFFST == 0) {} // poll until the RX FIFO has data

    while(SpiaRegs.SPIFFRX.bit.RXFFST != 0) // drain the FIFO
    {
        rdata[seq] = SpiaRegs.SPIRXBUF;
        seq++;
    }

    if(seq > 1000) break;
}
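For reference, this is the kind of interrupt-driven receive I have in mind (a sketch assuming a TI C2000-style device matching the SpiaRegs names above; the PIE setup and register fields follow TI's standard DSP28x headers and vary by part):

volatile Uint16 rdata[1001];
volatile Uint16 seq = 0;

/* RX FIFO ISR: drain everything currently in the FIFO */
__interrupt void spiaRxIsr(void)
{
    while (SpiaRegs.SPIFFRX.bit.RXFFST != 0) {
        rdata[seq++] = SpiaRegs.SPIRXBUF;
    }
    SpiaRegs.SPIFFRX.bit.RXFFOVFCLR = 1;    /* clear any RX FIFO overflow */
    SpiaRegs.SPIFFRX.bit.RXFFINTCLR = 1;    /* re-arm the RX FIFO interrupt */
    PieCtrlRegs.PIEACK.all = PIEACK_GROUP6; /* SPI-A RX sits in PIE group 6 */
}

void spiaRxIsrInit(void)
{
    EALLOW;
    PieVectTable.SPIRXINTA = &spiaRxIsr;
    EDIS;
    SpiaRegs.SPIFFRX.bit.RXFFIL = 1;   /* interrupt on every received word */
    SpiaRegs.SPIFFRX.bit.RXFFIENA = 1; /* enable the RX FIFO interrupt */
    PieCtrlRegs.PIEIER6.bit.INTx1 = 1; /* SPIRXINTA */
    IER |= M_INT6;
    EINT;
}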
I am working on a network-event-based socket application.
When the client has sent some data and there is something to be read on the socket, the FD_READ network event is generated.
Now, according to my understanding, when the server wants to write over the socket, there must be an event generated, i.e. FD_WRITE. But how is this message generated?
When there is something available to be read, FD_READ is generated automatically, but what about FD_WRITE when the server wants to write something?
Can anyone help me clear up this confusion?
Following is the code snippet:
WSAEVENT hEvent = WSACreateEvent();
WSANETWORKEVENTS events;

WSAEventSelect(newSocketIdentifier, hEvent, FD_READ | FD_WRITE);

while(1)
{
    waitRet = WSAWaitForMultipleEvents(1, &hEvent, FALSE, WSA_INFINITE, FALSE);
    //WSAResetEvent(hEvent);

    if(WSAEnumNetworkEvents(newSocketIdentifier, hEvent, &events) == SOCKET_ERROR)
    {
        //failure
    }
    else
    {
        if(events.lNetworkEvents & FD_READ)
        {
            //recvfrom()
        }
        if(events.lNetworkEvents & FD_WRITE)
        {
            //sendto()
        }
    }
}
FD_WRITE means you can write to the socket right now. If the send buffers fill up (you're sending data faster than the network can carry it), eventually you won't be able to write any more until you wait a bit.
Once you make a write that fails due to the buffers being full, this message will be sent to you to let you know you can retry that send.
It's also sent when you first open up the socket to let you know it's there and you can start writing.
http://msdn.microsoft.com/en-us/library/windows/desktop/ms741576(v=vs.85).aspx
The FD_WRITE network event is handled slightly differently. An FD_WRITE network event is recorded when a socket is first connected with a call to the connect, ConnectEx, WSAConnect, WSAConnectByList, or WSAConnectByName function or when a socket is accepted with the accept, AcceptEx, or WSAAccept function, and then after a send fails with WSAEWOULDBLOCK and buffer space becomes available. Therefore, an application can assume that sends are possible starting from the first FD_WRITE network event setting and lasting until a send returns WSAEWOULDBLOCK. After such a failure the application will find out that sends are again possible when an FD_WRITE network event is recorded and the associated event object is set.
So, ideally, you keep a flag indicating whether it's OK to write right now. It starts off as true; eventually you get WSAEWOULDBLOCK when calling sendto(), and you set it to false. Once you receive FD_WRITE, you set the flag back to true and resume sending packets, as in the sketch below.
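A minimal sketch of that flag pattern (the names canSend, try_send, and on_network_events are illustrative; error handling trimmed):

#include <winsock2.h>

static BOOL canSend = TRUE; /* starts true: FD_WRITE is recorded on connect/accept */

/* try to send one datagram; flip the flag off on WSAEWOULDBLOCK */
void try_send(SOCKET s, const char *buf, int len,
              const struct sockaddr *to, int tolen)
{
    if (!canSend)
        return; /* wait for the next FD_WRITE */

    if (sendto(s, buf, len, 0, to, tolen) == SOCKET_ERROR &&
        WSAGetLastError() == WSAEWOULDBLOCK) {
        canSend = FALSE; /* buffers full: FD_WRITE will re-enable us */
    }
}

/* in the event loop, after WSAEnumNetworkEvents() */
void on_network_events(WSANETWORKEVENTS *events)
{
    if (events->lNetworkEvents & FD_WRITE) {
        canSend = TRUE; /* OK to resume sending queued packets */
    }
}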