The error in Cooja
I'm using Contiki-ng and the examples udp-server and udp-client. I want to do a couple of things:
1- I want the client node to sniff packets and then send a packet to the server once it captures one.
I managed to do that, but there are some things that I don't understand:
a- When I start the sniffing in the udp-client, by adding this bit to the code:
radio_value_t radio_rx_mode;
NETSTACK_RADIO.get_value(RADIO_PARAM_RX_MODE, &radio_rx_mode);
NETSTACK_RADIO.set_value(RADIO_PARAM_RX_MODE, radio_rx_mode & (~RADIO_RX_MODE_ADDRESS_FILTER));
This only seems to catch packets at the udp-client app level, and when I increase QUEUEBUF_CONF_NUM to allow the server to receive these packets, it only captures the node's own packets. Any idea why this is happening?
b- When I did the same in the csma.c file, within the input_packet function, it works and captures all the packets. However, I'm not sure how to set things up so that once a packet is captured at the CSMA level, the node can send a packet from the app level.
2- Just a quick question to confirm that what I'm doing is correct: I wanted to enable ReTx in this example, so I added this to the project-conf.h file:
#define CSMA_MAX_FRAME_RETRIES 7
Will this enable the retransmission of packets, or is it doing something else?
Any help in this regard is appreciated.
Thank you.
From the CSMA code, you can try explicitly calling a function defined in your application's code, or send an event to the application's process. If this seems too ugly, perhaps the cleanest (but not as efficient) way is to call process_post() with PROCESS_BROADCAST as the first argument. This will broadcast the event to all active processes, including the application's process.
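For example, a minimal Contiki-NG sketch of the process_post() approach (the event variable, notify_app() helper, and udp_client_process name are made up for illustration and must be adapted to your actual application):

```c
/* Sketch only -- assumes the Contiki-NG process/event API.
 * Names below are placeholders, not from the udp-client example. */
#include "contiki.h"

process_event_t sniffed_event;   /* allocated once by the app process */

/* In csma.c, inside input_packet(), after a frame of interest is seen: */
static void notify_app(void)
{
  /* Broadcast so csma.c needs no reference to the app's process struct. */
  process_post(PROCESS_BROADCAST, sniffed_event, NULL);
}

/* In the udp-client application: */
PROCESS_THREAD(udp_client_process, ev, data)
{
  PROCESS_BEGIN();
  sniffed_event = process_alloc_event();
  while(1) {
    PROCESS_WAIT_EVENT();
    if(ev == sniffed_event) {
      /* build and send the UDP packet to the server here */
    }
  }
  PROCESS_END();
}
```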
CSMA does up to 7 retransmissions by default. To disable that, or to change the number of retransmissions, #define CSMA_CONF_MAX_FRAME_RETRIES to some non-default value in the project-conf.h file. Notice the CONF in the name of this preprocessor directive.
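Concretely, the project-conf.h entry would look like this (7 is already the default, shown here only to make the setting explicit):

```c
/* project-conf.h */
/* Default is 7; set to 0 to disable retransmissions entirely,
   or to another value to change the retry count. */
#define CSMA_CONF_MAX_FRAME_RETRIES 7
```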
Is it possible for a Linux kernel module to transparently detour the packet coming from upper layer (i.e. L2,L3) and NIC? For example, 1) a packet arrives from a NIC, the module gets the packet (do some processing on it) and delivers back to tcp/ip stack or 2) an app sends data, the module gets the packet (do some processing) and then, delivers the packet to an output NIC.
It is not like a sniffer, in which a copy of the packet is captured while the actual packet flow continues.
I thought of some possibilities to achieve my goal. I thought of registering an rx_handler in the kernel to get access to the incoming packets (coming from a NIC), but how do I deliver them back to the kernel stack? I mean, to allow the packet to follow the path it would have taken without the module in the middle.
Moreover, let's say an app is sending a packet through the TCP protocol. How could the module detour the packet (to literally get the packet)? Is it possible? In order to send it out through the NIC, I think dev_queue_xmit() does the job, but I'm not sure.
Does anyone know a possible solution? or any tips?
Basically, I'd like to know if there is a possibility to put a kernel module between the NIC and the MAC layer, or in the MAC layer, to do what I want. If so, does anyone have any hints, like the main kernel functions to use for those purposes?
Thanks in advance.
Yes. You can hook into the kernel networking stack by providing a customized callback in place of the default sk_data_ready function. (Note: the signature below is from older kernels; more recent kernels dropped the len argument, so the callback takes only a struct sock *.)
static void my_sk_data_ready(struct sock *sk, int len)
{
        call_customized_logic(sk, len);   /* your processing hook */
        sock_def_readable(sk, len);       /* call the default callback, or skip it */
}
Usage:
sk->sk_data_ready = my_sk_data_ready;
I need to test code which deals with ICMP packets, but there is no activity at all. So I thought: is there any system function to trigger some activity? For instance, to make port 80 work you usually do system("wget 'webaddress'");. Is there anything similar to that for ICMP? Thanks beforehand.
The ping command is what you want: it sends ICMP echo requests by default. (If you were thinking of traceroute, modern implementations often default to UDP probes, but the documentation on your system, e.g. man traceroute, should tell you the option to pass to make it use ICMP instead.)
How can I implement following scenario?
I want my FreeBSD kernel to drop UDP packets on high load.
I can set the sysctl net.inet.udp.recvspace to a very low number to make the kernel drop packets. But how do I implement such an application?
I assume I would need some kind of client/server application.
Any pointers are appreciated.
p.s. This is not a homework. And I am not looking for exact code. I am just looking for ideas.
It will do that automatically. You don't have to do anything about it at all, let alone fiddle with kernel parameters.
Most people posting about UDP are looking for ways to stop UDP from dropping packets!
Use the (SOL_SOCKET, SO_RCVBUF) socket option via setsockopt() to change the size of your socket buffer.
Either tweak the sending app to 'drop' the occasional packet or, if that's not possible, route the UDP messages through a proxy that does the same thing.
Here is what I would do; I don't know if you need a kernel module or a program.
Suppose you have a function called when you receive a UDP datagram, and then you can choose what to do: drop it or process it. And the process function can trigger several threads.
LOOP FOREVER:
    DATAGRAM := DEQUEUE()
    IF HIGHLOAD > LIMIT:
        SEND(HIGH_LOAD_TO(DATAGRAM.SOURCE))
        CONTINUE            // start again from the beginning
    HIGHLOAD := HIGHLOAD + 1
    PROCESS(DATAGRAM)

PROCESS(DATAGRAM):
    ... process datagram ...
    HIGHLOAD := HIGHLOAD - 1
You can tweak this however you want, but that's the idea: when you start processing a packet, you increment a counter, and when the processing is finished, you decrement it. So you can basically choose how many packets you are processing at any one time.
My problem is as follows:
pcap_loop() grabs all arriving frames from the listening interface, and if one of these frames contains IP data I forward it with pcap_sendpacket(). As soon as I send it, pcap_loop() grabs it and processes it again.
Does somebody know how to solve that?
Thanks in advance and regards!
On at least some platforms, sending packets through pcap will, by default, cause those packets to be seen by pcap. Windows is one of them, so that applies to WinPcap.
The standard libpcap API to turn this off, pcap_setdirection(), is not available in current versions of WinPcap. In order to turn that off, you'll have to use the WinPcap-specific pcap_open() call to open the device on which you're capturing, and will have to supply the PCAP_OPENFLAG_NOCAPTURE_LOCAL flag in the pcap_open() call.
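A hedged sketch of that WinPcap-specific call (the adapter name is a placeholder; use pcap_findalldevs_ex() to discover real device names on your system):

```c
#include <pcap.h>

char errbuf[PCAP_ERRBUF_SIZE];
/* "\\Device\\NPF_{...}" is a placeholder adapter name. */
pcap_t *p = pcap_open("\\Device\\NPF_{...}",       /* device to capture on */
                      65536,                        /* snaplen */
                      PCAP_OPENFLAG_PROMISCUOUS |
                      PCAP_OPENFLAG_NOCAPTURE_LOCAL,/* don't see our own sends */
                      1000,                         /* read timeout, ms */
                      NULL,                         /* no remote auth needed */
                      errbuf);
```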
We have a system (built in C) in place that performs communication over UDP. Recently we have found a necessity to guarantee delivery of packets. My question is: what would be the minimum additions to a UDP based system to ensure delivery using ack packets? Also, ideally without having to manipulate the packet headers. We have application level control over the packets including sequence numbers and ack/nack flags. I am wondering if this is a lost cause and anything we attempt to do will basically be a flawed and broken version of TCP. Basically, is there a minimalist improvement we can make to achieve guaranteed delivery (we do not need many features of TCP such as congestion control etc.). Thanks!
TCP intertwines 3 services that might be relevant (okay TCP does a lot more, but I'm only going to talk about 3.)
In-order delivery
Reliable delivery
Flow control
You just said that you don't need flow control, so I won't even address that (how you would advertise a window size, etc. Well, except that you'll probably need a window; I'll get to it.)
You did say that you need reliable delivery. That isn't too hard - you use ACKs to show that the sender has received a packet. Basic reliable delivery looks like:
Sender sends the packet
Receiver receives packet, and then sends an ack
If the sender doesn't get an ack (by way of a timer), he resends the packet.
Those three steps don't address these issues:
What if the ACK gets lost?
What if packets arrive out of order?
So for your application, you said you only needed reliable delivery - but didn't say anything about needing them in order. This will affect the way you implement your protocol.
(example where in-order doesn't matter: you're copying employee records from one computer to another. doesn't matter if Alice's record is received before Bob's, as long as both get there.)
So going on the presumption that you only need reliable (since that's what you said in your post), you could achieve this several ways.
Your sender can keep track of unacknowledged packets. So if it sends #3, 4, 5, and 6, and doesn't get an ACK for 3 and 4, then the sender knows that it needs to retransmit. (Though the sender doesn't know if packets 3 and 4 were lost, or if their ACKs were lost. Either way, we have to retransmit.)
But then your receiver could do cumulative ACKs - so in the above example, it would only ACK #6 if it had received 3, 4, and 5. This means the receiver would drop packet 6 if it hadn't received the ones before. If your network is very reliable, this might not be a bad option.
The protocols described above, however, do have a window - that is, how many packets does the sender send at once? This means that you do need some sort of windowing, but not for the purpose of flow control. How will you transmit window sizes?
You could do it without a window by either having the window size constant, or by doing something like stop-and-wait. The former might be a better option.
Anyway, I haven't directly answered your question, but I hope I've pointed out some of the things that are worth considering when architecting this. The task of having "reliable transfer" without parts of flow control (like windowing) and without any regard to in-order is hard! (Let me know if I should give more details about some of this stuff!)
Good luck!
Take a look at Chapter 8 and Chapter 20 of Stevens' UNIX Network Programming, Volume 1. He covers a number of different approaches. Section 20.5, "Adding Reliability to a UDP Application", is probably the most interesting to you.
I have a question running here which is collecting answers to "What to you use when you need reliable UDP". The answers are possibly much more than you want or need but you might be able to take a look at some of the protocols that have been built on UDP and grab just the ACK part that you need.
From my work with the ENet protocol (a reliable UDP protocol), I expect that you need a sequence number in each UDP datagram, a way of sending an ACK for datagrams that you've received, a way of keeping hold of datagrams that you've sent until you get an ACK for them or they time out and a way of timing the resending of datagrams for which you have yet to receive an ACK... I would also add an overall timeout for when you decide that you are never going to deliver a particular datagram, and, I guess, a callback to your application layer to inform it of this failure to deliver...
The best way to implement ACKs is to do it in the application layer. CoAP is an example of an application protocol which runs on UDP but provides reliable data transfer. It keeps a message ID for all Confirmable (CON) messages, and the receiver sends an ACK packet with the same message ID. All the ACK and message-ID fields are kept at the application layer. So if the sender doesn't receive an ACK packet with the message ID it sent, it retransmits that packet. An application developer can modify the protocol to suit the needs of reliable data transfer.
Tough problem. I would say you won't be able to achieve the reliability of TCP. However, I do understand that sometimes you need to have reliable UDP.
Gamedev forum
RUDP (a bit more hardcore)
Old Thread about reliable UDP