How can I implement the following scenario?
I want my FreeBSD kernel to drop UDP packets under high load.
I can set the sysctl net.inet.udp.recvspace to a very low number to drop packets, but how do I implement such an application?
I assume I would need some kind of client/server application.
Any pointers are appreciated.
P.S. This is not homework, and I am not looking for exact code, just ideas.
It will do that automatically. You don't have to do anything about it at all, let alone fiddle with kernel parameters.
Most people posting about UDP are looking for ways to stop UDP from dropping packets!
Use the (SOL_SOCKET, SO_RCVBUF) socket option via setsockopt() to change the size of your socket's receive buffer.
Either tweak the sending app to 'drop' the occasional packet or, if that is not possible, route the UDP messages through a proxy that does the same thing.
What I would do is the following. I don't know whether you need a kernel module or a user-space program.
Suppose you have a function that is called whenever you receive a UDP datagram, and you can then choose what to do with it: drop it or process it. The processing function can be spread over several threads.

FOREVER:
    DATAGRAM := DEQUEUE()
    IF HIGHLOAD > LIMIT:
        SEND(HIGH_LOAD_TO(DATAGRAM.SOURCE))
        CONTINUE            // start from the beginning
    HIGHLOAD := HIGHLOAD + 1
    PROCESS(DATAGRAM)

PROCESS(DATAGRAM):
    ... PROCESS DATAGRAM ...
    HIGHLOAD := HIGHLOAD - 1

You can tweak this however you want, but that is the idea: when you start processing a packet you increment a counter, and when processing finishes you decrement it. So you can basically choose how many packets you are processing at any given time.
I'm using Contiki-ng and the examples udp-server and udp-client. I want to do a couple of things:
1- I want the client node to sniff packets and then send a packet to the server once it does.
I managed to do that, but there are some things I don't understand:
a- When I start sniffing in the udp-client, by adding this bit to the code:
radio_value_t radio_rx_mode;
NETSTACK_RADIO.get_value(RADIO_PARAM_RX_MODE, &radio_rx_mode);
NETSTACK_RADIO.set_value(RADIO_PARAM_RX_MODE, radio_rx_mode & (~RADIO_RX_MODE_ADDRESS_FILTER));
This only seems to capture the packets at the udp-client app level, and when I increase QUEUEBUF_CONF_NUM to allow the server to receive these packets, it only captures the node's own packets. Any idea why this is happening?
b- When I did the same in the csma.c file, within the input_packet function, it works and captures all the packets. However, I'm not sure how to set things up so that once a packet is captured at the CSMA level, the node can send a packet from the app level.
2- Just a quick question to confirm whether what I'm doing is correct: I wanted to enable retransmissions (ReTx) in this example, so I added this to the project-conf file:
#define CSMA_MAX_FRAME_RETRIES 7
Will this enable the retransmission of packets, or is it doing something else?
Any help in this regard is appreciated.
Thank you.
From the CSMA code, you can try explicitly calling a function defined in your application's code, or send an event to the application's process. If this seems too ugly, perhaps the cleanest (but not as efficient) way is to call process_post() with PROCESS_BROADCAST as the first argument. This will broadcast the event to all active processes, including the application's process.
CSMA does up to 7 retransmissions by default. To disable retransmissions or change their number, #define CSMA_CONF_MAX_FRAME_RETRIES to a non-default value in the project-conf.h file. Notice the CONF in the name of this preprocessor directive.
I am totally new to socket programming and I want to write a combined TCP/UDP server socket in C, but I don't know how to combine the two.
At the moment I do know how TCP and UDP servers/clients work, and I have already coded the clients for TCP and UDP. I also know that I have to use the select() function somehow, but I don't know how.
I have to read two numbers, which are sent to the TCP/UDP server by either TCP or UDP clients, then do some calculations with these numbers and print the result on the server.
Does anyone know a tutorial or example code for that, or can anyone help me with it?
Or at least give a good explanation of the select() function.
Basically, use an event loop. It works like this:
Is there anything I need to do now? If so, do it.
Compute how long until I next need to do something.
Call select specifying all sockets I'm willing to read from in the read set and all sockets I'm trying to write to in the write set.
If we discovered any sockets that are ready for reading, read from them.
If we discovered any sockets that are ready for writing, try to write to them. If we wrote everything we need to write, remove them from the write set.
Go to step 1.
Generally, to write to a socket, you follow this logic:
Am I already trying to write to this socket? If so, just add this to the queue and we're done.
Try to write the data to the socket. If we sent it all, we're done.
Save the leftover in the queue and add this socket to our write set.
Three things to keep in mind:
You must set all sockets non-blocking.
Make sure to copy your file descriptor sets before you pass them to select because select modifies them.
For TCP connections, you will probably need your own write queue.
The idea is to mix a TCP part and a UDP part inside your server.
Then you multiplex the inputs. You could use the old select(2) multiplexing call, but it has limitations (google for the C10K problem). Using the poll(2) multiplexing call is preferable.
You may want to use an event loop library, like libev (which uses select or poll or some fancier mechanisms like epoll). BTW, graphical toolkits (e.g. GTK or Qt) also provide their own event loop machinery.
Read a good Linux programming book like Advanced Linux Programming (available online), which has good chapters about multiplexing syscalls and event loops. These topics are too complex to explain well in a few minutes in an answer like this; books explain them better.
1) Simply write a TCP/UDP server, and when it receives a message, just print it out.
2) Substitute the print code with a process_message() function.
Then you have successfully combined the TCP and UDP servers into the same procedure.
Be careful with your handling procedure: it should cope with parallel execution.
You may try this stream_route_handler; it is a C/C++ application, and you can add TCP/UDP handlers in your single C/C++ application. It has been used for heavy-traffic transportation routing and for logging services.
Example of use:

void read_data(srh_request_t *req);

void read_data(srh_request_t *req) {
    /* Note: this deliberately frees a string literal, which is invalid,
       to trigger an error when a message starting with "ERROR" arrives. */
    char *a = "CAUSE ERROR FREE INVALID";
    if (strncmp((char *)req->in_buff->start, "ERROR", 5) == 0) {
        free(a);
    }
    /* Echo the input buffer back to the sender. */
    srh_write_output_buffer_l(req, req->in_buff->start,
                              (req->in_buff->end - req->in_buff->start));
}

int main(void) {
    srh_instance_t *instance = srh_create_routing_instance(24, NULL, NULL);
    srh_add_udp_fd(instance, 12345, read_data, 1024);  /* UDP on port 12345 */
    srh_add_tcp_fd(instance, 3232, read_data, 64);     /* TCP on port 3232  */
    srh_start(instance);
    return 0;
}
If you are writing a C++ program, you may like this sample code:
stream route with spdlog
I have the following code:
{
send(dstSocket, rcvBuffer, recvMsgSize, 0);
sndMsgSize = recv(dstSocket, sndBuffer, RCVBUFSIZE, 0);
send(rcvSocket, sndBuffer, sndMsgSize, 0);
recvMsgSize = recv(rcvSocket, rcvBuffer, RCVBUFSIZE, 0);
}
which should eventually become part of a generic TCP proxy. As it stands, it doesn't work quite correctly, since recv() waits for input, so the data only gets transmitted in chunks, depending on where the loop currently is.
From what I read, I need something like "non-blocking sockets" and a mechanism to monitor them. This mechanism, as I found out, is select, poll, or epoll on Linux. Could anyone confirm that I am on the right track here? Or could this exercise also be done with blocking sockets?
Regards
You are on the right track.
"select" and "poll" are system calls where you can pass in one or more sockets and block (for a specific amount of time) until data has been received (or ready for sending) on one of those sockets.
"non-blocking sockets" is a setting you can apply to a socket (or a recv call flag) such that if you try to call recv, but no data is available, the call will return immediately. Similar semantics exist for "send". You can use non-blocking sockets with or without the select/poll method described above. It's usually not a bad idea to use non-blocking operations just in case you get signaled for data that isn't there.
"epoll" is a highly scalable version of select and poll. A "select" set is limited to FD_SETSIZE descriptors (commonly 64 on Windows and 1024 on Linux), and it takes a performance hit as the number of monitored sockets goes up. "epoll" can scale up to thousands of simultaneous network connections.
Yes, you are on the right track. Use non-blocking sockets, passing their file descriptors to select() (see FD_SET()).
This way select() will monitor them for events (read/write).
When select() returns, you can check which fd has an event pending (see FD_ISSET()) and handle it.
You can also set a timeout on select(), and it will return after that period even if no events have occurred.
Yes, you'll have to use one of those mechanisms. poll is portable and, IMO, the easiest one to use. You don't have to turn off blocking in this case, provided you use a small enough value for RCVBUFSIZE (around 2k-10k should be appropriate). Non-blocking sockets are a bit more complicated to handle, since if you get EAGAIN on send you can't just loop and try again (well, you can, but you shouldn't, since it uses CPU unnecessarily).
But I would recommend to use a wrapper such as libevent. In this case a struct bufferevent would work particularly well. It will make a callback when new data is available, and you just queue it up for sending on the other socket.
I tried to find a bufferevent example, but they seem to be scarce. The documentation is here anyway: http://monkey.org/~provos/libevent/doxygen-2.0.1/index.html
I'm writing a program in linux to interface, through serial, with a piece of hardware. The device sends packets of approximately 30-40 bytes at about 10Hz. This software module will interface with others and communicate via IPC so it must perform a specific IPC sleep to allow it to receive messages that it's subscribed to when it isn't doing anything useful.
Currently my code looks something like:
while(1){
IPC_sleep(some_time);
read_serial();
process_serial_data();
}
The problem with this is that sometimes the read is performed while only a fraction of the next packet is available at the serial port, which means it isn't all read until the next time around the loop. For this specific application it is preferable that the data is read as soon as it's available, and that the program doesn't block while reading.
What's the best solution to this problem?
The best solution is not to sleep! What I mean is that a good solution is probably to mix the IPC event and the serial event. select is a good tool for this. Then you have to find an IPC mechanism that is select-compatible:
socket-based IPC is select()able
pipe-based IPC is select()able
POSIX message queues are also select()able
And then your loop looks like this:

while (1) {
    select(serial_fd | ipc_fd);  // of course this is pseudocode
    if (FD_ISSET(serial_fd, &read_set)) {
        parse_serial(serial_fd, serial_context);
        if (complete_serial_message)
            process_serial_data(serial_context);
    }
    if (FD_ISSET(ipc_fd, &read_set)) {
        do_ipc();
    }
}
read_serial is replaced with parse_serial because, if you spend all your time waiting for a complete serial packet, all the benefit of select is lost. But from your question it seems you are already doing that, since you mention getting serial data in two different loop iterations.
With the proposed architecture you have good reactivity on both the IPC and the serial side. You read serial data as soon as they are available, but without stopping to process IPC.
Of course this assumes you can change the IPC mechanism. If you can't, perhaps you can make a "bridge process" that interfaces on one side with whatever IPC you are stuck with, and on the other side uses a select()able IPC to communicate with your serial code.
Store away what you got so far of the message in a buffer of some sort.
If you don't want to block while waiting for new data, use something like select() on the serial port to check that more data is available. If not, you can continue doing some processing or whatever needs to be done instead of blocking until there is data to fetch.
When the rest of the data arrives, add to the buffer and check if there is enough to comprise a complete message. If there is, process it and remove it from the buffer.
You must cache enough of a message to know whether it is complete, or whether it will ever become a complete, valid message.
If it is not valid or won't be in an acceptable timeframe, then you toss it. Otherwise, you keep it and process it.
This is typically called implementing a parser for the device's protocol.
This is the algorithm (blocking) that is needed:

while (!complete_packet(p) && time_taken < timeout)
{
    p += reading_device.read();  // only blocks for t << 1 sec
    time_taken.update();
}
// now you have a complete packet or a timeout
You can intersperse a callback if you like, or inject the relevant portions into your processing loops.
I want to implement a Linux C program to do the following task: it uses FIN scanning to scan all the open ports of a host.
Here's a short description for the FIN scanning(skip if you already know it):Wikipedia: FIN scanning
In FIN scanning, an open port will not respond in any form, while a closed port will send back an RST packet. And every computer has 65536 possible ports in total, as you know. I haven't found any source code that could give me some direction.
My idea, which is kind of inefficient, is this: the main program iteratively sends a FIN packet to each port, and a thread is in charge of receiving the feedback (RST packets). This thread only works for a period of time, and after the timeout it exits. After that, the main program checks which ports have not been RST'd yet.
I think a more serious problem with this scheme is that it's not reliable enough, because the timeout is hard to define. Can anyone provide a better scheme?
nmap already does this, but I don't think you can really get around a timeout-based implementation. A couple of seconds should suffice; set a reasonable default and then make it configurable. This is what I did for an ARP scanner I wrote once. I didn't use threads, but instead non-blocking pcap, though a threaded solution would have worked just as well.
Maybe the nmap code can help you.