The difficulty of designing a FIN scanning program in C

I want to implement a Linux C program that performs a FIN scan to find all the open ports on a host.
Here's a short description of FIN scanning (skip it if you already know the technique): Wikipedia: FIN scanning
In a FIN scan, an open port does not respond in any form, while a closed port sends back a RST packet. And every host has 65536 possible ports in total, you know. I haven't found any source code that could give me some direction.
My idea, which is admittedly inefficient, is this: the main program iteratively sends a FIN packet to each port, while a thread is in charge of receiving the feedback (RST packets). This thread only works for a period of time and exits after the timeout. After that, the main program checks which ports have not been RST'd yet.
I think the more serious problem with this scheme is that it isn't reliable enough, because the timeout is hard to choose. Can anyone suggest a better scheme, please?

nmap already does this, but I don't think you can really get around a timeout-based implementation. A couple of seconds should suffice; set a reasonable default and then make it configurable. This is what I did for an ARP scanner I wrote once. I didn't use threads but non-blocking pcap instead, though a threaded solution would have worked just as well.
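For concreteness, here is a minimal sketch of a single FIN probe with a select()-based timeout, in the spirit of the timeout-based approach above. It assumes root privileges (raw sockets), IPv4, and glibc's struct tcphdr; fin_probe and its parameters are illustrative names, error handling is trimmed, and a real scanner must also match replies against the probe's source address and ports:

/* fin_probe.c - one FIN probe with a select() timeout (sketch only). */
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/ip.h>
#include <netinet/tcp.h>
#include <sys/select.h>
#include <sys/socket.h>

/* One's-complement checksum over the TCP pseudo-header + TCP header. */
static uint16_t csum(const uint16_t *p, int nwords)
{
    uint32_t sum = 0;
    while (nwords--)
        sum += *p++;
    sum = (sum >> 16) + (sum & 0xffff);
    sum += sum >> 16;
    return (uint16_t)~sum;
}

/* Returns 1 for "open|filtered" (no reply), 0 for "closed" (RST seen). */
int fin_probe(const char *src_ip, const char *dst_ip,
              uint16_t port, int timeout_ms)
{
    /* With SOCK_RAW + IPPROTO_TCP the kernel builds the IP header for us. */
    int s = socket(AF_INET, SOCK_RAW, IPPROTO_TCP);

    struct {                          /* pseudo-header + TCP header, packed */
        uint32_t src, dst;
        uint8_t  zero, proto;
        uint16_t len;
        struct tcphdr tcp;
    } __attribute__((packed)) ph = {0};

    ph.src        = inet_addr(src_ip);
    ph.dst        = inet_addr(dst_ip);
    ph.proto      = IPPROTO_TCP;
    ph.len        = htons(sizeof(struct tcphdr));
    ph.tcp.source = htons(54321);     /* arbitrary source port */
    ph.tcp.dest   = htons(port);
    ph.tcp.doff   = 5;
    ph.tcp.fin    = 1;                /* the FIN probe itself */
    ph.tcp.window = htons(1024);
    ph.tcp.check  = csum((uint16_t *)&ph, sizeof ph / 2);

    struct sockaddr_in dst = { .sin_family = AF_INET };
    dst.sin_addr.s_addr = ph.dst;
    sendto(s, &ph.tcp, sizeof ph.tcp, 0,
           (struct sockaddr *)&dst, sizeof dst);

    /* Wait up to timeout_ms for a reply; a RST means the port is closed. */
    fd_set rd;
    FD_ZERO(&rd);
    FD_SET(s, &rd);
    struct timeval tv = { timeout_ms / 1000, (timeout_ms % 1000) * 1000 };
    int open_or_filtered = 1;
    if (select(s + 1, &rd, NULL, NULL, &tv) > 0) {
        unsigned char pkt[1500];
        ssize_t n = recv(s, pkt, sizeof pkt, 0);
        struct iphdr  *ip  = (struct iphdr *)pkt;
        struct tcphdr *tcp = (struct tcphdr *)(pkt + ip->ihl * 4);
        if (n > 0 && tcp->rst)
            open_or_filtered = 0;
    }
    close(s);
    return open_or_filtered;
}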

Maybe the nmap source code can help you.

Related

TCP Client/Server with Linux

This may be a very basic question/design but I am struggling with the correct method to handle the system I am going to define here.
I have a system with a single client (PC) that will connect to an embedded Linux board (Raspberry Pi) via TCP/IP. This will be a command/response system where the PC asks for something and the Raspberry Pi responds with the results.
For Example:
CMD => Read/Return ADC Channel X
RSP => ADC Channel X Data
For this type of system I have already defined a packet protocol that allows for this interaction. My problem is how to handle this on the Raspberry Pi. I envision having a single thread handling the TCP connection, placing incoming data into a thread-safe queue and pulling outgoing data from another thread-safe queue. The main thread would then poll the incoming queue periodically, and when data is found, process the command and generate a response. All commands have a response.
The main thread will also be doing other time critical tasks (PID control loop) so it cannot wait for incoming or outgoing data.
My guess is this type of system is fairly common and there is probably a good approach to implementing this type of system. I am very new to Linux programming but I have been programming highly embedded systems (No OS) forever. Just struggling with the correct approach for this type of design.
Note that I chose TCP/IP because it handles retrying in case of failure. In my case every command has a response, so UDP could be used if it makes the design easier or more flexible.
Any help is greatly appreciated.
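As a hedged sketch of the queue idea described in the question, here is a minimal pthread-based thread-safe FIFO; the names (msgq_t, msgq_push, msgq_pop) and the fixed-size message slots are illustrative assumptions, not a real API:

#include <pthread.h>
#include <string.h>

#define QCAP 32   /* max queued messages */
#define QMSG 64   /* max message size, including NUL */

typedef struct {
    char            buf[QCAP][QMSG];
    int             head, tail, count;
    pthread_mutex_t lock;
} msgq_t;

void msgq_init(msgq_t *q)
{
    memset(q, 0, sizeof *q);
    pthread_mutex_init(&q->lock, NULL);
}

/* Called by the TCP thread when a full command packet has arrived. */
int msgq_push(msgq_t *q, const char *msg)
{
    pthread_mutex_lock(&q->lock);
    if (q->count == QCAP) {               /* full: let the caller decide */
        pthread_mutex_unlock(&q->lock);
        return -1;
    }
    strncpy(q->buf[q->tail], msg, QMSG - 1);
    q->buf[q->tail][QMSG - 1] = '\0';
    q->tail = (q->tail + 1) % QCAP;
    q->count++;
    pthread_mutex_unlock(&q->lock);
    return 0;
}

/* Non-blocking pop for the main (PID) loop: 0 on success, -1 if empty. */
int msgq_pop(msgq_t *q, char out[QMSG])
{
    pthread_mutex_lock(&q->lock);
    if (q->count == 0) {
        pthread_mutex_unlock(&q->lock);
        return -1;
    }
    memcpy(out, q->buf[q->head], QMSG);
    q->head = (q->head + 1) % QCAP;
    q->count--;
    pthread_mutex_unlock(&q->lock);
    return 0;
}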
I tend to avoid threads if I can and only use them when I have to, because they make the program harder to debug: they turn a deterministic problem into a non-deterministic one. So my initial approach would be to see if I can do this without a thread and still achieve concurrency. This is possible using select(), which will notify your main program when there is something on the socket that needs to be read. Then, when there is something on the socket, it can read the data, process it, and wait for the next event.
The problem with this approach is that if the computation on the received data takes longer than the acceptable time before the next piece of data must be handled, you can end up with a backlog of unprocessed data on the socket. If that is going to happen, you can run the receive loop in one thread and the work function in another, or fork a new process and deal with a copy of the data there.
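For illustration, a rough sketch of that single-threaded shape, here with a zero select() timeout so the time-critical loop is never blocked; client_fd, handle_command() and pid_step() are hypothetical placeholders:

#include <string.h>
#include <sys/select.h>
#include <unistd.h>

void pid_step(void);                                        /* hypothetical */
void handle_command(const char *cmd, ssize_t n, char *rsp); /* hypothetical */

void server_loop(int client_fd)
{
    char cmd[256], rsp[256];

    for (;;) {
        pid_step();                         /* time-critical work first */

        fd_set rd;
        FD_ZERO(&rd);
        FD_SET(client_fd, &rd);
        struct timeval tv = {0, 0};         /* poll; never block the PID loop */
        if (select(client_fd + 1, &rd, NULL, NULL, &tv) > 0 &&
            FD_ISSET(client_fd, &rd)) {
            ssize_t n = read(client_fd, cmd, sizeof cmd);
            if (n <= 0)
                break;                      /* peer closed or error */
            handle_command(cmd, n, rsp);    /* every command has a response */
            write(client_fd, rsp, strlen(rsp));
        }
    }
}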
The ultra-classic Linux approach is to have a listener program that forks a new copy of itself for each new client. Linux even has a built-in daemon that does this for you (inetd, although that may have changed with all the systemd stuff). That's how sshd, telnetd and ftpd all work. No threads.
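A bare-bones sketch of that fork-per-client pattern, with the port number chosen arbitrarily and serve_client() left as a hypothetical handler:

#include <netinet/in.h>
#include <signal.h>
#include <sys/socket.h>
#include <unistd.h>

void serve_client(int fd);          /* hypothetical per-connection handler */

int main(void)
{
    signal(SIGCHLD, SIG_IGN);       /* auto-reap exited children */

    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(5000),  /* placeholder port */
                                .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    listen(lfd, 16);

    for (;;) {
        int cfd = accept(lfd, NULL, NULL);
        if (cfd < 0)
            continue;
        if (fork() == 0) {          /* child: handle one client, then exit */
            close(lfd);
            serve_client(cfd);
            close(cfd);
            _exit(0);
        }
        close(cfd);                 /* parent keeps listening */
    }
}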

How to optimize number of threads needed

I am building a UDP port scanner in C.
This is an outline of the code:
Create a socket.
Build a raw UDP packet with port i.
Send the packet and wait n milliseconds for a reply.
I need to perform those tasks X times, depending on the number of ports to be scanned; it may be up to 65535 times.
My goal is to optimize resources, considering an i386 machine running under a 3.5.0-17-generic Linux kernel.
How many threads should be created?
How many packets should be sent inside a single thread?
Thanks for your attention.
One thread, using select, epoll or similar.
All of them. Remember to rate-limit, since that doesn't happen automatically with UDP the way TCP's congestion control does it for you.
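A sketch of that single-threaded shape: one UDP socket, a plain usleep() as a crude rate limit, and select() polling for replies. The target address and timing constants are placeholder assumptions, and detecting closed ports via ICMP port-unreachable is omitted (it needs a raw ICMP socket):

#include <arpa/inet.h>
#include <stdio.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst = { .sin_family = AF_INET };
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);   /* placeholder target */

    for (int port = 1; port <= 65535; port++) {
        dst.sin_port = htons(port);
        sendto(s, "", 0, 0, (struct sockaddr *)&dst, sizeof dst);
        usleep(1000);                     /* crude rate limit: ~1000 pps */

        fd_set rd;
        FD_ZERO(&rd);
        FD_SET(s, &rd);
        struct timeval tv = {0, 0};       /* poll without blocking the loop */
        if (select(s + 1, &rd, NULL, NULL, &tv) > 0) {
            char buf[512];
            struct sockaddr_in from;
            socklen_t flen = sizeof from;
            ssize_t n = recvfrom(s, buf, sizeof buf, 0,
                                 (struct sockaddr *)&from, &flen);
            if (n >= 0)
                printf("reply from port %d\n", ntohs(from.sin_port));
        }
    }
    close(s);
    return 0;
}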

Implementing: udp receive queue dropping packets

How can I implement the following scenario?
I want my FreeBSD kernel to drop UDP packets under high load.
I can set the sysctl net.inet.udp.recvspace to a very low number to make packets get dropped. But how do I implement such an application?
I assume I would need some kind of client/server application.
Any pointers are appreciated.
P.S. This is not homework, and I am not looking for exact code, just ideas.
It will do that automatically. You don't have to do anything about it at all, let alone fiddle with kernel parameters.
Most people posting about UDP are looking for ways to stop UDP from dropping packets!
Use the (SOL_SOCKET, SO_RCVBUF) socket option via setsockopt() to change the size of your socket buffer.
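For example, a tiny sketch; the 4 KB size is an arbitrary value chosen to force early drops:

#include <sys/socket.h>

/* Shrink a socket's receive buffer so the kernel starts dropping
 * incoming UDP datagrams sooner under load. 4096 is an example value. */
int shrink_rcvbuf(int sockfd)
{
    int rcvbuf = 4096;
    return setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF,
                      &rcvbuf, sizeof rcvbuf);
}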
Either tweak the sending app to 'drop' the occasional packet or, if that's not possible, route the UDP messages through a proxy that does the same thing.
What I would do is the following (I don't know whether you need a kernel module or a user-space program).
Suppose you have a function that is called whenever a UDP datagram is received, and in it you can choose what to do: drop the datagram or process it. The processing function can hand work off to several threads.
FOREVER:
    DATAGRAM := DEQUEUE()
    IF (HIGHLOAD > LIMIT)
        SEND(HIGH_LOAD_TO(DATAGRAM.SOURCE))
        CONTINUE    // start from the beginning
    HIGHLOAD := HIGHLOAD + 1
    PROCESS(DATAGRAM)

PROCESS(DATAGRAM):
    ... PROCESS DATAGRAM ...
    HIGHLOAD := HIGHLOAD - 1
You can tweak this however you want, but that's the idea: when you start processing a packet you increment a counter, and when processing finishes you decrement it. That way you control how many packets are being processed at any given moment.
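The same idea rendered as a C sketch, using C11 atomics for the counter; dequeue(), source_of(), send_high_load_notice() and the datagram type are hypothetical placeholders standing in for DEQUEUE(), SEND() and DATAGRAM above:

#include <stdatomic.h>

typedef struct datagram datagram_t;            /* opaque placeholder type */
datagram_t *dequeue(void);                     /* blocking queue pop */
const void *source_of(const datagram_t *d);    /* placeholder accessor */
void send_high_load_notice(const void *src);   /* placeholder notifier */

#define LIMIT 64                               /* max datagrams in flight */
static atomic_int highload;

void process(datagram_t *d)
{
    /* ... process the datagram ... */
    atomic_fetch_sub(&highload, 1);
}

void receive_loop(void)
{
    for (;;) {
        datagram_t *d = dequeue();
        if (atomic_load(&highload) > LIMIT) {
            send_high_load_notice(source_of(d));
            continue;                          /* drop it; start over */
        }
        atomic_fetch_add(&highload, 1);
        process(d);                            /* may hand off to a worker */
    }
}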

Reading from the serial port in a multi-threaded program on Linux

I'm writing a program in Linux to interface, through serial, with a piece of hardware. The device sends packets of approximately 30-40 bytes at about 10 Hz. This software module will interface with others and communicate via IPC, so it must perform a specific IPC sleep when it isn't doing anything useful, to allow it to receive the messages it is subscribed to.
Currently my code looks something like:
while (1) {
    IPC_sleep(some_time);
    read_serial();
    process_serial_data();
}
The problem with this is that sometimes the read is performed while only a fraction of the next packet is available at the serial port, which means it isn't fully read until the next time around the loop. For this application it is preferable that the data is read as soon as it becomes available, and that the program doesn't block while reading.
What's the best solution to this problem?
The best solution is not to sleep! What I mean is that a good solution is probably to mix the IPC events and the serial events. select() is a good tool for this; you then have to find an IPC mechanism that is select()-compatible:
socket-based IPC is select()able
pipe-based IPC is select()able
POSIX message queues are also select()able
And then your loop looks something like this:
while (1) {
    fd_set rd;
    FD_ZERO(&rd);
    FD_SET(serial_fd, &rd);
    FD_SET(ipc_fd, &rd);
    int maxfd = (serial_fd > ipc_fd) ? serial_fd : ipc_fd;
    if (select(maxfd + 1, &rd, NULL, NULL, NULL) < 0)
        continue; /* real code should check errno here */
    if (FD_ISSET(serial_fd, &rd)) {
        parse_serial(serial_fd, serial_context);
        if (complete_serial_message)
            process_serial_data(serial_context);
    }
    if (FD_ISSET(ipc_fd, &rd))
        do_ipc();
}
read_serial is replaced with parse_serial, because if you spend all your time waiting for a complete serial packet, all the benefit of select() is lost. From your question it seems you are already doing that, since you mention getting serial data across two different loop iterations.
With the proposed architecture you get good reactivity on both the IPC and the serial side: you read serial data as soon as it is available, without ever blocking IPC processing.
Of course, this assumes you can change the IPC mechanism. If you can't, perhaps you can write a "bridge process" that talks to whatever IPC you are stuck with on one side and uses a select()able IPC to communicate with your serial code on the other.
Store away what you have received so far in a buffer of some sort.
If you don't want to block while waiting for new data, use something like select() on the serial port to check whether more data is available. If not, you can continue doing other processing instead of blocking until there is data to fetch.
When the rest of the data arrives, append it to the buffer and check whether it now contains a complete message. If it does, process it and remove it from the buffer.
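A sketch of that accumulate-and-extract pattern; the length-prefixed framing (first byte = payload length) and handle_packet() are purely illustrative assumptions, to be replaced with the device's real protocol:

#include <stddef.h>
#include <string.h>
#include <unistd.h>

void handle_packet(const unsigned char *pkt, size_t len);  /* hypothetical */

#define BUFCAP 256

static unsigned char buf[BUFCAP];
static size_t buflen;

void on_serial_readable(int serial_fd)
{
    /* Append whatever is available right now to the partial-message buffer. */
    ssize_t n = read(serial_fd, buf + buflen, BUFCAP - buflen);
    if (n <= 0)
        return;
    buflen += (size_t)n;

    /* Extract every complete packet currently in the buffer. */
    while (buflen >= 1 && buflen >= 1u + buf[0]) {
        size_t pktlen = 1u + buf[0];
        handle_packet(buf, pktlen);
        memmove(buf, buf + pktlen, buflen - pktlen);
        buflen -= pktlen;
    }
}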
You must buffer enough of each message to know whether it is complete, or whether it will become a complete, valid message within an acceptable timeframe. If it is not valid, or won't be complete in time, you toss it; otherwise you keep it and process it.
This is typically called implementing a parser for the device's protocol.
This is the (blocking) algorithm that is needed:
while (!complete_packet(p) && time_taken < timeout)
{
    p += reading_device.read(); // only blocks for t << 1 sec
    time_taken.update();
}
// now you have a complete packet or a timeout
You can intersperse a callback if you like, or inject relevant portions in your processing loops.

Buffering of stream data

I'm trying to develop a simple IRC bot. First I want to think out a proper design for this project. One of the things I'm wondering about right now is the read mechanism. I'm developing this bot on a Linux system (Fedora 12). To read from a socket I use the system call read(). I plan to use the reading functionality in the following way (the code is just an example, not something from the final product):
ssize_t uBytesRead;
while ((uBytesRead = read(iServerSocket, caBuffer, MAX_MESSAGE_SIZE)) > 0)
{
    /* 1. Parse the buffer and place it into a Message structure. */
    /* 2. Add the Message structure to a linked list that will act as a
          queue of messages to be processed. */
}
This code will run in its own thread. I chose this option because I wanted the delay between reads to be as small as possible (writes will be implemented in the same way). This is all slightly based on assumptions that I would like to clear up. My question is: what if data arrives at such a quick rate that reading and processing it (in this case just parsing it) goes slower than the rate at which data comes in? I made the assumption that this data will be buffered by the system. Is this a correct assumption? And if so:
How big is this buffer?
What happens with incoming data when this buffer gets full?
To protect my application against spam, how could I best deal with it?
I hope I've explained my issue clear enough.
Thanks in advance.
IRC uses TCP sockets for networking. Linux/POSIX TCP sockets have one data buffer for sending and another for receiving. You can resize the buffers with setsockopt() and SO_SNDBUF/SO_RCVBUF.
TCP has flow control, so when the receive buffer is getting full the receiver advertises a smaller window to slow the sender down. Received segments that didn't fit in the buffer are not acknowledged by the receiver and will eventually be retransmitted by the sender.
So that's nothing to worry about. What matters is what the sending program does when its socket's send buffer gets full: some programs close the socket, others simply discard the written data and try again, while others buffer internally.
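For the last case, a tiny sketch of how a sender might detect a full send buffer on a non-blocking socket and buffer the data itself; enqueue_for_later() is a hypothetical application-side helper:

#include <errno.h>
#include <stddef.h>
#include <sys/socket.h>

void enqueue_for_later(const char *data, size_t len);  /* hypothetical */

int send_or_buffer(int fd, const char *data, size_t len)
{
    /* MSG_DONTWAIT makes this send non-blocking regardless of socket flags. */
    ssize_t n = send(fd, data, len, MSG_DONTWAIT);
    if (n == (ssize_t)len)
        return 0;                               /* everything was written */
    if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK)
        return -1;                              /* a real error */
    size_t sent = n > 0 ? (size_t)n : 0;
    enqueue_for_later(data + sent, len - sent); /* keep the unsent remainder */
    return 0;
}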
