Is there a way to enable TCP ECN on an unprivileged TCP socket in a C Linux program?
Does any congestion algorithm that can be set through setsockopt() involve ECN?
Thank you!
Short answers: no, and technically yes (though based on the question it won't help, and I don't think it is a yes to what you actually wanted to ask).
ECN is turned on by echoing 1 into /proc/sys/net/ipv4/tcp_ecn (see ip-sysctl.txt in the kernel documentation). The default is 2, which enables ECN when the peer requests it but does not initiate requests for it. Changing this requires privileges and cannot be done per socket, so the answer to the first question is no.
Congestion-control algorithms may be set on a per-socket basis and may involve ECN; trivially, the default one does, so technically the answer to the second question is yes. But even though a congestion algorithm may involve ECN, the code in tcp_input.c and tcp_output.c makes it clear that without the sysctl set, ECN won't be used, so it won't help.
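For reference, here is a minimal sketch of the per-socket congestion-control knob referred to above, the TCP_CONGESTION socket option. "reno" is only an example name, and unprivileged processes can normally only pick algorithms listed in /proc/sys/net/ipv4/tcp_allowed_congestion_control:

    /* Sketch: select a congestion control algorithm per socket with
     * TCP_CONGESTION and read back what the kernel actually chose. */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        /* "reno" is just an example algorithm name */
        const char *algo = "reno";
        if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, algo, strlen(algo)) < 0)
            perror("setsockopt(TCP_CONGESTION)");

        char cur[16] = "";
        socklen_t len = sizeof cur;
        if (getsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, cur, &len) == 0)
            printf("congestion control now: %s\n", cur);

        close(fd);
        return 0;
    }

Note that whichever algorithm you pick, ECN itself still depends on the tcp_ecn sysctl as explained above.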
See the very good information in this answer
I need to test code which deals with ICMP packets, but there is no activity at all. So I wondered: is there any system function to trigger some activity? For instance, to generate traffic on port 80 you usually do system("wget 'webaddress'");. Is there anything similar to that for ICMP? Thanks in advance.
The ping command would get you close: it sends ICMP echo requests by default, so running it against the target host generates ICMP traffic. traceroute is another option; modern implementations often default to UDP probes, but the documentation on your system (e.g. man traceroute) should tell you the option to pass to make it use ICMP instead.
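If you want to trigger it from your program the same way the question does for port 80, a trivial variant (the host name is just a placeholder) would be:

    /* Generate some ICMP traffic, analogous to system("wget ...") */
    #include <stdlib.h>

    int main(void)
    {
        /* "example.com" is a placeholder; -c 4 sends four echo requests */
        return system("ping -c 4 example.com");
    }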
How can I implement the following scenario?
I want my FreeBSD kernel to drop UDP packets on high load.
I can set sysctl net.inet.udp.recvspace to a very low number to drop packets. But how do I implement such an application?
I assume I would need some kind of client/server application.
Any pointers are appreciated.
P.S. This is not homework, and I am not looking for exact code. I am just looking for ideas.
It will do that automatically. You don't have to do anything about it at all, let alone fiddle with kernel parameters.
Most people posting about UDP are looking for ways to stop UDP from dropping packets!
Use the (SOL_SOCKET, SO_RCVBUF) socket option via setsockopt() to change the size of your socket buffer.
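A minimal sketch of that call (the 64 KB figure is arbitrary, and the kernel may clamp or adjust the value, so it is worth reading it back with getsockopt()):

    /* Sketch: change the socket receive buffer with SO_RCVBUF
     * and check what the kernel actually gave us. */
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        int want = 65536;   /* arbitrary example size */
        if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &want, sizeof want) < 0)
            perror("setsockopt(SO_RCVBUF)");

        int got = 0;
        socklen_t len = sizeof got;
        if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &got, &len) == 0)
            printf("receive buffer is now %d bytes\n", got);

        close(fd);
        return 0;
    }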
Either tweak the sending app to 'drop' the occasional packet or, if that's not possible, route the UDP messages through a proxy that does the same thing.
Here is what I would do (I don't know whether you need a kernel module or a userspace program).
Suppose you have a function that is called when you receive a UDP datagram, and you can then choose what to do with it: drop it or process it. The processing function can run in several threads.
    FOREVER:
        DATAGRAM := DEQUEUE()
        IF (HIGHLOAD > LIMIT)
            SEND(HIGH_LOAD_TO(DATAGRAM.SOURCE))
            CONTINUE            // start again from the beginning
        HIGHLOAD := HIGHLOAD + 1
        PROCESS(DATAGRAM)

    PROCESS(DATAGRAM):
        ... PROCESS DATAGRAM ...
        HIGHLOAD := HIGHLOAD - 1
You can tweak this however you want, but it's just an idea: when you start processing a packet you increment a counter, and when processing finishes you decrement it, so you can control how many packets are being processed at any given time.
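A rough C translation of that idea, as a sketch only: MAX_IN_FLIGHT, the port number and process_datagram() are made-up names, the real work and the "notify the sender" step are left as comments, and it needs to be built with -pthread.

    /* Sketch: drop datagrams once too many are being processed concurrently. */
    #include <netinet/in.h>
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define MAX_IN_FLIGHT 64          /* arbitrary load limit */

    static atomic_int in_flight;      /* datagrams currently being processed */

    struct job { char buf[1500]; ssize_t len; };

    static void *process_datagram(void *arg)
    {
        struct job *j = arg;
        /* ... real processing would happen here ... */
        free(j);
        atomic_fetch_sub(&in_flight, 1);           /* HIGHLOAD := HIGHLOAD - 1 */
        return NULL;
    }

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(9999),   /* example port */
                                    .sin_addr.s_addr = htonl(INADDR_ANY) };
        if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("socket/bind");
            return 1;
        }

        for (;;) {                                 /* FOREVER: */
            struct job *j = malloc(sizeof *j);
            j->len = recv(fd, j->buf, sizeof j->buf, 0);   /* DATAGRAM := DEQUEUE() */
            if (j->len < 0) { free(j); continue; }

            if (atomic_load(&in_flight) >= MAX_IN_FLIGHT) {
                /* overloaded: drop it (optionally notify the sender here) */
                free(j);
                continue;
            }
            atomic_fetch_add(&in_flight, 1);       /* HIGHLOAD := HIGHLOAD + 1 */

            pthread_t t;
            if (pthread_create(&t, NULL, process_datagram, j) != 0) {
                free(j);
                atomic_fetch_sub(&in_flight, 1);
                continue;
            }
            pthread_detach(t);
        }
    }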
I need to perform data filtering based on the source unicast IPv4 address of datagrams arriving to a Linux UDP socket.
Of course, it is always possible to manually perform the filtering based on the information provided by recvfrom, but I am wondering if there could be another more intelligent/efficient approach (if possible, not using libpcap).
Any ideas?
If it's a single source you need to allow, then just use connect(2) and the kernel will do the filtering for you. As a bonus, connected UDP sockets are more efficient. This, of course, does not work for more than one source.
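A minimal sketch of the connect(2) approach, assuming a made-up peer at 192.0.2.1:6000 and a local port of 5000; once connected, the kernel discards datagrams from any other source before they reach the application:

    /* Sketch: restrict a UDP socket to a single source with connect(2). */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in local = { .sin_family = AF_INET,
                                     .sin_port = htons(5000),
                                     .sin_addr.s_addr = htonl(INADDR_ANY) };
        if (bind(fd, (struct sockaddr *)&local, sizeof local) < 0) { perror("bind"); return 1; }

        struct sockaddr_in peer = { .sin_family = AF_INET, .sin_port = htons(6000) };
        if (inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr) != 1) return 1;
        if (connect(fd, (struct sockaddr *)&peer, sizeof peer) < 0) { perror("connect"); return 1; }

        char buf[1500];
        ssize_t n = recv(fd, buf, sizeof buf, 0);  /* only delivers 192.0.2.1:6000 */
        if (n >= 0) printf("got %zd bytes from the allowed peer\n", n);

        close(fd);
        return 0;
    }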
As already stated, NetFilter (the Linux firewall) can help you here.
You could also use the UDP options of xinetd and tcpd to perform filtering.
What proportion of datagrams are you expecting to discard? If it is very high, then you may want to review your application design (for example, to make the senders not send so many datagrams which are to be discarded). If it is not very high, then you don't really care about how much effort you spend discarding them.
Suppose discarding a packet takes the same amount of (runtime) effort as processing it normally; if you discard 1% of packets, you will only be spending 1% of time discarding. However, realistically, discarding is likely to be much easier than processing messages.
Is there an API function on Linux (kernel 2.6.20) which can be used to check if a given TCP/IP port is used - bound and/or connected?
Is bind() the only solution (binding to the given port using a socket with the SO_REUSEADDR option, and then closing it)?
Hmm, according to strace -o trace.out netstat -at, netstat does this by looking at /proc/net/tcp and /proc/net/tcp6.
The used ports are in hex in the second field of the entries in those files. You can get the state of the connection from the 4th field; for example, 0A is LISTEN and 01 is ESTABLISHED.
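For illustration, a rough sketch of pulling those fields out of /proc/net/tcp from C (it ignores /proc/net/tcp6 and keeps error handling minimal):

    /* Sketch: list local ports and connection states from /proc/net/tcp.
     * local_address is the 2nd field ("IPHEX:PORTHEX"), state is the 4th. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/net/tcp", "r");
        if (!f) { perror("fopen"); return 1; }

        char line[512];
        fgets(line, sizeof line, f);              /* skip the header line */

        while (fgets(line, sizeof line, f)) {
            unsigned int lip, lport, rip, rport, state;
            /* sl local_address rem_address st ... */
            if (sscanf(line, "%*d: %8X:%4X %8X:%4X %2X",
                       &lip, &lport, &rip, &rport, &state) == 5)
                printf("local port %u state 0x%02X%s\n",
                       lport, state, state == 0x0A ? " (LISTEN)" : "");
        }

        fclose(f);
        return 0;
    }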
The holy portable BSD socket API won't allow you to know whether the port is in use before you try to allocate it. Don't try to outsmart the API. I know how tempting it is, I've been in that situation before. But any superficially smart way of doing this (by e.g. the proc filesystem) is prone to subtle errors, compatibility problems in the future, race conditions and so forth.
Grab the source of the netstat command and see how it does it. However, you will always have a race. Also, SO_REUSEADDR won't let you use a port someone else is actively using, of course, just one in TIME_WAIT.
I want to implement a Linux C program that uses FIN scanning to find all the open ports of a host.
Here's a short description of FIN scanning (skip if you already know it): Wikipedia: FIN scanning
In FIN scanning, an open port will not respond in any form, while a closed port will send back an RST packet. And every computer has 65536 possible ports in total, you know. I haven't found any source code that could give me some direction.
My idea, which is kind of inefficient, is this: the main program iteratively sends a FIN packet to each port, and a thread is in charge of receiving the replies (RST packets). This thread only runs for a period of time, and after the timeout it exits. After that, the main program checks which ports have not sent an RST.
I think a more serious problem with this scheme is that it's not reliable enough, because the timeout is hard to choose. Can anyone suggest a better scheme, please?
nmap already does this. But I don't think you can really get around a timeout-based implementation. A couple of seconds should suffice, but set a reasonable default and then make it configurable. This is what I did for an ARP scanner I wrote once. I didn't use threads but non-blocking pcap instead; a threaded solution would have worked just as well.
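To make the timeout part concrete, here is a rough sketch of the receiving side only: it waits for RST replies on a raw TCP socket (root required) and gives up after a couple of quiet seconds. Crafting and sending the FIN probes themselves is not shown, and the two-second figure is just the configurable default suggested above.

    /* Sketch: collect RST replies for a FIN scan with a select() timeout. */
    #include <arpa/inet.h>
    #include <netinet/ip.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_RAW, IPPROTO_TCP);
        if (fd < 0) { perror("socket (needs root)"); return 1; }

        for (;;) {
            /* stop once no reply has arrived for ~2 seconds (make this configurable) */
            struct timeval timeout = { .tv_sec = 2, .tv_usec = 0 };
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(fd, &rfds);

            int r = select(fd + 1, &rfds, NULL, NULL, &timeout);
            if (r <= 0) break;                    /* timeout or error */

            unsigned char pkt[4096];
            ssize_t n = recv(fd, pkt, sizeof pkt, 0);
            if (n < (ssize_t)sizeof(struct iphdr)) continue;

            struct iphdr *ip = (struct iphdr *)pkt;
            size_t hlen = ip->ihl * 4;
            if (n < (ssize_t)(hlen + sizeof(struct tcphdr))) continue;

            struct tcphdr *tcp = (struct tcphdr *)(pkt + hlen);
            if (tcp->rst) {
                struct in_addr src = { .s_addr = ip->saddr };
                printf("port %u closed (RST from %s)\n",
                       ntohs(tcp->source), inet_ntoa(src));
            }
        }

        close(fd);
        return 0;
    }

Ports that never show up here within the timeout are the candidates for "open|filtered", which is exactly why the timeout needs to be generous and tunable.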
Maybe the nmap code can help you.