Maximum value of the congestion window in Ubuntu

Can anyone help me find the maximum congestion window value for TCP in Ubuntu? I can only find the initial window size, not the maximum. Is it related to the TCP buffer size?

The command
sysctl net.ipv4.tcp_wmem
will output something like
net.ipv4.tcp_wmem = 4096 16384 4194304
The last value (4194304) is the maximum send buffer size in bytes. Since the sender can never have more unacknowledged data in flight than fits in its send buffer, this value effectively caps the congestion window.
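If you want to watch the congestion window actually in use, rather than the buffer that caps it, Linux reports it per connection through the TCP_INFO socket option. A minimal sketch, assuming an already-connected TCP socket descriptor fd (the name is illustrative):

/* Sketch: read the live congestion window of a connected TCP socket.
 * Assumes Linux; "fd" is an already-connected socket descriptor. */
#include <stdio.h>
#include <netinet/in.h>
#include <netinet/tcp.h>   /* TCP_INFO, struct tcp_info */
#include <sys/socket.h>

void print_cwnd(int fd)
{
    struct tcp_info info;
    socklen_t len = sizeof(info);

    if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &info, &len) == 0)
        /* tcpi_snd_cwnd is counted in MSS-sized segments, not bytes. */
        printf("cwnd: %u segments (MSS %u bytes)\n",
               info.tcpi_snd_cwnd, info.tcpi_snd_mss);
}

Note that tcpi_snd_cwnd is reported in segments of MSS size, not in bytes.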

It also depends on the send window. With the window scale option, the window can grow well beyond the 64 KiB base limit; you can find the details here:
https://en.wikipedia.org/wiki/TCP_window_scale_option
You may want to take a look at the actual implementation; you may find the following useful:
1. general tcp implementation:
https://github.com/torvalds/linux/blob/6f0d349d922ba44e4348a17a78ea51b7135965b1/net/ipv4/tcp.c
2. TCP cubic variant:
https://github.com/torvalds/linux/blob/6f0d349d922ba44e4348a17a78ea51b7135965b1/net/ipv4/tcp_cubic.c
There are various variants of TCP congestion control; you can find out which flavour you are using based on: https://superuser.com/questions/992919/how-to-check-the-tcp-congestion-control-algorithm-flavour-in-ubuntu
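Besides the system-wide sysctl described there, the algorithm can also be queried per socket with the TCP_CONGESTION option. A small sketch, assuming a connected TCP socket fd (illustrative name):

/* Sketch: query the congestion-control algorithm of one socket via
 * the TCP_CONGESTION option (Linux). "fd" is an existing TCP socket. */
#include <stdio.h>
#include <netinet/in.h>
#include <netinet/tcp.h>   /* TCP_CONGESTION */
#include <sys/socket.h>

void print_cc(int fd)
{
    char name[16];         /* algorithm names are short, e.g. "cubic" */
    socklen_t len = sizeof(name);

    if (getsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, name, &len) == 0)
        printf("congestion control: %.*s\n", (int)len, name);
}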

Related

TCP snd_cwnd in the Linux kernel

What does tcp_sock.snd_cwnd represent in the kernel? The comments claim it is the congestion window size for the sender, but its values hover around the 10-30 range. Is it measured in MSS units?

Modify default value of socket buffer sizes in Windows

In socket programming, SO_SNDBUF and SO_RCVBUF have a default value of 8192 bytes when the size of RAM is greater than 19 MB.
Now I want to change the socket buffer sizes for my sockets. I know that one way is with setsockopt, but I want to change the system default so that the modified buffer size applies to all the sockets I create on the system. Where do I make these configuration changes on the Windows platform?
Here there is a description of how it works:
http://smallvoid.com/article/winnt-winsock-buffer.html
And the solution should be:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Afd\Parameters]
DefaultReceiveWindow = 16384
DefaultSendWindow = 16384
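For contrast, the per-socket setsockopt route that the question mentions would look roughly like this (a Winsock sketch; the socket s and the 64 KiB value are only illustrative):

/* Sketch: per-socket buffer sizing with setsockopt on Windows.
 * "s" is an already-created SOCKET; 64 KiB is an example value. */
#include <winsock2.h>
#include <stdio.h>

void set_buffers(SOCKET s)
{
    int size = 64 * 1024;

    if (setsockopt(s, SOL_SOCKET, SO_SNDBUF,
                   (const char *)&size, sizeof(size)) != 0)
        printf("SO_SNDBUF failed: %d\n", WSAGetLastError());
    if (setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                   (const char *)&size, sizeof(size)) != 0)
        printf("SO_RCVBUF failed: %d\n", WSAGetLastError());
}

The registry change affects every socket on the system; setsockopt only affects the one socket it is called on.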

C TCP raw socket options

How do I include the TCP options MSS (maximum segment size), WS (window scale), and SACK-permitted in a C raw socket?
We can set the ordinary TCP header fields, such as source, destination, SYN, and ACK:
tcp->src,
tcp->dst,
tcp->syn,
tcp->ack.
But when I try to set the reserved special options MSS and WS, as in tcp->mss or tcp->ws, I get an error saying that MSS and WS are not in the TCP header.
Can anyone show me how to include those options in a TCP raw socket?
Thank you
TCP WS in Linux
Assuming Linux, I believe you cannot directly change the TCP window size in C, because it is handled directly by the kernel.
One way to modify the TCP WS is to employ a mix of the following sysctl variables (read more about them in man tcp):
tcp_wmem
tcp_rmem
tcp_window_scaling
According to RFC 1323 (https://www.ietf.org/rfc/rfc1323.txt), the advertised window is limited to 64 KiB without window scaling; the window scale option allows a shift factor of up to 14, raising that limit to 1 GiB. According to man tcp, once you increase your socket buffer sizes beyond 64 KiB, TCP window scaling will be negotiated and used.
TCP MSS in Linux
Once again, I believe this is only possible at the kernel level. You can override the default MSS calculation (which is derived dynamically from the path MTU) using the iptables kernel module, specifically the TCPMSS target's --set-mss option.
See: http://lartc.org/howto/lartc.cookbook.mtu-mss.html
If I am wrong, please correct me.
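To answer the literal question about the compiler error: struct tcphdr has no mss or ws members because MSS, window scale, and SACK-permitted are TCP options, carried in the variable-length area after the fixed 20-byte header. With a raw socket you append the option bytes yourself and enlarge the data-offset field. A rough sketch (the packet buffer layout and values are illustrative; the IP header and checksum handling are omitted):

/* Sketch: appending MSS, window-scale and SACK-permitted options
 * after the fixed TCP header in a raw-socket packet buffer.
 * "packet" must be large enough; checksumming is left to the caller. */
#include <stdint.h>
#include <stddef.h>
#include <netinet/tcp.h>

size_t build_tcp_with_options(uint8_t *packet)
{
    struct tcphdr *tcp = (struct tcphdr *)packet;
    uint8_t *opt = packet + sizeof(struct tcphdr);  /* options start here */

    /* MSS option: kind=2, length=4, 16-bit value (1460 here), big-endian. */
    opt[0] = 2;  opt[1] = 4;  opt[2] = 1460 >> 8;  opt[3] = 1460 & 0xff;

    /* Window scale: kind=3, length=3, shift count (7 here). */
    opt[4] = 3;  opt[5] = 3;  opt[6] = 7;

    /* SACK permitted: kind=4, length=2. */
    opt[7] = 4;  opt[8] = 2;

    /* Pad the 9 option bytes to a 4-byte boundary with NOPs (kind=1). */
    opt[9] = 1;  opt[10] = 1;  opt[11] = 1;

    /* Data offset counts 32-bit words: 20-byte header + 12 option bytes. */
    tcp->doff = (sizeof(struct tcphdr) + 12) / 4;

    return sizeof(struct tcphdr) + 12;
}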

What is the default size of datagram queue length in Unix Domain Sockets (AF_UNIX)? Is it configurable?

I know the maximum datagram queue length can be found using
cat /proc/sys/net/unix/max_dgram_qlen
I wanted to know how to find the default value that is set on boot up (like in case of the /proc/sys/net/core/wmem_default for the send buffer size).
Is it possible to increase the value of max_dgram_qlen? What is the upper limit of the same?
My kernel version is 2.6.27.7. I'm new to Unix Domain Socket programming (AF_UNIX).
Thanks in advance for any comments / solutions!
The previous answers/comments missed that the OP was asking about the maximum queue length in datagrams (max_dgram_qlen), not in bytes. The OS provides settings for both.
You can set max_dgram_qlen using the following command:
sysctl net.unix.max_dgram_qlen=128
You may need to run it with sudo, and depending on your shell you may also need to put double quotes around max_dgram_qlen=128.
Also, see What's the practical limit on the size of single packet transmitted over domain socket?.
man unix(7):
The SO_SNDBUF socket option does have an effect for UNIX domain sockets, but the SO_RCVBUF option does not. For datagram sockets, the SO_SNDBUF value imposes an upper limit on the size of outgoing datagrams. This limit is calculated as the doubled (see socket(7)) option value less 32 bytes used for overhead.
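To illustrate that quote: raising SO_SNDBUF on an AF_UNIX datagram socket raises the per-datagram size limit along with it. A minimal sketch (the 256 KiB figure is just an example; per socket(7) the kernel doubles the value and caps it at /proc/sys/net/core/wmem_max):

/* Sketch: enlarge SO_SNDBUF on an AF_UNIX datagram socket, which per
 * unix(7) bounds the size of outgoing datagrams. 256 KiB is arbitrary. */
#include <sys/socket.h>
#include <sys/un.h>

int make_dgram_socket(void)
{
    int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
    int size = 256 * 1024;

    if (fd >= 0)
        setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &size, sizeof(size));
    return fd;
}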

The difficulty in designing a FIN scanning program

I want to implement a Linux C program that does the following task: use FIN scanning to find all the open ports of a host.
Here's a short description of FIN scanning (skip it if you already know it): Wikipedia: FIN scanning
In FIN scanning, an open port does not respond in any form, while a closed port sends back an RST packet. (Every host has 65536 possible TCP ports in total.) I haven't found any source code that could point me in the right direction.
My idea, which is kind of inefficient, is this: the main program iteratively sends a FIN packet to each port, while a thread is in charge of receiving the feedback (RST packets). The thread only runs for a fixed period of time and exits after the timeout. After that, the main program checks which ports have not sent an RST.
I think the more serious problem with this scheme is that it's not reliable enough, because the timeout is hard to choose. Can anyone suggest a better scheme?
nmap already does this, but I don't think you can really avoid a timeout-based implementation. A couple of seconds should suffice; set a reasonable default and then make it configurable. This is what I did for an ARP scanner I once wrote. I didn't use threads but non-blocking pcap instead, though a threaded solution would have worked just as well.
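The receive side of such a timeout-based design could look like the sketch below; the raw socket sock is assumed to be open already, and parsing the TCP header to pick out RSTs is elided:

/* Sketch: collect replies on a raw socket for timeout_sec seconds.
 * "sock" and "timeout_sec" are illustrative parameters. */
#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/time.h>

void collect_replies(int sock, int timeout_sec)
{
    char buf[4096];
    struct timeval tv = { .tv_sec = timeout_sec, .tv_usec = 0 };
    fd_set rfds;

    for (;;) {
        FD_ZERO(&rfds);
        FD_SET(sock, &rfds);
        /* On Linux, select() updates tv with the time remaining, so
         * the loop as a whole still stops after timeout_sec seconds. */
        if (select(sock + 1, &rfds, NULL, NULL, &tv) <= 0)
            break;                      /* timeout or error: give up */
        ssize_t len = recv(sock, buf, sizeof(buf), 0);
        if (len > 0) {
            /* ...parse IP/TCP headers; record ports that answered RST... */
        }
    }
}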
Maybe the nmap source code can help you.
