Opening a win-socket (TCP) in kernel mode specifying the sequence number - C

I'm writing a Windows driver (of course in C, and I'm in kernel mode) and I'd like to open a TCP socket from the outside, specifying the sequence number the first SYN packet should have.
I tried modifying the packet by filtering it with the Windows Filtering Platform, but of course it doesn't work, because the local stack thinks the correct number is the original one while the recipient's stack thinks the correct one is the modified one.
I'm looking for something like:
OpenSocket(..., UINT32 seqNum, UINT16 winSize)
or anything equivalent.
Is there a way to do that?
Thanks,
Marco

Seems like a strange thing to be doing, but if your filter can modify both incoming and outgoing packets, then it can fix up the sequence numbers in both directions.
Just figure out the offset from the original sequence number. Then you can add it to the sequence numbers of outgoing packets and subtract it from the acknowledgment numbers of incoming packets.
Each side of the conversation gets exactly what they expect, even though they disagree on what is expected.
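For illustration, here is a minimal sketch of that arithmetic alone, with all of the WFP plumbing omitted; TCP_HDR, g_seqOffset, and the two functions are hypothetical names, the offset would really need to be tracked per connection, and a real callout must also recompute the TCP checksum after rewriting these fields:

#include <ntddk.h>

/* Hypothetical minimal view of the TCP header; real filtering code
 * would locate these fields in the packet buffer handed to the
 * WFP callout. */
typedef struct {
    UINT16 srcPort;
    UINT16 dstPort;
    UINT32 seq;   /* network byte order on the wire */
    UINT32 ack;   /* network byte order on the wire */
    /* ... remaining header fields ... */
} TCP_HDR;

/* Offset between the sequence number the local stack chose and the
 * one forced onto the wire (illustrative; track this per connection). */
static UINT32 g_seqOffset;

VOID FixOutgoing(TCP_HDR *h)
{
    /* Shift our sequence number; unsigned wraparound gives the
     * modulo-2^32 arithmetic TCP itself uses. */
    h->seq = RtlUlongByteSwap(RtlUlongByteSwap(h->seq) + g_seqOffset);
}

VOID FixIncoming(TCP_HDR *h)
{
    /* Undo the shift in the peer's acknowledgments so the local
     * stack sees the numbers it expects. */
    h->ack = RtlUlongByteSwap(RtlUlongByteSwap(h->ack) - g_seqOffset);
}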

Related

Sending multiple packets very fast in C

I'm trying to make a multiplayer game in C, but when I send multiple packets like "ARV 2\n\0" and "POS 2 0 0\n\0" from the server to the client (with send()) and then try to read them with recv(), the client only finds 1 packet that appears to be the 2 packets combined into 1.
So I'm asking: is that normal? And if yes, how could I force my client to read the packets one by one? (Or my server to send them one by one, if the problem comes from the send call.)
Thanks!
Short answer: yes, this is normal. You are using TCP/IP, I assume. It is a byte-stream protocol; there are no "packets". The network and the OS on either end may combine and split the data you send in any way that fits their buffers or parts of the network. The only thing guaranteed is that you get the same bytes in the same order.
You need to use your own packet framing. For a text protocol, separate packets with, for example, '\0' bytes or newlines. Also note that the network or OS may give you partial packets in a single "read", so you need to handle that in your code as well. This is easiest if the packet separator is a single byte.
Especially for a binary protocol, where there are no "unused" byte values to mark packet boundaries, you could write the length of the packet as binary data, then that many data bytes, then again a length, data, and so on. Note that the stream may get split across different "read" calls even in the middle of the length field (unless the length is a single byte), so you may need a few more lines of code to handle receiving split packets.
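As a sketch of the single-byte-separator approach (assuming a newline separator and a hypothetical handle_packet callback; a real implementation would keep one buffer per connection and also cope with a full buffer that contains no separator):

#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

#define BUF_SIZE 4096

/* Accumulate bytes until a '\n' separator, hand each complete packet
 * to the application, and carry leftovers over to the next recv(). */
void read_packets(int fd, void (*handle_packet)(const char *, size_t))
{
    static char buf[BUF_SIZE];   /* one connection only, for brevity */
    static size_t used = 0;

    ssize_t n = recv(fd, buf + used, sizeof buf - used, 0);
    if (n <= 0)
        return;                  /* connection closed or error */
    used += (size_t)n;

    char *start = buf;
    char *nl;
    while ((nl = memchr(start, '\n', used - (size_t)(start - buf))) != NULL) {
        handle_packet(start, (size_t)(nl - start));  /* one packet */
        start = nl + 1;
    }
    used -= (size_t)(start - buf);   /* keep any partial packet */
    memmove(buf, start, used);
}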
Another option would be to use the UDP protocol, which indeed sends packets. But UDP packets may get lost or be delivered in the wrong order (and have a few other problems), so you need to handle that somehow, and this often results in you re-inventing TCP, poorly. So unless you find that TCP/IP just won't cut it, stick with it.

How to count bytes using send() in C, including protocol size?

I'm trying to count the total bytes sent by my program, but I can't get an accurate value.
All my functions call a single function that sends data to my server using the send() function.
In this function, I take the return value of send() and add it to a global counter. This is working fine.
But when I compare against the 'iftop' utility (sudo iftop -f 'port 33755'), I'm getting more data in iftop than in my app... and my guess is that this is because of TCP headers/protocol data. I really don't know how to calculate this. I'm sending packets using send() with variable data lengths, so I'm not sure whether it's possible to detect/calculate the TCP packet size from there. I know that each TCP packet carries a TCP header, but I'm not sure how many packets are sent.
May I assume that for every call to send(), if the data length is less than 1518 (the TCP packet size limit?), then it's only one TCP packet and I need to add one TCP header length? Even if I sent one byte? If so, how many extra bytes does the TCP structure add?
For information: I'm using GCC on Linux as the compiler.
Thanks!
How to count bytes using send() in C, including protocol size?
There is no reliable way to do so from within your program. You can compute a minimum total number of bytes required to transmit the payload you count, subject to a few assumptions, but you would need to monitor from the kernel side to determine the exact number of bytes.
May I assume that for every call to send(), if the data length is less than 1518 (the TCP packet size limit?), then it's only one TCP packet and I need to add one TCP header length?
No, that would not be a safe assumption. The main problem is that the kernel does not necessarily match the data transferred by each send() call to its own sequence of packets. It may combine data from multiple send()s into a smaller number of packets. Additionally, however, it may use either a smaller MTU or a larger one than Ethernet's default of 1500 bytes, depending on various factors, and, furthermore, you need to fit packet headers into the chosen MTU, so the payload carried by one packet is smaller than that.
I suspect you're making this too hard. If this is a task that has been assigned to you -- a homework problem, for example -- then my first guess would be that it is intended that you count only the total payload size, not the protocol overhead. Alternatively, if you do need to account for the overhead, then my guess would be that you are supposed to estimate, based on measured or assumed characteristics of the network. If you've set this problem for yourself, then I can only say that people generally make one of the two computations I just described, not the one you asked about.
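If an estimate is all that is needed, a rough lower bound can be computed under explicit assumptions; the sketch below assumes a 1460-byte MSS, bare 20-byte IPv4 and 20-byte TCP headers per segment, no retransmissions, and ignores pure ACKs (the function name is illustrative):

#include <stdint.h>

/* Rough lower bound on wire bytes for a given total payload size.
 * Assumes: 1460-byte MSS, 20-byte IPv4 + 20-byte TCP header per
 * segment, no options, no retransmissions, pure ACKs not counted. */
uint64_t estimate_wire_bytes(uint64_t payload_bytes)
{
    const uint64_t mss = 1460;
    const uint64_t per_segment_overhead = 20 + 20;
    uint64_t segments = (payload_bytes + mss - 1) / mss;  /* ceiling */
    return payload_bytes + segments * per_segment_overhead;
}

Real traffic also carries Ethernet framing, TCP options such as timestamps, the handshake, and the acknowledgment stream, so a tool like iftop will report more than this bound.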

Efficient way of validating a UDP communication protocol with a single Server and multiple Clients

I have developed a single-Server/multiple-Client UDP application, where the Server can handle x clients at a time. The Server has x threads, each dedicated to one Client.
The code works perfectly fine. Now I want to check my application for all possible scenarios, i.e. validate my application. For this purpose, I need to design a test bed.
Initial Design:
The test bed I initially designed has the following functionalities:
1. The Server GUI has a button on it. When the button is clicked, each thread in the Server reads a text file, picks up a few bytes of the text file, and sends those chunks to its respective client. The thread then picks the next chunk of bytes from the text file, sends those chunks to the client, and so on until EOF is reached.
2. The Client on the other side keeps receiving these chunks of bytes, creates a text file, and keeps storing these chunks of bytes in its text file.
3. When EOF from the Server is received, the Client starts sending the completely received text file back to the Server over its socket.
4. When the file is completely received back (echoed), the Server compares the two text files, the sent file and the echoed one. If both files are the same, the communication process has occurred without any fault and the communication protocol is validated.
The above-mentioned validation technique (sending the text file, receiving the echoed file and then comparing both) checks the following things:
The number of bytes sent = the number of bytes received.
No data is corrupted.
The data is received in the proper order.
If any of the above three conditions is not fulfilled, there is some error in the communication.
Now I have been asked to make changes to this test bed and add more functionalities to it. Can the procedure that I am using actually check the above-mentioned 3 conditions in all scenarios?
Are there some other conditions that must be checked besides the 3 mentioned above?
What could be other methods of checking the communication protocol besides the one I designed, i.e. sending a text file, getting it echoed, and then comparing?
I have to implement more functionalities in this test bed to make the validation system more efficient, or completely replace it with some better option.
Please help me with your suggestions.
Thanks in advance :)
The first two of your conditions are guaranteed by UDP. Picking "a few bytes", i.e. anything less than 65535 bytes (64 KiB isn't really a "few" bytes), will result in a single datagram being sent, and anything larger than that will fail. You will not want to max out the largest possible datagram size, though, as it will incur IP fragmentation (staying below 1280 bytes is a good idea).
You will be able to receive exactly the amount you sent or nothing at all, never more or less. UDP does not guarantee that any datagram that is sent out arrives (it cannot guarantee that, since IP does not), but it does guarantee that the entire datagram arrives as-is -- or nothing. Never anything in between.
It further guarantees that the data inside the datagram matches its checksum (the underlying protocols including IP/ethernet/ATM further do their own checksumming) and thus arrives in the same binary representation as it was sent. In other words, data arrives in order (inside the datagram) and is not corrupted.
It is of course in theory possible that a bit error passes all 3 layers of checksums, but this is extremely unlikely and will not happen in practice. Unless you need to guard against someone maliciously tampering with packets, you do not need to worry. The kinds of bit errors that happen accidentally are reliably picked up by the checksums used in these protocols.
If, on the other hand, you do need to guard against malicious modification of your data, you must add a MAC (or a checksum and encrypt the entire packet -- adding a checksum alone is useless).
To ensure that data spanning several datagrams arrives in order, you must add sequence numbers to your packets (in the same manner TCP does). And with that, you might as well use TCP, which is likely more efficient and less error-prone. One of the main reasons to use UDP is normally that in-order delivery and reliability are not needed, or sometimes that reliability is needed but in-order delivery is not.
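A minimal sketch of such an application-level header, serialized field by field to avoid struct-padding surprises (the wire format and make_packet are illustrative, not part of any standard):

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Prepend a 4-byte sequence number and a 2-byte payload length to
 * every datagram so the receiver can detect loss and reordering. */
size_t make_packet(uint8_t *out, uint32_t seq,
                   const uint8_t *payload, uint16_t len)
{
    uint32_t nseq = htonl(seq);   /* network byte order */
    uint16_t nlen = htons(len);
    memcpy(out, &nseq, 4);        /* 4-byte sequence number */
    memcpy(out + 4, &nlen, 2);    /* 2-byte payload length  */
    memcpy(out + 6, payload, len);
    return 6 + (size_t)len;       /* total datagram size */
}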
In-order delivery is the main cause of TCP's latency during packet loss (in the absence of packet loss, TCP is exactly as "fast" as UDP), so if in-order delivery is needed, there is no sane reason not to use TCP in the first place. It is a protocol that has been fine-tuned and has worked reliably for literally billions of people for 4 decades.
Also, using one socket and one thread per client is possibly not the best approach. The disk won't read any faster, and the network card won't send any faster either. UDP doesn't need a socket per client either. When using TCP, you'll have no choice but to use one socket per client, but multiplexing with a readiness-notification system will still give you much better performance and fewer opportunities for threading errors, as sketched below.
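For instance, a single thread can service many sockets with a readiness-notification loop; a minimal poll()-based sketch (the buffer size and the processing hook are placeholders):

#include <poll.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Service every ready socket from one thread instead of using one
 * thread per client. */
void serve(struct pollfd *fds, nfds_t nfds)
{
    for (;;) {
        if (poll(fds, nfds, -1) < 0)   /* block until activity */
            break;                     /* error (check errno) */
        for (nfds_t i = 0; i < nfds; i++) {
            if (fds[i].revents & POLLIN) {
                char buf[2048];
                ssize_t n = recv(fds[i].fd, buf, sizeof buf, 0);
                if (n > 0) {
                    /* ... process n bytes from this client ... */
                }
            }
        }
    }
}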
Also, sending back a checksum, such as one of the SHA family (or a MAC, if it needs to be secure), may be more efficient than echoing back the whole lot of data. The likelihood that the checksum matches while the data accidentally doesn't is negligible.
Entire revision control systems that manage millions of lines of code for millions of people (such as git) rely on the fact that this just doesn't happen in order to identify files (well, it does happen of course, you just won't live to see it).
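As a rough sketch of the digest-echo idea, assuming OpenSSL is available for hashing (link with -lcrypto; the function and its parameters are illustrative):

#include <stddef.h>
#include <sys/socket.h>
#include <openssl/sha.h>

/* Instead of echoing the whole file, hash the fully received buffer
 * and send back only the 32-byte SHA-256 digest for comparison. */
void echo_digest(int sock, const unsigned char *buf, size_t len)
{
    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(buf, len, digest);
    send(sock, digest, sizeof digest, 0);
}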
I have a question here: why UDP and not TCP, especially when you are worried about packet order and data corruption? In my view (I may be wrong), UDP is good only when the data is time-sensitive, like a video stream.
Secondly, yes, there are other methods of checking the integrity of transmitted data. The simplest may be checking an MD5 or SHA-1 checksum.
Can the procedure that I am using actually check the above-mentioned 3 conditions in all scenarios?
Yes.
What could be other methods of checking the communication protocol besides the one I designed, i.e. sending a text file, getting it echoed, and then comparing?
It doesn't have to be a file, but it has to be something you can check once you get the response. You could just generate some random data and hold on to it until you get the response.
You'd have to tell us what you really want to test. If you are trying to make sure that UDP doesn't give you bad data or out-of-order data, you're using the wrong protocol. By checking whether you get the exact data in the exact order you sent it over UDP, you're not testing anything except the networking infrastructure you have in place.
You say you want to test your application for "all possible scenarios", but that doesn't really mean anything. You're testing to see whether a behavior that is part of the UDP specification exists, hoping to find that it doesn't? Well, it does. Even if you never see it.

What is happening when a TCP sequence number arrives that is not what is expected?

I am writing a program that uses libpcap to capture packets and reassemble a TCP stream. My program simply monitors the traffic, so I have no control over the reception and transmittal of packets. My program disregards all non-TCP/IP traffic.
I calculate the next expected sequence number from the ISN and then from the successive SEQ numbers. I have it set up so that every TCP connection is uniquely identified by a tuple made up of the source IP, source port, dest IP, and dest port. Everything goes swimmingly until I receive a packet whose sequence number differs from what I am expecting. I have uploaded screenshots to help illustrate what I am describing here.
My questions are:
1. Where is the data that was in the "lost" packet?
2. How does the SEQ number order recover from this situation?
3. What can I do to handle these occurrences?
Please remember, however, that I am not writing a program that adheres to TCP. I am writing a program that passively monitors network traffic for TCP streams and attempts to save the raw data to disk, and I am confused as to why the situation stated above happens and how I can program to handle it.
Thank you
Where is the data that was in the "lost" packet?
It got dropped by someone
It got lost on the way (wrong detour) and will arrive later
How does the SEQ number order recover from this situation?
The receiver notices the segment is out of sequence and doesn't pass it to the application, thereby fulfilling its contract: an in-order, reliable byte stream. Now, what actually happens to get the missing piece is quite intricate and varies from stack to stack. In a nutshell, the stack waits for the missing piece to arrive.
The receiver can throw away out-of-sequence segments or it can queue them in a reassembly queue
The receiver can wait for the missing segment to arrive, or it can immediately resend the ACK it already sent before. Duplicate ACKs alert the peer that something is wrong (look up Fast Retransmit)
When sending acknowledgments, TCP can inform the peer that some segments arrived successfully, just out of sequence (see SACK)
What can I do to handle these occurrences
You can't do anything since you're only monitoring. You could probably get more insight into what is really happening if you also captured the response traffic.
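One plausible way for a passive monitor to cope, sketched below under simplifying assumptions: park segments that arrive ahead of the expected sequence number and deliver them once the gap is filled. All names are illustrative, and since libpcap reuses its packet buffer, real code would have to copy the parked data:

#include <stdint.h>

#define MAX_PENDING 64

/* One out-of-order segment held until the gap before it is filled. */
struct pending_seg {
    uint32_t seq;
    const uint8_t *data;   /* must be a private copy in real code */
    uint32_t len;
};

struct stream_state {
    uint32_t next_seq;     /* next sequence number we expect */
    struct pending_seg pending[MAX_PENDING];
    int npending;
};

/* Called for every captured segment of one direction of a stream. */
void on_segment(struct stream_state *s, uint32_t seq,
                const uint8_t *data, uint32_t len,
                void (*deliver)(const uint8_t *, uint32_t))
{
    if (seq == s->next_seq) {
        deliver(data, len);          /* in order: write to disk */
        s->next_seq += len;
        /* A real implementation would now rescan the pending list
         * for segments that have become contiguous. */
    } else if ((int32_t)(seq - s->next_seq) > 0 &&
               s->npending < MAX_PENDING) {
        /* Ahead of a gap: park it until the hole is filled. */
        s->pending[s->npending++] =
            (struct pending_seg){ seq, data, len };
    }
    /* seq behind next_seq: a retransmission we already have; drop. */
}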
Depending on the window size of the current TCP connection: if the new packet fits within the receive window (a multi-packet buffer), it will be entered into the receive queue (and reordered for in-order delivery to protocol clients).
If the sequence number is larger than the maximum for the current window, the packet gets rejected.
See also section 4.4.2 (INPUT PACKET HANDLER) in RFC 675.
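The acceptance test itself is simple modular arithmetic; a sketch, using unsigned wraparound so it survives sequence numbers crossing 2^32:

#include <stdint.h>

/* A segment is acceptable if its sequence number lies in
 * [rcv_nxt, rcv_nxt + rcv_wnd), computed modulo 2^32. */
int seq_in_window(uint32_t seq, uint32_t rcv_nxt, uint32_t rcv_wnd)
{
    return (uint32_t)(seq - rcv_nxt) < rcv_wnd;
}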

can one call of recv() receive data from 2 consecutive send() calls?

I have a client which sends data to a server with 2 consecutive send calls:
send(_sockfd,msg,150,0);
send(_sockfd,msg,150,0);
and the server starts receiving once the first send call has been made (let's say I'm using select):
recv(_sockfd,buf,700,0);
Note that the buffer I'm receiving into is much bigger.
My question is: is there any chance that buf will contain both msgs? Or do I need 2 recv() calls to get both msgs?
Thank you!
TCP is a stream-oriented protocol, not a message/record/chunk-oriented one. That is, all that is guaranteed is that if you send a stream of bytes, they will get to the other side in the order you sent them. No provision is made by RFC 793 or any other document about the number of segments/packets involved.
This is in stark contrast with UDP. As @R.. correctly said, in UDP an entire message is sent in one operation (notice the change in terminology: message). Try to send a giant message (several times larger than the MTU) with TCP? It's okay; it will be split for you.
When running on local networks or on localhost you will certainly notice that (generally) one send == one recv. Don't assume that. There are factors that can change this dramatically. Among them:
Nagle
Underlying MTU
Memory usage (possibly)
Timers
Many others
Of course, not having a correspondence between a send and a recv is a nuisance, and you can't always fall back on UDP. That is one of the reasons for SCTP. SCTP is a really, really interesting protocol, and it is message-oriented.
Back to TCP: this is a common nuisance. An equally common solution is this:
Establish that all packets begin with a fixed-length header (say 32 bytes)
These 32 bytes contain (possibly among other things) the size of the message that follows
Whenever you read any amount of data from the socket, append it to a buffer specific to that connection. Once you have the 32 bytes, parse the length and keep reading until you have the whole message.
It is really important to notice that there are really no messages on the wire, only bytes. Once you understand that, you will have made a giant leap towards writing network applications.
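A condensed sketch of that receive loop, shown with a 4-byte big-endian length prefix for brevity rather than the full 32-byte header (the structure is the same; all names are illustrative, and real code must validate msg_len before trusting it):

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

#define HDR_LEN 4
#define MAX_MSG 65536

/* Per-connection accumulation state. */
struct conn_buf {
    uint8_t data[HDR_LEN + MAX_MSG];
    size_t  used;
};

/* Feed freshly received bytes in; fire on_msg once per complete
 * message. Handles headers and bodies split across recv() calls. */
void feed(struct conn_buf *c, const uint8_t *in, size_t n,
          void (*on_msg)(const uint8_t *, uint32_t))
{
    memcpy(c->data + c->used, in, n);   /* assumes it fits */
    c->used += n;

    for (;;) {
        uint32_t msg_len;
        if (c->used < HDR_LEN)
            break;                      /* header still incomplete */
        memcpy(&msg_len, c->data, 4);
        msg_len = ntohl(msg_len);
        if (c->used < HDR_LEN + msg_len)
            break;                      /* body still incomplete */
        on_msg(c->data + HDR_LEN, msg_len);
        c->used -= HDR_LEN + msg_len;   /* shift remainder down */
        memmove(c->data, c->data + HDR_LEN + msg_len, c->used);
    }
}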
The answer depends on the socket type, but in general, yes it's possible. For TCP it's the norm. For UDP I believe it cannot happen, but I'm not an expert on network protocols/programming.
Yes, it can and often does. There is no way of matching up send and receive calls when using TCP/IP. Your program logic should test the return values of both send and recv calls in a loop, which terminates when everything has been sent or received.
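For the sending side, the matching idiom is a loop that keeps calling send() until every byte is out (a common pattern, not specific to this question):

#include <sys/types.h>
#include <sys/socket.h>

/* Keep calling send() until all len bytes are written or an error
 * occurs. Returns 0 on success, -1 on error (check errno). */
int send_all(int fd, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(fd, buf + sent, len - sent, 0);
        if (n <= 0)
            return -1;    /* error or connection closed */
        sent += (size_t)n;
    }
    return 0;
}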
