TCP segment without options and data - c

I want to implement a simplified TCP/IP stack, and I am now writing my tcp_connect function.
In the handshake step, can I send a TCP segment without TCP options and data (i.e. send only the TCP header on the client side)?

I don't think any options are required. However, if you don't send the Maximum Segment Size option, the default MSS that's assumed is only 576.
TCP handshaking segments don't usually include any data. However, it's legal to include it, so your stack should accept it if it receives it.

There are only three options described in RFC 793, and all of them are simple to implement:

Kind  Length  Meaning
----  ------  -------
  0     -     End of option list.
  1     -     No-Operation.
  2     4     Maximum Segment Size.
You can append just an "End of option list" option to the TCP header, or use a predefined template.
As a client, you can omit data in outgoing handshake segments, but you should be able to process incoming ones that carry data when you accept a connection.
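For illustration, here is a minimal sketch of filling a SYN's option area with the MSS option (the helper name build_syn_options is my own, not anything standard; only the option layout comes from RFC 793):

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>   /* htons() */

/* Hypothetical helper: writes a single Maximum Segment Size option
 * (kind 2, length 4, value in network byte order) into the option
 * area of an outgoing SYN. Returns the number of option bytes, which
 * must be reflected in the TCP data offset field. */
static size_t build_syn_options(uint8_t *opts, uint16_t mss)
{
    uint16_t mss_be = htons(mss);
    opts[0] = 2;                   /* Kind: Maximum Segment Size */
    opts[1] = 4;                   /* Length of this option */
    memcpy(&opts[2], &mss_be, 2);  /* MSS value, network byte order */
    /* Exactly 4 bytes, so no padding is needed. With other option
     * mixes, pad with kind 0 (End of option list) up to a multiple
     * of 4 bytes. */
    return 4;
}

With this option present, the data offset becomes 6 (a 24-byte header) instead of 5 (the bare 20-byte header).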
I tried what you said, but I cannot get a response.
Did you try using something like tcpdump on the other side? Are there any incoming segments?

Related

Accessing TCP header fields (without raw socket API)

I am writing an application that needs to get access to TCP header fields, for example, a sequence number or a TCP timestamp field.
Is it possible to get sequence numbers (or other header fields) by operating at the socket API without listening on a raw socket? (I want to avoid filtering out all the packets).
I am looking at TCP_INFO, but it has limited information.
For example, after calling a recvmsg() and getting a data buffer, is it possible to know the sequence number of the segment that delivered the last byte in that received data buffer?
Thanks
You can try to use libpcap to capture packets. This library lets you specify a packet filter using the same syntax as Wireshark, so you can limit the captured packets to a single connection. One downside is that you would still have to receive the packets in the normal way as well, which complicates things a bit and adds some performance overhead.
Update: you can also open a raw socket and set a Berkeley Packet Filter on it using the socket option SO_ATTACH_FILTER. More details are here: https://www.kernel.org/doc/Documentation/networking/filter.txt . However, you would then have to implement the TCP part of the IP stack in your code too.
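For example, here is a minimal sketch of the SO_ATTACH_FILTER approach (assuming Linux and root privileges). The single-instruction filter below accepts every packet; in practice you would generate a real filter such as "tcp port 80" with tcpdump -dd and paste its output in place of the insns array:

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/filter.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_RAW, IPPROTO_TCP);
    if (fd < 0) { perror("socket"); return 1; }

    /* Accept-all classic BPF program: return up to 0xFFFF bytes of each packet. */
    struct sock_filter insns[] = {
        BPF_STMT(BPF_RET | BPF_K, 0xFFFF),
    };
    struct sock_fprog prog = {
        .len    = sizeof(insns) / sizeof(insns[0]),
        .filter = insns,
    };

    if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, &prog, sizeof(prog)) < 0) {
        perror("setsockopt(SO_ATTACH_FILTER)");
        return 1;
    }

    /* Each recv() now returns one filtered packet, starting at the IP header. */
    unsigned char buf[65535];
    ssize_t n = recv(fd, buf, sizeof(buf), 0);
    printf("received %zd bytes\n", n);
    return 0;
}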

Can LoadRunner Receive Data By UDP Packets?

We want to receive packets from UDP sockets. The UDP packets have variable length, and we don't know how long they really are until we receive them (more precisely, until we read the length, which is written in the sixth byte).
We tried the function lrs_set_receive_option with MarkerEnd, only to find that it does not help with this issue. The reason we want to receive packet by packet is that we need to respond to some packets by sending back user-defined UDP packets.
Does anybody know how to achieve that?
UPDATE
The LR version seems to be v10 or v11.
We need to respond to an incoming UDP packet by sending back a UDP packet immediately.
The udp packet may be like this
| orc code | packet length | Real DATA |
The issue is that we can't get LoadRunner to return the data for each packet: sometimes it returns many packets in one buffer, and sometimes it waits until a timeout even though there is already an incoming packet in the socket buffer. In the C world, a call to recvfrom() on a UDP socket returns exactly one UDP datagram per call, which is what we really want (see the sketch below).
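For reference, this is the plain-C behaviour the question describes; a minimal sketch in which each recvfrom() call returns exactly one datagram and the declared length is read from the sixth byte (the port number 9000 is only a placeholder):

#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in local = {0}, peer = {0};
    socklen_t peer_len = sizeof(peer);
    unsigned char buf[65535];

    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(9000);        /* placeholder port */
    bind(fd, (struct sockaddr *)&local, sizeof(local));

    for (;;) {
        /* One call == one complete datagram, whatever its length. */
        ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                             (struct sockaddr *)&peer, &peer_len);
        if (n < 6)
            continue;                    /* too short to carry the length field */

        unsigned declared_len = buf[5];  /* length written in the sixth byte */
        printf("datagram of %zd bytes, declared length %u\n", n, declared_len);

        /* Respond immediately to the sender with a user-defined packet. */
        sendto(fd, buf, (size_t)n, 0, (struct sockaddr *)&peer, peer_len);
    }
}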
If you need raw socket support to intercept at the packet level then you are likely going to have to jump to a DLL virtual user in Visual Studio with the raw socket support.
As to your question on UDP support: yes, a Winsock user supports both core transport types, UDP and TCP, TCP being the more common variant since it is connection oriented. However, packet examination happens at layer 3 of the OSI model for the carrier protocol, IP. The ACK should come before you receive the data flow for your use in the script. When you work at the TCP and UDP level, you are looking at assembled data flows in data.ws.
Now, you are likely receiving a warning about a receive buffer size mismatch with the recorded size, which is what is taking you down this path. There is an easy way to address this: if you construct your send buffer using the lrs_set_send_buffer() function, then anything that returns will be taken as correct, ignoring the previously recorded buffer size, and the script will not have to wait for a match or a timeout before continuing.

Few queries regarding raw sockets in C

I want to make a chat room using raw socket in C. I have following problems:
Q 1 : Can I use select function to handle multiple connections in case of raw sockets ?
Q 2 : Are port numbers in sockets real ports, or are they implemented logically at the transport layer for the various applications?
Q 3 : I have only one computer, so I am using lo (the loopback interface). The process that initiates the chat sends first and then calls receive, so it ends up receiving its own data. How do I prevent that?
Any help would be appreciated, since it would increase my confidence with raw sockets.
Thanks :)
If you want this to be a real, usable chat system, stop. Don't use raw sockets. Huge mistake.
If you are just playing around because you want to put “raw sockets” under the “Experience” section of your résumé, you may read on.
You can use the select function to detect when a raw socket has a packet available to receive, and when it can accept a packet to transmit. You can pass multiple file descriptors to a single call to select if you want to check multiple raw sockets (or whatever) simultaneously.
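As a sketch (rs1 and rs2 are placeholder names for raw socket descriptors created elsewhere), the readiness check looks the same as for any other descriptor:

#include <stdio.h>
#include <sys/select.h>

/* rs1 and rs2 are two raw socket descriptors created with
 * socket(AF_INET, SOCK_RAW, ...) somewhere else in the program. */
void wait_for_packets(int rs1, int rs2)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(rs1, &readfds);
    FD_SET(rs2, &readfds);

    int maxfd = (rs1 > rs2 ? rs1 : rs2) + 1;
    if (select(maxfd, &readfds, NULL, NULL, NULL) > 0) {
        if (FD_ISSET(rs1, &readfds))
            printf("a packet is ready to receive on rs1\n");
        if (FD_ISSET(rs2, &readfds))
            printf("a packet is ready to receive on rs2\n");
    }
}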
Port numbers are part of the TCP and UDP protocols (and some other transport layer protocols). The kernel doesn't look for port numbers when receiving packets for raw sockets.
The raw(7) man page states:
All packets or errors matching the protocol number specified for the raw socket are passed to this socket.
And it also states:
A raw socket can be bound to a specific local address using the bind(2) call. If it isn't bound, all packets with the specified IP protocol are received.
Therefore you probably want to at least use different IP addresses for each end of the “connection”, and bind each end to its address.
“But!” you say, “I'm using loopback! I can only use the 127.0.0.1 address!” Not so, my friend. The entire 127.0.0.0/8 address block is reserved for loopback addresses; 127.0.0.1 is merely the most commonly-used loopback address. Linux (but perhaps not other systems) responds to every address in the loopback block. Try this in one window:
nc -v -l 10150
And then in another window:
nc -s 127.0.0.1 127.0.0.2 10150
You will see that you have created a TCP connection from 127.0.0.1 to 127.0.0.2. I think you can also bind your raw sockets to separate addresses. Then, when you receive a packet, you can check whether it's from the other end's IP address to decide whether to process or discard it.
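Here is a minimal sketch of that idea, assuming Linux and root privileges; the addresses 127.0.0.1/127.0.0.2 and the unassigned IP protocol number 200 are only examples:

#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    /* Bind a raw socket to 127.0.0.2 and discard anything that was not
     * sent by the peer at 127.0.0.1. Protocol 200 is an arbitrary,
     * unassigned IP protocol number used only for illustration. */
    int fd = socket(AF_INET, SOCK_RAW, 200);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in me = {0};
    me.sin_family = AF_INET;
    inet_pton(AF_INET, "127.0.0.2", &me.sin_addr);
    if (bind(fd, (struct sockaddr *)&me, sizeof(me)) < 0) {
        perror("bind");
        return 1;
    }

    unsigned char buf[65535];
    struct sockaddr_in peer = {0};
    socklen_t peer_len = sizeof(peer);
    ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                         (struct sockaddr *)&peer, &peer_len);
    if (n > 0) {
        char addr[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &peer.sin_addr, addr, sizeof(addr));
        if (strcmp(addr, "127.0.0.1") == 0)
            printf("packet from the other end, %zd bytes (IP header included)\n", n);
        else
            printf("ignoring packet from %s\n", addr);
    }
    return 0;
}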
Just curious, why do you want to use raw sockets? Raw sockets (AF_INET, SOCK_RAW) allow you to send out "raw" packets, where you are responsible for crafting everything but the MAC and IP layers.
A1: There are no "connections" with raw sockets. Just packets.
A2: There are no "ports" with raw sockets. Just packets. "Port numbers" as we know them are part of the TCP or UDP protocols, both of which are above the level at which we work with raw sockets.
A3: This is not specific to raw sockets - you would have this issue regardless of your protocol selection. To really answer this, we would need to know much more about your proposed protocol, since right now, you're simply blasting out raw IP packets.

C / determining content of received UDP packet

When receiving a UDP packet whose content and structure I do not know, how can I find out what the content is? Is that possible somehow? Wireshark tells me the data is:
12:03:01:00:00:00:00:00:1f:44:fe:a2:07:e2:4c:28:00:15:e1:e0:00:80:12:61
The question is just about the data itself, not header of lower layers.
Thanks in advance for your help.
Wireshark shows the UDP header, which contains two port numbers. Usually one of them is a port number reserved for the protocol in use (though not always).
You can look up which protocol that is in the port registration table (the IANA port number registry).
If you are lucky, you will find the protocol specification on the web and can then decode the content of the packet.
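If it helps, here is a minimal sketch that receives one datagram and prints the sender's port together with the payload as a colon-separated hex string like the one above (the listening port 5000 is only a placeholder):

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in local = {0}, peer = {0};
    socklen_t peer_len = sizeof(peer);
    unsigned char buf[65535];

    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(5000);   /* placeholder port */
    bind(fd, (struct sockaddr *)&local, sizeof(local));

    ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                         (struct sockaddr *)&peer, &peer_len);
    printf("from port %u: ", ntohs(peer.sin_port));
    for (ssize_t i = 0; i < n; i++)
        printf("%02x%s", buf[i], i + 1 < n ? ":" : "\n");
    return 0;
}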

Data transfer over TCP-IPv6 connection

I am working on a client-server application in C on the Linux platform. What I am trying to achieve is to change the socket used by a TCP connection on both client and server without data loss, while the client sends the data from a file to the server in the main thread. The application is multithreaded; the other threads change the socket used based on some global flags that get set.
Problem: The application has two TCP connections established, one over an IPv4 path and one over an IPv6 path. I transfer a file over the TCP-IPv4 connection first in the main thread. The other thread checks some global flags and shares the socket descriptors created for each protocol in the main thread. The send and recv calls use a pointer variable to select the socket descriptor used for the data transfer (on the server side, for example, send(*p_dataSocket.socket_id,sentence,p_size,0)). The data initially goes over TCP-IPv4. Once the global flags are set and a few other checks are made, the other thread changes the socket pointer so that send uses the IPv6 socket; this thread also takes care of communicating the change between the two hosts. All the data sent over IPv4 arrives completely before switching, and data also arrives over IPv6 right after the switch. But further into the transfer there is data loss over the IPv6 connection.
After the recv and send calls on each side, errno says ESPIPE: Illegal seek, but this error exists even before switching, so I am fairly sure it has nothing to do with the data loss.
I am using pselect() to check for available data on each socket. I could understand data loss while switching (if it isn't handled properly), but I cannot figure out why data is lost later in the transfer, after the switch. I hope I am clear on what the issue is. I have also checked sending the data over each protocol individually, without switching, and there is no data loss. If I initially transfer the data over IPv6 and then switch to IPv4, there is also no data loss. I would also really appreciate advice on how to investigate this issue beyond using errno or netstat.
When you use TCP to send data, you can't just lose part of the information in the middle. You either receive the byte stream the way it was sent, or you receive nothing at all, provided that you are using the socket-related functions correctly.
There are several points you may want to investigate.
First of all, you must make sure that you are really sending the data which is lost. Add some logging to the server-side application: dump everything that you transmit with send() into a file, and include some extra info as well, like:
Data packet no.==1234, *p_dataSocket.socket_id==11, Data=="data_contents_here", 22 bytes total; send() return==22
The important thing here is to watch the contents of *p_dataSocket.socket_id. Make sure that you are using a mutex or something similar, because you have one thread that regularly reads socket_id and another thread that occasionally changes it. You are not guaranteed to read a correct value from that address unless your threads have exclusive access to it while reading and writing. This is important both for normal program operation and for generating the debugging information.
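A minimal sketch of that protection (the names below are placeholders, not taken from your code):

#include <pthread.h>

/* One shared descriptor guarded by a mutex. The switching thread calls
 * set_active_socket(); the sending thread calls get_active_socket()
 * immediately before every send(). */
static pthread_mutex_t sock_lock = PTHREAD_MUTEX_INITIALIZER;
static int active_socket = -1;

void set_active_socket(int fd)
{
    pthread_mutex_lock(&sock_lock);
    active_socket = fd;
    pthread_mutex_unlock(&sock_lock);
}

int get_active_socket(void)
{
    pthread_mutex_lock(&sock_lock);
    int fd = active_socket;
    pthread_mutex_unlock(&sock_lock);
    return fd;
}

/* In the transfer loop:
 *     int fd = get_active_socket();
 *     ssize_t sent = send(fd, sentence, p_size, 0);
 *     ...log fd, the byte count and the return value here...
 */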
Another possible problem here is the logic that selects the sentence to send. Corruption of this variable can be hard to track down in a multithreaded program. Logging the transmitted information will help you here too.
Use any TCP sniffer to check what the TCP stack really transmits. Are there packets containing the lost data? If those packets are missing, try to find out which send() call was responsible for sending that data. If the packets do exist, check the receiving side for bugs.
The errno value should not be used alone. Its value has meaning only when you get an erroneous return from a function. Try to find out exactly when errno becomes ESPIPE. That may happen when one of the API functions returns something like -1 (it depends on the function). Once you find out where it happens, work out what is wrong in that particular piece of code (the debugger is your friend). Bear in mind that errno behavior in a multithreaded environment depends on your system's implementation. Make sure that you compile with the -pthread option (gcc), or at least with -D_REENTRANT, to minimize the risks.
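In other words, inspect errno only when a call actually reports failure, roughly like this (a sketch, not your code):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

/* Only look at errno when the call itself failed; a stale ESPIPE left
 * over from an earlier, unrelated call means nothing here. */
ssize_t send_logged(int fd, const void *buf, size_t len)
{
    ssize_t sent = send(fd, buf, len, 0);
    if (sent == -1)
        fprintf(stderr, "send(fd=%d, len=%zu) failed: %s\n",
                fd, len, strerror(errno));
    return sent;
}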
Check this question for some info about the possible cause of your situation with errno == ESPIPE, and try some of the debugging techniques suggested there. An errno value of ESPIPE hints that you are using file descriptors incorrectly somewhere in your program, for example using a socket fd as a regular file. This may be caused by a race condition (simultaneous access to one object from several threads).
