STUN server XORing of IPv6 addresses

I'm trying to decode a STUN success response based on RFC 5389:
If the IP address family is IPv6, X-Address is computed by taking the mapped IP address
in host byte order, XOR'ing it with the concatenation of the magic
cookie and the 96-bit transaction ID, and converting the result to
network byte order.
Magic cookie is a constant and it is 0x2112A442.
Transaction ID in my case is: 0x6FA22B0D9C5F5AD75B6A4E43.
My X-Address (IPv6) in Host Byte Order is:
0x034A67D82F4B3657B193039A8BA8FDA1
Do I have to XOR the Host Byte Order X-Address with the concatenation of the Magic Cookie and Transaction ID in Network or Host Byte Order?
In the first case, Network Byte Order concatenation is equal to:
0x2112A442 6FA22B0D9C5F5AD75B6A4E43
The first byte 0x03 is XORed with 0x21, the last byte 0xA1 is XORed with 0x43.
But in the second case, the Host Byte Order concatenation is:
0x434E6A5BD75A5F9C0D2BA26F 42A41221
The first byte 0x03 is XORed with 0x43, the last byte 0xA1 is XORed with 0x21.
Another possibility is that the Magic Cookie and Transaction ID are each converted to Host Byte Order separately, but concatenated preserving the header order:
0x42A41221 434E6A5BD75A5F9C0D2BA26F
The first byte 0x03 is XORed with 0x42, the last byte 0xA1 is XORed with 0x6F.

Everything is done in Network Byte Order.
But here's the thing, for IPv6 addresses, there is no difference between "host byte order" and "network byte order". IPv6 addresses are always understood to be an array of 16 bytes. And individual bytes don't have a "byte order". In "C" code we'd just express that IPv6 address as:
unsigned char ipv6address[16];
Or in terms of the sockaddr_in6 struct:
struct sockaddr_in6 addr;
unsigned char* ipv6address = addr.sin6_addr.s6_addr; // points to a sequence of 16 bytes
Contrast that with IPv4, which are often passed around in code as 32-bit integers. In the IPv4 case, you often wind up having to invoke the htonl and ntohl functions.
Unless you are doing something like maintaining the IPv6 address as an array of eight 16-bit integers instead of an array of bytes, you shouldn't have to think about endianness and byte order too much. (As a matter of fact, I'd encourage you not to think about byte order with regard to 16-byte IP addresses.)
Example:
My IPv6 address is this:
2001:0000:9d38:6abd:347d:0d08:3f57:fefd
As an array of hex bytes that's logically written out as:
200100009d386abd347d0d083f57fefd
When my STUN server receives a binding request from this IPv6 address, it applies the following XOR operation to send back the XOR-MAPPED-ADDRESS. Let's assume it's the same transaction id as yours and it includes the magic cookie to indicate RFC 5389 support (2112A442 6FA22B0D9C5F5AD75B6A4E43)
XOR:
200100009D386ABD347D0D083F57FEFD
2112A4426FA22B0D9C5F5AD75B6A4E43
The result:
0113A442F29A41B0A82257DF643DB0BE
Similarly, the client receiving the STUN binding response applies the same XOR operation (XOR is its own inverse) on this byte array with the same transaction id.
XOR:
0113A442F29A41B0A82257DF643DB0BE
2112A4426FA22B0D9C5F5AD75B6A4E43
The result:
200100009D386ABD347D0D083F57FEFD
You can reference the source code of Stuntman for an example of how to apply the XOR mapping operation; the XOR code is on GitHub. That code doesn't distinguish between transaction ids that include a magic cookie and those that don't; it just treats the transaction id as a logical sequence of 16 bytes.
The above takes care of the IPv6 address. But the 16-bit port value does have to get byte flipped if it is getting treated as a short or 16-bit integer. In C code, that's typically handled with a call to ntohs.
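Putting that together, here is a minimal C sketch of the client-side decode using the example values above (a sketch only: the variable names are mine, and the X-Port handling at the end is stubbed in just to show the ntohs step):
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    /* X-Address bytes exactly as they arrive on the wire. */
    unsigned char xaddr[16] = {
        0x01, 0x13, 0xA4, 0x42, 0xF2, 0x9A, 0x41, 0xB0,
        0xA8, 0x22, 0x57, 0xDF, 0x64, 0x3D, 0xB0, 0xBE
    };
    /* XOR key: the 4-byte magic cookie followed by the 12-byte transaction id. */
    unsigned char key[16] = {
        0x21, 0x12, 0xA4, 0x42,
        0x6F, 0xA2, 0x2B, 0x0D, 0x9C, 0x5F, 0x5A, 0xD7,
        0x5B, 0x6A, 0x4E, 0x43
    };
    unsigned char addr[16];
    char buf[INET6_ADDRSTRLEN];

    /* XOR byte for byte; no byte swapping needed for the address itself. */
    for (int i = 0; i < 16; i++)
        addr[i] = xaddr[i] ^ key[i];

    inet_ntop(AF_INET6, addr, buf, sizeof buf);
    printf("%s\n", buf); /* prints 2001:0:9d38:6abd:347d:d08:3f57:fefd */

    /* The 16-bit X-Port, by contrast, is an integer: convert with ntohs,
       then XOR with the top 16 bits of the magic cookie (0x2112). */
    unsigned short xport_net = 0; /* placeholder for the wire value */
    unsigned short port = ntohs(xport_net) ^ 0x2112;
    (void)port;
    return 0;
}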

Related

C - inet_pton() not producing network byte order

I got confused with the inet_pton() function. According to the man page
This function converts the character string src into a network address structure in the af address family, then copies the network address structure to dst. The af argument must be either AF_INET or AF_INET6. dst is written in network byte order.
So the function produces a network byte order value. But I got this code:
struct sockaddr_in a;
char sip[20];
inet_pton(AF_INET, "192.168.0.182", (void *)&a.sin_addr);
inet_ntop(AF_INET, (void *)&a.sin_addr, sip, 20);
printf("htonl:%08x\n", htonl(a.sin_addr.s_addr));
printf("inet_pton:%08x\n", a.sin_addr.s_addr);
printf("inet_ntop:%s\n", sip);
output:
htonl:c0a800b6
inet_pton:b600a8c0
inet_ntop:192.168.0.182
the output of inet_pton is b6.00.a8.c0, which converts to 182.0.168.192, and it's also different from the output of htonl.
Since htonl converts host byte order to network byte order, if inet_pton produces network byte order, I'd expect their outputs to be the same. Does that mean inet_pton actually produces host byte order?
If inet_pton already produces network byte order, why do I need htonl to get the right value?
Yes, inet_pton puts the address bytes into the destination buffer in network order. Let's go through an example to see what happens. Using your address of "192.168.0.182", inet_pton produces these four bytes:
c0 a8 00 b6 (hex)
192 168 0 182 (dec)
That is network byte order. When you then call htonl (which is not actually correct -- you should be calling ntohl to convert from network order to host order, but as @ZanLynx pointed out, the two functions are identical on x86), you re-order the bytes to:
b6 00 a8 c0
But then you pass that form to printf with %x as the format. That tells printf to interpret the four bytes as a single 32-bit integer, but x86 is a little endian machine, so when it loads the four bytes as an integer, the b6 is the lowest order byte and the c0 is the highest, which produces what you saw:
htonl:c0a800b6
So, in general, if you have an IPv4 address in network form, and you want to quickly display it (for debugging or whatever) in an order that "makes sense" to you (as a programmer) you would use:
printf("%x\n", ntohl(a.sin_addr.s_addr));
You could also display the single bytes as they reside in network order (which is really the exact same thing but may be easier to wrap your head around), just use an unsigned char * (or equivalently uint8_t * from <stdint.h>) to print the individual bytes:
uint8_t *ipp = (void *)&a.sin_addr.s_addr;
printf("%02x %02x %02x %02x\n", ipp[0], ipp[1], ipp[2], ipp[3]);
(You need to use an unsigned type here to avoid sign extension. In the above statement, each char will be promoted to an int in the call to printf. If you have a signed char containing, say, 0xc0, it will typically be sign-extended into a 32-bit int: 0xffffffc0. As an alternative, you can use the format specification "%02hhx" which explicitly tells printf that you are really passing it a char; then it will only look at the lowest order byte of each promoted int.)
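For reference, here is a self-contained version of the example that contrasts the three ways of looking at the value (output shown assumes a little-endian host such as x86):
#include <stdio.h>
#include <stdint.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    struct sockaddr_in a;
    char sip[INET_ADDRSTRLEN];

    inet_pton(AF_INET, "192.168.0.182", &a.sin_addr);
    inet_ntop(AF_INET, &a.sin_addr, sip, sizeof sip);

    /* Converted to host order and printed as one integer: c0a800b6 */
    printf("ntohl:     %08x\n", ntohl(a.sin_addr.s_addr));

    /* The raw network-order bytes, printed one at a time: c0 a8 00 b6 */
    uint8_t *ipp = (void *)&a.sin_addr.s_addr;
    printf("raw bytes: %02x %02x %02x %02x\n", ipp[0], ipp[1], ipp[2], ipp[3]);

    printf("inet_ntop: %s\n", sip);
    return 0;
}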

calculate IP header len

I am trying to frame an ICMP packet and send it through a raw socket. Looking at the examples, I see that the IP header length is calculated as:
iphdr.ip_hl = sizeof(struct ip) >> 2
Can you please explain why we need to right-shift sizeof(struct ip) by 2 instead of assigning a constant value?
The 'ip_hl' field of an IP (or ICMP) packet is defined as the length of the IP header, in 32-bit words.
sizeof(struct ip) yields the length of the IP header, in 8-bit bytes. Right shifting this value twice provides the length in 32-bit words, as expected in the ip_hl field.
A good reason not to use a constant for this is to eliminate magic numbers in source code. (The compiler will generate a constant value anyway for sizeof(struct ip) >> 2.)
Because the 4-bit header length field is the number of 32-bit words in the header, including options, the header might be longer than 20 bytes (field value 5), so it's not supposed to be a constant value. Your examples just assume a no-options scenario.
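A short sketch under those assumptions (the function name fill_header and the optlen parameter are mine, for illustration):
#include <stddef.h>
#include <string.h>
#include <netinet/ip.h>

/* ip_hl counts 32-bit words, so divide the byte length by 4 (>> 2).
   With no options, sizeof(struct ip) is 20 bytes, giving ip_hl = 5. */
static void fill_header(struct ip *iphdr, size_t optlen) /* optlen: multiple of 4 */
{
    memset(iphdr, 0, sizeof *iphdr);
    iphdr->ip_v  = 4;                                  /* IPv4 */
    iphdr->ip_hl = (sizeof(struct ip) + optlen) >> 2;  /* words, not bytes */
}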

Typecasting a char to an int (for socket)

I have probably asked this question twice since yesterday, but I still have not got a satisfactory answer. My problem is I have an IP address which is stored in an unsigned char. Now I want to send this IP address via a socket from client to server. People have advised me to use htonl() and ntohl() for network byte transfer, but I don't understand how: the arguments of htonl() and ntohl() are integers, so how can I use them in the case of an unsigned char? And if I can't use them, how can I make sure that if I send 130.191.166.230 in my buffer, the receiver will receive the same thing every time? Any inputs or guidance will be appreciated. Thanks in advance.
If you have an unsigned char array string (along the lines of "10.0.0.7") forming the IP address (and I'm assuming you do since there are very few 32-bit char systems around, making it rather difficult to store an IP address into a single character), you can just send that through as it is and let the other end use it (assuming you both encode characters the same way of course, such as with ASCII).
On the other hand, you may have a four byte array of chars (assuming chars are eight bits) containing the binary IP address.
The use of htonl and ntohl is to ensure that this binary data is sent through in an order that both big-endian and little-endian systems can understand.
To that end, network byte order (the order of the bytes "on the wire") is big-endian so these functions basically do nothing on big-endian systems. On little-endian systems, they swap the bytes around.
In other words, you may have the following binary data:
uint32_t ipaddress = 0x0a010203; // for 10.1.2.3
In big endian layout that would be stored as 0x0a,0x01,0x02,0x03, in little endian as 0x03,0x02,0x01,0x0a.
So, if you want to send it in network byte order (that any endian system will be able to understand), you can't just do:
write (fd, &ipaddress, 4);
since sending that from little endian system to a big endian one will end up with the bytes reversed.
What you need to do is:
uint32_t ipaddress = 0x0a010203; // for 10.1.2.3
uint32_t ip_netorder = htonl (ipaddress); // change if necessary.
write (fd, &ip_netorder, 4);
That forces it to be network byte order which any program at the other end can understand (assuming it uses ntohl to ensure it's correct for its purposes).
In fact, this scheme can handle more than just big and little endian. If you have a 32-bit integer coding scheme where ABCD (four bytes) is encoded as A,D,B,C or even where you have a bizarrely wild bit mixture forming your integers (like using even bits first then odd bits), this will still work since your local htonl and ntohl know about those formats and can convert them correctly to network byte order.
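On the receiving end, the mirror image looks like this (a sketch; read_ipaddress is my own name, fd is assumed to be a connected socket, and error handling is omitted):
#include <stdint.h>
#include <unistd.h>
#include <arpa/inet.h>

uint32_t read_ipaddress (int fd)
{
    uint32_t ip_netorder;
    read (fd, &ip_netorder, 4);   /* reads the 4 bytes sent above */
    return ntohl (ip_netorder);   /* back to host order: 0x0a010203 */
}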
An array of chars has a defined ordering and is not endian dependent: the bytes always run from low to high addresses by convention.
Do you have a string or 4 bytes?
An IPv4 address is 4 bytes (i.e., chars), so you will have 4 unsigned chars in an array somewhere. Cast that array to send it across.
e.g. unsigned char IP[4];
Use ((char *)IP) as the data buffer, and send 4 bytes from it.

Sending the array of arbitrary length through a socket. Endianness

I'm fighting with socket programming now and I've encountered a problem, which I don't know how to solve in a portable way.
The task is simple: I need to send an array of 16 bytes over the network, receive it in a client application, and parse it. I know there are functions like htonl, htons and so on to use with uint16 and uint32, but what should I do with chunks of data larger than that?
Thank you.
You say an array of 16 bytes. That doesn't really help. Endianness only matters for things larger than a byte.
If it's really raw bytes, then just send them; you will receive them just the same.
If it's really a struct you want to send, e.g.
struct msg
{
    int foo;
    int bar;
    .....
then you need to work through the buffer, pulling out the values you want.
When you send, you must assemble the packet in a standard order:
int off = 0;
*(int*)&buff[off] = htonl(foo);
off += sizeof(int);
*(int*)&buff[off] = htonl(bar);
...
When you receive:
int off = 0;
int foo = ntohl(*(int*)&buff[off]);
off += sizeof(int);
int bar = ntohl(*(int*)&buff[off]);
....
EDIT: I see you want to send an IPv6 address; those are always kept in network byte order, so you can just stream it raw.
Endianness is a property of multibyte variables such as 16-bit and 32-bit integers. It has to do with whether the high-order or low-order byte goes first. If the client application is processing the array as individual bytes, it doesn't have to worry about endianness, as the order of the bits within the bytes is the same.
htons, htonl, etc., are for dealing with a single data item (e.g. an int) that's larger than one byte. An array of bytes where each one is used as a single data item itself (e.g., a string) doesn't need to be translated between host and network byte order at all.
Bytes themselves don't have endianness, in that any single byte transmitted by a computer will have the same value in a different receiving computer. Endianness only has relevance to multibyte data types such as ints.
In your particular case it boils down to knowing what the receiver will do with your 16 bytes. If it will treat each of the 16 entries in the array as discrete single-byte values, then you can just send them without worrying about endianness. If, on the other hand, the receiver will treat your 16-byte array as four 32-bit integers, then you'll need to run each integer through htonl() prior to sending.
Does that help?
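If the receiver does treat the 16 bytes as four 32-bit integers, one portable way to unpack them, sidestepping the alignment pitfalls of casting into the buffer directly, is memcpy plus ntohl (a sketch; unpack4 is my own name):
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Unpack four 32-bit big-endian integers from a 16-byte buffer. */
void unpack4 (const unsigned char buff[16], uint32_t out[4])
{
    for (int i = 0; i < 4; i++)
    {
        uint32_t tmp;
        memcpy (&tmp, buff + 4 * i, sizeof tmp); /* avoids unaligned access */
        out[i] = ntohl (tmp);
    }
}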

Byte order with a large array of characters in C

I am doing some socket programming in C, and trying to wrestle with byte order problems. My request (send) is fine but when I receive data my bytes are all out of order. I start with something like this:
char *aResponse = (char *)malloc(512);
int total = recv(sock, aResponse, 511, 0);
When dealing with this response, each 16-bit word seems to have its bytes reversed (I'm using UDP). I tried to fix that by doing something like this:
unsigned short *_netOrder = (unsigned short *)aResponse;
unsigned short *newhostOrder = (unsigned short *)malloc(total);
for (i = 0; i < total; ++i)
{
    newhostOrder[i] = ntohs(_netOrder[i]);
}
This works OK when I am treating the data as shorts; however, if I cast the pointer to a char again, the bytes are reversed. What am I doing wrong?
OK, there seem to be problems with what you are doing on two different levels. Part of the confusion here seems to stem from your use of pointers, what type of objects they point to, and then the interpretation of the encoding of the values in the memory pointed to by the pointer(s).
The encoding of multi-byte entities in memory is what is referred to as endianess. The two common encodings are referred to as Little Endian (LE) and Big Endian (BE). With LE, a 16-bit quantity like a short is encoded least significant byte (LSB) first. Under BE, the most significant byte (MSB) is encoded first.
By convention, network protocols normally encode things into what we call "network byte order" (NBO) which also happens to be the same as BE. If you are sending and receiving memory buffers on big endian platforms, then you will not run into conversion problems. However, your code would then be platform dependent on the BE convention. If you want to write portable code that works correctly on both LE and BE platforms, you should not assume the platform's endianess.
Achieving endian portability is the purpose of routines like ntohs(), ntohl(), htons(), and htonl(). These functions/macros are defined on a given platform to do the necessary conversions at the sending and receiving ends:
htons() - Convert short value from host order to network order (for sending)
htonl() - Convert long value from host order to network order (for sending)
ntohs() - Convert short value from network order to host order (after receive)
ntohl() - Convert long value from network order to host order (after receive)
Understand that your comment about accessing the memory when cast back to characters has no effect on the actual order of entities in memory. That is, if you access the buffer as a series of bytes, you will see the bytes in whatever order they were actually encoded into memory, whether you have a BE or LE machine. So if you are looking at an NBO-encoded buffer after receive, the MSB is going to be first - always. If you look at the output buffer after you have converted back to host order, then on a BE machine the byte order will be unchanged; conversely, on an LE machine, the bytes will all now be reversed in the converted buffer.
Finally, in your conversion loop, the variable total refers to bytes. However, you are accessing the buffer as shorts. Your loop guard should not be total, but should be:
total / sizeof( unsigned short )
to account for the double byte nature of each short.
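With that fix applied, the loop from the question would read (variable names as in the question):
/* total is a byte count, so convert it to a count of shorts. */
int nshorts = total / sizeof(unsigned short);
for (i = 0; i < nshorts; ++i)
{
    newhostOrder[i] = ntohs(_netOrder[i]);
}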
This works ok when I'm treating the data as a short, however if I cast the pointer to a char again the bytes are reversed.
That's what I'd expect.
What am I doing wrong?
You have to know what the sender sent: know whether the data is bytes (which don't need reversing), or shorts or longs (which do).
Google for tutorials associated with the ntohs, htons, ntohl, and htonl APIs.
It's not clear what aResponse represents (string of characters? struct?). Endianness is relevant only for numerical values, not chars. You also need to make sure that at the sender's side, all numerical values are converted from host to network byte-order (hton*).
Apart from your original question (which I think was already answered), you should have a look at your malloc statement. malloc allocates bytes, and an unsigned short is most likely two bytes.
Your statement should look like:
unsigned short *ptr = (unsigned short*) malloc(total * sizeof(unsigned short));
Network byte order is big endian, so you need to convert it to host order if you want it to make sense on a little-endian machine, but if it is only an array of bytes it shouldn't make a fuss. How does the sender send its data?
For single bytes we don't need to care about byte ordering.
