Constructing a new IPv6 address from a Contiki uip address in C

I'm inexperienced with both Contiki and C but I'm trying to do the following:
Basically, I get a structure, event, which has a type, an id and a uip ip6address.
Using this event, I want to construct a uip ipv6 multicast address with a fixed prefix (ff1e).
At the moment I have the following code:
static uip_ds6_maddr_t *
derive_mcast_addr(struct eventstruc* event)
{
    int ff1e;
    //Fixed multicast prefix to be used by LooCI.
    uint8_t mlcPrefix = ff1e;
    //Type of the event
    uint8_t eventType = event->type;
    //Publisher Component ID of the sender
    uint8_t * srccomp = event->source_cid;
    // IPv6 address of the sender
    uip_ip6addr_t * srcaddr = event->source_node;
    // A derived multicast address is
    // mlcPrefix + ":" + eventType + ":" + srccomp + ":0:" + (last 64 bits of srcAddr)
}
I'm unsure if this code is decent and on how to get the last 64 bits of the src address, especially since they might not be in the expected format.
For example, if the source address is 0::0:0:0:0 then I'd just need the 0:0:0:0 part. If it was, say, 2001::a00:27ff:fef7:30a7, I'd just need a00:27ff:fef7:30a7.
Also, there is the added complexity of Contiki uip...
Anybody have a decent idea?

First, your uint8_t variables are probably not wide enough, you might need:
//Fixed multicast prefix to be used by LooCI.
uint16_t mlcPrefix = 0xff1e;
I'm not familiar with Contiki, but based on this: http://dak664.github.io/contiki-doxygen/a00424_source.html uip_ip6addr_t is really this:
typedef union uip_ip6addr_t {
    u8_t u8[16]; /* Initializer, must come first!!! */
    u16_t u16[8];
} uip_ip6addr_t;
If that's the case, then you can get the lower 64 bits by looking at:
srcaddr->u16[4]
srcaddr->u16[5]
srcaddr->u16[6]
srcaddr->u16[7]
Or it could be indexes 0-3 depending on how things are stored in uip_ip6addr_t.
To put things back together, you can put your upper 64 bits in u16[0] through u16[3] and then put the original lower 64 bits back in u16[4] through u16[7].
If uip_ds6_maddr_t is this:
typedef struct uip_ds6_maddr {
    uint8_t isused;
    uip_ipaddr_t ipaddr;
} uip_ds6_maddr_t;
And you have a pointer uip_ds6_maddr_t *dst then you could do:
dst->ipaddr.u16[0] = mlcPrefix;
And so on.
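Putting the pieces together, here is a rough sketch of what derive_mcast_addr could look like under those assumptions. The eventstruc field names and types are taken from the question, and whether the u16 halves need UIP_HTONS depends on how your Contiki build stores addresses, so check uip.h before trusting the byte order:
static uip_ip6addr_t derived;

static uip_ip6addr_t *
derive_mcast_addr(struct eventstruc *event)
{
    uip_ip6addr_t *srcaddr = event->source_node;

    /* upper 64 bits: ff1e : eventType : srccomp : 0 */
    derived.u16[0] = UIP_HTONS(0xff1e);
    derived.u16[1] = UIP_HTONS(event->type);
    derived.u16[2] = UIP_HTONS(*event->source_cid);
    derived.u16[3] = 0;

    /* lower 64 bits: copied unchanged from the source address */
    derived.u16[4] = srcaddr->u16[4];
    derived.u16[5] = srcaddr->u16[5];
    derived.u16[6] = srcaddr->u16[6];
    derived.u16[7] = srcaddr->u16[7];

    return &derived;
}
If you actually need a uip_ds6_maddr_t entry (so the stack joins the group and accepts traffic on it), the usual route in Contiki is to pass the constructed address to uip_ds6_maddr_add(), but verify the exact call available in your uip-ds6 headers.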

Related

Auto-Allocate to a specific RAM Area in GCC/C

Sorry for my English, it's a bit hard for me to explain exactly what I need.
I'm adding some extra code to existing binaries using the GCC compiler.
In this case it's PowerPC, but that should not really matter.
I know where in the existing binary I have free RAM available (I dumped the full RAM to make sure), but I need to define each RAM address manually. Currently I am doing it like this:
// #ram.h
//8bit ram
uint8_t* xx1 = (uint8_t*) 0x807F00;
uint8_t* xx2 = (uint8_t*) 0x807F01;
//...and so on
// 16bit ram
uint16_t* xxx1 = (uint16_t*) 0x807F40;
uint16_t* xxx2 = (uint16_t*) 0x807F42;
//...and so on
// 32bit ram
uint32_t* xxxx1 = (uint32_t*) 0x807FA0;
uint32_t* xxxx2 = (uint32_t*) 0x807FA4;
//...and so on
And I'm accessing my variables like this:
void __attribute__ ((noinline)) silly_demo_function() {
    #include "ram.h"
    if (*xxx2 > *xx1) {
        *xxx3 = *xxx3 + *xx1;
    }
    return;
}
But this gets really tedious if I want to patch my code into another existing binary, where the location of available/free/unused RAM can be completely different, or if I'm replacing/removing some value in the middle. I am using 8-, 16- and 32-bit variables.
Is there a way I can define an area like 0x807F00 to 0x00808FFF and declare my variables on the fly, so the compiler allocates them inside my specified region?
I suspect that the big problem here is that those addresses are memory-mapped IO (devices) and not RAM, and so should not be treated as RAM.
Further, I'd say that you probably should be hiding the "devices that aren't RAM" behind an abstract layer, a little bit like a device driver; partly so that you can make sure that the compiler complies with any constraints caused by it being IO and not RAM (e.g. treated as volatile, possibly honoring any access size restrictions, possibly taking care of any cache coherency management); partly so that you/programmers know what is normal/fast/cached RAM and what isn't; partly so that you can replace the "device" with fake code for testing; and partly so that it's all kept in a single well defined area.
For example; you might have a header file called "src/devices.h" that contains:
#define xx1_address 0x807F00
..and the wrapper code might be a file called "src/devices/xx1.c" that contains something like:
#include "src/devices.h"
static volatile uint8_t * xx1 = (uint8_t*) xx1_address;
uint8_t get_xx1(void) {
return *xx1;
}
void set_xx1(uint8_t x) {
*xx1 = x;
}
However, depending on what these devices actually are, you might need/want some higher-level code. For example, maybe xx1 is a temperature sensor that it makes no sense to set, you want to scale the raw value so it's in degrees Celsius, and the highest bit of the raw value indicates an error condition (so the actual temperature is only 7 bits), in which case the wrapper might be more like:
#include "src/devices.h"
#define xx1_offset -12.34
#define xx1_scale 1.234
static volatile uint8_t * xx1 = (uint8_t*) xx1_address;
float get_xx1_temperature(void) {
uint8_t raw_temp = *xx1;
if(raw_temp * 0x80 != 0) {
/* Error flag set */
return NAN;
}
/* No error */
return (raw_temp + xx1_offset) * xx1_scale;
}
In the meantime, I figured it out.
It's as easy as defining .data, .bss and .sbss in the linker directives.
Six lines of code and it's working like a charm.
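For anyone landing here later, a rough sketch of that kind of approach (the section name, region name and exact linker wiring below are made-up examples, not from the original post): tag the patch variables with a dedicated section in C, then tell the GNU linker where that section lives.
/* ram.c -- definitions; the linker picks the exact addresses inside the region */
#include <stdint.h>

uint8_t  xx1   __attribute__((section(".patchram")));
uint16_t xxx1  __attribute__((section(".patchram")));
uint32_t xxxx1 __attribute__((section(".patchram")));
/* note: depending on your GCC you may need -fno-common or explicit
   initializers for these to land in the named section */

/* GNU ld script fragment: map .patchram into the known-free RAM window */
MEMORY
{
    PATCHRAM (rw) : ORIGIN = 0x807F00, LENGTH = 0x1100  /* 0x807F00 .. 0x808FFF */
}
SECTIONS
{
    .patchram (NOLOAD) : { *(.patchram) } > PATCHRAM
}
The variables are then referenced as plain globals with no manual pointer casts, and moving the patch to a binary with a different free window only means changing ORIGIN and LENGTH.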

C creating a raw UDP packet

I am interested in creating a DNS response packet (sent over UDP), but I have found limited information on how to create your own packet.
Most tutorials are like this one: https://opensourceforu.com/2015/03/a-guide-to-using-raw-sockets/
They use structs to fill in the fields and join them into one sequence. But I am concerned that the compiler can pad the struct, making it "corrupted" (making the packet longer than it should be).
I fully know that there are struct attributes that stop the compiler from padding structs, but I don't want to use them.
Can anyone point me to some resources on packet creation? I can use libpcap and raw sockets.
You do it like this:
// helper function to add uint32_t to a buffer
// needs <stdint.h>, <string.h> for memcpy, and <arpa/inet.h> (POSIX) for htonl
char *append_uint32(char *buf_position, uint32_t value) {
    // network protocols usually use network byte order for numbers;
    // htonl is a POSIX function so you may have to make your own on another platform
    // http://pubs.opengroup.org/onlinepubs/9699919799/functions/htonl.html
    value = htonl(value);
    memcpy(buf_position, &value, sizeof value);
    return buf_position + sizeof value;
}
// example code using the function:
// generate a packet with the numbers 0...9 in network byte order
void func() {
    char buf[sizeof(uint32_t) * 10];
    char *bptr = buf;
    for(uint32_t i = 0; i < 10; ++i) {
        bptr = append_uint32(bptr, i);
    }
    // do something with buf (use malloc instead of the stack if you want to return it!)
}
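Since a DNS header is just six 16-bit fields, the same pattern extends naturally. A sketch (the append_dns_header helper and its parameter names are illustrative; see RFC 1035 for the field meanings):
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h> /* htons */

// 16-bit counterpart of append_uint32
char *append_uint16(char *buf_position, uint16_t value) {
    value = htons(value);                       /* network byte order */
    memcpy(buf_position, &value, sizeof value);
    return buf_position + sizeof value;
}

// serialize a 12-byte DNS header one field at a time -- no structs, no padding worries
char *append_dns_header(char *p, uint16_t id, uint16_t flags,
                        uint16_t qdcount, uint16_t ancount,
                        uint16_t nscount, uint16_t arcount) {
    p = append_uint16(p, id);
    p = append_uint16(p, flags);
    p = append_uint16(p, qdcount);
    p = append_uint16(p, ancount);
    p = append_uint16(p, nscount);
    p = append_uint16(p, arcount);
    return p;   /* next free position; append questions/answers after this */
}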

Initializing, constructing and converting struct to byte array causes misalignment

I am trying to design a data structure (I have made it much shorter to save space here but I think you get the idea) to be used for byte level communication:
/* PACKET.H */
#define CM_HEADER_SIZE 3
#define CM_DATA_SIZE 16
#define CM_FOOTER_SIZE 3
#define CM_PACKET_SIZE (CM_HEADER_SIZE + CM_DATA_SIZE + CM_FOOTER_SIZE)
// + some other definitions
typedef struct cm_header{
    uint8_t PacketStart; //Start indicator 0x5B [
    uint8_t DeviceId; //ID of the device which is sending
    uint8_t PacketType;
} CM_Header;
typedef struct cm_footer {
    uint16_t DataCrc; //CRC of the 'Data' part of CM_Packet
    uint8_t PacketEnd; //should be 0x5D or ]
} CM_Footer;
//Here I am trying to convert a few u8[4] to u32 (4*u32 = 16 bytes, hence the data size)
typedef struct cm_data {
    union {
        struct{
            uint8_t Value_0_0:2;
            uint8_t Value_0_1:2;
            uint8_t Value_0_2:2;
            uint8_t Value_0_3:2;
        };
        uint32_t Value_0;
    };
    //same thing for Value_1, 2 and 3
} CM_Data;
typedef struct cm_packet {
    CM_Header Header;
    CM_Data Data;
    CM_Footer Footer;
} CM_Packet;
typedef struct cm_inittypedef{
    uint8_t DeviceId;
    CM_Packet Packet;
} CM_InitTypeDef;
typedef struct cm_appendresult{
    uint8_t Result;
    uint8_t Reason;
} CM_AppendResult;
extern CM_InitTypeDef cmHandler;
The goal here is to make a reliable structure for transmitting data over a USB interface. In the end the CM_Packet should be converted to a uint8_t array and given to the data transmit register of an MCU (STM32).
In the main.c file I try to init the structure as well as some other stuff related to this packet:
/* MAIN.C */
uint8_t packet[CM_PACKET_SIZE];
int main(void) {
    //use the extern defined in packet.h to init the struct
    cmHandler.DeviceId = 0x01; //assign device id
    CM_Init(&cmHandler); //construct the handler
    //rest of stuff
    while(1) {
        CM_GetPacket(&cmHandler, (uint8_t*)packet);
        CDC_Transmit_FS(&packet, CM_PACKET_SIZE);
    }
}
And here is the implementation of packet.h, which screws everything up so badly. I added packet[CM_PACKET_SIZE] to the watch window, but it looks like it is just filled randomly. Sometimes, by pure luck, I can see some of the values I am interested in in this array, but only maybe 1% of the time!
/* PACKET.C */
CM_InitTypeDef cmHandler;
void CM_Init(CM_InitTypeDef *cm_initer) {
    cmHandler.DeviceId = cm_initer->DeviceId;
    static CM_Packet cmPacket;
    cmPacket.Header.DeviceId = cm_initer->DeviceId;
    cmPacket.Header.PacketStart = CM_START;
    cmPacket.Footer.PacketEnd = CM_END;
    cm_initer->Packet = cmPacket;
}
CM_AppendResult CM_AppendData(CM_InitTypeDef *handler, uint8_t identifier,
                              uint8_t *data){
    CM_AppendResult result;
    switch(identifier){
    case CM_VALUE_0:
        handler->Packet.Data.Value_0_0 = data[0];
        handler->Packet.Data.Value_0_1 = data[1];
        handler->Packet.Data.Value_0_2 = data[2];
        handler->Packet.Data.Value_0_3 = data[3];
        break;
    //Also cases for CM_VALUE_1, 2 and 3
    //to build up the CM_Data struct of CM_Packet
    default:
        result.Result = CM_APPEND_FAILURE;
        result.Reason = CM_APPEND_CASE_ERROR;
        return result;
        break;
    }
    result.Result = CM_APPEND_SUCCESS;
    result.Reason = 0x00;
    return result;
}
void CM_GetPacket(CM_InitTypeDef *handler, uint8_t *packet){
    //copy the whole struct into the given buffer and later send it to the USB host
    memcpy(packet, &handler->Packet, sizeof(CM_PACKET_SIZE));
}
So, the problem is that this code gives me random stuff 99% of the time. It never has CM_START, the start indicator of the packet, set to the value I want, but most of the time it has the CM_END byte correct! I got really confused and can't find the reason. Working on an embedded platform that is hard to debug, I am kind of lost here...
If you transfer data to another (different) architecture, do not just pass a structure as a blob. That is the way to hell: endianness, alignment, padding bytes, etc. can all (and likely will) cause trouble.
Better to serialize the struct in a conforming way, possibly using some interpreted control stream so you do not have to write every field out manually. (But still use standard functions to generate that stream.)
Some areas of potential or likely trouble:
CM_Footer: The second field might very well start at a 32 or 64 bit boundary, so the preceding field will be followed by padding. Also, the end of that struct is very likely to be padded by at least 1 byte on a 32 bit architecture to allow for proper alignment if used in an array (the compiler does not care whether you actually need this). It might even be 8 byte aligned.
CM_Data: Here you likely (but not guaranteed) get one uint8_t holding the 4*2 bits, with the ordering of the bitfields not standardized. That field may be followed by 3 unused bytes which are required for the uint32_t interpretation of the union.
How do you guarantee the same endianness (for anything wider than uint8_t: high byte first or low byte first?) for host and target?
In general, the structs/unions need not have the same layout on host and target. Even if the same compiler is used, their ABIs may differ, etc. Even if it is the same CPU, there might be other system constraints. Also, for some CPUs, different ABIs (application binary interfaces) exist.
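As a concrete illustration of that advice, here is a minimal sketch of serializing the packet field by field. The field names are taken from the question; the little-endian byte order used for the multi-byte fields is an arbitrary choice that your protocol spec has to pin down:
/* serialize one CM_Packet into out[]; returns the number of bytes written */
static size_t cm_serialize(const CM_Packet *p, uint8_t *out) {
    size_t i = 0;
    /* header: three single bytes, no padding possible */
    out[i++] = p->Header.PacketStart;
    out[i++] = p->Header.DeviceId;
    out[i++] = p->Header.PacketType;
    /* data: emit each 32-bit value explicitly, low byte first */
    out[i++] = (uint8_t)(p->Data.Value_0 & 0xFF);
    out[i++] = (uint8_t)((p->Data.Value_0 >> 8) & 0xFF);
    out[i++] = (uint8_t)((p->Data.Value_0 >> 16) & 0xFF);
    out[i++] = (uint8_t)((p->Data.Value_0 >> 24) & 0xFF);
    /* ...repeat for Value_1, Value_2 and Value_3... */
    /* footer: 16-bit CRC with an explicit byte order, then the end marker */
    out[i++] = (uint8_t)(p->Footer.DataCrc & 0xFF);
    out[i++] = (uint8_t)((p->Footer.DataCrc >> 8) & 0xFF);
    out[i++] = p->Footer.PacketEnd;
    return i;
}
The receiving side does the mirror-image parse, so neither end ever relies on the compiler's struct layout.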

CPU write value passed from application to qemu is strange

I was trying to run an RTEMS (a real-time OS) application on a SPARC virtual machine using QEMU.
I'm almost there, and I saw it working hours ago. But after removing some prints it stopped working, and I later found out it's not because of the removed prints. The data is not being passed correctly between the RTEMS image and the QEMU emulation model. (I'm working with QEMU version 1.5.50 and the lan9118.c model borrowed from QEMU version 2.0.0. I modified lan9118 a little.)
In the QEMU model, the memory region ops are defined as
struct MemoryRegionOps {
    /* Read from the memory region. #addr is relative to #mr; #size is
     * in bytes. */
    uint64_t (*read)(void *opaque,
                     hwaddr addr,
                     unsigned size);
    /* Write to the memory region. #addr is relative to #mr; #size is
     * in bytes. */
    void (*write)(void *opaque,
                  hwaddr addr,
                  uint64_t data,
                  unsigned size);
    ...
}
and in the RTEMS application, I write to the device like
*TX_FIFO_PORT = cmdA;
*TX_FIFO_PORT = cmdB;
where TX_FIFO_PORT is defined as below.
#define TX_FIFO_PORT (volatile ulong *)(SMSC9118_BASE + 0x20)
But when I write, for example,
cmdA : 0x2a300200 and cmdB : 0x2a002a00,
The values I expected are
cmdA : 0x0002302a and cmdB : 0x002a002a. (Just endian converted values)
But the values I see at the write function (entrance of QEMU) are
cmdA : 0x02000200 and cmdB : 0x2a002a00 respectively.
The observed values have not been endian-converted, and even the first value is different (the lower 16 bits are repeated).
What could be the problem?
Any hint will be deeply appreciated.
Strangely, I fixed this by commenting out the endian conversion for cmdA and cmdB in RTEMS before writing to the device. (It was OK with the endian conversion before... I don't know.) So it's 'almost' working.
Anyway, here is a tip about exchanging CPU write/read data between the QEMU processor and a device.
In QEMU, each device model provides write and read functions, and it also specifies how words should be transferred to/from the device with regard to endianness. It is specified like below.
static const MemoryRegionOps lan9118_mem_ops = {
    .read = lan9118_readl,
    .write = lan9118_writel,
    .endianness = DEVICE_NATIVE_ENDIAN,
};
Here is a copy of the email I received from Peter Maydell on the qemu-discuss@nongnu.org mailing list.
------------------------
This depends on what the MemoryRegionOps struct for the memory region sets its .endianness field to.
DEVICE_NATIVE_ENDIAN means the device sees values the same way round as the guest CPU's native endianness[*], so if the guest does a 32 bit write of 0x12345678 then it appears in the write function's argument as 0x12345678. DEVICE_BIG_ENDIAN means that if the CPU is little endian then the word will be byteswapped.
DEVICE_LITTLE_ENDIAN means that if the CPU is big endian then the word will be byteswapped. The latter are useful for devices or buses which have a specific endianness which is not the same as that of the CPU (eg PCI is always little endian).
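For completeness, if a device's registers are specified as little-endian regardless of the CPU (as is common for memory-mapped Ethernet controllers), the usual approach is to declare that in the ops struct and let QEMU do any byte swapping, rather than swapping in the guest driver. A sketch, reusing the callbacks from above but with the endianness field changed:
static const MemoryRegionOps lan9118_mem_ops = {
    .read = lan9118_readl,
    .write = lan9118_writel,
    /* registers are little-endian on the bus; QEMU byte-swaps as needed
       when a big-endian guest CPU (e.g. SPARC) accesses them */
    .endianness = DEVICE_LITTLE_ENDIAN,
};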

Modify header of a captured packet

I am trying to modify the IP header to include more IP options with the use of libnetfilter_queue. So far I have managed to get to the point where I obtain the packet as shown below.
if (nfq_set_mode(qh, NFQNL_COPY_PACKET, 0xffff) < 0) {
    fprintf(stderr, "Unable to set nfq_set_mode\n");
    exit(1);
}
Then I managed to get as far as shown below,
static int my_callBack(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg, struct nfq_data *tb)
{
    int id = 0;
    int packet_len;
    unsigned char *data;
    struct nfqnl_msg_packet_hdr *packet_hdr;
    packet_hdr = nfq_get_msg_packet_hdr(tb);
    if (packet_hdr) {
        id = ntohl(packet_hdr->packet_id);
    }
    packet_len = nfq_get_payload(tb, &data);
    if (packet_len >= 0) {
        //print payload length
        printf("payload_length = %d ", packet_len);
        //modify packet ip header
    }
    return nfq_set_verdict(qh, id, NF_ACCEPT, 0, NULL);
}
But from here onwards I am a bit confused about how to proceed with modifying the IP header of the captured packet at the //modify packet ip header comment. An example of a modification to the IP header (such as traffic class (IPv6), IP options, version, flags, or destination address) is fine, since I only need to understand how the modification works :).
I have tried many resources and could not succeed in getting any further. Your expert advice and help on this query will be very much appreciated. :)
Thank you very much :)
To modify the values of an IP header, start by defining a structure to represent your header. You find what the structure should be by reading the RFC spec for the protocol you're trying to access.
Here's a link to the RFC for IPv6: https://www.rfc-editor.org/rfc/rfc2460#section-3
The first row of the IPv6 header is a bit tricky, because it doesn't use byte-aligned fields. The Version field is 4 bits wide, the Traffic Class is 8 bits wide, and the Flow Label is 20 bits wide. The whole header is 320 bits (40 bytes), and 256 of those are the src and dest addresses. Only 64 bits are used for the other fields, so it's probably easiest to define your struct like this:
struct ipv6_hdr {
    uint32_t row1;
    uint16_t payload_length;
    uint8_t next_header;
    uint8_t hop_limit;
    uint16_t src[8];
    uint16_t dest[8];
};
To extract the row one values, you can use some masking:
#define VERSION_MASK 0xF0000000
#define TRAFFIC_CLASS_MASK 0x0FF00000
#define FLOW_LABEL_MASK 0x000FFFFF
struct ipv6_hdr *foo;
...
packet_len = nfq_get_payload(tb, &data); // Just an example; don't overflow your buffer!
foo = (struct ipv6_hdr *) data;          // point the struct at the payload
uint32_t row1 = ntohl(foo->row1);        // row1 arrives in network byte order
// bit-wise AND gets the masked field from row1
uint8_t version = (uint8_t) ((row1 & VERSION_MASK) >> 28); // shift (32-4) bits
Once you point your struct to the data payload, assuming your byte array matches this format, modifying the header values becomes simple assignment:
version = 6;
// bit-wise OR puts our value in the right place in row1
row1 &= ~(VERSION_MASK);          // clear out the old value first
row1 |= (uint32_t) version << 28;
foo->row1 = htonl(row1);          // write it back in network byte order
I chose to make the src and dest addresses in the struct arrays of 16-bit values because IPv6 addresses are a series of eight 16-bit values. This should make it easy to isolate any given pair of bytes.
You will have to determine what format your data payload is in before applying the proper struct to it.
For info on how to create an IPv4 header, check its RFC: https://www.rfc-editor.org/rfc/rfc791#section-3.1
Hope this helps (you may have to fiddle with my code samples to get the syntax right, it's been a few months).
Edit: adding info about checksums, as requested in the comments.
Follow this RFC for generating checksums after modifying your header: https://www.rfc-editor.org/rfc/rfc1071
The key take-away there is to zero the checksum field in the header before generating the new checksum.
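For reference, a compact sketch of the RFC 1071 algorithm (note that IPv6 itself carries no header checksum, so this applies to upper-layer checksums such as UDP or ICMPv6, computed over the pseudo-header plus payload with the checksum field zeroed first):
#include <stdint.h>
#include <stddef.h>

/* RFC 1071 Internet checksum over len bytes (zero the checksum field first) */
uint16_t rfc1071_checksum(const uint8_t *data, size_t len) {
    uint32_t sum = 0;
    while (len > 1) {
        sum += ((uint32_t) data[0] << 8) | data[1]; /* 16-bit big-endian words */
        data += 2;
        len -= 2;
    }
    if (len == 1)                 /* odd trailing byte, padded with zero */
        sum += (uint32_t) data[0] << 8;
    while (sum >> 16)             /* fold carries back into the low 16 bits */
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t) ~sum;       /* one's complement of the sum */
}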

Resources