I am writing code for an ATSAM device using XC32 from Microchip (it is not clear to me whether it compiles as C99 or C11).
This code works fine:
typedef union {
struct {
uint16_t bit1 : 1;
uint16_t bit2 : 1;
// .. MUCH more lines...
};
struct {
uint32_t data[N];
// ... more lines, with sub-structures....
};
} Context_t;
Context_t c;
c.data[0] = 1;
c.bit2 = false;
Notice that both structures are anonymous.
Since both structures are large, change all the time, and have their code generated automatically by another script, it would be more convenient to use them like this:
// bit.h
typedef struct {
uint16_t bit1 : 1;
uint16_t bit2 : 1;
// .. MUCH more items...
} Bits_t;
// data.h
typedef struct {
uint32_t data[N];
// ... more code...
} Data_t;
// import both headers
typedef union {
Bits_t;
Data_t;
} Context_t;
Context_t c;
c.data[0] = 1;
c.bit2 = false;
This fails to build, with the compiler saying "declaration does not declare anything" for both lines inside the union. That sounds fair to me.
If I name them like below it works, but then the final code is forced to be aware of this change in the structure. Not good for our application.
typedef union {
Bits_t b;
Data_t d;
} Context_t;
Context_t c;
c.d.data[0] = 1;
c.b.bit2 = false;
I assume I am the culprit here and the failing example fails because I am declaring the union the wrong way.
Any tips are welcome!
You can take advantage of the fact that include files are essentially copied and pasted into the source files that include them to do what you want.
If you make bit.h consist only of this:
struct {
uint16_t bit1 : 1;
uint16_t bit2 : 1;
// .. MUCH more items...
};
And data.h only of this:
struct {
uint32_t data[N];
// ... more code...
};
Then you can create your union like this:
typedef union {
#include "bits.h"
#include "data.h"
} Context_t;
This allows the two internal structs to both be anonymous and to be generated externally, without having to touch the module where Context_t is defined.
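For clarity, after the preprocessor expands the two includes, the compiler sees essentially the same union as in your original working version, so code that uses Context_t keeps compiling unchanged. A sketch of the effective result, with the generated members abbreviated:

/* what the compiler effectively sees after preprocessing */
typedef union {
    struct {
        uint16_t bit1 : 1;
        uint16_t bit2 : 1;
        /* ...generated bit fields... */
    };
    struct {
        uint32_t data[N];
        /* ...generated data members... */
    };
} Context_t;

/* client code stays exactly as before */
Context_t c;
c.data[0] = 1;
c.bit2 = false;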
I am writing a program to read a BMP header. I have written some code that worked when it was all in main. How do I turn this code into a function of its own and then call it from main?
Here is the whole code:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
struct bmp_header {
uint16_t type;
uint32_t size;
uint16_t reserved1;
uint16_t reserved2;
uint32_t offset;
uint32_t dib_size;
uint32_t width;
uint32_t height;
uint16_t planes;
uint16_t bpp;
uint32_t compression;
uint32_t image_size;
uint32_t x_ppm;
uint32_t y_ppm;
uint32_t num_colors;
uint32_t important_colors;
};
void read_bmp(FILE *BMPFile,struct bmp_header* Header) {
fread(&(Header->type), 2, 1, BMPFile);
fread(&(Header->size),4,1,BMPFile);
fread(&(Header->reserved1),2,1,BMPFile);
fread(&(Header->reserved2),2,1,BMPFile);
fread(&(Header->offset),4,1,BMPFile);
fread(&(Header->dib_size),4,1,BMPFile);
fread(&(Header->width),4,1,BMPFile);
fread(&(Header->height),4,1,BMPFile);
fread(&(Header->planes),2,1,BMPFile);
fread(&(Header->bpp),2,1,BMPFile);
fread(&(Header->compression),4,1,BMPFile);
fread(&(Header->image_size),4,1,BMPFile);
fread(&(Header->x_ppm),4,1,BMPFile);
fread(&(Header->y_ppm),4,1,BMPFile);
fread(&(Header->num_colors),4,1,BMPFile);
fread(&(Header->important_colors),4,1,BMPFile);
}
int main() {
FILE *BMPFile = fopen("image.bmp","rb");
if(BMPFile == NULL)
{
return 1;
}
struct bmp_header* Header;
read_bmp(BMPFile,Header);
fclose(BMPFile);
return 0;
}
The relevant part of the version of the program with all the reading done in main, which worked as expected, is shown below:
int main( void )
{
FILE *BMPFile = fopen ("lenna.bmp", "rb");
if (BMPFile == NULL)
{
return 0;
}
struct bmp_header Header;
memset(&Header, 0, sizeof(Header));
fread(&Header.type, 2, 1, BMPFile);
fread(&Header.size, 4, 1, BMPFile);
fread(&Header.reserved1, 2, 1, BMPFile);
fread(&Header.reserved2, 2, 1, BMPFile);
fread(&Header.offset, 4, 1, BMPFile);
fread(&Header.dib_size, 4, 1, BMPFile);
fread(&Header.width, 4, 1, BMPFile);
fread(&Header.height, 4, 1, BMPFile);
fread(&Header.planes, 2, 1, BMPFile);
fread(&Header.bpp, 2, 1, BMPFile);
fread(&Header.compression, 4, 1, BMPFile);
fread(&Header.image_size, 4, 1, BMPFile);
fread(&Header.x_ppm, 4, 1, BMPFile);
fread(&Header.y_ppm, 4, 1, BMPFile);
fread(&Header.num_colors, 4, 1, BMPFile);
fread(&Header.important_colors, 4, 1, BMPFile);
/* Header fields print section */
/* ... */
}
Whenever working code stops working, it is useful to focus on the changes between the two versions of the code. So, why does your original code work correctly? It looks like this:
int main( void )
{
FILE *BMPFile = fopen ("lenna.bmp", "rb");
if (BMPFile == NULL)
{
return 0;
}
struct bmp_header Header;
memset(&Header, 0, sizeof(Header));
fread(&Header.type, 2, 1, BMPFile);
...
}
You declare Header, of type struct bmp_header, on main's stack (as a local variable). This way the structure is guaranteed to stay allocated for the whole lifetime of the program.
You memset it to 0.
You pass the addresses of Header's fields directly to fread.
In the new version of the program you have a function defined as
void read_bmp(FILE *BMPFile,struct bmp_header* Header);
so you need a pointer to struct bmp_header to pass to it. For this reason you declare
struct bmp_header* Header;
and call read_bmp(BMPFile,Header);.
What is different from the working version? Well, the pointer! By declaring a pointer you tell the compiler that it holds an address, in this case the address of the structure required by read_bmp().
But you never tell the compiler what that address is, so the freads inside read_bmp() write to some random location, typically causing a segmentation fault.
What to do
You need to pass a valid struct bmp_header address to read_bmp(), and you have two options.
You can allocate Header on the stack like you did before and pass its address to read_bmp() with the & operator. This should probably be your first choice, since it is the closest to your working solution.
struct bmp_header Header;
read_bmp(BMPFile, &Header);
You can declare Header as a pointer, but then you need to allocate its memory dynamically with malloc (and free it when you are done):
struct bmp_header * Header = malloc(sizeof(struct bmp_header));
read_bmp(BMPFile, Header);
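Putting option 1 together with your struct bmp_header and read_bmp() from above (memset needs <string.h>), main() would look roughly like this:

int main(void)
{
    FILE *BMPFile = fopen("image.bmp", "rb");
    if (BMPFile == NULL)
    {
        return 1;                        /* fopen failed */
    }
    struct bmp_header Header;            /* allocated on the stack */
    memset(&Header, 0, sizeof(Header));  /* optional, matches your original code */
    read_bmp(BMPFile, &Header);          /* pass the structure's address */
    fclose(BMPFile);
    return 0;
}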
Alternatively, you can give the function a struct bmp_header * return type: create a struct bmp_header pointer inside the function, allocate memory for it, fill it from the file, and return the pointer (the header is 54 bytes on disk, but allocate sizeof(struct bmp_header)).
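A sketch of that approach (read_bmp_alloc is just an illustrative name; it needs <stdlib.h>, and the caller must free() the result):

struct bmp_header *read_bmp_alloc(FILE *BMPFile)
{
    /* allocate sizeof(struct bmp_header); the 54 bytes refer to the on-disk header size */
    struct bmp_header *Header = malloc(sizeof *Header);
    if (Header == NULL)
        return NULL;
    fread(&Header->type, 2, 1, BMPFile);
    /* ... read the remaining fields exactly as in read_bmp() ... */
    return Header;
}

/* usage: */
struct bmp_header *Header = read_bmp_alloc(BMPFile);
if (Header != NULL) {
    /* use Header->width etc. */
    free(Header);
}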
I have C source code for a microcontroller; here is the first part of the header:
#define USART_RX_BUFFER_SIZE 256
#define USART_TX_BUFFER_SIZE 256
#define USART_RX_BUFFER_MASK (USART_RX_BUFFER_SIZE - 1)
#define USART_TX_BUFFER_MASK (USART_TX_BUFFER_SIZE - 1)
#if (USART_RX_BUFFER_SIZE & USART_RX_BUFFER_MASK)
#error RX buffer size is not a power of 2
#endif
#if (USART_TX_BUFFER_SIZE & USART_TX_BUFFER_MASK)
#error TX buffer size is not a power of 2
#endif
typedef struct USART_Buffer {
volatile uint8_t RX[USART_RX_BUFFER_SIZE];
volatile uint8_t TX[USART_TX_BUFFER_SIZE];
volatile uint16_t RX_Head;
volatile uint16_t RX_Tail;
volatile uint16_t TX_Head;
volatile uint16_t TX_Tail;
} USART_Buffer_t;
typedef struct Usart_and_buffer {
USART_t *usart;
USART_DREINTLVL_t dreIntLevel;
USART_TXCINTLVL_t txcIntLevel;
USART_Buffer_t buffer;
PORT_t *rs485_Port;
uint8_t rs485_Pin;
bool rs485;
} USART_data_t;
uint8_t USART_RXBuffer_GetByte(USART_data_t *usart_data);
bool USART_RXComplete(USART_data_t *usart_data);
void USART_TransmitComplete(USART_data_t *usart_data);
...
There are several other functions like these.
Their implementations often use both USART_data_t and USART_Buffer_t, for example:
bool USART_TXBuffer_PutByte(USART_data_t *usart_data, uint8_t data) {
uint8_t tempCTRLA;
uint16_t tempTX_Head;
bool TXBuffer_FreeSpace;
USART_Buffer_t * TXbufPtr;
if (usart_data->rs485) usart_data->rs485_Port->OUTSET = usart_data->rs485_Pin;
TXbufPtr = &usart_data->buffer;
...
In my actual applications I need to declare a lot of USART_data_t structs, but only a couple of them require such a big buffer (256 bytes). Most will work with very small ones, like 64 or even 32 bytes.
I easily run out of memory, and most of that space is actually unused.
My goal is to find a way to save space.
The dirty way is to clone the whole file and rename all the variables/functions in order to have two different versions, say one with 256-byte buffers and another with 64-byte buffers.
But then I would also have to change every call in my code, and I would like to avoid that.
This library is very fast because it works on circular buffers whose sizes are powers of 2; thanks to the defines above, the index math takes only a few clock cycles. So I really don't want to rewrite the whole library to use dynamic memory allocation.
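To give an idea, the index math the library relies on boils down to something like this (simplified, not the library's exact code):

/* with a power-of-2 size, wrapping an index is a single AND... */
tempTX_Head = (usart_data->buffer.TX_Head + 1) & USART_TX_BUFFER_MASK;
/* ...instead of a much slower modulo with an arbitrary runtime size:
tempTX_Head = (usart_data->buffer.TX_Head + 1) % USART_TX_BUFFER_SIZE; */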
Any other idea?
Example of use:
USART_data_t RS232B_USART_data;
USART_data_t RS232A_USART_data;
#define RS232B_BUFFER_SIZE 4
char RS232B_RxBuffer[RS232B_BUFFER_SIZE];
char RS232B_TxBuffer[RS232B_BUFFER_SIZE];
#define RS232A_BUFFER_SIZE 64
char RS232A_RxBuffer[RS232A_BUFFER_SIZE];
char RS232A_TxBuffer[RS232A_BUFFER_SIZE];
ISR(USARTC1_RXC_vect) { USART_RXComplete(&RS232B_USART_data); }
ISR(USARTC1_DRE_vect) { USART_DataRegEmpty(&RS232B_USART_data); }
ISR(USARTC1_TXC_vect) { USART_TransmitComplete(&RS232B_USART_data); }
ISR(USARTD0_RXC_vect) { USART_RXComplete(&RS232A_USART_data); }
ISR(USARTD0_DRE_vect) { USART_DataRegEmpty(&RS232A_USART_data); }
ISR(USARTD0_TXC_vect) { USART_TransmitComplete(&RS232A_USART_data); }
// ...
USART_InterruptDriver_Initialize(&RS232B_USART_data, &RS232B_USART, USART_DREINTLVL_LO_gc, USART_TXCINTLVL_LO_gc, false);
USART_Format_Set(RS232B_USART_data.usart, USART_CHSIZE_8BIT_gc, USART_PMODE_DISABLED_gc, false);
USART_RxdInterruptLevel_Set(RS232B_USART_data.usart, USART_RXCINTLVL_HI_gc);
USART_Baudrate_Set(&RS232B_USART, 2094, -7);
USART_Rx_Enable(RS232B_USART_data.usart);
USART_Tx_Enable(RS232B_USART_data.usart);
USART_InterruptDriver_Initialize(&RS232A_USART_data, &RS232A_USART, USART_DREINTLVL_LO_gc, USART_TXCINTLVL_LO_gc, false);
USART_Format_Set(RS232A_USART_data.usart, USART_CHSIZE_8BIT_gc, USART_PMODE_DISABLED_gc, false);
USART_RxdInterruptLevel_Set(RS232A_USART_data.usart, USART_RXCINTLVL_HI_gc);
USART_Baudrate_Set(&RS232A_USART, 2094, -7);
USART_Rx_Enable(RS232A_USART_data.usart);
USART_Tx_Enable(RS232A_USART_data.usart);
You could rewrite USART_Buffer_t to contain just pointers to the RX/TX buffers, plus two additional fields that hold the sizes of the "attached" buffers.
This is all just written down from memory, so expect typos etc., but I hope you get the idea:
typedef struct USART_Buffer {
volatile uint8_t* pRX;
volatile uint8_t* pTX;
volatile uint16_t RX_Size;
volatile uint16_t TX_Size;
volatile uint16_t RX_Head;
volatile uint16_t RX_Tail;
volatile uint16_t TX_Head;
volatile uint16_t TX_Tail;
} USART_Buffer_t;
Then you write a helper function along these lines:
void USART_InitBuffers(USART_Buffer_t *data, uint8_t *pTxBuffer, uint16_t sizeTxBuffer, uint8_t *pRxBuffer, uint16_t sizeRxBuffer) {
    data->pTX = pTxBuffer;
    data->TX_Size = sizeTxBuffer;
    data->pRX = pRxBuffer;
    data->RX_Size = sizeRxBuffer;
    // ... other assignments (head/tail indices to 0, etc.)
}
This way, you can specify your arrays of different size for each USART:
uint8_t TxData1[100];
uint8_t RxData1[10];
uint8_t TxData2[255];
uint8_t RxData2[50];
USART_Buffer_t data1;
USART_Buffer_t data2;
int main(void) {
USART_InitBuffers(&data1, TxData1, sizeof(TxData1), RxData1, sizeof(RxData1));
USART_InitBuffers(&data2, TxData2, sizeof(TxData2), RxData2, sizeof(RxData2));
}
In the end, you have to adjust the library functions to use RX_Size and TX_Size (and masks derived from them) instead of USART_RX_BUFFER_SIZE and USART_TX_BUFFER_SIZE.
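As long as the sizes you pass in are still powers of 2, the fast masking survives; the mask simply becomes a per-instance value. A sketch of the kind of change needed inside the library, based on the USART_TXBuffer_PutByte fragment above (the "old" line is my guess at the existing code):

/* old, compile-time mask (roughly): */
/* tempTX_Head = (TXbufPtr->TX_Head + 1) & USART_TX_BUFFER_MASK; */
/* new, per-instance mask (assumes TX_Size is a power of 2): */
tempTX_Head = (TXbufPtr->TX_Head + 1) & (TXbufPtr->TX_Size - 1);

Buffer accesses change accordingly, from TXbufPtr->TX[...] to TXbufPtr->pTX[...].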
I am trying to design a data structure (I have made it much shorter to save space here, but I think you get the idea) to be used for byte-level communication:
/* PACKET.H */
#define CM_HEADER_SIZE 3
#define CM_DATA_SIZE 16
#define CM_FOOTER_SIZE 3
#define CM_PACKET_SIZE (CM_HEADER_SIZE + CM_DATA_SIZE + CM_FOOTER_SIZE)
// + some other definitions
typedef struct cm_header{
uint8_t PacketStart; //Start Indicator 0x5B [
uint8_t DeviceId; //ID Of the device which is sending
uint8_t PacketType;
} CM_Header;
typedef struct cm_footer {
uint16_t DataCrc; //CRC of the 'Data' part of CM_Packet
uint8_t PacketEnd; //should be 0X5D or ]
} CM_Footer;
//Here I am trying to convert a few u8[4] to u32 (4*u32 = 16 bytes, hence the data size)
typedef struct cm_data {
union {
struct{
uint8_t Value_0_0:2;
uint8_t Value_0_1:2;
uint8_t Value_0_2:2;
uint8_t Value_0_3:2;
};
uint32_t Value_0;
};
//same thing for Value_1, 2 and 3
} CM_Data;
typedef struct cm_packet {
CM_Header Header;
CM_Data Data;
CM_Footer Footer;
} CM_Packet;
typedef struct cm_inittypedef{
uint8_t DeviceId;
CM_Packet Packet;
} CM_InitTypeDef;
typedef struct cm_appendresult{
uint8_t Result;
uint8_t Reason;
} CM_AppendResult;
extern CM_InitTypeDef cmHandler;
The goal here is to build a reliable structure for transmitting data over a USB interface. In the end the CM_Packet should be converted to a uint8_t array and handed to the data transmit register of an MCU (STM32).
In the main.c file I try to initialize the structure, as well as some other stuff related to this packet:
/* MAIN.C */
uint8_t packet[CM_PACKET_SIZE];
int main(void) {
//use the extern defined in packet.h to init the struct
cmHandler.DeviceId = 0x01; //assign device id
CM_Init(&cmHandler); //construct the handler
//rest of stuff
while(1) {
CM_GetPacket(&cmHandler, (uint8_t*)packet);
CDC_Transmit_FS(&packet, CM_PACKET_SIZE);
}
}
And here is the implementation of packet.h, which screws everything up badly. I added packet[CM_PACKET_SIZE] to the watch window, but it looks as if it is just filled with random data. Sometimes, by pure luck, I can see some of the values I am interested in, but that happens maybe 1% of the time!
/* PACKET.C */
CM_InitTypeDef cmHandler;
void CM_Init(CM_InitTypeDef *cm_initer) {
cmHandler.DeviceId = cm_initer->DeviceId;
static CM_Packet cmPacket;
cmPacket.Header.DeviceId = cm_initer->DeviceId;
cmPacket.Header.PacketStart = CM_START;
cmPacket.Footer.PacketEnd = CM_END;
cm_initer->Packet = cmPacket;
}
CM_AppendResult CM_AppendData(CM_InitTypeDef *handler, uint8_t identifier,
uint8_t *data){
CM_AppendResult result;
switch(identifier){
case CM_VALUE_0:
handler->Packet.Data.Value_0_0 = data[0];
handler->Packet.Data.Value_0_1 = data[1];
handler->Packet.Data.Value_0_2 = data[2];
handler->Packet.Data.Value_0_3 = data[3];
break;
//Also cases for CM_VALUE_1, 2 and 3
//to build up the CM_Data struct of CM_Packet
default:
result.Result = CM_APPEND_FAILURE;
result.Reason = CM_APPEND_CASE_ERROR;
return result;
break;
}
result.Result = CM_APPEND_SUCCESS;
result.Reason = 0x00;
return result;
}
void CM_GetPacket(CM_InitTypeDef *handler, uint8_t *packet){
//copy the whole struct in the given buffer and later send it to USB host
memcpy(packet, &handler->Packet, sizeof(CM_PACKET_SIZE));
}
So, the problem is that this code gives me random data 99% of the time. CM_START, the start indicator of the packet, is never set to the value I want, yet most of the time the CM_END byte is correct! I got really confused and cannot find the reason. Working on an embedded platform that is hard to debug, I am kind of lost here...
If you transfer data to another (different) architecture, do not just pass a structure as a blob. That is the road to hell: endianness, alignment, padding bytes, etc. can (and likely will) all cause trouble.
Better to serialize the struct in a conforming way, possibly using some interpreted control stream so you do not have to write every field out manually (but still use standard functions to generate that stream).
Some areas of potential or likely trouble:
CM_Footer: The second field might very well start at a 32- or 64-bit boundary, so the preceding field will be followed by padding. Also, the end of that struct is very likely to be padded by at least 1 byte on a 32-bit architecture to allow for proper alignment when used in an array (the compiler does not care whether you actually need this). It might even be 8-byte aligned.
CM_Data: Here you likely (but it is not guaranteed) get one uint8_t holding the 4*2-bit fields, with their ordering not standardized. That field may be followed by 3 unused bytes, which are required for the uint32_t interpretation of the union.
How do you guarantee the same endianness (for types wider than uint8_t: high byte first or low byte first?) on host and target?
In general, the structs/unions need not have the same layout on host and target. Even if the same compiler is used, the ABIs may differ, etc. Even on the same CPU, there might be other system constraints. Also, for some CPUs, different ABIs (application binary interfaces) exist.
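As a minimal sketch of manual, endianness-explicit serialization (the function names here are illustrative, and little-endian is assumed as the wire order; adapt to whatever your protocol specifies):

static uint8_t *put_u8(uint8_t *p, uint8_t v) {
    *p++ = v;
    return p;
}
static uint8_t *put_u16_le(uint8_t *p, uint16_t v) {
    *p++ = (uint8_t)(v & 0xFF);
    *p++ = (uint8_t)(v >> 8);
    return p;
}
static uint8_t *put_u32_le(uint8_t *p, uint32_t v) {
    *p++ = (uint8_t)(v & 0xFF);
    *p++ = (uint8_t)((v >> 8) & 0xFF);
    *p++ = (uint8_t)((v >> 16) & 0xFF);
    *p++ = (uint8_t)(v >> 24);
    return p;
}

/* fills exactly CM_PACKET_SIZE bytes, independent of struct padding */
void CM_Serialize(const CM_Packet *pkt, uint8_t *buf) {
    uint8_t *p = buf;
    p = put_u8(p, pkt->Header.PacketStart);
    p = put_u8(p, pkt->Header.DeviceId);
    p = put_u8(p, pkt->Header.PacketType);
    p = put_u32_le(p, pkt->Data.Value_0);
    /* ... Value_1, Value_2, Value_3 ... */
    p = put_u16_le(p, pkt->Footer.DataCrc);
    p = put_u8(p, pkt->Footer.PacketEnd);
}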
I'm currently working on a C project where I have to hide text in a .bmp image.
To do that, I open an image and read the file header and the info header into two structures:
typedef struct _BitmapFileHeader_
{
uint16_t bfType_;
uint32_t bfSize_;
uint32_t bfReserved_;
uint32_t bfOffBits_;
}
__attribute__((packed))
BitmapFileHeader;
typedef struct _BitmapInfoHeader_
{
uint32_t biSize_;
int32_t biWidth_;
int32_t biHeight_;
uint16_t biPlanes_;
uint16_t biBitCount_;
uint32_t biCompression_;
uint32_t biSizeImage_;
int32_t biXPelsPerMeter_;
int32_t biYPelsPerMeter_;
uint32_t biClrUsed_;
uint32_t biClrImportant_;
}BitmapInfoHeader;
BitmapFileHeader* bitmap_headerdata = NULL;
BitmapInfoHeader* bitmap_infodata = NULL;
(filename is defined previously)
int readBitmapFile (char* filename, BitmapFileHeader** bitmap_headerdata,
BitmapInfoHeader** bitmap_infodata, unsigned char** bitmap_colordata)
{
FILE* bmp_file;
bmp_file = fopen(filename, "rb");
fseek(bmp_file, 0, SEEK_SET); // Set File Cursor to beginning
fread((*bitmap_headerdata), sizeof(**bitmap_headerdata), 1, bmp_file);
fseek(bmp_file, sizeof(**bitmap_headerdata), SEEK_SET);
fread((*bitmap_infodata), sizeof(**bitmap_infodata), 1, bmp_file);
int checkinfo = sizeof(**bitmap_infodata);
int checkheader = sizeof(**bitmap_headerdata);
printf("Size of Infodata: %d\nSize of Headerdata: %d\n", checkinfo, checkheader);
....
}
When I open a valid bitmap (24-bit, not compressed) and compare the values bfType_, biBitCount_ and biCompression_ against 19778, 24 and 0, it works just fine on Linux, but when I run it on Windows the program stops when it compares biBitCount_ to 24.
When I debugged the program I noticed that all the values in bitmap_infodata are shifted one field up from where they should be (when I look at it like a table).
Then I compared sizeof(**bitmap_headerdata) on Linux and on Windows and noticed that it is 14 on Linux but 16 on Windows.
Shouldn't that be the same? And why does bitmap_headerdata hold the same values on both operating systems while bitmap_infodata differs?
Bernhard
The problem is that structs are padded differently in different environments.
There are two solutions to the problem.
1: Read the header field by field (see the sketch below).
2: Remove the struct padding. The syntax for that varies; some compilers use #pragma pack. You are using __attribute__((__packed__)), which apparently does not work on both platforms.
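For option 1, a sketch of reading the file header field by field, so the in-memory padding no longer matters (this assumes <stdio.h>/<stdint.h> are included and the BitmapFileHeader typedef from the question; readFileHeader and the helpers are illustrative names, and error checking of fread is omitted; BMP files store multi-byte values little-endian):

static uint16_t read_u16_le(FILE *f) {
    uint8_t b[2];
    fread(b, 1, 2, f);
    return (uint16_t)(b[0] | (b[1] << 8));
}
static uint32_t read_u32_le(FILE *f) {
    uint8_t b[4];
    fread(b, 1, 4, f);
    return (uint32_t)b[0] | ((uint32_t)b[1] << 8) | ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}

void readFileHeader(FILE *f, BitmapFileHeader *h) {
    h->bfType_     = read_u16_le(f);
    h->bfSize_     = read_u32_le(f);
    h->bfReserved_ = read_u32_le(f);  /* your struct merges the two 16-bit reserved fields */
    h->bfOffBits_  = read_u32_le(f);
}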
The packed attribute is broken in MinGW GCC:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=52991
You might want to consider using:
#pragma pack(1)
struct BitmapFileHeader {
...
};
#pragma pack()
And use the -mno-ms-bitfields flag.
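Applied to the file header from the question, that would look roughly like this (a sketch; on MinGW, also pass -mno-ms-bitfields when compiling, as noted above). With the padding removed, sizeof(struct BitmapFileHeader) should come out as 14 on both platforms:

#pragma pack(1)
struct BitmapFileHeader {
    uint16_t bfType_;
    uint32_t bfSize_;
    uint32_t bfReserved_;
    uint32_t bfOffBits_;
};
#pragma pack()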