Value of argument in #define C? - c-preprocessor

I'm wondering if there is any way to replace the arguments of a #define macro with the values they actually have at run time in the program. Here's the code:
typedef struct {
    uint8_t b0 : 1;
    uint8_t b1 : 1;
    uint8_t b2 : 1;
    uint8_t b3 : 1;
    uint8_t b4 : 1;
    uint8_t b5 : 1;
    uint8_t b6 : 1;
    uint8_t b7 : 1;
} BIT_FIELD;

#define _PORTD (*(volatile BIT_FIELD *)&PORTD)
#define kupa(s) _PORTD.b##s

void SentByteTo74HC595(uint8_t val) {
    for (int i = 7; i >= 0; i--) {
        DS = kupa(i);
        SHCPpulse();
    }
}
The problem is that kupa expands to _PORTD.bi instead of _PORTD.b0, _PORTD.b1, etc. I tried different combinations of # and ##, but I'm not sure it's even possible to achieve what I want.

No, because the preprocessor (the thing that expands macros) runs before the compiler. What you want is to access the names of a struct's fields at run time, which is impossible: field names are not preserved in the compiled program. C is not a dynamic language and has no reflection.
You can, however, write your loop body as a separate macro and invoke it 8 times. That's not the best solution, though; in your case I'd use bit shifts instead of bit fields. They are easier to support, use, and read.
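Replacing the bit-field with shifts, as suggested, might look like this sketch (DS, SHCPpulse(), and the function body are stand-ins for the question's port and clock-pulse code):

```c
#include <stdint.h>

/* Extract bit i of val with a shift and mask -- no bit-field needed. */
static inline uint8_t bit_of(uint8_t val, int i)
{
    return (uint8_t)((val >> i) & 1u);
}

/* Hypothetical stand-ins for the question's data pin and clock pulse. */
static uint8_t DS;
static void SHCPpulse(void) { }

void SendByteTo74HC595(uint8_t val)
{
    for (int i = 7; i >= 0; i--) {   /* MSB first, as in the question */
        DS = bit_of(val, i);
        SHCPpulse();
    }
}
```

The index i is now an ordinary run-time value, so no preprocessor trickery is needed at all.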

Pre-processor directives are processed and applied prior to compilation. kupa is a pre-processor macro, so by the time i has a value the macro is long gone; the compiler only ever sees _PORTD.bi. So no, you can't do it like this.
What you're trying to do is also not portable: how the bits are stored is compiler-dependent.
You may be better off using bit manipulation, for example defining enums that name each bit:
enum {
    b0 = 0x001,
    b1 = 0x002,
    b2 = 0x004,
    ...
};
You may still need to perform (e.g.) endian conversions on values depending on how you are exchanging them.
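For illustration, testing and setting bits with such masks could look like the following sketch (the helper names get_bit, set_bit, and clr_bit are my own, not from the answer):

```c
#include <stdint.h>

/* One mask per bit position. */
enum {
    b0 = 0x01, b1 = 0x02, b2 = 0x04, b3 = 0x08,
    b4 = 0x10, b5 = 0x20, b6 = 0x40, b7 = 0x80
};

/* Test, set, and clear a bit via its mask. */
static inline int     get_bit(uint8_t reg, uint8_t mask) { return (reg & mask) != 0; }
static inline uint8_t set_bit(uint8_t reg, uint8_t mask) { return (uint8_t)(reg | mask); }
static inline uint8_t clr_bit(uint8_t reg, uint8_t mask) { return (uint8_t)(reg & (uint8_t)~mask); }
```

Unlike bit-fields, the mask positions are fully under your control, so the layout is the same on every compiler.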

typedef struct {
    uint8_t b0 : 1;
    uint8_t b1 : 1;
    uint8_t b2 : 1;
    uint8_t b3 : 1;
    uint8_t b4 : 1;
    uint8_t b5 : 1;
    uint8_t b6 : 1;
    uint8_t b7 : 1;
} BIT_FIELD;

union u_ToSend {
    uint8_t ui8val;
    BIT_FIELD BFval;
};

void SentByteTo74HC595(uint8_t val) {
    union u_ToSend tmp;
    tmp.ui8val = val;
    for (int i = 7; i >= 0; i--) {
        DS = tmp.BFval.b7;
        tmp.ui8val <<= 1;
        SHCPpulse();
    }
}
Is there any other way to write it so that the program consumes less RAM and flash memory?


Function to generate the corresponding mask for a bit field

I have a 32 bit register R with various bit fields declared as follows:
typedef union {
    uint32_t raw;
    struct {
        uint32_t F1 : 0x4;
        uint32_t F2 : 0x8;
        uint32_t F3 : 0x8;
        uint32_t F4 : 0xC;
    };
} reg1;
I also have a regWrite macro that read-modify-writes a field in the register as follows:
#define RegWrite(Reg, Field, Addr, Val) do { \
    Reg.raw = read32(Addr);                  \
    Reg.Field = Val;                         \
    write32(Addr, Reg.raw);                  \
} while(0)
Now, I wanted to enhance the RegWrite module to optionally output a script to console instead of actually programming hardware, so that this can be saved and re-run at a later point of time.
For example, if I call out to regWrite as follows:
regWrite(reg1, F2, 0x12345678, 0xC)
The print output from the macro should look something like this:
set variable1 [read 32 0x12345678]
set variable1 [ ($variable1 & 0xFFFFF00F) | (0xC << 4) ]
write 32 0x12345678 variable1
How would I generate 0xFFFFF00F, and 4 within the macro? Thanks!
Well, your question lacks some important information, including:
What are you trying to achieve?
Why do you need to give just the struct member name as an argument?
This might be an X-Y-problem.
Anyway, from the literal requirement:
Fn(X) should print out 0xY, and Z.
You can do this with a macro:
#include <stdint.h>
#include <stdio.h>

struct F {
    uint32_t F1 : 0x4;
    uint32_t F2 : 0x8;
    uint32_t F3 : 0x8;
    uint32_t F4 : 0xC;
};

#define Fn(Fx) do { \
    union { \
        struct F f; \
        uint32_t u; \
    } v; \
    v.u = 0; \
    v.u = ~v.u; \
    v.f.Fx = 0; \
    uint32_t m = v.u; \
    int b; \
    for (b = 0; (v.u & 1) != 0; b++) { \
        v.u >>= 1; \
    } \
    (void)printf("0x%0X %d\n", m, b); \
} while (0)

int main(void) {
    /* Fn(F2) should print out 0xFFFFF00F, and 4. */
    Fn(F2);
    /* Fn(F3) should print out 0xFFF00FFF, and 12. */
    Fn(F3);
    return 0;
}
Some notes to this hacked "solution":
It uses do { ... } while(0) to make sure that the macro can't be used as an expression, only as a statement.
There is no interpretation of Fx until it is read by the compiler in the line v.f.Fx = 0.
The code is only for C.
Each time it is used it will take clock cycles, and it needs code space. This seems to be unnecessary for constant expressions.
It works by defining a union that can be used as the struct or the resulting uint32_t.
The mask is generated by setting all bits to 1, and then resetting only the given struct member to 0.
The bit offset is obtained by looking for the first 0-bit from the right.
Please be aware that the standard makes no promises about the order of bit-fields in a memory word ("unit"), nor even that they are in the same memory word. For further details see the chapter "Structure and union specifiers" of the version of the standard your compiler complies with.
But if you need the values for other purposes you should think about your architecture, and of course about the possibilities of the C standard. As I said, presumably you're trying to solve an entirely different problem, and for that, the shown source is not the solution.
OK, I found some time to search for a more usable solution.
#include <stdint.h>
#include <stdio.h>

struct F {
    uint32_t F1 : 0x4;
    uint32_t F2 : 0x8;
    uint32_t F3 : 0x8;
    uint32_t F4 : 0xC;
};

typedef union {
    struct F f;
    uint32_t u;
} Fn_type;

uint32_t Fn_mask_helper(Fn_type v) {
    return ~v.u;
}

#define Fn_mask(Fx) Fn_mask_helper((Fn_type){.u=0, .f.Fx=~0})

int Fn_bit_offset_helper(Fn_type v) {
    v.u = ~v.u;
    int b;
    for (b = 0; (v.u & 1) != 0; b++) {
        v.u >>= 1;
    }
    return b;
}

#define Fn_bit_offset(Fx) Fn_bit_offset_helper((Fn_type){.u=0, .f.Fx=~0})

int main(void) {
    uint32_t m2 = Fn_mask(F2);
    int b2 = Fn_bit_offset(F2);
    (void)printf("0x%0X %d\n", m2, b2);
    uint32_t m3 = Fn_mask(F3);
    int b3 = Fn_bit_offset(F3);
    (void)printf("0x%0X %d\n", m3, b3);
    return 0;
}
To access the field (struct member) specified in the argument we need to use a macro. In C we can't use the name of a struct member as an argument on its own. Remember, the C preprocessor knows nothing about C. It is a quite simple search'n'replace tool.
This macro expands to the call to its helper function which takes the union as a parameter. The macro replacement text contains an initialization for this union with all bits on 0 but the bits of the concerned struct member.
The helper functions do the same as the macro in my other answer. In Fn_bit_offset_helper() the inversion of v.u together with the right shift ensures that the loop will not loop forever.
Note: You need a compiler in compliance with at least C99.
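If duplicating each field's offset and width in plain macros is acceptable, the mask and offset become constant expressions with no runtime loop or union at all. A sketch (the F2_SHIFT/F2_WIDTH names and the duplication itself are my addition, not from the question):

```c
#include <stdint.h>

/* Hypothetical: mirror each field's bit offset and width in macros,
   so masks are compile-time constants. F1 occupies bits 0-3, so F2
   starts at bit 4 and is 8 bits wide, matching the struct above.    */
#define F2_SHIFT 4u
#define F2_WIDTH 8u

/* Mask covering the field, and the mask that clears it. */
#define FIELD_MASK(shift, width)  ((uint32_t)(((1u << (width)) - 1u) << (shift)))
#define CLEAR_MASK(shift, width)  ((uint32_t)~FIELD_MASK(shift, width))
```

The trade-off is that the offsets must be kept in sync with the struct by hand, but the values match the question's expected output (0xFFFFF00F and 4 for F2) with zero runtime cost.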

reorder a double type data

I'm trying to write a C function on Windows that takes a double (8 bytes) as input and returns another double with its bytes rearranged: the input is B7...B0 and the output is B0...B7. My compiler gives me errors when mixing int and double operations.
My idea was to mask each of the 8 bytes of the input double with 0xFF, separating the 8 bytes that form the double, then concatenate them in the reverse order to get my reordered double output, but it doesn't work.
The code is the following:
double ordena_lineal(double lineal)
{
    // Receive B7...B0 and return B0...B7
    uint8_t B0, B1, B2, B3, B4, B5, B6, B7;
    double lineal_final;
    B0 = lineal&&0xFF;
    B1 = (lineal>>8)&&0xFF;
    B2 = (lineal>>8*2)&&0xFF;
    B3 = (lineal>>8*3)&&0xFF;
    B4 = (lineal>>8*4)&&0xFF;
    B5 = (lineal>>8*5)&&0xFF;
    B6 = (lineal>>8*6)&&0xFF;
    B7 = (lineal>>8*7)&&0xFF;
    lineal_final = (B7 | (B6 << 8) | (B5 << 8*2) | (B4 << 8*3) | (B3 << 8*4) | (B2 << 8*5) | (B1 << 8*6) | (B0 << 8*7));
    return lineal_final;
}
Bitwise and shift operators cannot be applied to floating-point types (and note that && is the logical AND, not the bitwise &). Also, when you assign the shifted bytes back to lineal_final, you're assigning the value of that expression, not its representation.
You need to use a union to do what you intend.
union double_bytes {
    double d;
    uint8_t b[sizeof(double)];
};

double ordena_lineal(double lineal)
{
    union double_bytes src, dst;
    src.d = lineal;
    for (size_t i = 0; i < sizeof(double); i++) {
        dst.b[sizeof(double) - 1 - i] = src.b[i];
    }
    return dst.d;
}
This allows you to access the object representation of a double and perform aliasing in a standard compliant manner.
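An equally standard alternative is memcpy to a byte array, which avoids the union entirely; a minimal sketch (the function name is mine):

```c
#include <stdint.h>
#include <string.h>

/* Reverse the byte order of a double: copy its object representation
   out with memcpy, reverse the bytes, and copy it back.              */
double reverse_double_bytes(double d)
{
    uint8_t in[sizeof(double)], out[sizeof(double)];
    memcpy(in, &d, sizeof d);
    for (size_t i = 0; i < sizeof d; i++)
        out[sizeof(double) - 1 - i] = in[i];
    memcpy(&d, out, sizeof d);
    return d;
}
```

memcpy on the address of an object is always a valid way to read or write its representation, so this version raises no aliasing questions at all.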

C 40bit byte swap (endian)

I'm reading/writing a binary file in little-endian format from big-endian using C and bswap_{16,32,64} macros from byteswap.h for byte-swapping.
All values are read and written correctly, except a bit-field of 40 bits.
The bswap_40 macro doesn't exist, and I don't know how to write it or whether a better solution is possible.
Here is a small code showing this problem:
#include <stdio.h>
#include <inttypes.h>
#include <byteswap.h>

#define bswap_40(x) bswap_64(x)

struct tIndex {
    uint64_t val_64;
    uint64_t val_40 : 40;
} s1 = { 5294967296, 5294967296 };

int main(void)
{
    // write swapped values
    struct tIndex s2 = { bswap_64(s1.val_64), bswap_40(s1.val_40) };
    FILE *fp = fopen("index.bin", "w");
    fwrite(&s2, sizeof(s2), 1, fp);
    fclose(fp);

    // read swapped values
    struct tIndex s3;
    fp = fopen("index.bin", "r");
    fread(&s3, sizeof(s3), 1, fp);
    fclose(fp);
    s3.val_64 = bswap_64(s3.val_64);
    s3.val_40 = bswap_40(s3.val_40);

    printf("val_64: %" PRIu64 " -> %s\n", s3.val_64, (s1.val_64 == s3.val_64 ? "OK" : "Error"));
    printf("val_40: %" PRIu64 " -> %s\n", s3.val_40, (s1.val_40 == s3.val_40 ? "OK" : "Error"));
    return 0;
}
That code is compiled with:
gcc -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE
swap_40.c -o swap_40
How can I define bswap_40 macro for read and write these values of 40 bits doing byte-swap?
By defining bswap_40 to be the same as bswap_64, you're swapping 8 bytes instead of 5. So if you start with this:
00 00 00 01 02 03 04 05
You end up with this:
05 04 03 02 01 00 00 00
Instead of this:
00 00 00 05 04 03 02 01
The simplest way to handle this is to take the result of bswap_64 and right shift it by 24:
#define bswap_40(x) (bswap_64(x) >> 24)
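A quick way to sanity-check that shift-by-24 trick, using a portable stand-in for bswap_64 (byteswap.h is glibc-specific, so my_bswap_64 below re-implements it):

```c
#include <stdint.h>

/* Portable stand-in for glibc's bswap_64: swap adjacent bytes,
   then 16-bit halves, then 32-bit halves.                       */
static inline uint64_t my_bswap_64(uint64_t x)
{
    x = ((x & 0x00FF00FF00FF00FFull) << 8)  | ((x & 0xFF00FF00FF00FF00ull) >> 8);
    x = ((x & 0x0000FFFF0000FFFFull) << 16) | ((x & 0xFFFF0000FFFF0000ull) >> 16);
    return (x << 32) | (x >> 32);
}

/* Swap the low 5 bytes: full 8-byte swap, then shift out the
   three zero bytes that land at the bottom.                     */
#define my_bswap_40(x) (my_bswap_64(x) >> 24)
```

With 00 00 00 01 02 03 04 05 as input, the full swap gives 05 04 03 02 01 00 00 00, and the right shift by 24 yields the desired 00 00 00 05 04 03 02 01.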
EDIT
I got better performance writing this macro (compared with my initial code, it produced fewer assembly instructions):
#define bswap40(s) \
    ((((s)&0xFF) << 32) | (((s)&0xFF00) << 16) | (((s)&0xFF0000)) | \
     (((s)&0xFF000000) >> 16) | (((s)&0xFF00000000) >> 32))
use:
s3.val_40 = bswap40(s3.val_40);
... but that may just be an optimizer issue; I think they should optimize to the same thing.
Original Post
I love dbush's answer better... I was about to write this:
static inline void bswap40(void *s) {
    uint8_t *bytes = s;
    /* XOR-swap bytes 0<->4 and 1<->3; the middle byte 2 stays put. */
    bytes[0] ^= bytes[4];
    bytes[1] ^= bytes[3];
    bytes[4] ^= bytes[0];
    bytes[3] ^= bytes[1];
    bytes[0] ^= bytes[4];
    bytes[1] ^= bytes[3];
}
It's a destructive inline function for switching the bytes...
I'm reading/writing a binary file in little-endian format from big-endian using C and bswap_{16,32,64} macros from byteswap.h for byte-swapping.
Suggest a different way of approaching this problem: far more often, code needs to read a file in a known endian format and then convert to the code's endianness. This may involve a byte swap, or it may not; the trick is to write code that works under all conditions.
unsigned char file_data[5];
// file data is in big-endian order
fread(file_data, sizeof file_data, 1, fp);
uint64_t y = 0;
for (size_t i = 0; i < sizeof file_data; i++) {
    y <<= 8;
    y |= file_data[i];
}
printf("val_64: %" PRIu64 "\n", y);
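The matching endian-independent write loop can be sketched the same way (the function name is mine):

```c
#include <stdint.h>

/* Serialize the low 5 bytes of v into buf, most significant byte
   first (big-endian), independent of the host's byte order.      */
void write_u40_be(unsigned char buf[5], uint64_t v)
{
    for (int i = 4; i >= 0; i--) {
        buf[i] = (unsigned char)(v & 0xFF);
        v >>= 8;
    }
}
```

Together with the read loop above, this pair round-trips 40-bit values through a big-endian file with no bswap macros and no bit-fields.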
uint64_t val_40 : 40; is not portable. Bit-fields on types other than int, signed int, and unsigned int are not portable and have implementation-defined behavior.
BTW: Open the file in binary mode:
// FILE *fp = fopen("index.bin", "w");
FILE *fp = fopen("index.bin", "wb");

Store C structs for multiple platform use - would this approach work?

Compiler: GNU GCC
Application type: console application
Language: C
Platforms: Win7 and Linux Mint
I wrote a program that I want to run under Win7 and Linux. The program writes C structs to a file and I want to be able to create the file under Win7 and read it back in Linux and vice versa.
By now, I have learned that writing complete structs with fwrite() gives almost 100% assurance that they won't be read back correctly by the other platform. This is due to padding and possibly other causes.
I defined all structs myself and they (now, after my previous question on this forum) all have members of type int32_t, int64_t and char. I am thinking about writing a WriteStructname() function for each struct that will write the individual members as int32_t, int64_t and char to the outputfile. Likewise, a ReadStructname() function to read the individual struct members from the file and copy them to an empty struct again.
Would this approach work? I prefer to have maximum control over my sourcecode, so I'm not looking for libraries or other dependencies to achieve this unless I really have to.
Thanks for reading
Element-wise writing of data to a file is your best approach, since structs will differ due to alignment and packing differences between compilers.
However, even with the approach you're planning on using, there are still potential pitfalls, such as different endianness between systems, or different encoding schemes (ie: two's complement versus one's complement encoding of signed numbers).
If you're going to do this, you should consider something like a JSON parser to encode and decode your data so you don't corrupt it due to the issues mentioned above.
Good luck!
If you use GCC or any other compiler that supports packed structs, then as long as you use only [u]intX_t types in the struct and apply an endianness fix to every field wider than 8 bits, you are platform-safe :)
This is example code that is portable between platforms; do not forget to set UIP_BYTE_ORDER manually for your platform.
#include <stdint.h>
#include <stdio.h>

/* These macros are set manually; you should use some automated detection methodology */
#define UIP_BIG_ENDIAN 1
#define UIP_LITTLE_ENDIAN 2
#define UIP_BYTE_ORDER UIP_LITTLE_ENDIAN

/* Borrowed from uIP */
#ifndef UIP_HTONS
#  if UIP_BYTE_ORDER == UIP_BIG_ENDIAN
#    define UIP_HTONS(n) (n)
#    define UIP_HTONL(n) (n)
#    define UIP_HTONLL(n) (n)
#  else /* UIP_BYTE_ORDER == UIP_BIG_ENDIAN */
#    define UIP_HTONS(n) (uint16_t)((((uint16_t) (n)) << 8) | (((uint16_t) (n)) >> 8))
#    define UIP_HTONL(n) (((uint32_t)UIP_HTONS(n) << 16) | UIP_HTONS((uint32_t)(n) >> 16))
#    define UIP_HTONLL(n) (((uint64_t)UIP_HTONL(n) << 32) | UIP_HTONL((uint64_t)(n) >> 32))
#  endif /* UIP_BYTE_ORDER == UIP_BIG_ENDIAN */
#else
#  error "UIP_HTONS already defined!"
#endif /* UIP_HTONS */

struct __attribute__((__packed__)) s_test
{
    uint32_t a;
    uint8_t b;
    uint64_t c;
    uint16_t d;
    int8_t string[13];
};

struct s_test my_data =
{
    .a = 0xABCDEF09,
    .b = 0xFF,
    .c = 0xDEADBEEFDEADBEEF,
    .d = 0x9876,
    .string = "bla bla bla"
};

void save(void)
{
    FILE *f = fopen("test.bin", "wb+");  /* binary mode */
    /* Fix endianness */
    my_data.a = UIP_HTONL(my_data.a);
    my_data.c = UIP_HTONLL(my_data.c);
    my_data.d = UIP_HTONS(my_data.d);
    fwrite(&my_data, sizeof(my_data), 1, f);
    fclose(f);
}

void read(void)
{
    FILE *f = fopen("test.bin", "rb");   /* binary mode */
    fread(&my_data, sizeof(my_data), 1, f);
    fclose(f);
    /* Fix endianness */
    my_data.a = UIP_HTONL(my_data.a);
    my_data.c = UIP_HTONLL(my_data.c);
    my_data.d = UIP_HTONS(my_data.d);
}

int main(int argc, char **argv)
{
    save();
    return 0;
}
That's the saved file dump:
fanl#fanl-ultrabook:~/workspace-tmp/test3$ hexdump -v -C test.bin
00000000 ab cd ef 09 ff de ad be ef de ad be ef 98 76 62 |..............vb|
00000010 6c 61 20 62 6c 61 20 62 6c 61 00 00 |la bla bla..|
0000001c
This is a good approach. If all fields are integer types of a specific size such as int32_t, int64_t, or char, and you read/write the appropriate number of them to/from arrays, you should be fine.
The one thing you need to watch out for is endianness. Any integer type should be written in a known byte order and read back in the proper byte order for the system in question. The simplest way to do this is with the ntohs and htons functions for 16-bit ints and the ntohl and htonl functions for 32-bit ints. There are no corresponding standard functions for 64-bit ints, but they aren't too difficult to write.
Here's a sample of how you could write these functions for 64 bit:
#include <string.h>  /* memcpy */

uint64_t htonll(uint64_t val)
{
    uint8_t v[8];
    uint64_t result;
    int i;
    for (i = 0; i < 8; i++) {
        v[i] = (uint8_t)(val >> ((7 - i) * 8));
    }
    /* memcpy avoids the strict-aliasing problem of casting v to uint64_t * */
    memcpy(&result, v, sizeof result);
    return result;
}

uint64_t ntohll(uint64_t val)
{
    uint8_t *v = (uint8_t *)&val;  /* reading through a char type is allowed */
    uint64_t result = 0;
    int i;
    for (i = 0; i < 8; i++) {
        result |= (uint64_t)v[i] << ((7 - i) * 8);
    }
    return result;
}

Create BMP header in C (can't limit 2 byte fields)

I'm doing it based on:
https://en.wikipedia.org/wiki/BMP_file_format
I want to create a BMP image from scratch in C.
#include <stdio.h>
#include <stdlib.h>

typedef struct HEADER {
    short FileType;
    int FileSize;
    short R1;
    short R2;
    int dOffset;
} tp_header;

int main(void) {
    FILE *image = fopen("test.bmp", "wb");
    tp_header bHeader;
    bHeader.FileType = 0x4D42;
    bHeader.FileSize = 70;
    bHeader.R1 = 0;
    bHeader.R2 = 0;
    bHeader.dOffset = 54;
    fwrite(&bHeader, sizeof(struct HEADER), 1, image);
    fclose(image);
    return 0;
}
I should be getting at output file:
42 4D 46 00 00 00 00 00 00 00 36 00 00 00
But instead i get:
42 4D 40 00 46 00 00 00 00 00 00 00 36 00 00 00
First off, it should contain only 14 bytes. That "40 00" ruins it all. Is this the proper way of setting up the header in C? How else can I limit the number of bytes output?
A struct might include padding bytes between fields to align the next field to a certain address offset. The values of these padding bytes are indeterminate. A typical layout might look like this:
struct {
    uint8_t  field1;
    uint8_t  <padding>
    uint8_t  <padding>
    uint8_t  <padding>
    uint32_t field2;
    uint16_t field3;
    uint8_t  <padding>
    uint8_t  <padding>
};
<padding> is just added by the compiler; it is not accessible by your program. This is just an example: actual padding may differ and is defined by the ABI for your architecture (CPU/OS/toolchain).
Also, the order in which the bytes of a larger type are stored in memory (endianness) depends on the architecture. Because the file format requires a specific endianness, this may also have to be fixed.
Some - but not all - compilers allow a struct to be declared packed (avoiding padding), but that still does not help with the endianness problem.
The best approach is to serialize the struct properly with shifts, storing into a uint8_t array:
#include <stdint.h>

/** Write a uint16_t to a buffer (little-endian).
 *
 * \returns The next position in the buffer, for chaining.
 */
inline uint8_t *writeUInt16(uint8_t *bp, uint16_t value)
{
    *bp++ = (uint8_t)value;
    *bp++ = (uint8_t)(value >> 8);
    return bp;
}

// similar to writeUInt16(), but for uint32_t.
... writeUInt32( ... )
...

int main(void)
{
    ...
    uint8_t buffer[BUFFER_SIZE], *bptr;

    bptr = buffer;
    bptr = writeUInt16(bptr, 0x4D42U); // FileType
    bptr = writeUInt32(bptr, 70U);     // FileSize
    ...
}
That will fill buffer with the header fields. BUFFER_SIZE has to be set according to the header you want to create. Once all fields are stored, write buffer to the file.
Declaring the functions inline hints to a good compiler that it can create almost optimal code for constant arguments.
Note also that the sizes of short, etc. are not fixed. Use the stdint.h types if you need types of defined size.
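Fleshing out that sketch under the same assumptions (writeUInt32 below is my completion of the elided function, with little-endian output as the BMP format requires):

```c
#include <stdint.h>

/* Little-endian serializers in the style of the answer's sketch.
   writeUInt32 is my completion, not the answer's own code.        */
static uint8_t *writeUInt16(uint8_t *bp, uint16_t value)
{
    *bp++ = (uint8_t)value;           /* low byte first */
    *bp++ = (uint8_t)(value >> 8);
    return bp;
}

static uint8_t *writeUInt32(uint8_t *bp, uint32_t value)
{
    *bp++ = (uint8_t)value;
    *bp++ = (uint8_t)(value >> 8);
    *bp++ = (uint8_t)(value >> 16);
    *bp++ = (uint8_t)(value >> 24);
    return bp;
}
```

Writing FileType 0x4D42 and FileSize 70 this way produces exactly the 42 4D 46 00 00 00 byte sequence the question expects, on any host endianness.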
The problem is that your struct fields are padded for alignment. You ought to declare it like:
#pragma pack(push, 1)
typedef struct HEADER {
    short FileType;
    int FileSize;
    short R1;
    short R2;
    int dOffset;
} tp_header;
#pragma pack(pop)
Just for you to know — the compiler for optimizing reasons by default would lay it out like:
typedef struct HEADER {
    short FileType;
    char empty1; // inserted by compiler
    char empty2; // inserted by compiler
    int FileSize;
    short R1;
    short R2;
    int dOffset;
} tp_header;
But you actually made another error: sizeof(int) is not fixed. Depending on the platform, an int could be 8 bytes (or 2). In cases like this you have to use fixed-width types like int32_t from <stdint.h>.
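Combining both fixes, packing plus fixed-width types, gives the expected 14-byte header (a sketch; even then the in-memory layout matches the on-disk format only on a little-endian host):

```c
#include <stdint.h>

/* Packed BMP file header with fixed-width types. With #pragma pack(1)
   the struct is exactly 14 bytes: 2 + 4 + 2 + 2 + 4.                   */
#pragma pack(push, 1)
typedef struct {
    uint16_t FileType;   /* "BM" = 0x4D42 */
    uint32_t FileSize;
    uint16_t R1;
    uint16_t R2;
    uint32_t dOffset;    /* offset to pixel data, 54 for a plain header */
} bmp_header;
#pragma pack(pop)
```

On a big-endian host you would still need per-field byte swaps (or the serializer from the other answer) before writing this struct to disk.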
