I am trying to write a driver to read the RTC time. In the main code I have written the following. The driver side has the same definitions, and its ioctl handler returns the state of the flag.
On compiling the main code I get the error "invalid operands to binary << (have 'char *' and 'int')" for the RD_RTC_TIME_UPDTD definition.
What is the reason for the error?
main.c:
#define RTC_MAGIC_NO "p"
#define RTC_TIME_UPDTD_CMD 0x1F
#define RD_RTC_TIME_UPDTD _IOR(RTC_MAGIC_NO, RTC_TIME_UPDTD_CMD, int*)
int rtc_time_updtd_sts = 0;
rtcDev = open("/dev/rtc0", O_RDWR);
ret = ioctl(rtcDev, RD_RTC_TIME_UPDTD, (int*)&rtc_time_updtd_sts);
driver: rtc.c
#define RTC_MAGIC_NO "p"
#define RTC_TIME_UPDTD_CMD 0x1F
#define RD_RTC_TIME_UPDTD _IOR(RTC_MAGIC_NO, RTC_TIME_UPDTD_CMD, int*)
int rtc_do_ioctl(unsigned int cmd, unsigned long arg, int kernel)
{
    switch (cmd) {
    case RD_RTC_TIME_UPDTD: /* Read the flag to check if the RTC time is set by the system or not */
    {
        copy_to_user((int *)arg, &rtc_time_updtd, sizeof(rtc_time_updtd));
        break;
    }
    }
    return 0;
}
The name RTC_MAGIC_NO implies it should be a number. The first argument to _IOR should be an 8-bit number, but you defined it as "p", a string literal, which gives you a char *. You probably wanted 'p' instead, which is treated as an int (and fits the 8-bit requirement).
_IOR(RTC_MAGIC_NO, RTC_TIME_UPDTD_CMD, int*)
expands to
// _IOC(dir,type,nr,size)
_IOC(2U,("p"),(0x1F),((sizeof(int*))))
which expands to
(((2U) << _IOC_DIRSHIFT) | \
(("p") << _IOC_TYPESHIFT) | \
((0x1F) << _IOC_NRSHIFT) | \
((sizeof(int*)) << _IOC_SIZESHIFT))
Here you try to left-shift a char * (the "p") by _IOC_TYPESHIFT bits. Shifting a pointer is not a supported operation in C, so change "p" to 'p', like the other real-time clock ioctl macros in linux/rtc.h.
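For reference, a minimal sketch of the corrected userspace definitions (the driver side changes identically); in userspace, _IOR comes from <sys/ioctl.h>:

#include <sys/ioctl.h>

#define RTC_MAGIC_NO 'p'    /* character constant: an int, not a char * */
#define RTC_TIME_UPDTD_CMD 0x1F
#define RD_RTC_TIME_UPDTD _IOR(RTC_MAGIC_NO, RTC_TIME_UPDTD_CMD, int*)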
I have written a C Macro to set/unset Bits in a uint32 variable. Here are the definitions of the macros:
extern uint32_t error_field, error_field2;
#define SET_ERROR_BIT(x) do{\
if(x < 0 || x >63){\
break;\
}\
if(((uint32_t)x)<32U){\
(error_field |= ((uint32_t)1U << ((uint32_t)x)));\
break;\
} else if(((uint32_t)x)<64U){\
(error_field2 |= ((uint32_t)1U<<(((uint32_t)x)-32U)));\
}\
}while(0)
#define RESET_ERROR_BIT(x) do{\
if(((uint32_t)x)<32U){\
(error_field &= ~((uint32_t)1U<<((uint32_t)x)));\
break;\
} else if(((uint32_t)x) < 64U){\
(error_field2 &= ~((uint32_t)1U<<(((uint32_t)x)-32U)));\
}\
} while(0)
I am passing values of an enumeration that looks like this:
enum error_bits {
error_chamber01_data = 0,
error_port21_data,
error_port22_data,
error_port23_data,
error_port24_data,
/*this goes on until 47*/
};
This warning is produced:
left shift count >= width of type [-Wshift-count-overflow]
I am calling the Macros like this:
USART2->CR1 |= USART_CR1_RXNEIE;
SET_ERROR_BIT(error_usart2);
/*error_usart2 is 47 in the enum*/
return -1;
I get this warning with every macro, even with those where the left shift count is < 31.
If I use the body of the macro directly, without the macro, no warning is produced. The behaviour is the same with a 64-bit variable. I am programming an STM32F7 with the AC6 STM32 MCU GCC compiler.
I can't figure out why this happens. Can anyone help me?
This is probably a problem with the compiler not being able to diagnose correctly, as stated by M Oehm. A workaround could be to use the remainder operation instead of the subtraction:
#define _SET_BIT(x, bit) (x) |= 1U<<((bit) % 32U)
#define SET_BIT(x, bit) _SET_BIT(x, (uint32_t)(bit))
#define _SET_ERROR_BIT(x) do{\
if((x)<32U){\
SET_BIT(error_field, x);\
} else if((x)<64U){\
SET_BIT(error_field2, x);\
}\
}while(0)
#define SET_ERROR_BIT(x) _SET_ERROR_BIT((uint32_t)(x))
This way the compiler is finally smart enough to know that the value of x will never reach 32.
The call through the "_" macro forces x to always be a uint32_t, regardless of how the macro is called, avoiding the undefined behaviour of a call with a negative x.
Tested on coliru.
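A small self-contained sketch of the workaround in use (the two globals are borrowed from the question; the expected output follows from the macro logic):

#include <stdint.h>
#include <stdio.h>
uint32_t error_field = 0, error_field2 = 0;
#define _SET_BIT(x, bit) (x) |= 1U<<((bit) % 32U)
#define SET_BIT(x, bit) _SET_BIT(x, (uint32_t)(bit))
#define _SET_ERROR_BIT(x) do{\
    if((x)<32U){\
        SET_BIT(error_field, x);\
    } else if((x)<64U){\
        SET_BIT(error_field2, x);\
    }\
}while(0)
#define SET_ERROR_BIT(x) _SET_ERROR_BIT((uint32_t)(x))
int main(void) {
    SET_ERROR_BIT(3);  /* bit 3 of error_field */
    SET_ERROR_BIT(47); /* bit 47 - 32 = 15 of error_field2 */
    printf("%08X-%08X\n", error_field2, error_field); /* prints 00008000-00000008 */
    return 0;
}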
Problem:
In the macros, you distinguish two cases, which, on their own, are okay. The warning comes from the branch that isn't executed, where the shift is out of range. (Apparently these diagnostics are issued before the dead branch is eliminated, as M Oehm noted.)
Solution:
Ensure the shifts are in the range 0-31 on both paths, regardless of the value and type of x.
x & 31 is a stronger guarantee than x%32 or x%32u: % can produce negative remainders when x < 0 and the type is wide enough.
#define SET_ERROR_BIT(x) do{\
if((x) < 0 || (x) >63){\
break;\
}\
if(((uint32_t)x)<32U){\
(error_field |= ((uint32_t)1U << ( (x)&31 )));\
break;\
} else if(((uint32_t)x)<64U){\
(error_field2 |= ((uint32_t)1U<<( (x)&31 )));\
}\
}while(0)
As a general rule, it is good to put () around each usage of x, as the following example shows.
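A quick illustration of why the parentheses matter (BAD_BIT and GOOD_BIT are invented names for this example):

#include <stdio.h>
#define BAD_BIT(x)  (1U << x)    /* x not parenthesised */
#define GOOD_BIT(x) (1U << (x))
int main(void) {
    unsigned n = 4;
    /* BAD_BIT(n & 1) expands to (1U << n & 1), which groups as ((1U << n) & 1)
       because << binds tighter than &, giving 0 instead of the intended 1 */
    printf("%u %u\n", BAD_BIT(n & 1), GOOD_BIT(n & 1)); /* prints 0 1 */
    return 0;
}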
Following this thread, I want to show a nice (and perhaps cleaner) way to set, reset, and toggle a bit across the two unsigned integers from the question. This code may be slightly off-topic because it uses an x that is an unsigned int (or an int) rather than an enum value.
I've written the code at the end of this answer.
The code receives a number of parameter pairs as input. Each pair is a letter and a number. The letter may be:
S to set a bit
R to reset a bit
T to toggle a bit
The number has to be a bit position from 0 to 63. The macros in the code discard any number greater than 63, and nothing is modified in the variables. Negative values haven't been considered because we assume a bit position is an unsigned value.
For example (if we name the program bitman):
Executing: bitman S 0 S 1 T 7 S 64 T 7 S 2 T 80 R 1 S 63 S 32 R 63 T 62
The output will be:
S 0 00000000-00000001
S 1 00000000-00000003
T 7 00000000-00000083
S 64 00000000-00000083
T 7 00000000-00000003
S 2 00000000-00000007
T 80 00000000-00000007
R 1 00000000-00000005
S 63 80000000-00000005
S 32 80000001-00000005
R 63 00000001-00000005
T 62 40000001-00000005
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>  /* for strtoul */
#include <stdint.h>
#include <string.h>
static uint32_t err1 = 0;
static uint32_t err2 = 0;
#define SET_ERROR_BIT(x) (\
((unsigned)(x)>63)?err1=err1:((x)<32)?\
(err1 |= (1U<<(x))):\
(err2 |= (1U<<((x)-32)))\
)
#define RESET_ERROR_BIT(x) (\
((unsigned)(x)>63)?err1=err1:((x)<32)?\
(err1 &= ~(1U<<(x))):\
(err2 &= ~(1U<<((x)-32)))\
)
#define TOGGLE_ERROR_BIT(x) (\
((unsigned)(x)>63)?err1=err1:((x)<32)?\
(err1 ^= (1U<<(x))):\
(err2 ^= (1U<<((x)-32)))\
)
int main(int argc, char *argv[])
{
int i;
unsigned int x;
for(i=1;i<argc;i+=2) {
x=strtoul(argv[i+1],NULL,0);
switch (argv[i][0]) {
case 'S':
SET_ERROR_BIT(x);
break;
case 'T':
TOGGLE_ERROR_BIT(x);
break;
case 'R':
RESET_ERROR_BIT(x);
break;
default:
break;
}
printf("%c %2d %08X-%08X\n",argv[i][0], x, err2, err1);
}
return 0;
}
The macros are split across multiple lines, but each of them is a single statement.
The main code has no error checking, so if the parameters are not specified correctly the program may exhibit undefined behaviour.
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
int *int_pointer = (int *) malloc(sizeof(int));
// open output file
FILE *outptr = fopen("test_output", "w");
if (outptr == NULL)
{
fprintf(stderr, "Could not create %s.\n", "test_output");
return 1;
}
*int_pointer = 0xabcdef;
fwrite(int_pointer, sizeof(int), 1, outptr);
//clean up
fclose(outptr);
free(int_pointer);
return 0;
}
This is my code, and when I inspect the test_output file with xxd it gives the following output.
$ xxd -c 12 -g 3 test_output
0000000: efcdab 00 ....
I'm expecting it to print abcdef instead of efcdab.
Which book are you reading? There are a number of issues in this code, casting the return value of malloc for example. Most importantly, consider the drawbacks of using an integer type which might vary in size and representation from system to system.
An int is guaranteed the ability to store values in the range -32767 to 32767. Your implementation might allow more values, but to be portable and friendly to people using ancient compilers such as Turbo C (there are a lot of them), you shouldn't use int to store values larger than 32767 (0x7fff), such as 0xabcdef. When such out-of-range conversions are performed, the result is implementation-defined; it could involve saturation, wrapping, trap representations, or the raising of a signal corresponding to a computational error, the latter two of which could cause undefined behaviour later on.
You need to translate to an agreed-upon field format. When sending data over the wire, or writing data to a file to be transferred to other systems, it's important that the protocol for communication be agreed upon. This includes using the same size and representation for integer fields. Both output and input should go through a translation function (serialisation and deserialisation, respectively).
Your fields are binary, and so your file should be opened in binary mode; for example, use fopen(..., "wb") rather than "w". Otherwise, '\n' characters might in some situations be translated to \r\n pairs; Windows systems are notorious for this. Can you imagine what kind of havoc and confusion this could wreak? I can, because I've answered a question about this problem.
Perhaps uint32_t might be a better choice, but I'd choose unsigned long as uint32_t isn't guaranteed to exist. On that note, for systems which don't have htonl (which returns uint32_t according to POSIX), that function could be implemented like so:
/* note: this swaps unconditionally, so it is only correct on little-endian hosts */
uint32_t htonl(uint32_t x) {
    return (x & 0x000000ff) << 24
         | (x & 0x0000ff00) << 8
         | (x & 0x00ff0000) >> 8
         | (x & 0xff000000) >> 24;
}
As an example inspired by the above htonl function, consider these macros:
typedef unsigned long ulong;
#define serialised_long(x) serialised_ulong((ulong) (x))
#define serialised_ulong(x) ((x) & 0xFF000000) / 0x1000000 \
                          , ((x) & 0xFF0000) / 0x10000 \
                          , ((x) & 0xFF00) / 0x100 \
                          , ((x) & 0xFF)
typedef unsigned char uchar;
/* the sign lives in x[0] (the most significant serialised byte); for negative
 * values, undo the two's complement as -(~u & 0xFFFFFFFF) - 1 */
#define deserialised_long(x) ((x)[0] <= 0x7f \
        ? (long) deserialised_ulong(x) \
        : -(long) (~deserialised_ulong(x) & 0xFFFFFFFFUL) - 1)
#define deserialised_ulong(x) ( (x)[0] * 0x1000000UL \
                              + (x)[1] * 0x10000UL \
                              + (x)[2] * 0x100UL \
                              + (x)[3] )
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
FILE *f = fopen("test_output", "wb+");
if (f == NULL)
{
fprintf(stderr, "Could not create %s.\n", "test_output");
return 1;
}
ulong value = 0xABCDEF;
unsigned char datagram[] = { serialised_ulong(value) };
fwrite(datagram, sizeof datagram, 1, f);
printf("%08lX serialised to %02X%02X%02X%02X\n", value, datagram[0], datagram[1], datagram[2], datagram[3]);
rewind(f);
fread(datagram, sizeof datagram, 1, f);
value = deserialised_ulong(datagram);
printf("%02X%02X%02X%02X deserialised to %08lX\n", datagram[0], datagram[1], datagram[2], datagram[3], value);
fclose(f);
return 0;
}
Use htonl()
It converts from whatever the host byte order is (the endianness of your machine) to network byte order, so whatever machine you're running on, you will get the same byte order. These calls exist so that, regardless of the host you're running on, the bytes are sent over the network in the right order, but it works for you too.
See the man pages of htonl and byteorder. There are various conversion functions available, also for different integer sizes, 16-bit, 32-bit, 64-bit ...
#include <stdio.h>
#include <stdlib.h>
#include <arpa/inet.h>
int main(void) {
int *int_pointer = (int *) malloc(sizeof(int));
// open output file
FILE *outptr = fopen("test_output", "wb"); /* binary mode */
if (outptr == NULL) {
fprintf(stderr, "Could not create %s.\n", "test_output");
return 1;
}
*int_pointer = htonl(0xabcdef); // <====== This ensures correct byte order
fwrite(int_pointer, sizeof(int), 1, outptr);
//clean up
fclose(outptr);
free(int_pointer);
return 0;
}
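A companion sketch (not part of the original answer) that reads the value back portably; it assumes a 32-bit int was written, as in the program above:

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>
int main(void) {
    uint32_t raw;
    FILE *inptr = fopen("test_output", "rb");
    if (inptr == NULL) {
        fprintf(stderr, "Could not open %s.\n", "test_output");
        return 1;
    }
    if (fread(&raw, sizeof raw, 1, inptr) != 1) {
        fclose(inptr);
        return 1;
    }
    fclose(inptr);
    printf("0x%x\n", ntohl(raw)); /* ntohl undoes htonl: 0xabcdef on any host */
    return 0;
}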
Compiler: GNU GCC
Application type: console application
Language: C
Platforms: Win7 and Linux Mint
I wrote a program that I want to run under Win7 and Linux. The program writes C structs to a file and I want to be able to create the file under Win7 and read it back in Linux and vice versa.
By now, I have learned that writing complete structs with fwrite() will give almost 100% assurance that it won't be read back correctly by the other platform. This due to padding and maybe other causes.
I defined all structs myself and they (now, after my previous question on this forum) all have members of type int32_t, int64_t and char. I am thinking about writing a WriteStructname() function for each struct that will write the individual members as int32_t, int64_t and char to the output file. Likewise, a ReadStructname() function to read the individual struct members from the file and copy them into an empty struct again.
Would this approach work? I prefer to have maximum control over my sourcecode, so I'm not looking for libraries or other dependencies to achieve this unless I really have to.
Thanks for reading
Element-wise writing of data to a file is your best approach, since structs will differ due to alignment and packing differences between compilers.
However, even with the approach you're planning to use, there are still potential pitfalls, such as different endianness between systems, or different encoding schemes (i.e., two's complement versus one's complement encoding of signed numbers).
If you're going to do this, you should consider something like a JSON parser to encode and decode your data so you don't corrupt it due to the issues mentioned above.
Good luck!
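To make the element-wise approach concrete, here is a sketch of one possible WriteStructname()-style function; the struct layout and the helper names are invented for illustration, and each integer is written most-significant byte first:

#include <stdint.h>
#include <stdio.h>
struct sample {       /* hypothetical struct with fixed-width members */
    int32_t id;
    int64_t stamp;
    char tag[8];
};
static int WriteSample(FILE *f, const struct sample *s)
{
    unsigned char buf[4 + 8 + 8];
    uint32_t u32 = (uint32_t)s->id;
    uint64_t u64 = (uint64_t)s->stamp;
    int i;
    for (i = 0; i < 4; i++)    /* id, big-endian */
        buf[i] = (unsigned char)(u32 >> ((3 - i) * 8));
    for (i = 0; i < 8; i++)    /* stamp, big-endian */
        buf[4 + i] = (unsigned char)(u64 >> ((7 - i) * 8));
    for (i = 0; i < 8; i++)    /* chars are single bytes already */
        buf[12 + i] = (unsigned char)s->tag[i];
    return fwrite(buf, sizeof buf, 1, f) == 1 ? 0 : -1;
}
int main(void)
{
    struct sample s = { 42, 1234567890123LL, "demo" };
    FILE *f = fopen("sample.bin", "wb"); /* binary mode */
    if (f == NULL)
        return 1;
    int rc = WriteSample(f, &s);
    fclose(f);
    return rc == 0 ? 0 : 1;
}

The matching ReadSample() would read the same 20 bytes and reassemble each integer with shifts in the reverse direction.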
If you use GCC or any other compiler that supports "packed" structs, then as long as you restrict yourself to nothing but [u]intX_t types in the struct and apply an endianness fix to every field wider than 8 bits, you are platform-safe :)
This is example code where you get portability between platforms; do not forget to set the endianness UIP_BYTE_ORDER manually.
#include <stdint.h>
#include <stdio.h>
/* These macro are set manually, you should use some automated detection methodology */
#define UIP_BIG_ENDIAN 1
#define UIP_LITTLE_ENDIAN 2
#define UIP_BYTE_ORDER UIP_LITTLE_ENDIAN
/* Borrowed from uIP */
#ifndef UIP_HTONS
# if UIP_BYTE_ORDER == UIP_BIG_ENDIAN
# define UIP_HTONS(n) (n)
# define UIP_HTONL(n) (n)
# define UIP_HTONLL(n) (n)
# else /* UIP_BYTE_ORDER == UIP_BIG_ENDIAN */
# define UIP_HTONS(n) (uint16_t)((((uint16_t) (n)) << 8) | (((uint16_t) (n)) >> 8))
# define UIP_HTONL(n) (((uint32_t)UIP_HTONS(n) << 16) | UIP_HTONS((uint32_t)(n) >> 16))
# define UIP_HTONLL(n) (((uint64_t)UIP_HTONL(n) << 32) | UIP_HTONL((uint64_t)(n) >> 32))
# endif /* UIP_BYTE_ORDER == UIP_BIG_ENDIAN */
#else
#error "UIP_HTONS already defined!"
#endif /* UIP_HTONS */
struct __attribute__((__packed__)) s_test
{
uint32_t a;
uint8_t b;
uint64_t c;
uint16_t d;
int8_t string[13];
};
struct s_test my_data =
{
.a = 0xABCDEF09,
.b = 0xFF,
.c = 0xDEADBEEFDEADBEEFULL,
.d = 0x9876,
.string = "bla bla bla"
};
void save()
{
FILE * f;
f = fopen("test.bin", "w+");
/* Fix endianness */
my_data.a = UIP_HTONL(my_data.a);
my_data.c = UIP_HTONLL(my_data.c);
my_data.d = UIP_HTONS(my_data.d);
fwrite(&my_data, sizeof(my_data), 1, f);
fclose(f);
}
void read()
{
FILE * f;
f = fopen("test.bin", "r");
fread(&my_data, sizeof(my_data), 1, f);
fclose(f);
/* Fix endianness */
my_data.a = UIP_HTONL(my_data.a);
my_data.c = UIP_HTONLL(my_data.c);
my_data.d = UIP_HTONS(my_data.d);
}
int main(int argc, char ** argv)
{
save();
return 0;
}
That's the saved file dump:
fanl#fanl-ultrabook:~/workspace-tmp/test3$ hexdump -v -C test.bin
00000000 ab cd ef 09 ff de ad be ef de ad be ef 98 76 62 |..............vb|
00000010 6c 61 20 62 6c 61 20 62 6c 61 00 00 |la bla bla..|
0000001c
This is a good approach. If all fields are integer types of a specific size, such as int32_t, int64_t, or char, and you read/write the appropriate number of them to/from arrays, you should be fine.
The one thing you need to watch out for is endianness. Any integer type should be written in a known byte order and read back in the proper byte order for the system in question. The simplest way to do this is with the ntohs and htons functions for 16-bit ints and the ntohl and htonl functions for 32-bit ints. There are no corresponding standard functions for 64-bit ints, but they shouldn't be too difficult to write.
Here's a sample of how you could write these functions for 64 bit:
uint64_t htonll(uint64_t val)
{
    uint8_t v[8];
    uint64_t result;
    int i;
    for (i=0; i<8; i++) {
        v[i] = (uint8_t)(val >> ((7-i) * 8));
    }
    /* memcpy (from <string.h>) avoids the strict-aliasing violation of
       reading the byte array through a uint64_t pointer */
    memcpy(&result, v, sizeof result);
    return result;
}
uint64_t ntohll(uint64_t val)
{
uint8_t *v = (uint8_t *)&val;
uint64_t result = 0;
int i;
for (i=0; i<8; i++) {
result |= (uint64_t)v[i] << ((7-i) * 8);
}
return result;
}
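A quick round-trip check, assuming the two functions above are in scope:

#include <stdio.h>
#include <stdint.h>
int main(void)
{
    uint64_t host = 0x1122334455667788ULL;
    uint64_t wire = htonll(host);  /* same byte order on every platform */
    printf("%s\n", ntohll(wire) == host ? "round trip ok" : "mismatch");
    return 0;
}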
In a multithreaded program that I have written, I have some performance problems with very high lock contention.
I have solved this issue by keeping a few flags within a 32-bit unsigned integer.
Currently I just bit-shift the values into a temporary variable and then atomically write it.
But I don't really like having to remember the exact number of bit shifts, or where exactly each flag resides.
So I have been wondering: if I made a union of a uint32_t and a struct of bitflags with the same size, couldn't I access the bitflags through the struct and atomically write them as a uint32_t?
Below is the code showing how I'd like it to work. It does work, but I am unsure whether this is allowed.
#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
typedef struct atomic_flags {
unsigned int flags1 : 2;
unsigned int flags2 : 2;
unsigned int flags3 : 2;
unsigned int flags4 : 2;
unsigned int flags5 : 8;
unsigned int reserved : 16;
}atomic_flags;
union data {
atomic_flags i;
uint32_t q;
} data;
int main() {
union data test1;
union data test2;
test1.i.flags1 = 1;
test1.i.flags2 = 2;
test1.i.flags3 = 3;
test1.i.flags4 = 2;
test1.i.flags5 = 241;
test1.i.reserved = 1337;
printf("%u\n", test1.q);
__atomic_store_n(&test2.q, test1.q, __ATOMIC_SEQ_CST);
printf("test1 flags1: %u\n", test1.i.flags1);
printf("test1 flags2: %u\n", test1.i.flags2);
printf("test1 flags3: %u\n", test1.i.flags3);
printf("test1 flags4: %u\n", test1.i.flags4);
printf("test1 flags5: %u\n", test1.i.flags5);
printf("test1 reserved: %u\n", test1.i.reserved);
printf("test2 flags1: %u\n", test2.i.flags1);
printf("test2 flags2: %u\n", test2.i.flags2);
printf("test2 flags3: %u\n", test2.i.flags3);
printf("test2 flags4: %u\n", test2.i.flags4);
printf("test2 flags5: %u\n", test2.i.flags5);
printf("test2 reserved: %u\n", test2.i.reserved);
}
or maybe this is even possible?
__atomic_store_n(&test2.i.flags1, 2, __ATOMIC_SEQ_CST);
It is implementation-defined: the order, size, and padding of bit-fields are up to the implementation, so the uint32_t you read back through the union can differ between compilers and ABIs.
If you want to make all the masking and shifting easier and reduce the likelihood of mistakes, then a sturdier (but uglier) way would be to enlist the preprocessor to help you out:
/*
* widths of the bitfields; these values can be changed independently of anything
* else, provided that the total number of bits does not exceed 32.
*/
#define FLAG_flag1_BITS 2
#define FLAG_flag2_BITS 2
#define FLAG_flag3_BITS 2
#define FLAG_flag4_BITS 2
#define FLAG_flag5_BITS 8
/* Macro evaluating to the number of bits in the named flag */
#define FLAG_BITS(flagname) (FLAG_ ## flagname ## _BITS)
/*
* Positions of the flags in the overall bitmask; these adapt to the flag widths
* above, but a new macro (with the same pattern) will be needed if a bitfield
* is added.
*/
#define FLAG_flag1_SHIFT 0
#define FLAG_flag2_SHIFT (FLAG_flag1_SHIFT + FLAG_flag1_BITS)
#define FLAG_flag3_SHIFT (FLAG_flag2_SHIFT + FLAG_flag2_BITS)
#define FLAG_flag4_SHIFT (FLAG_flag3_SHIFT + FLAG_flag3_BITS)
#define FLAG_flag5_SHIFT (FLAG_flag4_SHIFT + FLAG_flag4_BITS)
/* Macro evaluating to the position of the named flag in the overall bitfield */
#define FLAG_SHIFT(flagname) (FLAG_ ## flagname ## _SHIFT)
/* evaluates to a bitmask for selecting the named flag's bits from a bitfield */
#define FLAG_MASK(flagname) \
((~(((uint32_t) 0xffffffff) << FLAG_BITS(flagname))) << FLAG_SHIFT(flagname))
/* evaluates to a bitfield having the specified flag set to the specified value */
#define FLAG(flagname, v) (((v) << FLAG_SHIFT(flagname)) & FLAG_MASK(flagname))
/* macro to set the specified flag in the specified bitfield to the specified value */
#define SET_FLAG(flagname, i, v) \
do { i = (i & ~FLAG_MASK(flagname)) | FLAG(flagname, v); } while (0)
/* macro to retrieve the value of the specified flag from the specified bitfield */
#define GET_FLAG(flagname, i) (((i) & FLAG_MASK(flagname)) >> FLAG_SHIFT(flagname))
/* usage example */
uint32_t function(uint32_t bitfield) {
    uint32_t v;
    SET_FLAG(flag2, bitfield, 1);
    v = GET_FLAG(flag5, bitfield);
    return v;
}
Though that involves a prodigious stack of macros, it's mostly driven by the first set, which gives the bitfield widths. Substantially all of it will compile down to the same shift and mask operations that you would use anyway, as the computations will be performed mostly by the preprocessor and/or compiler. Actual usage is very simple.
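To tie this back to the question's lock-free goal, here is a sketch of how one flag could be updated atomically with the same GCC builtins used in the question (the function name is invented, and the FLAG_* macros above are assumed to be in scope):

#include <stdint.h>
static void atomic_set_flag2(uint32_t *shared, uint32_t value)
{
    uint32_t old = __atomic_load_n(shared, __ATOMIC_SEQ_CST);
    uint32_t updated;
    do { /* retry until no other thread changed the word in between */
        updated = (old & ~FLAG_MASK(flag2)) | FLAG(flag2, value);
    } while (!__atomic_compare_exchange_n(shared, &old, updated,
                                          0, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST));
}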
Consider I want to generate parities at compile time. The parity calculation is given literal constants and with any decent optimizer it will boil down to a single constant itself. Now look at the following parity calculation with the C preprocessor:
#define PARITY16(u16) (PARITY8((u16)&0xff) ^ PARITY8((u16)>>8))
#define PARITY8(u8) (PARITY4((u8)&0x0f) ^ PARITY4((u8)>>4))
#define PARITY4(u4) (PARITY2((u4)&0x03) ^ PARITY2((u4)>>2))
#define PARITY2(u2) (PARITY1((u2)&0x01) ^ PARITY1((u2)>>1))
#define PARITY1(u1) (u1)
int message[] = { 0x1234, 0x5678, PARITY16(0x1234^0x5678) };
This will calculate the parity at compile time, but it will produce an enormous amount of intermediate code, expanding to 16 instances of the expression u16, which itself can be an arbitrarily complex expression. The problem is that the C preprocessor can't evaluate intermediate expressions and in the general case only expands text (you can force it to do integer arithmetic in situ, but only for trivial cases, or with gigabytes of #defines).
I have found that the parity for 3 bits can be generated at once by an arithmetic expression: ([0..7]*3+1)/4. This reduces the 16-bit parity to the following macro:
#define PARITY16(u16) ((4 & ((((u16)&7)*3+1) ^ \
                       ((((u16)>>3)&7)*3+1) ^ \
                       ((((u16)>>6)&7)*3+1) ^ \
                       ((((u16)>>9)&7)*3+1) ^ \
                       ((((u16)>>12)&7)*3+1) ^ \
                       ((((u16)>>15)&1)*3+1))) >> 2)
which expands u16 only 6 times. Is there an even cheaper (in terms of number of expansions) way, e.g. a direct formula for a 4-, 5-, etc. bit parity? I couldn't find a solution for a linear expression of the form (x*k+d)/m with acceptable (non-overflowing) values of k, d, m for a range of more than 3 bits. Is anyone out there with a cleverer shortcut for preprocessor parity calculation?
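As a sanity check of the 3-bit formula (this test program is an addition, not part of the question), compare it against a straightforward XOR parity:

#include <stdio.h>
int main(void)
{
    int x;
    for (x = 0; x < 8; x++) {
        int reference = (x ^ (x >> 1) ^ (x >> 2)) & 1; /* XOR of the 3 bits */
        int formula   = ((x * 3 + 1) / 4) & 1;         /* the (x*3+1)/4 trick */
        printf("%d: %d %d%s\n", x, reference, formula,
               reference == formula ? "" : " MISMATCH");
    }
    return 0;
}

Both columns agree for all eight inputs.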
Is something like this what you are looking for?
The following "PARITY16(u16)" preprocessor macro can be used as a literal constant in structure assignments, and it only evaluates the argument once.
/* parity.c
* test code to test out bit-twiddling cleverness
* 2013-05-12: David Cary started.
*/
// works for all 0...0xFFFF
// and only evaluates u16 one time.
#define PARITYodd33(u33) \
( \
((((((((((((((( \
(u33) \
&0x555555555)*5)>>2) \
&0x111111111)*0x11)>>4) \
&0x101010101)*0x101)>>8) \
&0x100010001)*0x10001)>>16) \
&0x100000001)*0x100000001)>>32) \
&1)
#define PARITY16(u16) PARITYodd33(((unsigned long long)u16)*0x20001)
// works for all 0...0xFFFF
// but, alas, generates 16 instances of u16.
#define PARITY_16(u16) (PARITY8((u16)&0xff) ^ PARITY8((u16)>>8))
#define PARITY8(u8) (PARITY4((u8)&0x0f) ^ PARITY4((u8)>>4))
#define PARITY4(u4) (PARITY2((u4)&0x03) ^ PARITY2((u4)>>2))
#define PARITY2(u2) (PARITY1((u2)&0x01) ^ PARITY1((u2)>>1))
#define PARITY1(u1) (u1)
int message1[] = { 0x1234, 0x5678, PARITY16(0x1234^0x5678) };
int message2[] = { 0x1234, 0x5678, PARITY_16(0x1234^0x5678) };
#include <stdio.h>
int main(void){
int errors = 0;
int i=0;
printf(" Testing parity ...\n");
printf(" 0x%x = message with PARITY16\n", message1[2] );
printf(" 0x%x = message with PARITY_16\n", message2[2] );
    for(i=0; i<0x10000; i++){
        int left = PARITY_16(i);
        int right = PARITY16(i);
        if( left != right ){
            printf(" 0x%x: (%d != %d)\n", i, left, right );
            errors++;
        }
    }
printf(" 0x%x errors detected. \n", errors );
} /* vim: set shiftwidth=4 expandtab ignorecase : */
Much like the original code you posted, it pairs up bits and (in effect) calculates the XOR between each pair, then from the results it pairs up the bits again, halving the number of bits each time until only a single parity bit remains.
But is that really what you wanted ?
Many people say they are calculating "the parity" of a message. But in my experience, most of the time they are really generating an error-detection code bigger than a single parity bit: an LRC, a CRC, a Hamming code, etc.
further details
If the current system is compiling in a reasonable amount of time, and it's giving the correct answers, I would leave it alone. Refactoring how the pre-processor generates some constant will produce a bit-for-bit identical runtime executable. I'd rather have easy-to-read source even if it takes a full second longer to compile.
Many people use a language easier to read than the standard C preprocessor to generate C source code. See pycrc, the character set extractor, "using Python to generate C", etc.
If the current system is taking way too long to compile, rather than tweak the C preprocessor, I would be tempted to put that message, including the parity, in a separate ".h" file with hard-coded constants (rather than force the C pre-processor to calculate them every time), and #include that ".h" file in the ".c" file for the embedded system. Then I would make a completely separate program (perhaps in C or Python) that does the parity calculations and prints out the contents of that ".h" file as pre-calculated C source code, something like
print("int message[] = { 0x%x, 0x%x, 0x%x };\n",
M[0], M[1], parity( M[0]^M[1] ) );
and tweak my MAKEFILE to run that Python (or whatever) program to regenerate that ".h" file
if, and only if, it is necessary.
As mfontanini says, an inline function is much better.
If you insist on a macro, you can define a temporary variable.
With gcc, you can do that and still have the macro behave as an expression (using a statement expression):
#define PARITY(x) ({ int tmp = (x); PARITY16(tmp); })
If you want to stick to the standard, you have to make the macro a statement:
#define PARITY(x, target) do { int tmp = (x); target = PARITY16(tmp); } while(0)
In both cases, you can get ugly bugs if tmp happens to be a name already used in the function (or, even worse, used within the argument passed to the macro).
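For reference, a sketch of the inline-function alternative mentioned above; the argument is evaluated exactly once, and with literal arguments any reasonable optimizer folds it to a constant (though, unlike the macro, it cannot appear in a static initializer in C):

#include <stdint.h>
static inline unsigned parity16(uint16_t x)
{
    x ^= x >> 8; /* fold the high byte onto the low byte */
    x ^= x >> 4; /* keep folding until bit 0 holds the parity */
    x ^= x >> 2;
    x ^= x >> 1;
    return x & 1u;
}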