What is a good way to express the semantics "this function is always going to return a constant value" in C?
I'm thinking of functions that use inline assembly to read read-only registers, possibly with shifts and/or masks applied to them. Clearly, at run time, the function's return value isn't going to change, so the compiler could potentially avoid inlining or calling the function every time and instead reuse the value from the first call in a given scope.
const int that_const_value()
{
return (ro_register >> 16) & 0xff;
}
I could store the value and reuse it. But there could be indirect calls to this function, say, through other macro expansions.
#define that_bit() (that_const_value() & 0x1)
#define other_bit() (that_const_value() & 0x2)
...
if (that_bit()) {
...
}
...
if (other_bit()) {
...
}
Defining the original function as const doesn't seem to cut it, or at least it didn't in the examples I tried.
I am not 100 percent sure that I understand your question correctly, but are you looking for a solution like this:
#define that_const_value ((ro_register >> 16) & 0xff)
#define that_bit (that_const_value & 0x1)
#define other_bit (that_const_value & 0x2)
This would just 'replace' everything at preprocessing time, so you can do:
if(that_bit)
{
//Do That
}
if(other_bit)
{
//Do Other
}
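If you want to keep a real function rather than a macro, a nonstandard alternative is a function attribute. Here is a minimal sketch assuming GCC or Clang, with ro_register standing for whatever read-only register access the question has in mind:
/* __attribute__((const)) promises the compiler that the function has no side
   effects and that its result never changes, so repeated calls may be folded
   into a single one.  Note this is different from writing `const int` as the
   return type, which only qualifies the returned value and carries no such
   meaning. */
__attribute__((const)) int that_const_value(void)
{
    return (ro_register >> 16) & 0xff;
}
With that in place, that_bit() and other_bit() can stay as ordinary macros wrapping the function call.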
Related
I need a macro value check at design time (preprocessor), more specifically that the number fits in 24 bits.
The macro is intended to be used in an if() statement, so I have no idea how to test it.
This is an ARM SysTick timer (24 bits), and many times I forgot to #define the right value, especially when changing the MCU clock; of course my if() never fired, and this silly mistake was hard to debug.
So in this example, is there a trick to force gcc to ERROR out when PARAMETER needs more than 24 bits?
#define PARAMETER 20000000 // over 24 bits, should throw an error at design time
#define MyMacro(var, par) (var > par)
uint32_t variable;
if(MyMacro(variable,PARAMETER))
{
// do something
// do something WRONG because PARAMETER > 24 bits
// Actually this works as expected; the test with < is valid because
// _Static_assert() checks for a TRUE condition.
// But I am still trying to find a way to combine this into the original macro.
_Static_assert(PARAMETER < 0xFFFFFF, "Ooopss... ERROR");
}
Thanks in advance!
Unfortunately, _Static_assert is syntactically defined as a declaration, which means you can't use it directly inside of an expression.
However, _Static_assert isn't needed anyway, because you can emulate it perfectly (sans the nice compile-time error reporting, but you're a programmer, you should be able to figure out a compile-time failure from a slightly more technical error message) with
#define static_assert_0expr(Truth) ((int)(0*sizeof(struct { int _ : (Truth)?1:-1; })))
(or an equivalent) and that you can fit in an expression (even an integer constant expression) no problem:
#define static_assert_0expr(Truth) ((int)(0*sizeof(struct { int _ : (Truth)?1:-1; })))
#define PARAMETER 20000000 // over 24 bits, should throw an error at design time
#define MyMacro(var, par) (static_assert_0expr((par)<0xffffff) + ((var) > (par)))
//or this, but it won't preserve integer constant expressions because of the comma operator:
/*#define MyMacro(var, par) (static_assert_0expr((par)<0xffffff), ((var) > (par)))*/
//alternatively: (static_assert_0expr(assertion) ? (expr) : (expr)) is the most
//general form (though it leads to larger preprocessor expansions, which may worsen debugging experience with cc -E)
#include <stdint.h>
int main()
{
static_assert_0expr(1)+1;
uint32_t variable;
if(MyMacro(variable,PARAMETER))
{
}
}
The above static_assert_0expr macro could also be implemented with _Static_assert:
#define static_assert_0expr(Truth) \
((int)(0*sizeof(struct { int _; _Static_assert(Truth,""); })))
or you could paste the body of this directly in MyMacro and customize the message (but I consider _Static_assert and its custom compile-time error message feature an unnecessary addition to C and prefer not to use it).
Well, I don't want to answer my own question, but I think I found a solution that works (thanks #PSkoicik), thanks to GCC allowing statement expressions (found in this reply):
Using and returning output in C macro
So basically I can use _Static_assert() inside an if() statement, with a helper macro:
#define CheckParameter(val) ({bool retval = true; _Static_assert((val)< 0xFFFFFF, "Timer value too large!"); retval;})
Now my macro becomes
#define MyMacro(var, par) (((var) > (par)) && CheckParameter(par))
This should work because CheckParameter() will always return TRUE at RUN time, but at COMPILE time _Static_assert() will catch my bad parameter.
So now I can use
if(MyMacro(variable,PARAMETER))
{
// PARAMETER will be in range
}
Hope I'm not missing something :)
If you need to check at compile time whether PARAMETER needs more than 24 bits, you can simply do this:
#define PARAMETER 20000000 // over 24 bits, should throw an error at design time
...
#if PARAMETER > 0xFFFFFF
#error PARAMETER does not fit in 24 bits
#endif
What you do here is not compile time checking but run time checking:
if(MyMacro(variable,PARAMETER))
{
// do something
// do something WRONG because PARAMETER > 24 bits
}
but what is variable doing here anyway if you just want to know if PARAMETER is > 24 bits?
I am trying to build a macro that runs a piece of code only once.
Very useful, for example, if you have code in a loop and want something inside to happen only once. The easy-to-use method:
static int checksum;
for( ; ; )
{
if(checksum == 0) { checksum = 1; /* ... */ }
}
But it is a bit wasteful and confusing. So I have these macros that check individual bits instead of checking the true/false state of a variable:
#define CHECKSUM(d) static d checksum_boolean
#define CHECKSUM_IF(x) if( ~(checksum_boolean >> x) & 1) \
{ \
checksum_boolean |= 1 << x;
#define CHECKSUM_END }1
The 1 at the end is to force the user to put a semicolon at the end. My compiler allows this.
The problem is figuring out how to do this without having the user specify x (the bit to be checked).
So he can use this:
CHECKSUM(char); // 7 run-once codes can be used
for( ; ; )
{
CHECKSUM_IF
// code..
CHECKSUM_END;
}
Any ideas how I can achieve this?
I guess you're saying you want the macro to somehow automatically track which bit of your bitmask contains the flag for the current test. You could do it like this:
#define CHECKSUM(d) static d checksum_boolean; \
d checksum_mask
#define CHECKSUM_START do { checksum_mask = 1; } while (0)
#define CHECKSUM_IF do { \
if (!(checksum_boolean & checksum_mask)) { \
checksum_boolean |= checksum_mask;
#define CHECKSUM_END \
} \
checksum_mask <<= 1; \
} while (0)
#define CHECKSUM_RESET(i) do { checksum_boolean &= ~((uintmax_t) 1 << (i)); } while (0)
Which you might use like this:
CHECKSUM(char); // 7 run-once codes can be used
for( ; ; )
{
CHECKSUM_START;
CHECKSUM_IF
// code..
CHECKSUM_END;
CHECKSUM_IF
// other code..
CHECKSUM_END;
}
Note, however, that that has severe limitations:
The CHECKSUM_START macro and all the corresponding CHECKSUM_IF macros must all appear in the same scope
Control must always pass through CHECKSUM_START before any of the CHECKSUM_IF blocks
Control must always reach the CHECKSUM_IF blocks in the same order. It may only skip a CHECKSUM_IF block if it also skips all subsequent ones that use the same checksum bitmask.
Those constraints arise because the preprocessor cannot count.
To put it another way, barring macro redefinitions, a macro without any arguments always expands to exactly the same text. Therefore, if you don't use a macro argument to indicate which flag bit applies in each case then that needs to be tracked at run time.
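That said, if another nonstandard feature is acceptable: GCC, Clang and MSVC all provide a __COUNTER__ macro that expands to a fresh integer on every use, which can give each expansion its own static flag so the user never has to pass a bit index. Here is a minimal sketch (the RUN_ONCE name is made up; it trades the single bitmask for one static flag per use, and each use must sit directly inside a braced block, since it expands to a declaration followed by an if):
#define RUN_ONCE_IMPL(id) \
    static int run_once_flag_##id = 0; \
    if (!run_once_flag_##id && (run_once_flag_##id = 1))
#define RUN_ONCE_EXPAND(id) RUN_ONCE_IMPL(id)  /* extra level so __COUNTER__ expands before ## pastes it */
#define RUN_ONCE RUN_ONCE_EXPAND(__COUNTER__)

for( ; ; )
{
    RUN_ONCE
    {
        // this code runs only on the first pass through the loop
    }
    RUN_ONCE
    {
        // and this runs only once as well, tracked by its own flag
    }
}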
I am writing C code (not C++) for a target with very limited ROM, but I want the code to be easy to customize for other similar targets with #defines. I have #defines used to specify the address and other values of the device, but as a code-saving technique, these values need to be bitwise reversed. I could enter them by manually reversing them first, but that would be confusing for future use. Can I define some sort of macro that performs a bitwise reversal?
As seen here (Best Algorithm for Bit Reversal ( from MSB->LSB to LSB->MSB) in C), there is no single operation to switch the order in C. Because of this, if you were to create a #define macro to perform the operation, it would actually perform quite a bit of work on each use (as well as significantly increase the size of your binary if used often). I would recommend manually creating the reverse-ordered constants and using clear documentation to ensure the information about them is not lost.
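For example, the pairing could be documented right next to the values (the names and numbers below are made up purely for illustration):
#define DEV_ADDR      0x35u   /* 0011 0101, value as given in the datasheet    */
#define DEV_ADDR_REV  0xACu   /* DEV_ADDR bit-reversed (MSB<->LSB): 1010 1100  */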
I think something like this ought to work:
#define REV2(x) ((((x)&1)<<1) | (((x)>>1)&1))
#define REV4(x) ((REV2(x)<<2) | (REV2((x)>>2)))
#define REV8(x) ((REV4(x)<<4) | (REV4((x)>>4)))
#define REV16(x) ((REV8(x)<<8) | (REV8((x)>>8)))
#define REV32(x) ((REV16(x)<<16) | (REV16((x)>>16)))
It uses only simple operations which are all safe for constant expressions, and it's very likely that the compiler will evaluate these at compile time.
You can ensure that they're evaluated at compile time by using them in a context which requires a constant expression. For example, you could initialize a static variable or declare an enum:
enum {
VAL_A = SOME_NUMBER,
LAV_A = REV32(VAL_A),
};
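As a quick sanity check of the macros (again using an enum so the evaluation must happen at compile time), the 8-bit variant reverses a sample value as expected:
enum {
    REV8_CHECK = REV8(0xB4u)   /* 0xB4 = 1011 0100  ->  0010 1101 = 0x2D */
};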
For the sake of readable code I'd not recommend it, but you could do something like
#define NUMBER 2
#define BIT_0(number_) ((number_ & (1<<0)) >> 0)
#define BIT_1(number_) ((number_ & (1<<1)) >> 1)
#define REVERSE_BITS(number_) ((BIT_1(number_) << 0) + (BIT_0(number_) << 1))
#include <stdio.h>
int main() {
    printf("%d --> %d\n", NUMBER, REVERSE_BITS(NUMBER));
}
There are techniques for this kind of operation (see the Boost Preprocessor library, for example), but most of the time the easiest solution is to use an external preprocessor written in some language in which bit manipulation is easier.
For example, here is a little python script which will replace all instances of #REV(xxxx)# where xxxx is a hexadecimal string with the bit-reversed constant of the same length:
#!/bin/python
import re
import sys
reg = re.compile(r"""#REV\(([0-9a-fA-F]+)\)#""")
def revbits(s):
return "0X%x" % int(bin(int(s, base=16))[-1:1:-1].ljust(4*len(s), '0'), base=2)
for l in sys.stdin:
sys.stdout.write(reg.sub(lambda m: revbits(m.group(1)), l))
And here is a version in awk:
awk 'BEGIN{R["0"]="0";R["1"]="8";R["2"]="4";R["3"]="C";
R["4"]="2";R["5"]="A";R["6"]="6";R["7"]="E";
R["8"]="1";R["9"]="9";R["A"]="5";R["B"]="D";
R["C"]="3";R["D"]="B";R["E"]="7";R["F"]="F";
R["a"]="5";R["b"]="D";R["c"]="3";R["d"]="B";
R["e"]="7";R["f"]="F";}
function bitrev(x, i, r) {
r = ""
for (i = length(x); i; --i)
r = r R[substr(x,i,1)]
return r
}
{while (match($0, /#REV\([[:xdigit:]]+\)#/))
$0 = substr($0, 1, RSTART-1) "0X" bitrev(substr($0, RSTART+5, RLENGTH-7)) substr($0, RSTART+RLENGTH)
}1' \
<<<"foo #REV(23)# yy #REV(9)# #REV(DEADBEEF)#"
foo 0XC4 yy 0X9 0XF77DB57B
Possible Duplicate:
#include anywhere
For the respective languages, is the following valid (acceptable programming practice):
#include "SomeHeader.h"
#include "HeaderDefs.h" //Includes #defines (like CONST_VAR)
void Function1(){;}
void Function2(){;} //etc
//Additionally:
void Function3(){while(1){
#include "Files.h"
;
}} //Result?
#include "HeaderUndefs.h" //Includes #undef (like undef CONST_VAR)
Expanding on a comment below:
I expanded the question. Bear in mind it's about 'validity' and not 'will it compile'. I am sure it can compile, but Stack Overflow offers insights (like Daniel Wagner's) that invite further exploration. Can I have a while loop of #includes? (Or does this break convention... are #includes anywhere valid as well?)
Yep, #include directives can go just about anywhere. Even inside functions, or for loops, or anything. The preprocessor that fills that stuff in doesn't know anything about the C (or C++) language; it's totally dumb.
It is valid, but probably has no purpose for CPP files (which I assume yours is, since you have function bodies).
CPP files are not intended to be included in other files, so undefining macros via #include "HeaderUndefs.h" at the end of a CPP file won't have any visible effect anywhere else. If, however, "HeaderUndefs.h" does something that is meaningful as part of a CPP file (such as defining functions), it may make sense. This is usually horrible for maintainability, but it can be done...
The code you presented is not valid and will not compile. A #include directive, like any other preprocessor directive, must be on a line by itself. There can be arbitrary whitespace (including comments) before the #, between the # and the include, etc., but there can't be any other code (or other directives) on the same line. EDIT: This applied to the original version of the question; it was edited while I was writing this.
If your code is modified so the #include is on a line by itself, it's potentially valid (depending on the contents of the file you're including). But it's not at all a useful thing to do. In particular, #include directives are processed at compile time; putting a #include directive inside a loop does not mean that the file will be included multiple times.
It might make sense to put a chunk of code in a separate file and #include it inside a loop; for example, you might choose which file to include based on some configuration option. But it's an extremely ugly way to structure your code, and whatever you're trying to accomplish, there's almost certainly a cleaner way to do it. (For example, the included file might define a macro that you can invoke inside the loop; you could then have the #include directive at the top of your source file.)
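A rough sketch of that last suggestion (the file and macro names are made up for illustration): the included header defines a macro once, and the loop merely invokes it.
/* loop_body.h -- defines the per-iteration work as a macro */
#define LOOP_BODY(i)  do { /* work for iteration (i) goes here */ } while (0)

/* main.c */
#include "loop_body.h"   /* included once, at the top of the file */

int main(void)
{
    for (int i = 0; i < 10; i++) {
        LOOP_BODY(i);    /* the macro expands here; no #include inside the loop */
    }
    return 0;
}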
Think of the preprocessor as a power tool with no safety features. Use it with discipline, and you can do useful things with it. Start using it in "clever" ways, and you can lose limbs.
Typically, included files are headers, which contain prototypes, definitions, types, and other data that must be declared/defined BEFORE the code in the file itself, which is why they are at the top. It is rare to have an included file that contains information that is more useful in another place in the file.
Undefs are possible, but seem like they would cause more problems, since the only possible purpose would be to put them in the middle of a compilation unit, which would just be confusing.
A more common usage is .inl files, which are "inline" files, which work like very large macros. The only one I use is a "BitField.inl" I made, which makes a handy bitfield class along with a ToString(...) member, which is used as follows:
#define BITNAME State
#define BITTYPES SEPERATOR(Alabama) SEPERATOR(Alaska) SEPERATOR(Arazona) SEPERATOR(Arkansas) SEPERATOR(California) \
SEPERATOR(Colorado) SEPERATOR(Connecticut) SEPERATOR(Delaware) SEPERATOR(Florida) SEPERATOR(Georga) \
SEPERATOR(Hawaii) SEPERATOR(Idaho) SEPERATOR(Illinois) SEPERATOR(Indiana) SEPERATOR(Iowa) \
SEPERATOR(Kansas) SEPERATOR(Kentucky) SEPERATOR(Louisiana) SEPERATOR(Maine) SEPERATOR(Maryland) \
SEPERATOR(Massachusettes) SEPERATOR(Michigan) SEPERATOR(Minnesota) SEPERATOR(Mississippi) SEPERATOR(Missouri) \
SEPERATOR(Montana) SEPERATOR(Nebraska) SEPERATOR(Nevada) SEPERATOR(NewHampshire) SEPERATOR(NewJersey) \
SEPERATOR(NewMexico) SEPERATOR(NewYork) SEPERATOR(NorthCarolina) SEPERATOR(NorthDakota) SEPERATOR(Ohio) \
SEPERATOR(Oklahoma) SEPERATOR(Oregon) SEPERATOR(Pennsylvania) SEPERATOR(RhodeIsland) SEPERATOR(SouthCarolina) \
SEPERATOR(SouthDakota) SEPERATOR(Tennessee) SEPERATOR(Texas) SEPERATOR(Utah) SEPERATOR(Vermont) \
SEPERATOR(Virginia) SEPERATOR(Washington) SEPERATOR(WestVerginia) SEPERATOR(Wisconsin) SEPERATOR(Wyoming)
#include "BitField.inl" // WOO MAGIC!
int main() {
StateBitfield States;
States.BitField = 0; // sets all values to zero;
States.Alaska = 1; // activates Alaska;
std::cout << "States.Bitfield=" << (int)States.BitField << std::endl;
//this is machine dependent.
States.BitField |= (StateBitfield::WashingtonFlag | StateBitfield::IdahoFlag);
// enables two more
std::cout << "CaliforniaFlag=" << (States.BitField & StateBitfield::CaliforniaFlag) << '\n';
// 0, false.
std::cout << "sizeof(colorBitField)=" << sizeof(colorBitfield) << std::endl;
// 4, since BITTYPE wasn't defined
States.BitField = (StateBitfield::AlaskaFlag | StateBitfield::MinnesotaFlag | StateBitfield::FloridaFlag | StateBitfield::NorthDakotaFlag |
StateBitfield::SouthDakotaFlag | StateBitfield::CaliforniaFlag | StateBitfield::OregonFlag| StateBitfield::NevadaFlag |
StateBitfield::IdahoFlag | StateBitfield::MichiganFlag | StateBitfield::OregonFlag| StateBitfield::NevadaFlag);
// sets the states I've been to
//for each state, display if I've been there
for(unsigned int i=0; i<50; i++) {
//This is showing what is enabled
if (States.BitField & (1LL << i))
std::cout << StateBitfield::ToString((StateBitfield::StateBitNum) i) << '\n';
}
//Shows the states that were flagged
std::cout << States.Alaska << States.Minnesota << States.Florida << States.NorthDakota << States.SouthDakota << States.California <<
States.Oregon << States.Nevada << States.Idaho << States.Michigan << States.Oregon << States.Nevada << std::endl;
//displays 111111111111 (I think I had this in for debugging.)
States.BitField &= StateBitfield::NoFlags;
//set all to zero
States.BitField |= StateBitfield::AllFlags;
//set all to one
}
You can put an #include directive just about anywhere in the code; that doesn't mean it's a good idea.
I remember having to maintain some code where the author put random snippets of code into header files and #included them where necessary (these weren't self-contained functions, they were just blocks of statements). This made code that was already badly written and hard to follow that much worse.
There's a temptation to use the preprocessor for really sophisticated tasks; resist that temptation.
I think this is valid as long as you don't use any variable or function from HeaderUndefs.h before the #include "HeaderUndefs.h" line.
#include is a preprocessor directive. Such directives mean nothing to the actual language and are valid anywhere. To the compiler, those physical files are inserted in-line with the rest of the source.
I have to write a macro that gets some variable as a parameter and, for every two sequential bits with the value "1", replaces them with 0 bits.
For example: 10110100 will become 10000100.
And, 11110000->00000000
11100000->10000000
I'm having trouble writing that macro. I've tried to write a macro that gets each bit and replaces it if the next bit is the same (and they are both 1), but it only works for 8 bits and it's not very friendly...
P.S. I need a macro because I'm learning C and this is an exercise I found and couldn't solve myself. I know I can use a function to do it easily... but I want to know how to do it with macros.
Thanks!
#define foo(x,i) ((((x) & (3<<(i))) == (3<<(i))) ? ((x) - (3<<(i))) : (x))
#define clear_11(x) foo(foo(foo(foo(foo(foo(foo(foo(foo(x,8),7),6),5),4),3),2),1),0)
This will do the job. However the expansion is quite big and compilation may take a while. So do not try this at work ;)
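As a quick sanity check (an enum initializer forces the whole expansion to be evaluated at compile time), the example value from the question comes out as expected:
enum {
    CLEAR_11_CHECK = clear_11(0xB4)   /* 10110100 -> 10000100 = 0x84 */
};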
#define clear_bit_pairs(_x) ((_x) & ~((((_x)&((_x)>>1))<<1) | ((_x)&((_x)>>1))))
#define clear_bit_pairs(_x) ((_x) ^ ((((_x)&((_x)>>1))<<1) | ((_x)&((_x)>>1))) )
This will work, but it does not pair bits up. If it finds consecutive '1' bits it will just erase them all; for example 11100000 will become 00000000 because the first three 1s are consecutive.
#define foo(x) ({ \
    typeof(x) _y_ = (x); \
    for(unsigned _i_ = 0; _i_ + 1 < sizeof(x) * 8; _i_++) { \
        if(((_y_ >> _i_) & 3) == 3) { \
            _y_ &= ~((typeof(x))3 << _i_); \
        } \
    } \
    _y_; \
})
This probably only works in GCC, since it uses statement expressions. I haven't tested it, so it probably doesn't work at all. It is your job to make it work. :-)
The nice thing about this is that it will work with any integral type. It also doesn't rely on any external functions. The downside is that it is not portable. (And I realize that this is sort of cheating.)