This question already has an answer here:
What does [ N ... M ] mean in C aggregate initializers?
(1 answer)
Closed 5 years ago.
What does this line mean:
[0 ... 255] = &&default_label
in the definition:
static const void *jumptable[256] = {
[0 ... 255] = &&default_label,
/* Now overwrite non-defaults ... */
/* 32 bit ALU operations */
[BPF_ALU | BPF_ADD | BPF_X] = &&ALU_ADD_X,
…
};
http://lxr.devzen.net/source/xref/linux-4.8.15/kernel/bpf/core.c#473
The [0 ... 255] notation is a GCC extension to the designated initializer syntax (that is desperately needed in standard C). It sets elements 0 through 255 of the array (of void * values) to the address of the label default_label (another GCC syntax extension, but this is one that is not desperately needed in standard C).

I have the following definition in C:
#define NUM_OF_CHANNELS 8
I want to refer to this definition and use it also for shift operations, such as
a = b >> 3
The value 3 comes from log2(8) = 3.
So I wish there was something like
#define NUM_OF_CHANNELS_SHIFT LOG2(NUM_OF_CHANNELS)
a = b >> NUM_OF_CHANNELS_SHIFT
But obviously the above definition doesn't work. Is there a nifty way to get this accomplished?
Most commonly, you would just do the defines the other way around:
#define NUM_OF_CHANNELS_SHIFT 3
#define NUM_OF_CHANNELS (1 << NUM_OF_CHANNELS_SHIFT)
This forces you to keep the number of channels a power of two.
Answered by @EricPostpischil in a comment:
If b is known to be nonnegative, simply use b / NUM_OF_CHANNELS. Any
decent compiler will optimize it to a shift.
The compiler will translate the following C code into assembly that performs a right shift by 3 bits.
#define NUM_OF_CHANNELS 8
a = b / NUM_OF_CHANNELS;
This question already has answers here:
C macros and use of arguments in parentheses
(2 answers)
Closed 3 years ago.
I have a question about C behavior I don't understand...
#include <stdint.h>
#include <math.h>

#define KB 1024
#define FOUR_KB 4*KB

int main(void) {
    uint32_t a;
    uint32_t b = 24576;
    a = ceil((float)b/FOUR_KB);
    //want to get the number of 4K transfers
    //(and possibly a last transfer that is less than 4K)
    return 0;
}
At this point I would expect a to be 6, but the result I get is 96.
Any explanation for this?
Multiplication and division have the same precedence and are evaluated left to right, so the macro expands differently from what you expect:
24576 ÷ (4 × 1024) = 6 // what you expect
24576 ÷ 4 × 1024 = 6291456 // what the compiler computes, i.e. 6144 * 1024
You should parenthesize the #define like this:
#define FOUR_KB (4*KB)
This question already has answers here:
Macro vs Function in C
(11 answers)
Closed 6 years ago.
Please explain the output: I am getting 20, but it should be 64 if I am not wrong.
#include <stdio.h>

#define SQUARE(X) (X*X)

int main(void)
{
    int a, b = 6;
    a = SQUARE(b+2);
    printf("\n%d", a);
    return 0;
}
The correct result is 20.
Macros are simple text substitutions.
To see that the result is 20, just replace the X with b+2. Then you have:
b+2*b+2
as b is 6 it is
6+2*6+2
which is 20
When writing macros it is important to parenthesize the arguments, so the macro should look like:
# define SQUARE(X) ((X)*(X))
Then the result will be 64 as the evaluation is
(b+2)*(b+2)
(6+2)*(6+2)
8*8
64
What is the value produced by the following macro if SNDRV_CARDS is equal to 8?
#define SNDRV_DEFAULT_IDX { [0 ... (SNDRV_CARDS-1)] = -1 }
I found this in a driver code.
That's a GNU range extension to C99 designated initializers.
The code the macro expands to is:
{ [0 ... (8-1)] = -1 }
which in turn is an array of 8 integers, all set to -1.
This question already has answers here:
"static const" vs "#define" in C
Closed 12 years ago.
I started to learn C and couldn't clearly understand the difference between macros and constant variables.
What changes when I write,
#define A 8
and
const int A = 8;
?
Macros are handled by the preprocessor: it does text replacement in your source file, replacing every occurrence of A with the literal 8.
Constants are handled by the compiler. They have the added benefit of type safety.
For the actual compiled code, with any modern compiler, there should be zero performance difference between the two.
Macro-defined constants are replaced by the preprocessor. Constant 'variables' are managed just like regular variables.
For example, the following code:
#define A 8
int b = A + 10;
Would appear to the actual compiler as
int b = 8 + 10;
However, this code:
const int A = 8;
int b = A + 10;
Would appear as:
const int A = 8;
int b = A + 10;
:)
In practice, the main thing that changes is scope: constant variables obey the same scoping rules as ordinary variables in C, so they can be restricted to, or shadowed within, a specific block without leaking out. It is similar to the local vs. global variable situation.
In C, you can write
#define A 8
int arr[A];
at file scope, but not:
const int A = 8;
int arr[A];
because a const-qualified variable is not a constant expression in C. (At block scope, C99 would accept the second form by making arr a variable-length array.) Note that in C++ both will work.
For one thing, the first causes the preprocessor to replace every occurrence of A with 8 before the compiler does anything, whereas the second doesn't involve the preprocessor at all.