If statement with ZERO as condition - c
I am reading the Linux sources and noticed statements like
if (0) {
....
}
What is this magic about?
Example: http://lxr.free-electrons.com/source/arch/x86/include/asm/percpu.h#L132
In this particular macro you're referring to:
132 if (0) { \
133 pao_T__ pao_tmp__; \
134 pao_tmp__ = (val); \
135 (void)pao_tmp__; \
136 } \
the if (0) { ... } block is a way of "using" val without actually using it. The body of the block is still parsed and type-checked by the compiler, but no code is generated for it: an if (0) branch can never run, so it is always eliminated.
Note that this is a macro. As such, var and val may be of any type - the preprocessor doesn't care. pao_T__ is typedefed to typeof(var). As Andy Shevchenko pointed out, this block of code exists to ensure that val and var are type-compatible, by creating a variable of the same type as var, and assigning val to it. If the types weren't compatible, this assignment would generate a compiler error.
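A stripped-down sketch of the same trick, using the GCC typeof extension the way the kernel does (my_add_to and tmp__ are made-up names, not the kernel's):

#include <stdio.h>

/* "Use" val once inside if (0) so the compiler checks that it is
 * assignment-compatible with var, while generating no code for it. */
#define my_add_to(var, val)              \
do {                                     \
        if (0) {                         \
                typeof(var) tmp__;       \
                tmp__ = (val);           \
                (void)tmp__;             \
        }                                \
        (var) += (val);                  \
} while (0)

int main(void)
{
        long counter = 0;
        my_add_to(counter, 5);          /* OK: 5 converts to long */
        /* my_add_to(counter, "oops");     would trigger a diagnostic */
        printf("%ld\n", counter);
        return 0;
}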
In general, many of the Linux kernel header files should be considered black magic. They are an interesting example of the metaprogramming one can do with the C preprocessor, usually for the sake of performance.
Related
Figure out function parameter count at compile time
I have a C library (with C headers) which exists in two different versions. One of them has a function that looks like this:

int test(char * a, char * b, char * c, bool d, int e);

And the other version looks like this:

int test(char * a, char * b, char * c, bool d);

(for which e is not given as a function parameter but is hard-coded in the function itself). The library and its headers do not define or include any way to check for the library version, so I can't just use an #if or #ifdef to check a version number.

Is there any way I can write a C program that can be compiled with both versions of this library, depending on which one is installed when the program is compiled? That way contributors who want to compile my program are free to use either version of the library, and the tool could be built with either. So, to clarify, I'm looking for something like this (or similar):

#if HAS_ARGUMENT_COUNT(test, 5)
    test("a", "b", "c", true, 20);
#elif HAS_ARGUMENT_COUNT(test, 4)
    test("a", "b", "c", true);
#else
#error "wrong argument count"
#endif

Is there any way to do that in C? I was unable to figure out a way.

The library in question is libogc (https://github.com/devkitPro/libogc), which changed its definition of if_config a while ago, and I'd like to make my program work with both the old and the new version. I was unable to find any version identifier in the library. At the moment I'm using a modified version of GCC 8.3.
This should be done at the configure stage, using an Autoconf (or CMake, or whatever) test step -- basically, attempting to compile a small program which uses the five-parameter signature, and seeing if it compiles successfully -- to determine which version of the library is in use. That can be used to set a preprocessor macro which you can use in an #if block in your code.
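As a rough sketch (the macro name HAVE_TEST_5ARGS and the header name are placeholders invented here), the configure-time probe might try to compile something like this, which only succeeds against the five-parameter header:

/* conftest.c */
#include <stdbool.h>
#include "library_header.h"   /* placeholder for the real library header */

int main(void)
{
    return test("a", "b", "c", true, 20);
}

If that compiles, the build system defines HAVE_TEST_5ARGS, and the program itself can then do:

#ifdef HAVE_TEST_5ARGS
    test("a", "b", "c", true, 20);
#else
    test("a", "b", "c", true);
#endif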
I think there's no way to do this at the preprocessing stage (at least not without some external scripts). On the other hand, there is a way to detect a function's signature at compile time if you're using C11: _Generic. But remember: you can't use this in something like #if, because primary expressions aren't evaluated at the preprocessing stage, so you can't choose at that stage to call the function with signature 1 or 2.

#define WEIRD_LIB_FUNC_TYPE(T) _Generic(&(T), \
    int (*)(char *, char *, char *, bool, int): 1, \
    int (*)(char *, char *, char *, bool): 2, \
    default: 0)

printf("test's signature: %d\n", WEIRD_LIB_FUNC_TYPE(test));
// will print 1 if 'test' expects the extra argument, or 2 otherwise

I'm sorry if this does not answer your question. If you really can't detect the version from the "stock" library header file, there are workarounds where you can #ifdef something that's only present in a specific version of that library. This is just horrible library design.

Update: after reading the comments, I should clarify for future readers that it isn't possible in the preprocessing stage, but it is still possible at compile time. You'd just have to conditionally cast the function call based on my snippet above.

typedef int (*TYPE_A)(char *, char *, char *, bool, int);
typedef int (*TYPE_B)(char *, char *, char *, bool);

int newtest(char *a, char *b, char *c, bool d, int e) {
    void (*func)(void) = (void (*)(void))&test;
    if (_Generic(&test, TYPE_A: 1, TYPE_B: 2, default: 0) == 1) {
        return ((TYPE_A)func)(a, b, c, d, e);
    }
    return ((TYPE_B)func)(a, b, c, d);
}

This indeed works, although it might be controversial to cast a function this way. The upside is that, as @pizzapants184 said, the condition will be optimized away, because the _Generic expression is evaluated at compile time.
I don't see any way to do that with standard C. If you are compiling with GCC, a very ugly way is to use gcc -aux-info in a wrapper script and pass the parameter count with -D:

#!/bin/sh
gcc -aux-info output.info demo.c
COUNT=`grep "extern int foo" output.info | tr -dc "," | wc -m`
rm output.info
gcc -o demo demo.c -DCOUNT="$COUNT + 1"
./demo

This snippet

#include <stdio.h>

int foo(int a, int b, int c);

#ifndef COUNT
#define COUNT 0
#endif

int main(void)
{
    printf("foo has %d parameters\n", COUNT);
    return 0;
}

outputs

foo has 3 parameters
Attempting to support compiling code with multiple versions of a static library serves no useful purpose. Update your code to use the latest release and stop making life more difficult than it needs to be.
In Dennis Ritchie's original C language, a function could be passed any number of arguments, regardless of the number of parameters it expected, provided that the function didn't access any parameters beyond those that were passed to it. Even on platforms whose normal calling convention wouldn't be able to accommodate this flexibility, C compilers would generally use a different calling convention that could support it unless functions were marked with qualifiers like pascal to indicate that they should use the ordinary calling convention. Thus, something like the following would have had fully defined behavior in Ritchie's original C language:

int addTwoOrThree(count, x, y, z)
int count, x, y, z;
{
    if (count == 3)
        return x+y+z;
    else
        return x+y;
}

int test()
{
    return addTwoOrThree(2, 10, 20) + addTwoOrThree(3, 1, 2, 3);
}

Because there are some platforms where it would be impractical to support such flexibility by default, the C Standard does not require that compilers meaningfully process any calls to functions which have more or fewer arguments than expected, except that functions which have been declared with a ... parameter will "expect" any number of arguments that is at least as large as the number of actual specified parameters. It is thus rare for code to be written that would exploit the flexibility that was present in Ritchie's language. Nonetheless, many implementations will still accept code written in that style if the function being called is in a separate compilation unit from the callers, and it is declared but not prototyped within the compilation units that call it.
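For what it's worth, a sketch of how a caller's compilation unit would look under that historical pattern; this relies on pre-C23 non-prototype declarations and is not defined by the current standard, so treat it purely as an illustration:

/* caller.c - addTwoOrThree is defined in a separate compilation unit */
int addTwoOrThree();   /* old-style declaration: no prototype, no argument checking */

int test(void)
{
    /* the compiler does not check argument counts here; whether this works
       depends entirely on the implementation and its calling convention */
    return addTwoOrThree(2, 10, 20) + addTwoOrThree(3, 1, 2, 3);
}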
You don't. The tools you're working with are statically linked and don't support versioning. You can get around it using all kinds of tricks and tips that have been mentioned, but at the end of the day they are ugly patchwork for something that makes no sense in this context (toolkit/code environment). You design your code for the version of the toolkit you have installed; it's a hard requirement.

I also don't understand why you would want to design your GameCube/Wii code to allow building on different versions. The toolkit is constantly changing to fix bugs, assumptions, etc. If you want your code to use an old version that potentially has bugs or does things wrong, that is on you. I think you should realize what kind of botch work you're dealing with if you need or want to do this with a constantly evolving toolkit.

I also think (but this is because I know you and your relationship with devkitPro) that you ask this because you have an older version installed and your CI builds won't work because they use a newer version (from Docker). It's either that, or you have multiple versions installed on your machine for a different project you build (but won't update the source of, for some odd reason).
If your compiler is a recent GCC, e.g. some GCC 10 in November 2020, you might write your own GCC plugin to check the signature in your header files (and emit appropriate C preprocessor #define-s and/or #ifdef-s, à la GNU autoconf). Your plugin could (for example) fill some sqlite database, and you would later generate some #include-d header file. You would then set up your build automation (e.g. your Makefile) to use that GCC plugin and the data it has computed when needed.

For a single function, such an approach is overkill. For some large project, it could make sense, in particular if you also decide to code some project-specific coding-rules validator in your GCC plugin. Writing a GCC plugin could take weeks of your time, and you may need to patch your plugin source code when you switch to a future GCC 11. See also this draft report and the European CHARIOT and DECODER projects (funding the work described in that report).

BTW, you might ask the authors of that library to add some versioning metadata. Inspiration might come from libonion or GLib or libgccjit.

BTW, as rightly commented in this issue, you should not use an unmaintained old version of some open-source library. Use the one that is worked on.

I'd like to make my program work with both the old and the new version.

Why? Making your program work with the old (unmaintained) version of libogc adds burden to both you and them. I don't understand why you would depend upon some old unmaintained library if you can avoid doing that.

PS. You could of course write a plugin for GCC 8. I do recommend switching to GCC 10: it did improve.
I'm not sure this solves your specific problem, or helps you at all, but here's a preprocessor contraption, due to Laurent Deniau, that counts the number of arguments passed to a function at compile time. Meaning, something like args_count(a,b,c) evaluates (at compile time) to the literal constant 3, and something like args_count(__VA_ARGS__) (within a variadic macro) evaluates (at compile time) to the number of arguments passed to the macro. This allows you, for instance, to call variadic functions without specifying the number of arguments, because the preprocessor does it for you. So, if you have a variadic function

void function_backend(int N, ...){
  // do stuff
}

where you (typically) HAVE to pass the number of arguments N, you can automate that process by writing a "frontend" variadic macro

#define function_frontend(...) function_backend(args_count(__VA_ARGS__), __VA_ARGS__)

And now you can call function_frontend() with as many arguments as you want. I made a YouTube tutorial about this.

#include <stdint.h>
#include <stdarg.h>
#include <stdio.h>

#define m_args_idim__get_arg100( \
  arg00,arg01,arg02,arg03,arg04,arg05,arg06,arg07,arg08,arg09,arg0a,arg0b,arg0c,arg0d,arg0e,arg0f, \
  arg10,arg11,arg12,arg13,arg14,arg15,arg16,arg17,arg18,arg19,arg1a,arg1b,arg1c,arg1d,arg1e,arg1f, \
  arg20,arg21,arg22,arg23,arg24,arg25,arg26,arg27,arg28,arg29,arg2a,arg2b,arg2c,arg2d,arg2e,arg2f, \
  arg30,arg31,arg32,arg33,arg34,arg35,arg36,arg37,arg38,arg39,arg3a,arg3b,arg3c,arg3d,arg3e,arg3f, \
  arg40,arg41,arg42,arg43,arg44,arg45,arg46,arg47,arg48,arg49,arg4a,arg4b,arg4c,arg4d,arg4e,arg4f, \
  arg50,arg51,arg52,arg53,arg54,arg55,arg56,arg57,arg58,arg59,arg5a,arg5b,arg5c,arg5d,arg5e,arg5f, \
  arg60,arg61,arg62,arg63,arg64,arg65,arg66,arg67,arg68,arg69,arg6a,arg6b,arg6c,arg6d,arg6e,arg6f, \
  arg70,arg71,arg72,arg73,arg74,arg75,arg76,arg77,arg78,arg79,arg7a,arg7b,arg7c,arg7d,arg7e,arg7f, \
  arg80,arg81,arg82,arg83,arg84,arg85,arg86,arg87,arg88,arg89,arg8a,arg8b,arg8c,arg8d,arg8e,arg8f, \
  arg90,arg91,arg92,arg93,arg94,arg95,arg96,arg97,arg98,arg99,arg9a,arg9b,arg9c,arg9d,arg9e,arg9f, \
  arga0,arga1,arga2,arga3,arga4,arga5,arga6,arga7,arga8,arga9,argaa,argab,argac,argad,argae,argaf, \
  argb0,argb1,argb2,argb3,argb4,argb5,argb6,argb7,argb8,argb9,argba,argbb,argbc,argbd,argbe,argbf, \
  argc0,argc1,argc2,argc3,argc4,argc5,argc6,argc7,argc8,argc9,argca,argcb,argcc,argcd,argce,argcf, \
  argd0,argd1,argd2,argd3,argd4,argd5,argd6,argd7,argd8,argd9,argda,argdb,argdc,argdd,argde,argdf, \
  arge0,arge1,arge2,arge3,arge4,arge5,arge6,arge7,arge8,arge9,argea,argeb,argec,arged,argee,argef, \
  argf0,argf1,argf2,argf3,argf4,argf5,argf6,argf7,argf8,argf9,argfa,argfb,argfc,argfd,argfe,argff, \
  arg100, ...) arg100

#define m_args_idim(...) \
m_args_idim__get_arg100(, ##__VA_ARGS__, \
  0xff,0xfe,0xfd,0xfc,0xfb,0xfa,0xf9,0xf8,0xf7,0xf6,0xf5,0xf4,0xf3,0xf2,0xf1,0xf0, \
  0xef,0xee,0xed,0xec,0xeb,0xea,0xe9,0xe8,0xe7,0xe6,0xe5,0xe4,0xe3,0xe2,0xe1,0xe0, \
  0xdf,0xde,0xdd,0xdc,0xdb,0xda,0xd9,0xd8,0xd7,0xd6,0xd5,0xd4,0xd3,0xd2,0xd1,0xd0, \
  0xcf,0xce,0xcd,0xcc,0xcb,0xca,0xc9,0xc8,0xc7,0xc6,0xc5,0xc4,0xc3,0xc2,0xc1,0xc0, \
  0xbf,0xbe,0xbd,0xbc,0xbb,0xba,0xb9,0xb8,0xb7,0xb6,0xb5,0xb4,0xb3,0xb2,0xb1,0xb0, \
  0xaf,0xae,0xad,0xac,0xab,0xaa,0xa9,0xa8,0xa7,0xa6,0xa5,0xa4,0xa3,0xa2,0xa1,0xa0, \
  0x9f,0x9e,0x9d,0x9c,0x9b,0x9a,0x99,0x98,0x97,0x96,0x95,0x94,0x93,0x92,0x91,0x90, \
  0x8f,0x8e,0x8d,0x8c,0x8b,0x8a,0x89,0x88,0x87,0x86,0x85,0x84,0x83,0x82,0x81,0x80, \
  0x7f,0x7e,0x7d,0x7c,0x7b,0x7a,0x79,0x78,0x77,0x76,0x75,0x74,0x73,0x72,0x71,0x70, \
  0x6f,0x6e,0x6d,0x6c,0x6b,0x6a,0x69,0x68,0x67,0x66,0x65,0x64,0x63,0x62,0x61,0x60, \
  0x5f,0x5e,0x5d,0x5c,0x5b,0x5a,0x59,0x58,0x57,0x56,0x55,0x54,0x53,0x52,0x51,0x50, \
  0x4f,0x4e,0x4d,0x4c,0x4b,0x4a,0x49,0x48,0x47,0x46,0x45,0x44,0x43,0x42,0x41,0x40, \
  0x3f,0x3e,0x3d,0x3c,0x3b,0x3a,0x39,0x38,0x37,0x36,0x35,0x34,0x33,0x32,0x31,0x30, \
  0x2f,0x2e,0x2d,0x2c,0x2b,0x2a,0x29,0x28,0x27,0x26,0x25,0x24,0x23,0x22,0x21,0x20, \
  0x1f,0x1e,0x1d,0x1c,0x1b,0x1a,0x19,0x18,0x17,0x16,0x15,0x14,0x13,0x12,0x11,0x10, \
  0x0f,0x0e,0x0d,0x0c,0x0b,0x0a,0x09,0x08,0x07,0x06,0x05,0x04,0x03,0x02,0x01,0x00, \
  )

typedef struct{ int32_t x0,x1; }ivec2;

int32_t max0__ivec2(int32_t nelems, ...){  // The largest component 0 in a list of 2D integer vectors
  int32_t max = ~(1ll<<31) + 1;  // Assuming two's complement
  va_list args;
  va_start(args, nelems);
  for(int i=0; i<nelems; ++i){
    ivec2 a = va_arg(args, ivec2);
    max = max > a.x0 ? max : a.x0;
  }
  va_end(args);
  return max;
}

#define max0_ivec2(...) max0__ivec2(m_args_idim(__VA_ARGS__), __VA_ARGS__)

int main(){
  int32_t max = max0_ivec2(((ivec2){0,1}), ((ivec2){2,3}), ((ivec2){4,5}), ((ivec2){6,7}));
  printf("%d\n", max);
}
Expand pragma to a comment (for doxygen)
Comments are usually converted to a single white-space before the preprocessor is run. However, there is a compelling use case.

#pragma once

#ifdef DOXYGEN
#define DALT(t,f) t
#else
#define DALT(t,f) f
#endif

#define MAP(n,a,d)                  \
  DALT ( COMMENT(| n | a | d |)     \
       , void* mm_##n = a           \
       )

/// Memory map table
/// | name | address | description |
/// |------|---------|-------------|
MAP (reg0 , 0 , foo )
MAP (reg1 , 8 , bar )

In this example, when the DOXYGEN flag is set, I want to generate doxygen markup from the macro. When it isn't, I want to generate the variables. In this instance, the desired behaviour is to generate comments in the macros. Any thoughts about how? I've tried /##/, and another example with more indirection:

#define COMMENT SLASH(/)
#define SLASH(s) /##s

Neither works.
In doxygen it is possible to run commands on the sources before they are fed into the doxygen kernel. In the Doxyfile there are some FILTER possibilities; in this case, INPUT_FILTER. The line should read:

INPUT_FILTER = "sed -e 's%^ *MAP *(\([^,]*\),\([^,]*\),\([^)]*\))%/// | \1 | \2 | \3 |%'"

Furthermore, the entire #if construct can disappear and one probably just needs:

#define MAP(n,a,d) void* mm_##n = a
The ISO C standard describes the output of the preprocessor as a stream of preprocessing tokens, not text. Comments are not preprocessing tokens; they are stripped from the input before tokenization happens. Therefore, within the standard facilities of the language, it is fundamentally impossible for preprocessing output to contain comments or anything that resembles them. In particular, consider

#define EMPTY
#define NOT_A_COMMENT_1(text) /EMPTY/EMPTY/ text
#define NOT_A_COMMENT_2(text) / / / text
NOT_A_COMMENT_1(word word word)
NOT_A_COMMENT_2(word word word)

After translation phase 4, the fourth and fifth lines of the above will both become the six-token sequence

[/][/][/][word][word][word]

where square brackets indicate token boundaries. There isn't any such thing as a // token, and therefore there is nothing you can do to make the preprocessor produce one.

Now, the ISO C standard doesn't specify the behavior of doxygen. However, if doxygen is reusing a preprocessor that came with someone's C compiler, the people who wrote that preprocessor probably thought textual preprocessor output should be, above all, an accurate reflection of the token sequence that the "compiler proper" would receive. That means it will forcibly insert spaces where necessary to make separate tokens remain separate. For instance, with the above example in test.c:

$ gcc -E test.c
...
/ / / word word word
/ / / word word word

(I have elided some irrelevant chatter above the output we're interested in.) If there is a way around this, you are most likely to find it in the doxygen manual. There might, for instance, be configuration options that teach it that certain macros should be understood to define symbols, and what symbols those are, and what documentation they should have.
Will the compiler allocate any memory for code disabled by macro in C language?
For example:

int main()
{
    fun(); // calling fun
}

void fun(void)
{
#if 0
    int a = 4;
    int b = 5;
#endif
}

What is the size of the fun() function? And how much memory in total will be generated for the main() function?
Compilation of a C source file is done in multiple phases. The phase where the preprocessor runs comes before the phase where the code is compiled. The compiler will not even see code that the preprocessor has removed; from its point of view, the function is simply

void fun(void)
{
}

Whether the function "creates memory" then depends on the compiler and its optimization settings. For a debug build the function will probably still exist and be called. For an optimized release build the compiler might not call the function, or even keep (generate code for) it at all.
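If you want to see what your own toolchain does (assuming GCC or a compatible compiler), comparing the generated assembly at different optimization levels makes the difference visible:

gcc -O0 -S filename.c -o out_O0.s   # fun is emitted and main contains a call to it
gcc -O2 -S filename.c -o out_O2.s   # the call is typically optimized away; the empty
                                    # fun may remain only because it has external linkage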
Compilation is split into four stages:

1. Preprocessing
2. Compilation
3. Assembly
4. Linking

Preprocessor directives are handled before the actual compilation starts, and conditional inclusion is performed in this stage, along with other things. #if is a conditional inclusion directive. From the C11 draft, 6.10.1p3:

Preprocessing directives of the forms

# if constant-expression new-line group_opt
# elif constant-expression new-line group_opt

check whether the controlling constant expression evaluates to nonzero.

In your code, #if 0 evaluates to zero, i.e. false, so the code within the conditional block is excluded. The output of the preprocessing stage can be written to stdout with the -E option:

gcc -E filename.c

The command above gives output ending in

# 943 "/usr/include/stdio.h" 3 4
# 2 "filename.c" 2

void fun(void)
{
}

int main()
{
    fun();
    return 0;
}

As we can see, the statements under the #if condition are removed during the preprocessing stage. This directive can be used to avoid compilation of certain code blocks.

Now, to see whether the compiler allocates any memory for an empty function, take filename.c:

void fun(void)
{
}

int main()
{
    fun();
    return 0;
}

The size command gives:

$ size a.out
   text    data     bss     dec     hex filename
   1171     552       8    1731     6c3 a.out

and for the code

void fun(void)
{
#if 0
    int a = 4;
    int b = 5;
#endif
}

int main()
{
    fun();
    return 0;
}

the output of the size command is:

$ size a.out
   text    data     bss     dec     hex filename
   1171     552       8    1731     6c3 a.out

The memory allocated is the same in both cases, from which we can conclude that the compiler does not allocate memory for a block of code disabled by the preprocessor.
According to the GCC reference:

The simplest sort of conditional is

#ifdef MACRO

controlled text

#endif /* MACRO */

This block is called a conditional group. controlled text will be included in the output of the preprocessor if and only if MACRO is defined. We say that the conditional succeeds if MACRO is defined, fails if it is not.

The controlled text inside of a conditional can include preprocessing directives. They are executed only if the conditional succeeds. You can nest conditional groups inside other conditional groups, but they must be completely nested. In other words, ‘#endif’ always matches the nearest ‘#ifdef’ (or ‘#ifndef’, or ‘#if’). Also, you cannot start a conditional group in one file and end it in another.

Even if a conditional fails, the controlled text inside it is still run through initial transformations and tokenization. Therefore, it must all be lexically valid C. Normally the only way this matters is that all comments and string literals inside a failing conditional group must still be properly ended.

The comment following the ‘#endif’ is not required, but it is a good practice if there is a lot of controlled text, because it helps people match the ‘#endif’ to the corresponding ‘#ifdef’. Older programs sometimes put MACRO directly after the ‘#endif’ without enclosing it in a comment. This is invalid code according to the C standard. CPP accepts it with a warning. It never affects which ‘#ifndef’ the ‘#endif’ matches.

Sometimes you wish to use some code if a macro is not defined. You can do this by writing ‘#ifndef’ instead of ‘#ifdef’. One common use of ‘#ifndef’ is to include code only the first time a header file is included.
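That last use is the classic include guard; a minimal sketch (MY_HEADER_H is just a conventional placeholder name):

/* my_header.h */
#ifndef MY_HEADER_H
#define MY_HEADER_H

/* declarations here are seen only the first time this file
   is included in a given translation unit */
struct point { int x, y; };

#endif /* MY_HEADER_H */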
if 0 is compiled in c program
I'm trying to remove unused code from my program. I can't delete the code for now; I just want to disable it for a start. Let's say that I have the following code:

if (cond) {
    doSomething();
}

and cond is always false, so doSomething is never called. I want to do something like:

#define REMOVE_UNUSED_CODE 0

if (cond && REMOVE_UNUSED_CODE) {
    doSomething();
}

Now it is obvious to us (and hopefully to the compiler) that this code is unused. Will the compiler remove the whole if condition, or will it leave it in and just never enter it?

P.S.: I can't use #if 0 for this purpose.
GCC will explicitly remove conditional blocks that have a constant expression controlling them, and the GNU coding standards explicitly recommend that you take advantage of this, instead of using cruder methods such as relying on the preprocessor. Although using preprocessor #if may seem more efficient because it removes code at an earlier stage, in practice modern optimizers work best when you give them as much information as possible (and of course it makes your program cleaner and more consistent if you can avoid adding a phase-separation dependency at a point where it isn't necessary). Decisions like this are very, very easy for a modern compiler to make - even the very simplistic TCC will perform some amount of dead-code elimination; powerful systems like LLVM and GCC will spot this, and also much more complex cases that you, as a human reader, might miss (by tracing the lifetime and modification of variables beyond simply looking at single points). This is in addition to other advantages of full compilation, like the fact that your code within the if will be checked for errors (whereas #if is more useful for removing code that would be erroneous on your current system, like references to nonexistent platform functions).
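In that spirit, here is a minimal sketch of the constant-controlled form the GNU coding standards recommend; REMOVE_UNUSED_CODE is the questioner's name, and using an enum constant instead of the #define is just one option. An optimizing build removes the branch entirely while still parsing and type-checking it:

#include <stdio.h>

enum { REMOVE_UNUSED_CODE = 0 };    /* set to 1 to re-enable the path */

static void doSomething(void) { puts("old code path"); }

int main(void)
{
    int cond = 1;
    if (cond && REMOVE_UNUSED_CODE) {
        /* still checked for errors, but provably dead:
           the optimizer discards the whole block */
        doSomething();
    }
    return 0;
}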
Try

#ifndef REMOVE_UNUSED_CODE
if (cond) {
    doSomething();
}
#endif

instead. Don't forget to do a #define REMOVE_UNUSED_CODE somewhere.
The answer depends on the implementation of the compiler. You should never depend on this. Instead, be explicit:

#define REMOVE_UNUSED_CODE 0

if (cond && REMOVE_UNUSED_CODE) {
#ifndef REMOVE_UNUSED_CODE
    doSomething();
#endif
}
Looking at this snippet of your code:

#define REMOVE_UNUSED_CODE 0

if (cond && REMOVE_UNUSED_CODE) {
    doSomething();
}

REMOVE_UNUSED_CODE is replaced with 0 by the preprocessor before compilation, so if (cond && 0) will always evaluate to false and the body will never be executed. You can say that, irrespective of what cond is, the condition will always be false. So I would rather do it this way:

//#define REMOVE_UNUSED_CODE

#ifdef REMOVE_UNUSED_CODE
if (cond) {
    doSomething();
}
#endif
You should not do it that way. Suppose you have a function bool f(void* input) with side effects. Then with

if (f(pointer) && false) {
    ...
}

the body is not executed, but f still is. In the case of

if (false && f(pointer)) {
    ...
}

f is not evaluated either, because of the short-circuit rule of the && operator. If you use

#define REMOVE_UNUSED_CODE

#ifndef REMOVE_UNUSED_CODE
if (f(pointer) && false) {
}
#endif

the whole code is not even compiled in and will never be executed.
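A tiny self-contained sketch of that ordering difference (f here is a stand-in with a visible side effect):

#include <stdbool.h>
#include <stdio.h>

static bool f(void *input)
{
    puts("f was called");     /* the side effect we care about */
    return input != NULL;
}

int main(void)
{
    int x = 0;

    if (f(&x) && false) { }   /* prints "f was called"; the body never runs */
    if (false && f(&x)) { }   /* prints nothing: && short-circuits          */

    return 0;
}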
Not a direct answer, but it may be helpful. There are compiler-specific intrinsics to hint at code removal. For MSVC it is __assume(0); for GCC/Clang it is __builtin_unreachable(). Notice, however, that if you write something like

if (cond) {
    __builtin_unreachable();
    doSomething();
}

your condition must never evaluate to true, otherwise the behavior is undefined. Combining && REMOVE_UNUSED_CODE with __builtin_unreachable() ensures that the code will be removed by the compiler.
Why are these C macros not written as functions?
I'm studying the code of the netstat tool (Linux), which AFAIK mostly reads the /proc/net/tcp file and does pretty-printing out of it. (My focus is on the -t mode right now.) I'm a bit puzzled by the coding style the authors have chosen:

static int tcp_info(void)
{
    INFO_GUTS6(_PATH_PROCNET_TCP, _PATH_PROCNET_TCP6, "AF INET (tcp)", tcp_do_one);
}

where

#define INFO_GUTS6(file,file6,name,proc)  \
  char buffer[8192];                      \
  int rc = 0;                             \
  int lnr = 0;                            \
  if (!flag_arg || flag_inet) {           \
    INFO_GUTS1(file,name,proc)            \
  }                                       \
  if (!flag_arg || flag_inet6) {          \
    INFO_GUTS2(file6,proc)                \
  }                                       \
  INFO_GUTS3

where

#define INFO_GUTS3 \
  return rc;

and

#if HAVE_AFINET6
#define INFO_GUTS2(file,proc)                      \
  lnr = 0;                                         \
  procinfo = fopen((file), "r");                   \
  if (procinfo != NULL) {                          \
    do {                                           \
      if (fgets(buffer, sizeof(buffer), procinfo)) \
        (proc)(lnr++, buffer);                     \
    } while (!feof(procinfo));                     \
    fclose(procinfo);                              \
  }
#else
#define INFO_GUTS2(file,proc)
#endif

etc. Clearly, my coding sense is tilting and says "those should be functions". I don't see any benefit these macros bring here; they kill readability, etc. Is anybody around familiar with this code who can shed some light on what INFO_GUTS is about here and whether there could have been (or still is) a reason for such an odd coding style?

In case you're curious about their use, the full dependency graph goes like this:

#              /---> INFO_GUTS1 <---\
# INFO_GUTS --*      INFO_GUTS2 <----*---- INFO_GUTS6
#   ^          \---> INFO_GUTS3 <---/        ^
#   |                                        |
# unix_info()     igmp_info(), tcp_info(), udp_info(), raw_info()
Your sense that "those macros should be functions" seems correct to me; I'd prefer to see them as functions. It would be interesting to know how often the macros are used; the more they're used, the bigger the space saving from making them a real function instead of a macro. The macros are quite big and themselves use (inherently slow) I/O functions, so there isn't going to be a speed-up from using a macro. And these days, if you want inline substitution of functions, you can use inline functions in C (as well as in C++).

You can also argue that INFO_GUTS2 should be using a straightforward while loop instead of the do ... while loop; it would only need to check for EOF once if it were:

while (fgets(buffer, sizeof(buffer), procinfo))
    (*proc)(lnr++, buffer);

As it is, if there is an error (as opposed to EOF) on the stream, the code would probably go into an infinite loop: the fgets() would fail, but feof() would return false (because it hasn't reached EOF; it has encountered an error - see ferror()), and so the loop would continue. Not a particularly plausible problem; if the file opens, you will seldom get an error. But a possible one.
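For comparison, here is roughly what INFO_GUTS2 could look like as an ordinary function with the simpler loop; the exact signature is guessed from how the macro is used, so treat it as a sketch:

#include <stdio.h>

/* proc is the per-line handler, e.g. tcp_do_one in netstat */
static int info_guts2(const char *file, void (*proc)(int, const char *))
{
    char buffer[8192];
    int lnr = 0;
    FILE *procinfo = fopen(file, "r");

    if (procinfo == NULL)
        return -1;

    while (fgets(buffer, sizeof(buffer), procinfo))
        (*proc)(lnr++, buffer);   /* stops cleanly on EOF or on a read error */

    fclose(procinfo);
    return 0;
}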
There is no reason why. The person who wrote the code was likely very confused about code optimizations in general, and the concept of inlining in particular. Since the compiler is most likely GCC, there are several ways to achieve function inlining, if inlining was even necessary for this function, which I very much doubt. Inlining a function containing file I/O calls would be the same thing as shaving an elephant to reduce its weight...
It reads as someone's terrible idea to implement optional IPv6 support. You would have to walk through the history to confirm, but the archive only seems to go back to 1.46 and the implied damage is at 1.20+. I found a git archive going back to 1.24 and it is still there. Older code looks doubtful. Neither BusyBox nor BSD code includes such messy code. So it appeared in the Linux version and suffered major bit rot.
Macros generate code: when one is called, the whole macro definition is expanded in place of the call. If, say, INFO_GUTS6 were a function, it wouldn't be able to declare, e.g., the buffer variable that is subsequently usable by the code following the macro invocation. The example you pasted is actually very neat :-)
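A tiny illustration of that point (the names are invented for the example):

#include <stdio.h>

/* The macro introduces rc and buffer into the enclosing scope;
   a function could not do that. */
#define DECLARE_GUTS   \
    int rc = 0;        \
    char buffer[32];

int main(void)
{
    DECLARE_GUTS
    snprintf(buffer, sizeof(buffer), "rc=%d", rc);   /* uses the macro's locals */
    puts(buffer);
    return 0;
}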