What are the benefits of using macros instead of functions in C? - c

First Code:
#include <stdio.h>

int area(int a, int b)
{
    int area1 = a * b;
    return area1;
}

int main()
{
    int l1 = 10, l2 = 5, area2;
    area2 = area(l1, l2);
    printf("Area of rectangle is: %d", area2);
    return 0;
}
Second Code:
#include <stdio.h>

// macro with parameter
#define AREA(l, b) (l * b)

int main()
{
    int l1 = 10, l2 = 5, area;
    area = AREA(l1, l2);
    printf("Area of rectangle is: %d", area);
    return 0;
}
Both programs print the same output: Area of rectangle is: 50
My question: so apparently macros in the C language do the same job as functions, except that macros take less space (fewer lines) than functions. Is this the only benefit of using macros instead of functions? Because they look roughly the same.

A case where macros are useful is when you combine them with __FILE__ and __LINE__.
I have a concrete example in the Bismon software project. In its file cmacros_BM.h I define
// only used by FATAL_BM macro
extern void fatal_stop_at_BM (const char *, int) __attribute__((noreturn));
#define FATAL_AT_BIS_BM(Fil,Lin,Fmt,...) do { \
fprintf(stderr, "BM FATAL:%s:%d: <%s>\n " Fmt "\n\n", \
Fil, Lin, __func__, ##__VA_ARGS__); \
fatal_stop_at_BM(Fil,Lin); } while(0)
#define FATAL_AT_BM(Fil,Lin,Fmt,...) FATAL_AT_BIS_BM(Fil,Lin,Fmt,##__VA_ARGS__)
#define FATAL_BM(Fmt,...) FATAL_AT_BM(__FILE__,__LINE__,Fmt,##__VA_ARGS__)
and fatal errors are reported with calls like this (example from file user_BM.c):
FILE *fil = fopen (contributors_filepath_BM, "r+");
if (!fil)
FATAL_BM ("find_contributor_BM cannot open contributors file %s : %m",
contributors_filepath_BM);
When that fopen fails, the fatal error message shows the source file and line number of that FATAL_BM macro invocation.
The fatal_stop_at_BM function is defined in file main_BM.c
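If you don't need the full Bismon machinery, here is a minimal, self-contained sketch of the same idea (the FATAL_HERE name and message format are mine, and ##__VA_ARGS__ is the same GNU extension used above). No ordinary function could do this by itself, because inside a function __FILE__ and __LINE__ would refer to the function's own definition, not to the call site:
#include <stdio.h>
#include <stdlib.h>

// Report the file, line and enclosing function of the *invocation*, then abort.
#define FATAL_HERE(Fmt, ...) do { \
    fprintf(stderr, "FATAL %s:%d (%s): " Fmt "\n", \
            __FILE__, __LINE__, __func__, ##__VA_ARGS__); \
    abort(); } while (0)

int main(void)
{
    FILE *f = fopen("/nonexistent/path", "r");
    if (!f)
        FATAL_HERE("cannot open %s", "/nonexistent/path");
    return 0;
}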
Notice also that some of your C files could be generated by programs like GNU bison, GNU m4, ANTLR, SWIG and that preprocessor symbols are also used by GNU autoconf.
Study also the source code of the Linux kernel. It uses macros extensively.
Most importantly, read the documentation of your C compiler (e.g. GCC). Many C compilers can show you the preprocessed form of your C code.
Your
// macro with parameter
#define AREA(l, b) (l * b)
is wrong and should be #define AREA(l, b) ((l) * (b)) if you want AREA(x+2,y-3) to work as expected.
For performance reasons, you could have defined your function as
inline int area(int a,int b){ return a*b; }
See also:
The Modern C book;
this C reference website;
the documentation of GNU cpp;
the documentation of the GCC compiler (to be invoked as gcc -Wall -Wextra -g);
how to write your GCC plugins;
some recent draft C standard like n1570;
examples of macro usage in the source code of free software projects like...
GNU make,
or GNU findutils;
Nils Weller's simple C compiler and its source code
the Frama-C static analyzer (open source)
the Clang static analyzer (open source)
the Tiny C compiler
this DRAFT report on Bismon
the CHARIOT European project
the DECODER European project
various ACM SIGPLAN conference papers mentioning C.
some chapters of the book Artificial Beings: the Conscience of a Conscious Machine (ISBN-13: 978-1848211018), which describes a program (CAIA, a symbolic artificial intelligence system) that generates all half a million lines of its own C code (see also the blog of its author, the late Jacques Pitrat)
some chapters of the Dragon book (explaining compilers)
the book A Retargetable C Compiler (ISBN-13: 978-0805316704)

Macros are most definitely not the same as functions. Macros are text substitutions¹; they are not called like functions.
The problem with your AREA macro is that it won’t behave well if you pass an expression like AREA(l1+x,l2) - that will expand to (l1+x * l2), which won’t do what you want. Arguments to macros are not evaluated, they are expanded in place.
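A short program (my own illustration, not code from the question) makes the difference concrete:
#include <stdio.h>

#define AREA_BAD(l, b)  (l * b)        // parameters not parenthesized
#define AREA_GOOD(l, b) ((l) * (b))    // each parameter parenthesized

int main(void)
{
    int l1 = 10, l2 = 5;
    // AREA_BAD(l1 + 2, l2) expands to (l1 + 2 * l2) == 20,
    // while the intended result (l1 + 2) * l2 is 60.
    printf("bad:  %d\n", AREA_BAD(l1 + 2, l2));
    printf("good: %d\n", AREA_GOOD(l1 + 2, l2));
    return 0;
}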
Macros and function-like macros are useful for creating symbolic constants, simplifying repeated blocks of text, and for implementing crude template-like behavior.
¹ Strictly speaking, they are token substitutions, but the principle is the same.

I agree with @Eric Postpischil. Older compilers do not record macro information for the debugger, so the debugger knows nothing about macros; this makes debugging harder if you use such a compiler.
Is this the only benefit of using macros instead of functions?
No. Macros can look function-like and are fine for very simple things such as short formulas, but it is not recommended to implement whole functions with them: everything gets expanded inline into one flat expression, which can cause software design issues and makes debugging harder. So macros are not always a benefit.
In your case, though, I don't think it is a big or serious problem.

Related

Figure out function parameter count at compile time

I have a C library (with C headers) which exists in two different versions.
One of them has a function that looks like this:
int test(char * a, char * b, char * c, bool d, int e);
And the other version looks like this:
int test(char * a, char * b, char * c, bool d);
(where e is not given as a function parameter but is hard-coded in the function itself).
The library or its headers do not define / include any way to check for the library version so I can't just use an #if or #ifdef to check for a version number.
Is there any way I can write a C program that can be compiled with both versions of this library, depending on which one is installed when the program is compiled? That way contributors that want to compile my program are free to use either version of the library and the tool would be able to be compiled with either.
So, to clarify, I'm looking for something like this (or similar):
#if HAS_ARGUMENT_COUNT(test, 5)
test("a", "b", "c", true, 20);
#elif HAS_ARGUMENT_COUNT(test, 4)
test("a", "b", "c", true);
#else
#error "wrong argument count"
#endif
Is there any way to do that in C? I was unable to figure out a way.
The library would be libogc ( https://github.com/devkitPro/libogc ) which changed its definition of if_config a while ago, and I'd like to make my program work with both the old and the new version. I was unable to find any version identifier in the library. At the moment I'm using a modified version of GCC 8.3.
This should be done at the configure stage, using an Autoconf (or CMake, or whatever) test step -- basically, attempting to compile a small program which uses the five-parameter signature, and seeing if it compiles successfully -- to determine which version of the library is in use. That can be used to set a preprocessor macro which you can use in an #if block in your code.
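A rough sketch of such a probe, written as a plain shell fragment rather than real Autoconf (the HAVE_TEST_5ARGS macro, the the_library.h header name and the cc invocation are all placeholders for whatever your build system uses): it redeclares test with the five-parameter signature and checks whether that compiles without a conflicting-types error.
# configure-time probe (sketch): does the installed header declare the 5-arg test()?
cat > conftest.c <<'EOF'
#include <stdbool.h>
#include "the_library.h"   /* placeholder for the real library header */
int test(char *a, char *b, char *c, bool d, int e);  /* conflicts if only the 4-arg version exists */
int main(void) { return 0; }
EOF
if cc -c conftest.c -o conftest.o 2>/dev/null; then
    CPPFLAGS="$CPPFLAGS -DHAVE_TEST_5ARGS=1"
fi
rm -f conftest.c conftest.o
The HAVE_TEST_5ARGS macro can then be tested with #ifdef in your source, much like the HAS_ARGUMENT_COUNT pseudo-code in the question.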
I think there's no way to do this at the preprocessing stage (at least not without some external scripts). On the other hand, there is a way to detect a function's signature at compile time if you're using C11: _Generic. But remember: you can't use this in a directive like #if, because such expressions aren't evaluated at the preprocessing stage, so you can't choose at that stage whether to call the function with signature 1 or 2.
#define WEIRD_LIB_FUNC_TYPE(T) _Generic(&(T), \
int (*)(char *, char *, char *, bool, int): 1, \
int (*)(char *, char *, char *, bool): 2, \
default: 0)
printf("test's signature: %d\n", WEIRD_LIB_FUNC_TYPE(test));
// will print 1 if 'test' expects the extra argument, or 2 otherwise
I'm sorry if this does not answer your question. If you really can't detect the version from the "stock" library header file, there are workarounds where you can #ifdef something that's only present in a specific version of that library.
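For example (SOME_MACRO_ONLY_IN_NEW_VERSION is hypothetical; you would have to find a macro, type or constant that really is only defined by the newer header):
#ifdef SOME_MACRO_ONLY_IN_NEW_VERSION
    test("a", "b", "c", true, 20);   /* 5-parameter signature */
#else
    test("a", "b", "c", true);       /* 4-parameter signature */
#endif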
This is just a horrible library design.
Update: after reading the comments, I should clarify for future readers that it isn't possible in the preprocessing stage but it is possible at compile time still. You'd just have to conditionally cast the function call based on my snippet above.
typedef int (*TYPE_A)(char *, char *, char *, bool, int);
typedef int (*TYPE_B)(char *, char *, char *, bool);
int newtest(char *a, char *b, char *c, bool d, int e) {
void (*func)(void) = (void (*)(void))&test;
if (_Generic(&test, TYPE_A: 1, TYPE_B: 2, default: 0) == 1) {
return ((TYPE_A)func)(a, b, c, d, e);
}
return ((TYPE_B)func)(a, b, c, d);
}
This indeed works, although it might be controversial to cast a function pointer this way. The upside is, as @pizzapants184 said, that the condition will be optimized away, because the _Generic expression is evaluated at compile time.
I don't see any way to do that with standard C. If you are compiling with gcc, a very, very ugly way is to use gcc -aux-info in a wrapper command and pass the number of parameters with -D:
#!/bin/sh
gcc -aux-info output.info demo.c
COUNT=`grep "extern int foo" output.info | tr -dc "," | wc -m`
rm output.info
gcc -o demo demo.c -DCOUNT="$COUNT + 1"
./demo
This snippet
#include <stdio.h>
int foo(int a, int b, int c);
#ifndef COUNT
#define COUNT 0
#endif
int main(void)
{
printf("foo has %d parameters\n", COUNT);
return 0;
}
outputs
foo has 3 parameters
Attempting to support compiling code with multiple versions of a static library serves no useful purpose. Update your code to use the latest release and stop making life more difficult than it needs to be.
In Dennis Ritchie's original C language, a function could be passed any number of arguments, regardless of the number of parameters it expected, provided that the function didn't access any parameters beyond those that were actually passed to it. Even on platforms whose normal calling convention couldn't accommodate this flexibility, C compilers would generally use a different calling convention that could support it, unless functions were marked with qualifiers like pascal to indicate that they should use the ordinary convention.
Thus, something like the following would have had fully defined behavior in Ritchie's original C language:
int addTwoOrThree(count, x, y, z)
int count, x, y, z;
{
    if (count == 3)
        return x+y+z;
    else
        return x+y;
}

int test()
{
    return addTwoOrThree(2, 10, 20) + addTwoOrThree(3, 1, 2, 3);
}
Because there are some platforms where it would be impractical to support such flexibility by default, the C Standard does not require that compilers meaningfully process any calls to functions which have more or fewer arguments than expected, except that functions which have been declared with a ... parameter will "expect" any number of arguments that is at least as large as the number of actual specified parameters. It is thus rare for code to be written that would exploit the flexibility that was present in Ritchie's language. Nonetheless, many implementations will still accept code written to support that pattern if the function being called is in a separate compilation unit from the callers, and it is declared but not prototyped within the compilation units that call it.
You don't.
The tools you're working with are statically linked and don't support versioning.
You can get around it using all kinds of tricks and tips that have been mentioned, but at the end of the day they are ugly patchwork for something that makes no sense in this context (toolkit/code environment).
You design your code for the version of the toolkit you have installed; it's a hard requirement. I also don't understand why you would want to design your GameCube/Wii code to allow building against different versions.
The toolkit is constantly changing to fix bugs, wrong assumptions, and so on.
If you want your code to use an old version that potentially has bugs or does things wrong, that is on you.
I think you should realize what kind of botch work you're dealing with here if you need or want to do this with a constantly evolving toolkit.
I also think (but this is because I know you and your relationship with devkitPro) that you're asking this because you have an older version installed and your CI builds won't work because they use a newer version (from Docker). It's either that, or you have multiple versions installed on your machine for a different project you build (but won't update the source of, for some odd reason).
If your compiler is a recent GCC, e.g. some GCC 10 in November 2020, you might write your own GCC plugin to check the signature in your header files (and emit appropriate and related C preprocessor #define-s and/or #ifdef, à la GNU autoconf). Your plugin could (for example) fill some sqlite database and you would later generate some #include-d header file.
You then would set up your build automation (e.g. your Makefile) to use that GCC plugin and the data it has computed when needed.
For a single function, such an approach is overkill.
For some large project, it could make sense, in particular if you also decide to also code some project-specific coding rules validator in your GCC plugin.
Writing a GCC plugin could take weeks of your time, and you may need to patch your plugin source code when you would switch to a future GCC 11.
See also this draft report and the European CHARIOT and DECODER projects (funding the work described in that report).
BTW, you might ask the authors of that library to add some versioning metadata. Inspiration might come from libonion or Glib or libgccjit.
BTW, as rightly commented in this issue, you should not use an unmaintained old version of some open-source library. Use the one that is being worked on.
I'd like to make my program work with both the old and the new version.
Why?
making your program work with the old (unmaintained) version of libogc is adding burden to both you and them. I don't understand why you would depend upon some old unmaintained library, if you can avoid doing that.
PS. You could of course write a plugin for GCC 8. I do recommend switching to GCC 10: it did improve.
I'm not sure this solves your specific problem, or helps you at all, but here's a preprocessor contraption, due to Laurent Deniau, that counts the number of arguments passed to a function at compile time.
Meaning, something like args_count(a,b,c) evaluates (at compile time) to the literal constant 3, and something like args_count(__VA_ARGS__) (within a variadic macro) evaluates (at compile time) to the number of arguments passed to the macro.
This allows you, for instance, to call variadic functions without specifying the number of arguments, because the preprocessor does it for you.
So, if you have a variadic function
void function_backend(int N, ...){
// do stuff
}
where you (typically) HAVE to pass the number of arguments N, you can automate that process by writing a "frontend" variadic macro
#define function_frontend(...) function_backend(args_count(__VA_ARGS__), __VA_ARGS__)
And now you call function_frontend() with as many arguments as you want:
I made a YouTube tutorial about this.
#include <stdint.h>
#include <stdarg.h>
#include <stdio.h>
#define m_args_idim__get_arg100( \
arg00,arg01,arg02,arg03,arg04,arg05,arg06,arg07,arg08,arg09,arg0a,arg0b,arg0c,arg0d,arg0e,arg0f, \
arg10,arg11,arg12,arg13,arg14,arg15,arg16,arg17,arg18,arg19,arg1a,arg1b,arg1c,arg1d,arg1e,arg1f, \
arg20,arg21,arg22,arg23,arg24,arg25,arg26,arg27,arg28,arg29,arg2a,arg2b,arg2c,arg2d,arg2e,arg2f, \
arg30,arg31,arg32,arg33,arg34,arg35,arg36,arg37,arg38,arg39,arg3a,arg3b,arg3c,arg3d,arg3e,arg3f, \
arg40,arg41,arg42,arg43,arg44,arg45,arg46,arg47,arg48,arg49,arg4a,arg4b,arg4c,arg4d,arg4e,arg4f, \
arg50,arg51,arg52,arg53,arg54,arg55,arg56,arg57,arg58,arg59,arg5a,arg5b,arg5c,arg5d,arg5e,arg5f, \
arg60,arg61,arg62,arg63,arg64,arg65,arg66,arg67,arg68,arg69,arg6a,arg6b,arg6c,arg6d,arg6e,arg6f, \
arg70,arg71,arg72,arg73,arg74,arg75,arg76,arg77,arg78,arg79,arg7a,arg7b,arg7c,arg7d,arg7e,arg7f, \
arg80,arg81,arg82,arg83,arg84,arg85,arg86,arg87,arg88,arg89,arg8a,arg8b,arg8c,arg8d,arg8e,arg8f, \
arg90,arg91,arg92,arg93,arg94,arg95,arg96,arg97,arg98,arg99,arg9a,arg9b,arg9c,arg9d,arg9e,arg9f, \
arga0,arga1,arga2,arga3,arga4,arga5,arga6,arga7,arga8,arga9,argaa,argab,argac,argad,argae,argaf, \
argb0,argb1,argb2,argb3,argb4,argb5,argb6,argb7,argb8,argb9,argba,argbb,argbc,argbd,argbe,argbf, \
argc0,argc1,argc2,argc3,argc4,argc5,argc6,argc7,argc8,argc9,argca,argcb,argcc,argcd,argce,argcf, \
argd0,argd1,argd2,argd3,argd4,argd5,argd6,argd7,argd8,argd9,argda,argdb,argdc,argdd,argde,argdf, \
arge0,arge1,arge2,arge3,arge4,arge5,arge6,arge7,arge8,arge9,argea,argeb,argec,arged,argee,argef, \
argf0,argf1,argf2,argf3,argf4,argf5,argf6,argf7,argf8,argf9,argfa,argfb,argfc,argfd,argfe,argff, \
arg100, ...) arg100
#define m_args_idim(...) m_args_idim__get_arg100(, ##__VA_ARGS__, \
0xff,0xfe,0xfd,0xfc,0xfb,0xfa,0xf9,0xf8,0xf7,0xf6,0xf5,0xf4,0xf3,0xf2,0xf1,0xf0, \
0xef,0xee,0xed,0xec,0xeb,0xea,0xe9,0xe8,0xe7,0xe6,0xe5,0xe4,0xe3,0xe2,0xe1,0xe0, \
0xdf,0xde,0xdd,0xdc,0xdb,0xda,0xd9,0xd8,0xd7,0xd6,0xd5,0xd4,0xd3,0xd2,0xd1,0xd0, \
0xcf,0xce,0xcd,0xcc,0xcb,0xca,0xc9,0xc8,0xc7,0xc6,0xc5,0xc4,0xc3,0xc2,0xc1,0xc0, \
0xbf,0xbe,0xbd,0xbc,0xbb,0xba,0xb9,0xb8,0xb7,0xb6,0xb5,0xb4,0xb3,0xb2,0xb1,0xb0, \
0xaf,0xae,0xad,0xac,0xab,0xaa,0xa9,0xa8,0xa7,0xa6,0xa5,0xa4,0xa3,0xa2,0xa1,0xa0, \
0x9f,0x9e,0x9d,0x9c,0x9b,0x9a,0x99,0x98,0x97,0x96,0x95,0x94,0x93,0x92,0x91,0x90, \
0x8f,0x8e,0x8d,0x8c,0x8b,0x8a,0x89,0x88,0x87,0x86,0x85,0x84,0x83,0x82,0x81,0x80, \
0x7f,0x7e,0x7d,0x7c,0x7b,0x7a,0x79,0x78,0x77,0x76,0x75,0x74,0x73,0x72,0x71,0x70, \
0x6f,0x6e,0x6d,0x6c,0x6b,0x6a,0x69,0x68,0x67,0x66,0x65,0x64,0x63,0x62,0x61,0x60, \
0x5f,0x5e,0x5d,0x5c,0x5b,0x5a,0x59,0x58,0x57,0x56,0x55,0x54,0x53,0x52,0x51,0x50, \
0x4f,0x4e,0x4d,0x4c,0x4b,0x4a,0x49,0x48,0x47,0x46,0x45,0x44,0x43,0x42,0x41,0x40, \
0x3f,0x3e,0x3d,0x3c,0x3b,0x3a,0x39,0x38,0x37,0x36,0x35,0x34,0x33,0x32,0x31,0x30, \
0x2f,0x2e,0x2d,0x2c,0x2b,0x2a,0x29,0x28,0x27,0x26,0x25,0x24,0x23,0x22,0x21,0x20, \
0x1f,0x1e,0x1d,0x1c,0x1b,0x1a,0x19,0x18,0x17,0x16,0x15,0x14,0x13,0x12,0x11,0x10, \
0x0f,0x0e,0x0d,0x0c,0x0b,0x0a,0x09,0x08,0x07,0x06,0x05,0x04,0x03,0x02,0x01,0x00, \
)
typedef struct{
int32_t x0,x1;
}ivec2;
int32_t max0__ivec2(int32_t nelems, ...){ // The largest component 0 in a list of 2D integer vectors
    int32_t max = ~(1ll<<31) + 1; // Assuming two's complement
    va_list args;
    va_start(args, nelems);
    for(int i=0; i<nelems; ++i){
        ivec2 a = va_arg(args, ivec2);
        max = max > a.x0 ? max : a.x0;
    }
    va_end(args);
    return max;
}
#define max0_ivec2(...) max0__ivec2(m_args_idim(__VA_ARGS__), __VA_ARGS__)

int main(){
    int32_t max = max0_ivec2(((ivec2){0,1}), ((ivec2){2,3}), ((ivec2){4,5}), ((ivec2){6,7}));
    printf("%d\n", max);
}

Does the standard "Function Calling Sequence" described in Sys V ABI specs (both i386 and AMD64) apply to the static C functions?

In computer software, an application binary interface (ABI) is an interface between two binary program modules; often, one of these modules is a library or operating system facility, and the other is a program that is being run by a user.
An ABI defines how data structures or computational routines are accessed in machine code, which is a low-level, hardware-dependent format; in contrast, an API defines this access in source code, which is a relatively high-level, relatively hardware-independent, often human-readable format. A common aspect of an ABI is the calling convention, which determines how data is provided as input to or read as output from computational routines; examples are the x86 calling conventions.
-- https://en.wikipedia.org/wiki/Application_binary_interface
I am sure that the standard "Function Calling Sequence" described in the Sys V ABI specs (both i386 and AMD64) constrains the calling of the extern functions in a C library, but does it constrain the calling of static functions too?
Here is an example:
$cat abi.c
#include<stdio.h>
typedef void (*ret_function_t)(int,int);
ret_function_t gl_fp = NULL;
static void prnt(int i, int j){
printf("hi from static prnt:%d:%d\n", i, j);
}
void api_1(int i){
gl_fp = prnt;
printf("hi from extern api_1:%d\n", i);
}
ret_function_t api_2(void){
return gl_fp;
}
$cat abi_main.c
#include<stdio.h>
typedef void (*ret_function_t)(int,int);
extern void api_1(int i);
extern ret_function_t api_2(void);
int main(){
api_1(1111);
api_2()(2222, 3333);
}
$gcc abi_main.c abi.c -o abi_test
$./abi_test
hi from extern api_1:1111
hi from static prnt:2222:3333
The function calling sequence details (register usage, stack frame layout, parameter passing, variable arguments...) are defined in the Sys V ABI when abi_main.c calls api_1 and api_2, since they are extern, but what about the call to the static function prnt defined in abi.c? Is that governed by the ABI standard, or is it up to the compiler to decide?
Yes, they do apply. Static functions are just plain functions with translation-unit visibility. The ABI is a matter of the compiler's code generation; the C standard deliberately says nothing about it. It becomes clearer if you remove the static keyword from your code: the reasoning is exactly the same. The drawback of this approach is that the compiler cannot check the caller-callee linkage, only the pointer's type (void (*ret_function_t)(int,int);), at compile time, since you are the one doing the linking at run time. So it is not recommended.
What happens is that your compiler generates code for any calling function following some ABI, let's call it ABI-a, and it generates code for the function being called according to some other ABI, say ABI-b. If ABI-a == ABI-b, the call always works, and that is the case when you compile both files with the same ABI.
For example, this would work if the prnt function happened to be located at address 0x12345678:
ret_function_t gl_fp = (ret_function_t)0x12345678;
It works as long as there is a function with the right arguments at 0x12345678. As you can see, the function cannot be inlined, because the compiler does not know which function definition will end up at that memory spot; there could be many.

C code after preprocessor

This is an exercise taken from a book. The question is: what is the output of this code?
The code always prints "N is undefined", but I don't know why. The directive #undef N comes after the call to the function f. So why is the output always "N is undefined"?
#include <stdio.h>

#define N 100

void f(void);

int main(void)
{
    f();
#ifdef N
#undef N
#endif
    return 0;
}

void f(void)
{
#if defined(N)
    printf("N is %d\n", N);
#else
    printf("N is undefined\n");
#endif
}
The point of this exercise is to demonstrate that preprocessor's control flow is completely separate from the control flow of your program.
#if/#undef directives are processed in the order that they appear in the text of your program. They are processed only once at compile time; the decision to define or undefine a preprocessor variable cannot be reconsidered at runtime.
That's why the fact that f executes before the #if/#undef lines in main is irrelevant. You can change the output of this program only by moving the definition of f to a position in the file before main.
If you run the compiler with the -E flag (for gcc at least) it'll show you what the code you're actually compiling is.
You'll see that the preprocessor doesn't follow the code execution - it performs its actions in the order that they appear in the file.
Then the compiler takes the resulting code and f just has the one call to printf in it that says N isn't defined.
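For example, running something like gcc -E prog.c on the program above would leave (roughly, with line markers and blank lines stripped) only this for main and f, which is why the "undefined" branch is the only one that can ever run:
int main(void)
{
    f();

    return 0;
}

void f(void)
{
    printf("N is undefined\n");
}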
The C preprocessor goes through your code line by line. As such, it is wrong to assume the #undef happens after the function f() because of the function call. Instead, it happens before your definition of function f().
To understand this, you have to distinguish between the preprocessor (line by line) and the control flow (follows function calls).
Because the preprocessor's directives run in "physical" order, line after line.
Think of it as a pass that runs before the actual compilation, so that the compiler only ever sees plain C code.

Force gcc to use syscalls

So I am currently learning assembly language (AT&T syntax). We all know that gcc has an option to generate assembly code from C code with the -S argument. Now, I would like to look at some code and see how it looks in assembly. The problem is that in our lab classes we compile with as+ld, and for now we cannot use C libraries. So for example we cannot use printf; we should use syscalls instead (32-bit is enough). And now I have this code in C:
#include <stdio.h>
int main()
{
int a = 5;
int b = 3;
int c = a + b;
printf("%d", c);
return 0;
}
This is simple code, so I know how it will look with syscalls. But for more complicated code I don't want to mess around replacing every printf call and fixing up registers by hand, because gcc generates code that calls printf, while I need it done with syscalls. So can I somehow make gcc generate assembly code that uses syscalls (for example for I/O on the console or files) instead of the C library?
Under Linux there exists the macro family _syscallX to generate a syscall, where the X names the number of parameters. It is marked as obsolete, but IMHO it still works. E.g., the following code should work (not tested here):
_syscall3(int, syswrite, int, handle, char *, str, int, len);
// ---
char str[] = "Hello, world!\n";
// file handle 1 is stdout
syswrite(1, str, 14);
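If those obsolete macros are not available on your toolchain, a sketch of the same idea using the (still supported) syscall(2) wrapper from glibc on Linux looks like this; compile it with gcc -S and you can study how the raw write system call is set up, with no printf anywhere:
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    char str[] = "Hello, world!\n";
    /* raw write(2): file descriptor 1 is stdout */
    syscall(SYS_write, 1, str, sizeof str - 1);
    return 0;
}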

Check if a system implements a function

I'm creating a cross-platform application. It uses, for example, the function itoa, which is implemented on some systems but not all. If I simply provide my own itoa implementation, I get:
header.h:115:13: error: conflicting types for 'itoa'
extern void itoa(int, char[]);
In file included from header.h:2:0,
from file.c:2:0,
c:\path\to\mingw\include\stdlib.h:631:40: note: previous declaration of 'itoa' was here
_CRTIMP __cdecl __MINGW_NOTHROW char* itoa (int, char*, int);
I know I can check if macros are predefined and define them if not:
#ifndef _SOME_MACRO
#define _SOME_MACRO 45
#endif
Is there a way to check if a C function is pre-implemented, and if not, implement it? Or to simply un-implement a function?
Given you have already written your own implementation of itoa(), I would recommend that you rename it and use it everywhere. At least you are sure you will get the same behavior on all platforms, and avoid the linking issue.
Don't forget to explain your choice in the comments of your code...
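A minimal sketch of such a renamed helper (the my_itoa name is just an example; this body leans on the standard snprintf, as another answer below also suggests):
#include <stdio.h>

/* Portable replacement for itoa: writes the decimal form of value
   into the caller-supplied buffer and returns that buffer. */
static char *my_itoa(int value, char *buf, size_t bufsize)
{
    snprintf(buf, bufsize, "%d", value);
    return buf;
}

int main(void)
{
    char buf[12];                      /* big enough for any 32-bit int */
    puts(my_itoa(-42, buf, sizeof buf));
    return 0;
}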
I assume you are using GCC, as I can see MinGW in your path... There's a way the GNU linker can take care of this for you when you don't know whether an itoa implementation exists or not. Try this:
Create a new file (without any headers) called my_itoa.c:
char *itoa (int, char *, int);
char *my_itoa (int a, char *b, int c)
{
return itoa(a, b, c);
}
Now create another file, impl_itoa.c. Here, write the implementation of itoa but add a weak alias:
char* __attribute__ ((weak)) itoa(int a, char *b, int c)
{
// implementation here
}
Compile all of the files, with impl_itoa.c at the end.
This way, if itoa is not available in the standard library, this one will be linked. You can be confident about it compiling whether or not it's available.
Ajay Brahmakshatriya's suggestion is a good one, but unfortunately MinGW doesn't support weak definition last I checked (see https://groups.google.com/forum/#!topic/mingwusers/44B4QMPo8lQ, for instance).
However, I believe weak references do work in MinGW. Take this minimal example:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
__attribute__ ((weak)) char* itoa (int, char*, int);
char* my_itoa (int a, char* b, int c)
{
if(itoa != NULL) {
return itoa(a, b, c);
} else {
// toy implementation for demo purposes
// replace with your own implementation
strcpy(b, "no itoa");
return b;
}
}
int main()
{
char *str = malloc((sizeof(int)*3+1));
my_itoa(10, str, 10);
printf("str: %s\n", str);
return 0;
}
If the system provides an itoa implementation, that should be used and the output would be
str: 10
Otherwise, you'll get
str: no itoa
There are two really important related points worth making here along the "don't do it like this" lines:
Don't use itoa because it's not safe.
Don't use itoa because it's not a standard function, and there are good standard functions (such as snprintf) which are available to do what you want.
But, putting all this aside for one moment, I want to introduce you to autoconf, part of the GNU build system. autoconf is part of a very comprehensive, very portable set of tools which aim to make it easier to write code which can be built successfully on a wide range of target systems. Some would argue that autoconf is too complex a system to solve just the one problem you pose with just one library function, but as any program grows, it's likely to face more hurdles like this, and getting autoconf set up for your program now will put you in a much stronger position for the future.
Start with a file called Makefile.in which contains:
CFLAGS=--ansi --pedantic -Wall -W
program: program.o
program.o: program.c
clean:
rm -f program.o program
and a file called configure.ac which contains:
AC_PREREQ([2.69])
AC_INIT(program, 1.0)
AC_CONFIG_SRCDIR([program.c])
AC_CONFIG_HEADERS([config.h])
# Checks for programs.
AC_PROG_CC
# Checks for library functions.
AH_TEMPLATE([HAVE_ITOA], [Set to 1 if function itoa() is available.])
AC_CHECK_FUNC([itoa],
[AC_DEFINE([HAVE_ITOA], [1])]
)
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
and a file called program.c which contains:
#include <stdio.h>
#include "config.h"
#ifndef HAVE_ITOA
/*
* WARNING: This code is for demonstration purposes only. Your
* implementation must have a way of ensuring that the size of the string
* produced does not overflow the buffer provided.
*/
void itoa(int n, char* p) {
sprintf(p, "%d", n);
}
#endif
int main(void) {
char buffer[100];
itoa(10, buffer);
printf("Result: %s\n", buffer);
return 0;
}
Now run the following commands in turn:
autoheader: This generates a new file called config.h.in which we'll need later.
autoconf: This generates a configuration script called configure
./configure: This runs some tests, including checking that you have a working C compiler and, because we've asked it to, whether an itoa function is available. It writes its results into the file config.h for later.
make: This compiles and links the program.
./program: This finally runs the program.
During the ./configure step, you'll see quite a lot of output, including something like:
checking for itoa... no
In this case, you'll see that the config.h file contains the following lines:
/* Set to 1 if function itoa() is available. */
/* #undef HAVE_ITOA */
Alternatively, if you do have itoa available, you'll see:
checking for itoa... yes
and this in config.h:
/* Set to 1 if function itoa() is available. */
#define HAVE_ITOA 1
You'll see that the program can now read the config.h header and choose to define itoa if it's not present.
Yes, it's a long way round to solve your problem, but you've now started using a very powerful tool which can help you in a great number of ways.
Good luck!
