ANTLR4 C Grammar Not Supporting __cdecl?

I am running ANTLR 4.2 and using the canonical C grammar from:
https://github.com/antlr/grammars-v4/tree/master/c
I am doing the following steps: (using batch files from the ANTLR4 book)
antlr C.g4
javac C*.java
grun C compilationUnit -tokens test.c
Where test.c has the following code:
PASSING:
typedef
void
(*EFI_SET_MEM) (
    void *Buffer,
    UINTN Size,
    UINT8 Value
);
FAILING (the error is: line 3:9 no viable alternative at input 'typedefvoid(__cdecl*'):
typedef
void
(__cdecl *EFI_SET_MEM) (
    void *Buffer,
    UINTN Size,
    UINT8 Value
);
The only difference is __cdecl. I tried several changes to fix this, e.g.:
functionSpecifier
    : ('inline'
    |  '_Noreturn'
    |  '__inline__' // GCC extension
    |  '__cdecl'
    |  '__stdcall')
    |  gccAttributeSpecifier
    |  '__declspec' '(' Identifier ')'
    ;
...but this is not working. Any ideas on how to fix this problem? Since what I'm doing doesn't care about the calling convention, creating this lexer rule makes the problem go away:
Cdecl
    : '__cdecl'
    -> skip
    ;
I still wish I had a real solution.

__cdecl is used to declare that an interface uses the C calling convention for linkage (explicitly, with undecorated names and the like). It is a compiler-specific extension (I believe mainly Microsoft and compatible compilers), not standard C, so the C grammar doesn't specify it.
I'm not sure why your proposed fix isn't working, though.

Related

Figure out function parameter count at compile time

I have a C library (with C headers) which exists in two different versions.
One of them has a function that looks like this:
int test(char * a, char * b, char * c, bool d, int e);
And the other version looks like this:
int test(char * a, char * b, char * c, bool d)
(where e is not given as a function parameter but is hard-coded in the function itself).
The library and its headers do not define or include any way to check for the library version, so I can't just use an #if or #ifdef to check for a version number.
Is there any way I can write a C program that can be compiled with both versions of this library, depending on which one is installed when the program is compiled? That way contributors who want to compile my program are free to use either version of the library, and the tool can be built with either.
So, to clarify, I'm looking for something like this (or similar):
#if HAS_ARGUMENT_COUNT(test, 5)
test("a", "b", "c", true, 20);
#elif HAS_ARGUMENT_COUNT(test, 4)
test("a", "b", "c", true);
#else
#error "wrong argument count"
#endif
Is there any way to do that in C? I was unable to figure out a way.
The library would be libogc ( https://github.com/devkitPro/libogc ) which changed its definition of if_config a while ago, and I'd like to make my program work with both the old and the new version. I was unable to find any version identifier in the library. At the moment I'm using a modified version of GCC 8.3.
This should be done at the configure stage, using an Autoconf (or CMake, or whatever) test step -- basically, attempting to compile a small program that uses the five-parameter signature and seeing whether it compiles successfully -- to determine which version of the library is in use. The result can be used to set a preprocessor macro which you can then test in an #if block in your code.
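For illustration, a minimal sketch of the test program such a configure step might try to compile; "library.h" and HAVE_TEST_5ARGS are placeholder names, not anything from the question or from libogc:
/* conftest.c -- compiles only against the five-parameter version of test().
 * "library.h" stands in for whatever header actually declares test(). */
#include <stdbool.h>
#include "library.h"

int main(void)
{
    /* If the installed header declares the four-parameter version,
     * this call is a compile error and the configure check fails. */
    return test("a", "b", "c", true, 20);
}
If this compiles, the configure step can define something like HAVE_TEST_5ARGS, and the rest of the code can branch on it with #if.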
I think there's no way to do this at the preprocessing stage (at least not without some external scripts). On the other hand, there is a way to detect a function's signature at compile time if you're using C11: _Generic. But remember: you can't use this in a macro like #if, because primary expressions aren't evaluated at the preprocessing stage, so you can't dynamically choose to call the function with signature 1 or 2 at that stage.
#define WEIRD_LIB_FUNC_TYPE(T) _Generic(&(T), \
    int (*)(char *, char *, char *, bool, int): 1, \
    int (*)(char *, char *, char *, bool): 2, \
    default: 0)

printf("test's signature: %d\n", WEIRD_LIB_FUNC_TYPE(test));
// will print 1 if 'test' expects the extra argument, or 2 otherwise
I'm sorry if this does not answer your question. If you really can't detect the version from the "stock" library header file, there are workarounds where you can #ifdef something that's only present in a specific version of that library.
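For example, a hedged sketch of that #ifdef workaround; NEW_ONLY_MACRO is a hypothetical name standing in for some macro that only the newer headers happen to define, not an actual libogc identifier:
#ifdef NEW_ONLY_MACRO                 /* hypothetical marker defined only by the new headers */
    test("a", "b", "c", true, 20);    /* five-parameter version */
#else
    test("a", "b", "c", true);        /* four-parameter version */
#endif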
This is just a horrible library design.
Update: after reading the comments, I should clarify for future readers that it isn't possible in the preprocessing stage but it is possible at compile time still. You'd just have to conditionally cast the function call based on my snippet above.
typedef int (*TYPE_A)(char *, char *, char *, bool, int);
typedef int (*TYPE_B)(char *, char *, char *, bool);

int newtest(char *a, char *b, char *c, bool d, int e) {
    void (*func)(void) = (void (*)(void))&test;
    if (_Generic(&test, TYPE_A: 1, TYPE_B: 2, default: 0) == 1) {
        return ((TYPE_A)func)(a, b, c, d, e);
    }
    return ((TYPE_B)func)(a, b, c, d);
}
This indeed works, although it might be controversial to cast a function this way. The upside is, as @pizzapants184 said, that the condition will be optimized away, because the _Generic expression is evaluated at compile time.
I don't see any way to do that with standard C. If you are compiling with gcc, a very ugly way is to run gcc -aux-info in a command and pass the number of parameters with -D:
#!/bin/sh
gcc -aux-info output.info demo.c
COUNT=`grep "extern int foo" output.info | tr -dc "," | wc -m`
rm output.info
gcc -o demo demo.c -DCOUNT="$COUNT + 1"
./demo
This snippet
#include <stdio.h>
int foo(int a, int b, int c);
#ifndef COUNT
#define COUNT 0
#endif
int main(void)
{
    printf("foo has %d parameters\n", COUNT);
    return 0;
}
outputs
foo has 3 parameters
Attempting to support compiling code with multiple versions of a static library serves no useful purpose. Update your code to use the latest release and stop making life more difficult than it needs to be.
In Dennis Ritchie's original C language, a function could be passed any number of arguments, regardless of the number of parameters it expected, provided that the function didn't access any parameters beyond those that were actually passed to it. Even on platforms whose normal calling convention couldn't accommodate this flexibility, C compilers would generally use a different calling convention that could support it, unless functions were marked with qualifiers like pascal to indicate that they should use the ordinary calling convention.
Thus, something like the following would have had fully defined behavior in Ritchie's original C language:
int addTwoOrThree(count, x, y, z)
int count, x, y, z;
{
    if (count == 3)
        return x+y+z;
    else
        return x+y;
}

int test()
{
    return addTwoOrThree(2, 10, 20) + addTwoOrThree(3, 1, 2, 3);
}
Because there are some platforms where it would be impractical to support such flexibility by default, the C Standard does not require that compilers meaningfully process any calls to functions which have more or fewer arguments than expected, except that functions which have been declared with a ... parameter will "expect" any number of arguments that is at least as large as the number of actual specified parameters. It is thus rare for code to be written that would exploit the flexibility that was present in Ritchie's language. Nonetheless, many implementations will still accept code written to support that pattern if the function being called is in a separate compilation unit from the callers, and it is declared but not prototyped within the compilation units that call it.
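A minimal sketch of that last pattern, reusing the addTwoOrThree example above (and relying on the pre-standard behavior described here, not on anything the C Standard guarantees): the calling compilation unit declares the function without a prototype, so the compiler imposes no fixed argument count at the call sites.
/* caller.c -- a separate compilation unit from the one defining addTwoOrThree */
int addTwoOrThree();    /* declaration, but not a prototype (no parameter list) */

int test(void)
{
    return addTwoOrThree(2, 10, 20) + addTwoOrThree(3, 1, 2, 3);
}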
You don't.
The tools you're working with are statically linked and don't support versioning.
You can get around it using all kinds of tricks and tips that have been mentioned, but at the end of the day they are ugly patchworks for something that makes no sense in this context (toolkit/code environment).
You design your code for the version of the toolkit you have installed; it's a hard requirement. I also don't understand why you would want to design your GameCube/Wii code to allow building against different versions.
The toolkit is constantly changing to fix bugs, assumptions, etc.
If you want your code to use an old version that potentially has bugs or does things wrong, that is on you.
I think you should realize what kind of botch work you're dealing with here if you need or want to do this with a constantly evolving toolkit.
I also think (but this is because I know you and your relationship with devkitPro) that you're asking this because you have an older version installed and your CI builds won't work because they use a newer version (from Docker). It's either that, or you have multiple versions installed on your machine for a different project you build (but won't update the source for, for some odd reason).
If your compiler is a recent GCC, e.g. some GCC 10 in November 2020, you might write your own GCC plugin to check the signatures in your header files (and emit appropriate C preprocessor #defines and/or #ifdefs, à la GNU autoconf). Your plugin could (for example) fill some sqlite database, and you would later generate some #include-d header file.
You then would set up your build automation (e.g. your Makefile) to use that GCC plugin and the data it has computed when needed.
For a single function, such an approach is overkill.
For some large project, it could make sense, in particular if you also decide to also code some project-specific coding rules validator in your GCC plugin.
Writing a GCC plugin could take weeks of your time, and you may need to patch your plugin source code when you would switch to a future GCC 11.
See also this draft report and the European CHARIOT and DECODER projects (funding the work described in that report).
BTW, you might ask the authors of that library to add some versioning metadata. Inspiration might come from libonion or Glib or libgccjit.
BTW, as rightly commented in this issue, you should not use an unmaintained old version of some open-source library. Use the one that is being worked on.
I'd like to make my program work with both the old and the new version.
Why?
Making your program work with the old (unmaintained) version of libogc adds a burden for both you and them. I don't understand why you would depend on some old unmaintained library if you can avoid doing so.
PS. You could of course write a plugin for GCC 8. I do recommend switching to GCC 10: it did improve.
I'm not sure this solves your specific problem, or helps you at all, but here's a preprocessor contraption, due to Laurent Deniau, that counts the number of arguments passed to a function at compile time.
Meaning, something like args_count(a,b,c) evaluates (at compile time) to the literal constant 3, and something like args_count(__VA_ARGS__) (within a variadic macro) evaluates (at compile time) to the number of arguments passed to the macro.
This allows you, for instance, to call variadic functions without specifying the number of arguments, because the preprocessor does it for you.
So, if you have a variadic function
void function_backend(int N, ...){
    // do stuff
}
where you (typically) HAVE to pass the number of arguments N, you can automate that process by writing a "frontend" variadic macro
#define function_frontend(...) function_backend(args_count(__VA_ARGS__), __VA_ARGS__)
And now you call function_frontend() with as many arguments as you want:
I made a YouTube tutorial about this.
#include <stdint.h>
#include <stdarg.h>
#include <stdio.h>
#define m_args_idim__get_arg100( \
arg00,arg01,arg02,arg03,arg04,arg05,arg06,arg07,arg08,arg09,arg0a,arg0b,arg0c,arg0d,arg0e,arg0f, \
arg10,arg11,arg12,arg13,arg14,arg15,arg16,arg17,arg18,arg19,arg1a,arg1b,arg1c,arg1d,arg1e,arg1f, \
arg20,arg21,arg22,arg23,arg24,arg25,arg26,arg27,arg28,arg29,arg2a,arg2b,arg2c,arg2d,arg2e,arg2f, \
arg30,arg31,arg32,arg33,arg34,arg35,arg36,arg37,arg38,arg39,arg3a,arg3b,arg3c,arg3d,arg3e,arg3f, \
arg40,arg41,arg42,arg43,arg44,arg45,arg46,arg47,arg48,arg49,arg4a,arg4b,arg4c,arg4d,arg4e,arg4f, \
arg50,arg51,arg52,arg53,arg54,arg55,arg56,arg57,arg58,arg59,arg5a,arg5b,arg5c,arg5d,arg5e,arg5f, \
arg60,arg61,arg62,arg63,arg64,arg65,arg66,arg67,arg68,arg69,arg6a,arg6b,arg6c,arg6d,arg6e,arg6f, \
arg70,arg71,arg72,arg73,arg74,arg75,arg76,arg77,arg78,arg79,arg7a,arg7b,arg7c,arg7d,arg7e,arg7f, \
arg80,arg81,arg82,arg83,arg84,arg85,arg86,arg87,arg88,arg89,arg8a,arg8b,arg8c,arg8d,arg8e,arg8f, \
arg90,arg91,arg92,arg93,arg94,arg95,arg96,arg97,arg98,arg99,arg9a,arg9b,arg9c,arg9d,arg9e,arg9f, \
arga0,arga1,arga2,arga3,arga4,arga5,arga6,arga7,arga8,arga9,argaa,argab,argac,argad,argae,argaf, \
argb0,argb1,argb2,argb3,argb4,argb5,argb6,argb7,argb8,argb9,argba,argbb,argbc,argbd,argbe,argbf, \
argc0,argc1,argc2,argc3,argc4,argc5,argc6,argc7,argc8,argc9,argca,argcb,argcc,argcd,argce,argcf, \
argd0,argd1,argd2,argd3,argd4,argd5,argd6,argd7,argd8,argd9,argda,argdb,argdc,argdd,argde,argdf, \
arge0,arge1,arge2,arge3,arge4,arge5,arge6,arge7,arge8,arge9,argea,argeb,argec,arged,argee,argef, \
argf0,argf1,argf2,argf3,argf4,argf5,argf6,argf7,argf8,argf9,argfa,argfb,argfc,argfd,argfe,argff, \
arg100, ...) arg100
#define m_args_idim(...) m_args_idim__get_arg100(, ##__VA_ARGS__, \
0xff,0xfe,0xfd,0xfc,0xfb,0xfa,0xf9,0xf8,0xf7,0xf6,0xf5,0xf4,0xf3,0xf2,0xf1,0xf0, \
0xef,0xee,0xed,0xec,0xeb,0xea,0xe9,0xe8,0xe7,0xe6,0xe5,0xe4,0xe3,0xe2,0xe1,0xe0, \
0xdf,0xde,0xdd,0xdc,0xdb,0xda,0xd9,0xd8,0xd7,0xd6,0xd5,0xd4,0xd3,0xd2,0xd1,0xd0, \
0xcf,0xce,0xcd,0xcc,0xcb,0xca,0xc9,0xc8,0xc7,0xc6,0xc5,0xc4,0xc3,0xc2,0xc1,0xc0, \
0xbf,0xbe,0xbd,0xbc,0xbb,0xba,0xb9,0xb8,0xb7,0xb6,0xb5,0xb4,0xb3,0xb2,0xb1,0xb0, \
0xaf,0xae,0xad,0xac,0xab,0xaa,0xa9,0xa8,0xa7,0xa6,0xa5,0xa4,0xa3,0xa2,0xa1,0xa0, \
0x9f,0x9e,0x9d,0x9c,0x9b,0x9a,0x99,0x98,0x97,0x96,0x95,0x94,0x93,0x92,0x91,0x90, \
0x8f,0x8e,0x8d,0x8c,0x8b,0x8a,0x89,0x88,0x87,0x86,0x85,0x84,0x83,0x82,0x81,0x80, \
0x7f,0x7e,0x7d,0x7c,0x7b,0x7a,0x79,0x78,0x77,0x76,0x75,0x74,0x73,0x72,0x71,0x70, \
0x6f,0x6e,0x6d,0x6c,0x6b,0x6a,0x69,0x68,0x67,0x66,0x65,0x64,0x63,0x62,0x61,0x60, \
0x5f,0x5e,0x5d,0x5c,0x5b,0x5a,0x59,0x58,0x57,0x56,0x55,0x54,0x53,0x52,0x51,0x50, \
0x4f,0x4e,0x4d,0x4c,0x4b,0x4a,0x49,0x48,0x47,0x46,0x45,0x44,0x43,0x42,0x41,0x40, \
0x3f,0x3e,0x3d,0x3c,0x3b,0x3a,0x39,0x38,0x37,0x36,0x35,0x34,0x33,0x32,0x31,0x30, \
0x2f,0x2e,0x2d,0x2c,0x2b,0x2a,0x29,0x28,0x27,0x26,0x25,0x24,0x23,0x22,0x21,0x20, \
0x1f,0x1e,0x1d,0x1c,0x1b,0x1a,0x19,0x18,0x17,0x16,0x15,0x14,0x13,0x12,0x11,0x10, \
0x0f,0x0e,0x0d,0x0c,0x0b,0x0a,0x09,0x08,0x07,0x06,0x05,0x04,0x03,0x02,0x01,0x00, \
)
typedef struct{
    int32_t x0,x1;
}ivec2;

int32_t max0__ivec2(int32_t nelems, ...){ // The largest x0 component in a list of 2D integer vectors
    int32_t max = ~(1ll<<31) + 1; // Assuming two's complement
    va_list args;
    va_start(args, nelems);
    for(int i=0; i<nelems; ++i){
        ivec2 a = va_arg(args, ivec2);
        max = max > a.x0 ? max : a.x0;
    }
    va_end(args);
    return max;
}
#define max0_ivec2(...) max0__ivec2(m_args_idim(__VA_ARGS__), __VA_ARGS__)
int main(){
    int32_t max = max0_ivec2(((ivec2){0,1}), ((ivec2){2,3}), ((ivec2){4,5}), ((ivec2){6,7}));
    printf("%d\n", max);
}

How to verify external symbols in an .h file to the .c file?

In C it is an idiomatic pattern to have your .h file contain declarations of the externally visible symbols in the corresponding .c file. The purpose of this is to support a kind of "module & interface" thinking, e.g. enabling a cleaner structure.
In a big legacy C system I'm working on, it is not uncommon for functions to be declared in the wrong header file, probably after a function was moved to another module: it still compiles, links and runs, but it makes the modules' interfaces less explicit and indicates wrong dependencies.
Is there a way to verify / confirm / guarantee that the .h file has all the external symbols from the .c file, and no external symbols that are not there?
E.g. if I have the following files
module.c
int func1(void) {}
bool func2(int c) {}
static int func3(void) {}
module.h
extern int func1(void);
extern bool func4(char *v);
I want to be pointed to the fact that func4 is not an external visible symbol in module.c and that func2 is missing.
Modern compilers give some assistance, in so much as they can detect a missing declaration for a symbol you actually reference, but they do not care which file the declaration comes from.
What are my options, other than going over each pair manually, to obtain this information?
I want to be pointed to the fact that func4 is not an external visible symbol in module.c and that func2 is missing.
Using a POSIX-ish Linux with bash, diff and ctags, and given the really simple example input files, you could do this:
$ #recreate input
$ cat <<EOF >module.c
int func1(void) {}
bool func2(int c) {}
static int func3(void) {}
EOF
$ cat <<EOF >module.h
extern int func1(void);
extern bool func4(char *v);
EOF
$ # helper function for extracting only non-static function declarations
$ f() { ctags -x --c-kinds=fp "$@" | grep -v static | cut -d' ' -f1; }
$ # simply a diff
$ diff <(f module.c) <(f module.h)
2,3c2
< func2
---
> func4
$ diff <(f module.c) <(f module.h) |
> grep '^<\|^>' |
> sed -E 's/> (.*)/I would like to point the fact that \1 is not externally visible symbol/; s/< (.*)/\1 is missing/'
func2 is missing
I would like to point the fact that func4 is not externally visible symbol
This will break if, for example, the static keyword is not on the same line where the function identifier is introduced, because then ctags will not output it alongside the identifier. So the real job here is getting the list of externally visible function declarations. This is not an easy task, and writing such a tool is left to others : )
It does not make much sense: if you call a function that is not defined, the linker will complain.
More important is to have prototypes for all functions, as the compiler has to know how to call them. In that case compilers do emit warnings.
Some notes: you do not need the keyword extern, as functions are extern by default.
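For instance, a minimal sketch of the kind of warning meant here (func1 is the name from the question's module.c, called with no declaration in scope):
/* main.c -- no header included, so func1 has no prototype in scope */
int main(void)
{
    return func1();   /* warning: implicit declaration of function 'func1' */
}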
This is the time to shine for some of my favorite compiler warning flags:
CFLAGS += -Wmissing-prototypes \
          -Wstrict-prototypes \
          -Wmissing-declarations \
          -Wold-style-declaration \
          -Wold-style-definition \
          -Wredundant-decls
This at least ensures that all source files containing the implementation of a function that is not static also see a previous external declaration & prototype of said function, i.e. in your example:
module.c:4:6: warning: no previous prototype for ‘func2’ [-Wmissing-prototypes]
4 | bool func2(int c) { return c == 0; }
| ^~~~~
If we provided just a forward declaration that doesn't constitute a full prototype, we'd still get:
In file included from module.c:1:
module.h:7:1: warning: function declaration isn’t a prototype [-Wstrict-prototypes]
7 | extern bool func2();
| ^~~~~~
module.c:4:6: warning: no previous prototype for ‘func2’ [-Wmissing-prototypes]
4 | bool func2(int c) { return c == 0;}
| ^~~~~
Only providing a full prototype will fix that warning. However, there's no way to make sure that all declared functions are actually also implemented. One could go about this using linker module definition files, a script using nm(1), or a simple "example" or unit test program that includes every header file and tries to call all functions.
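A minimal sketch of that last idea, assuming the module.h / module.c names from the question: a throwaway program that includes the header and takes the address of every declared function, so anything that is declared but never defined shows up as a link error.
/* linkcheck.c -- referencing each declared function forces the linker to
 * resolve it; a func4 that exists only in module.h fails at link time. */
#include <stdbool.h>
#include "module.h"

int main(void)
{
    int  (*p1)(void)   = func1;
    bool (*p4)(char *) = func4;
    return (p1 != 0) + (p4 != 0);   /* use the pointers so nothing is optimized away */
}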
To list the differences between the exported symbols in a .c module in C and the corresponding .h file you can use chcheck. Just give the module name on the command line
python3 chcheck.py <module>
and it will list which externally visible functions are defined in the .c module but not exposed in the .h header file, and whether there are any functions declared in the header that are not defined in the corresponding .c file.
It only checks function declarations/definitions at this point.
Disclaimer: I wrote this to solve my own problem. It's built in Python on top of @eliben's excellent pycparser.
Output for the example in the question is
Externally visible definitions in 'module.c' that are not in 'module.h':
func2
Declarations in 'module.h' that have no externally visible definition in 'module.c':
func4

How to catch undefined preprocessor macro with gcc?

I've been working on a piece of code that had an overlooked derp in it:
#include<stdio.h>
#include<stdlib.h>
#include<limits.h>
#define MAX_N_LENGTH
/*function prototypes*/
int main(){
...
}
It should be easy to spot with the context removed: #define MAX_N_LENGTH should have read #define MAX_N_LENGTH 9. I have no idea where that trailing constant went.
Since that macro was only used in one place in the form of char buf[ MAX_N_LENGTH + 1], it was extremely difficult to track down and debug the program.
Is there a way to catch errors like this one using the gcc compiler?
You can write char buf[1 + MAX_N_LENGTH] instead, because char buf[1 + ] does not compile; it fails with error: expected expression before ']' token:
http://ideone.com/5m2LYw
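In other words, a minimal sketch of the reordering trick:
#define MAX_N_LENGTH             /* value accidentally left off */

char buf_bad[MAX_N_LENGTH + 1];  /* expands to char buf_bad[ + 1];  compiles silently */
char buf_good[1 + MAX_N_LENGTH]; /* expands to char buf_good[1 + ]; compile error */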
What you have there isn't an undefined macro. It's an empty macro. And defined empty macros are perfectly legit, because you can test for their definedness.
They're used quite a lot in the implementation header files, although all those empty macros will be in the implementation namespace, which means they will either contain two underscores or an underscore followed by an uppercase letter.
What you could do is test whether you have an empty macro that's not in the implementation namespace, and you can do that with:
cpp -dM YOUR_FILE.c |
cut -d\ -f2- | grep '^[a-zA-Z0-9_]* $' |grep -v -e __ -e ^_[A-Z]
For your example, it should output just MAX_N_LENGTH.
It's not possible to catch this error in the general sense, because it isn't an error. There are plenty of cases where this sort of behavior is desired, so the compiler cannot treat it as an error or a warning.
If you can track the problem down to a line, gcc's -E command line argument will make it output the result of the preprocessor. In that case, your char line would have turned into char buf[ + 1], which is legal C code but might catch your attention, because you expected it to be char buf[9 + 1]. -E causes gcc to print those results, so you would actually see char buf[ + 1] in the output of gcc.
Issues like this are why C++ discourages use of #define macros in this way (C++, of course, has more alternatives than C, which makes it easier to discourage them).
You can use the preprocessor to catch when a macro is either 0 or defined without a value:
#define VAR
#if VAR+0 == 0
#error "VAR is either 0 or defined without a value."
#endif

Rust calling C, static const in C code

I have used rust-bindgen to generate rust interface code.
Now in the C code you can find this:
extern const struct mps_key_s _mps_key_ARGS_END;
#define MPS_KEY_ARGS_END (&_mps_key_ARGS_END)
Note that in the whole rest of the code _mps_key_ARGS_END does not appear again.
The macro MPS_KEY_ARGS_END gets used regularly, among other similar mps_key_s keys.
Now the code produced by rust-bindgen is this:
pub static _mps_key_ARGS_END: Struct_mps_key_s;
Now, in the C code, here is an example usage:
extern void _mps_args_set_key(mps_arg_s args[MPS_ARGS_MAX], unsigned i,
mps_key_t key);
_mps_args_set_key(args, 0, MPS_KEY_ARGS_END);
In rust it looks like this:
pub fn _mps_args_set_key(args: [mps_arg_s, ..32u], i: ::libc::c_uint,
key: mps_key_t);
Now I try to call it like this:
_mps_args_set_key(args, 0 as u32, _mps_key_ARGS_END );
But I get an error:
error: mismatched types: expected *const Struct_mps_key_s, found
Struct_mps_key_s (expected *-ptr, found enum Struct_mps_key_s)
I am not a good C programmer, and I don't even understand where these C statics get their values from.
Thanks for your help.
Edit:
Update based on the answer of Chris Morgan.
I added this code (note, I replaced *const mps_key_s with mps_key_t):
pub static MPS_KEY_ARGS_END: mps_key_t = &_mps_key_ARGS_END;
Just for some extra information on why I'm using mps_key_t, in C:
typedef const struct mps_key_s *mps_key_t;
In rust:
pub type mps_key_t = *const Struct_mps_key_s;
This seems to work better than before, but now I'm getting a bad crash:
error: internal compiler error: unexpected failure note: the compiler
hit an unexpected failure path. this is a bug. note: we would
appreciate a bug report:
http://doc.rust-lang.org/complement-bugreport.html note: run with
RUST_BACKTRACE=1 for a backtrace task 'rustc' failed at 'expected
item, found foreign item _mps_key_ARGS_END::_mps_key_ARGS_END
(id=1102)',
/home/rustbuild/src/rust-buildbot/slave/nightly-linux/build/src/libsyntax/ast_map/mod.rs:327
#define MPS_KEY_ARGS_END (&_mps_key_ARGS_END)
The & part indicates that it is taking a pointer to the object, so the type of MPS_KEY_ARGS_END will be mps_key_s const *. In Rust, that is *const mps_key_s (a raw pointer), and it can be obtained in the same way as in C, &_mps_key_ARGS_END. You can define MPS_KEY_ARGS_END in a way that you can use conveniently like this:
static MPS_KEY_ARGS_END: *const mps_key_s = &_mps_key_ARGS_END;

porting C compilation from MinGW to VisualStudio(nmake)

My current job at the university is to port a C program from MinGW (windows) to Visual Studio (nmake).
I have got a valid "makefile.vc" file for a very similar C program.
My approach was to adopt the Makefile (i.e. "makefile.vc") to the program I need to port.
All but four C files seem to compile fine. Those four files have various errors, for example syntax errors and "unknown size".
Should I continue with my approach to change the Makefile or use CMAKE instead of nmake?
Is there a tutorial or any other pointer on porting a C program from MinGW/gcc to nmake?
typedef struct {
    A_TypeConverter *converter;
    char *domain;
} enumeratorConverterEntry;

static enumeratorConverterEntry enumeratorConverterEntries[]; /* line 186 */
error:
f.c(186) : error C2133: 'enumeratorConverterEntries' : unknown size
typedef struct AsmInstructionInfo {
    int flags;
    CONST char **argTypes; /* line 7 */
    int minArgs;
    int maxArgs;
    int cArgs;
} AsmInstructionInfo;
error:
fAssemble.c(7) : error C2061: syntax error : identifier 'CONST'
..
/* file fStack.c: */
#ifdef CHECK_ACTIVATION_COUNTS
/* code */
#endif
/* more code */
void fShowStack(l_Interp *interp) { /* line 94 */
    l_CallFrame *framePtr;
    /* more code */
error:
fStack.c(94) : error C2143: syntax error : missing ')' before '*'
fStack.c(94) : error C2143: syntax error : missing '{' before '*'
fStack.c(94) : error C2059: syntax error : ')'
fStack.c(94) : error C2054: expected '(' to follow 'interp'
static enumeratorConverterEntry enumeratorConverterEntries[]; /* line 186 */
That looks like an incomplete, forward declaration of an array, which would be valid syntax except, I think, for the static qualifier. I don't have a copy of the C standard in front of me, but reading between the lines of the results of Googling "forward declaration of static array" seems to indicate that an incomplete declaration of a static array results in undefined behavior, so Microsoft and GNU are legitimately entitled to do whatever they want with it. GNU accepts it, and Microsoft rejects it. As Mark Wilkins points out, you should be able to make the Microsoft compiler happy by replacing it with:
extern enumeratorConverterEntry enumeratorConverterEntries[]; /* line 186 */
In general it's worth noting that the Microsoft compiler only supports the C89 standard, while the GNU compiler supports portions of the C99 standard, plus several of its own extensions, depending on the arguments passed to the compiler.
The errors in fAssemble.c and fStack.c look like one or more header files or preprocessor definitions are missing or incomplete. You should search your source to find out where CONST and l_Interp are defined, and then figure out why they are not being picked up in the files where the errors occur.
I just now tried out that array declaration with MinGW, and it does compile. To make it link, though, it needs the array to be defined elsewhere. The result is that it appears to be the same as the extern storage class:
extern enumeratorConverterEntry enumeratorConverterEntries[];
I'm not sure if there are other subtleties associated with the original declaration using a static storage class for the global.
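For reference, a minimal sketch of the pattern being discussed (the struct body is simplified here): gcc accepts the incomplete static declaration and lets a later definition in the same file complete it, whereas the Microsoft compiler reports C2133 "unknown size" on the first line unless it is changed to extern or given a size.
typedef struct { int flags; } enumeratorConverterEntry;   /* simplified body */

/* incomplete forward declaration -- accepted by gcc, C2133 under MSVC */
static enumeratorConverterEntry enumeratorConverterEntries[];

/* ... functions that refer to the array go here ... */

static enumeratorConverterEntry enumeratorConverterEntries[] = {
    { 0 },
    { 1 },
};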
