Errors while compiling Neko VM on OS X - C

I'm trying to compile the Neko VM on Mac OS X (10.5.7) using GCC 4.0.1 and I'm completely stuck, because compilation stops with:
vm/threads.c:202: error: conflicting types for 'neko_thread_register'
vm/neko_vm.h:37: error: previous declaration of 'neko_thread_register' was here
I've tried googling this; some say it's caused by a missing "prototype" and some say it's caused by a header being included several times, but I can't find either of those problems here.
The affected line in threads.c:202 looks like this:
EXTERN bool neko_thread_register( bool t ) {
And the affected line in neko_vm.h:37 looks like this:
EXTERN bool neko_thread_register( bool t );
I can't see any difference in them, besides one of them being the implementation of the other.
The compiler command I'm using is:
cc -Wall -O3 -v -fPIC -fomit-frame-pointer -I vm -D_GNU_SOURCE -arch i386 -L/usr/local/lib -L/opt/local/lib -I/opt/local/include -o vm/threads.o -c vm/threads.c
I'd appreciate some ideas on what I might be able to do here; I don't really know where to go from this point.
A mirror of the code for Neko which I'm trying to compile can be found here.
Thanks!

Have you tried compiling that file alone and outputting the preprocessed version? It could be that the scope or linkage macros are being modified somewhere between the header file and the implementation file; the same could be true of the 'bool' type, which is usually a macro defined by a system header.
According to the GCC 4.2 docs here, you need to add the -E flag to the compilation line above and change -o vm/threads.o to -o vm/threads.i so that a file with the correct extension is created (.i essentially means 'preprocessed file').
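Concretely, that would look something like the command below (the original command with the -c step replaced by -E and the output renamed to .i; everything else left as-is):
cc -Wall -O3 -v -fPIC -fomit-frame-pointer -I vm -D_GNU_SOURCE -arch i386 -L/usr/local/lib -L/opt/local/lib -I/opt/local/include -E -o vm/threads.i vm/threads.c
Searching vm/threads.i for neko_thread_register should then show exactly what EXTERN and bool expanded to at the header declaration and at the definition.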

First, make sure you compile this as C, not C++.
Second, without seeing the code, it's pretty much impossible to say what the problem is.
But reading the error messages is often helpful (even before you google them):
Apparently neko_thread_register is declared twice, once in threads.c:202 and once in neko_vm.h:37, and the two declarations have different (conflicting) types. So look at the two declarations. If you can't see a problem with them, show us some code.
At the very least, seeing those two lines of code would be necessary. Most likely, the types are typedefs or macros or something similar, and then we'd need to see where they are defined as well.
Without seeing the code, all we can do is repeat the compiler error. "neko_thread_register has two conflicting definitions, at the lines specified."

Did you uncomment this line:
# For OSX
#
# MACOSX = 1 <-- this one
In the makefile?
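With the comment marker removed, that part of the makefile would read:
# For OSX
#
MACOSX = 1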

Related

Compiler does not give line number of error: undefined reference

Why does the compiler sometimes not give the line number of an error? What is the meaning of that "undefined reference"? Everything I include comes from header files I wrote myself, so it should be able to give a specific line number; it is not closed source. Have I changed some compiler setting by accident, or is it something else entirely?
D:\Projects\DanceOfPixels\GLEW>gcc main.c glad.c -IC:\mingw_dev_lib\include\SDL2 -LC:\mingw_dev_lib\lib -lmingw32 -lopengl32 -lSDL2main -lSDL2 -lSDL2_image -o main.exe -ansi -std=c89 -pedantic -w
C:\Users\user\AppData\Local\Temp\ccMooHZm.o:main.c:(.text+0x126ce): undefined reference to `drawImagePartScaledHW'
collect2.exe: error: ld returned 1 exit status
Edit: I have solved the problem. I had included two different versions of draw.h, one coming from the software renderer, the other from the OpenGL renderer. Since both files use the same
#ifndef DRAW_H
#define DRAW_H
...
#endif
include-guard structure, the preprocessor skipped the second one. Once I changed DRAW_H to DRAW_HW, I managed to compile and run the application.
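For example, the renamed guard in the OpenGL renderer's copy of the header might look like this (a sketch; only the guard macro changes, the declarations stay the same):
/* draw.h (OpenGL renderer copy) -- guard renamed so it no longer
   collides with the software renderer's DRAW_H */
#ifndef DRAW_HW
#define DRAW_HW
/* ... declarations such as drawImagePartScaledHW() ... */
#endif /* DRAW_HW */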
That error comes from the "linker" (ld), not the compiler proper.
Typically, the compiler compiles each source file into its own, individual object file, containing just the code and data from that source file. Then, the linker combines one or more object files together, and also links in any needed library functions.
Crucially, there's no problem if a single source file (a single object file) calls an undefined function -- that's normal, if the definition of the function is in another source file, or a library. So that's why it's the linker (not the compiler) that finally discovers that there's not a definition for a function anywhere, that it's truly undefined.
But since the linker is working with object files, typically it doesn't know which source file line numbers the functions were originally called on.
(Some C compilers work more closely with their linkers, so that these "undefined external" error messages can, more usefully, contain actual source file line numbers, but that's a relatively recent innovation. For this to work it may be important to compile with debugging enabled, e.g. by using the -g flag, so that the compiler includes source line number information in its object files.)
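A minimal illustration (hypothetical file and function names): the file below compiles cleanly on its own, and the missing definition is only noticed when the final program is linked.
/* main.c -- compiles fine with: gcc -c main.c */
void do_work(void);   /* declaration only; the definition is expected in another file or library */

int main(void)
{
    do_work();        /* fine at compile time; the linker must resolve it later,
                         or it reports "undefined reference to `do_work'" */
    return 0;
}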

Linking shared libraries with gcc on Linux

I need to compile and, most importantly, link a C program that uses a proprietary function present in a shared library file. Because of lack of communication with the previous development team, there is no proper documentation. I declared a function prototype (because I know the number and type of arguments):
int CustomFunction(unsigned char *in, int size);
Since that function name can be grepped from /customlibs/libcustom.so, I tried to compile the code and link it like this:
gcc -L/customlibs testing.c -o testing -lcustom
Which throws a few error messages looking like this:
/customlibs/libcustom.so: undefined reference to `AnotherCustomFunction'
Obviously, I need to tell linker to include other libraries as well, and, to make things worse, they need to be in certain order. I tried exporting LD_LIBRARY_PATH, using -Wl,-rpath=, -Wl,--no-undefined and -Wl,--start-group. Is there an easy way to give the linker all the .so files without the proper order?
I found the solution (or a workaround) to my problem: adding -Wl,--warn-unresolved-symbols, which turns the errors into warnings. Note that this works only if you are ABSOLUTELY certain your function does not depend on the symbols mentioned in the undefined reference to: messages.
Adding them on the command line is one way to do it, something like the line below. The -L option tells gcc where to look for libraries (LD_LIBRARY_PATH only matters when the program is run), but you still need to say which libraries to link.
gcc -L/customlibs testing.c -o testing -lcustom -lmylib1 -lmylib2 -lmylib3
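If the libraries are static archives that depend on each other in a circle, so that no single left-to-right order works, GNU ld can also be told to rescan a group until nothing new resolves. A sketch with the same hypothetical library names (note that --start-group/--end-group is documented as meaningful for archives, not shared objects):
gcc -L/customlibs testing.c -o testing -Wl,--start-group -lcustom -lmylib1 -lmylib2 -lmylib3 -Wl,--end-group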
You should also make the header files of your shared library visible by adding gcc's -I option, for example: gcc [...] -I/path/to/your/lib/header/files [...]

Issue when adding the -Werror flag

I am trying to add the warnings-as-errors flag to my makefiles, but I am running into the following problem.
When I compile without the flag, the build succeeds. But when I add the -Werror flag in some ".mk" files, compilation fails with an error, even though the successful build log showed no warning for the source file (".c") that now produces an error under -Werror.
I am adding the following flags.
UN_CDEFS := -Wno-error=%
CDEFS := -Wall -Werror -Wextra
SUB_CDEFS := -Wall -Werror -Wextra
So please suggest what might be the problem.
Caveat: This isn't a complete answer because we need more information, but it would become [too] lengthy for more top comments like the ones I've already posted.
As you refine the problem and/or post more data, I can edit this answer accordingly. At a minimum, posting your actual makefiles might help, as well as the actual final cc commands and the compiler warning/error output for the failing .c file [there may be multiple ones, but the single/first one should be sufficient].
Below are some detailed instructions on how to debug this, based on my own experience with such issues.
But, before I get to that, I'll hazard a guess. I notice that you're doing:
CDEFS := -Wall -Werror
[leaving off the -Wextra as you mentioned in a comment].
If this is done as [nearly] the first thing in the makefile, it's fine. However, if it occurs in the middle, you are replacing CDEFS with your own value. If a prior line in the makefile did (e.g.):
CDEFS = -Dwont_build_cleanly_without_this_option
then, when you add your line, that could be the issue, because this gets [effectively] removed. You might try this instead:
CDEFS += -Wall -Werror
This just appends to the existing symbol, so any prior value will be retained.
Also, the base makefile might have something like:
ifndef CDEFS
CDEFS := -Dwont_build_cleanly_without_this_option
endif
Normally, make will output the full text of commands it executes to create targets. For compilation, this is (e.g.) cc -c foo.c.
Some fancier builds wrap the command in (e.g.) #doit cc -c foo.c where doit prints a message like compiling foo.c ... and only outputs the full command if there is an error. (e.g. the linux kernel build does this, IIRC). I'm assuming you don't have this, but if you do, there is usually a command line override such as make VERBOSE=1
So, there is some .c file somewhere that builds cleanly with the normal options but generates an error when extra compile options are added. Let's call this file badnews.c
What we want to see is the compilation command that make printed for badnews.c and the warning/error output for two cases:
without the extra options
with the extra options in various combinations
In particular, examining the case (1) command against the case (2) commands might show that options other than the -W are different. This indicates a makefile issue, similar to my "guess" above. You've said that [your equivalent of] case (1) is clean with no warnings, but, given the trouble you're having, it wouldn't hurt to double check.
You can cut and paste the case (1) cc command into a shell script and manually add the -W options. Watch out for things with spaces, such as -DSTRING="foo bar" in the makefile that may need extra quotes in a shell script.
To alleviate conflicts similar to yours, in my own makefiles I separate the symbols.
DFLAGS for all -DFOO=1
COPTS for -g, -O2, -Wall, -fno-inline-functions, etc.
Then, I either do:
CFLAGS := $(COPTS) $(DFLAGS)
Or:
%.o: %.c
	cc -c $(COPTS) $(DFLAGS) -o $@ $<
There are other ways to do this as well.
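Put together, a minimal sketch of the first variant might look like this (the option values are only examples):
DFLAGS := -DFOO=1
COPTS  := -g -O2 -Wall
CFLAGS := $(COPTS) $(DFLAGS)

%.o: %.c
	cc -c $(CFLAGS) -o $@ $<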
UPDATE:
I am using following command to build: emq PRODUCT=ASG >build_log_0508.log
I'm unfamiliar with emq. I can't find a reference to it, except as "enterprise mail queue for JIRA", which [AFAICT] may be part of cPanel?
Getting the following error on compilation: prod/libs/app/app.c:720:5: error: incompatible implicit declaration of built-in function 'free' [-Werror] free(tmp_dn);
This is the smoking gun ...
I don't know what compiler you're using, or what OS/environment, but it appears to not flag this as a warning/error by default.
However, it is a bug in the source app.c that needs to be fixed. It was correctly flagged as a warning/error by the addition of -Wall and -Werror
Note: As I mentioned in my original answer, it would be helpful to have the final cc command line that produced this error [as well as the cc command when this file is not flagged].
I created a simple test case:
void
myfree(void *ptr)
{
    free(ptr);
}
Here, under gcc, I did gcc -c test.c and I get:
test.c: In function 'myfree':
test.c:5:2: warning: implicit declaration of function 'free' [-Wimplicit-function-declaration]
free(ptr);
^
test.c:5:2: warning: incompatible implicit declaration of built-in function 'free'
test.c:5:2: note: include '<stdlib.h>' or provide a declaration of 'free'
So, gcc flags this by default [even without -Wall or -Werror]. But, your compiler does not unless it is given -Wall. This could occur if your compiler were clang and you also specified -std=c89
As I implied earlier, if you just specify -Wall but not -Werror, you should get the same warnings but they just won't stop the build. In a large build, they can be easily overlooked in the log [by a human (e.g.) me :-)].
Referring to the suggestions in my original answer, assuming that the cc commands between case (1) ["good"] and case (2) ["bad"] only differed by the addition of -Wall, the correct way to fix this is to edit app.c and add #include <stdlib.h> as part of the includes.
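Applied to the small test case above, the fix is a one-line change and the warnings go away:
#include <stdlib.h>   /* declares free(), so the call below is no longer an implicit declaration */

void
myfree(void *ptr)
{
    free(ptr);
}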
Is there any problem with "SUB_CDEFS := -Wall -Werror"?
It will have similar problems/benefits as with CDEFS.
I am adding at the end of the makefiles
This is all the more reason to use += instead of :=. You might be "killing off" the -std=c89 if that were specified somewhere.
UPDATE #2:
It worked after doing += instead of :=.
As I mentioned, using := removed some critical compile options, that were specified elsewhere in the makefile(s).
But, once again, the source code has a bug and is broken. It was broken before you ever touched it. By adding -Wall -Werror using :=, you uncovered a bug that previously was masked. This is a good thing.
Using += just sweeps the bug under the rug [again], by restoring some build options that were lost with :=. But, these "lost" build options were wrong. They allowed a genuine flaw in the C code to escape detection.
This is not about getting the build to work [with a workaround], but to fix the root cause of the build problems, which is to modify the C source code. There are probably other such C source code bugs and some may be more severe.
With the workaround to "fix" the build, you've now got a piece of built software that can not be trusted to run correctly. It could fail in intermittent ways on your system(s). Or, produce incorrect results. Or, allow your system to be hacked [and potentially expose you to legal liability] if you're putting this on a publicly facing site.
If you're not comfortable doing the source modification yourself, file a bug report with the original author of the software. The source code should have a README file, or BUGS file, or whatever that should outline a procedure for doing so.
Just need one more clarification: what is the difference between SUB_CDEFS, UN_CDEFS, and CDEFS?
It's completely arbitrary.
Software projects built with make can often build multiple programs or libraries. These often are placed in subdirectories. Each such subdirectory often has its own Makefile.
To avoid needless duplication [and potential error], the parts that would be common to these makefiles are placed in a single makefile, often called a rules file [but it's just a makefile]. The individual makefiles then have a line like: include ../common/rules.mk
The rules file expects that certain symbols are defined that help guide it to build the targets for the given subdirectory.
CDEFS et al. are an example of such symbols. Names that are descriptive of function are [should be] chosen. That is, CDEFS [probably] means "C definitions". The actual symbol names and their function depend upon the rules file. We could use the symbol SHRONK instead of CDEFS. That doesn't help much with understanding things, but if all makefiles were edited to change CDEFS to SHRONK, it would work.
For example, in other software, instead of CDEFS, a similar symbol might be named CFLAGS or COPTS. This is fairly common.
Side note: It's a bit moot at this point, but things would have gone much more smoothly and quickly if you had edited your question and posted the output cc commands and [some of] your makefiles as I had requested. You would have gotten specific answers in a matter of hours instead of general guidelines [that took several days].
So, without the rules file, it's not possible to tell; I can only guess, based upon the names:
CDEFS -- global cc options for all subdirectories
SUB_CDEFS -- cc options for this particular subdirectory
UN_CDEFS -- specify -Ufoo options
The particular software you are building may have documentation for this in a documentation file or in comments in one or more of the makefiles.
To understand this generally, there are many online guides to make. Under Linux, there are "info" files. So, try info make. Other systems have detailed manpages, so do man make

How can compiling the same source code generate different object files?

After a long sequence of debugging I've narrowed my problem down to one file. And the problem is that the file compiles differently in two different directories, when everything else is the same.
I'm using CodeSourcery's arm gcc compiler (gcc version 4.3.3, Sourcery G++ Lite 2009q1-161) to compile a simple file. I was using it in one module with no issues and then I copied it to another module to use there. When it compiles, the object file is significantly different. The command line to compile the two files is identical (I used the linux history to make sure), and the 3 include files are also identical copies (checked with diff).
I did a binary compare on the two object files and they have a lot of individual byte differences scattered around. I did an objdump -D of both and compared them and there are a lot of differences. Here is dump1, dump2, and the diff. The command line is:
arm-none-eabi-gcc --std=gnu99 -Wall -O3 -g3 -ggdb -Wextra -Wno-unused -c crc.c -o crc.o
How is this possible? I've also compiled with -S instead of -c and looked at the assembler output and that's identical except for the directory path. So how can the object file be different?
My real problem is that when I try to link the object file for dump2 into my program, I get undefined reference errors, so something in the object is wrong, whereas the object for dump1 gets no such errors and links fine.
For large-scale software, many implementations do hashing on pointer values; this is one major cause of output randomization. Usually, if the program logic is correct, the order of some internal data structures can differ between builds, which is not harmful in most cases.
Also, don't compare the 'objdump -D' output: since you are compiling the code from different directories, the string table, symbol table, DWARF and eh_frame sections will differ, and you will certainly get lots of diff lines.
The only comparison that makes sense is the output of 'objdump -d', which covers only the text section. If the text sections are the same (or very similar), the object files can be considered identical.
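In practice that comparison might look like this (module1 and module2 stand in for the two directories mentioned in the question):
arm-none-eabi-objdump -d module1/crc.o > crc1.dis
arm-none-eabi-objdump -d module2/crc.o > crc2.dis
diff crc1.dis crc2.dis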
Most likely your file picks up different include files.
Check that your include paths are exactly the same, and check the paths in the #include statements themselves; they may point to different directories. C and C++ have a feature where #include "abcd.h" first tries to load abcd.h from the directory of the including file. Check this.

gsoap client compile/link error

I am writing a program to call a web service. I wrote testMain.c; the other files are generated by wsdl2h and soapcpp2.
My compiling command is like this:
gcc -Wall -g -c -L. soapC.c soapClient.c stdsoap2.c testMain.c
gcc -o testMain -L/usr/lib -lgsoap -lgsoapck -lgsoapssl soapC.o soapClient.o stdsoap2.o testMain.o
And I get these errors. Please help me.
stdsoap2.o: In function `soap_print_fault':
/test/stdsoap2.c:16279: undefined reference to `soap_check_faultsubcode'
/test/stdsoap2.c:16281: undefined reference to `soap_check_faultdetail'
stdsoap2.o: In function `soap_sprint_fault':
/test/stdsoap2.c:16341: undefined reference to `soap_check_faultdetail'
collect2: ld returned 1 exit status
Recent versions of GCC/ld/the GNU toolchain require that the object and library files be specified in a certain order, so that symbols can be found by the linker in the same order they depend on each other. This means that libraries should go to the end of the command line; your second line (when you're linking) should be
gcc -o testMain -L/usr/lib soapC.o soapClient.o stdsoap2.o testMain.o -lgsoap -lgsoapck -lgsoapssl
instead.
I searched the web and found a post which is very similar to my problem. I used this solution and it solved the problem. http://www.mail-archive.com/gsoap#yahoogroups.com/msg01022.html
You should not need to link stdsoap2.o to your project because it's already included in libgsoap (given through the gcc linker option -lgsoap). Try to exclude stdsoap2.c from your project. From the gSOAP FAQ:
I get a link error with gcc/g++ (GNU GCC). What should I do? For C apps: use soapcpp2 option -c to generate C code, use only the package's .c files, link with libgsoap.a (-lgsoap) or use the lib's source stdsoap2.c (and dom.c when applicable).
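Putting this together with the link-order advice above, the build would be reduced to something like the following sketch (stdsoap2.c dropped, libraries moved to the end, paths kept from the question):
gcc -Wall -g -c soapC.c soapClient.c testMain.c
gcc -o testMain -L/usr/lib soapC.o soapClient.o testMain.o -lgsoap -lgsoapck -lgsoapssl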
I had the same problem with gsoap-2.8.16 compiled from source. (That version was shipped with CentOS 6.)
First I checked for a missing library. According to nm used on all static libraries provided by gsoap-2.8.16:
for X in /usr/local/lib/libgsoap*.a ; do echo $X; nm $X | grep soap_check_faultdetail; done
it turned out that none of the libraries provided the missing symbols.
A brief look at the source code revealed that the expected return type of both methods soap_check_faultdetail and soap_check_faultsubcode was const char*, and that these were used to generate error messages.
It looked to me as if these are meant to be callbacks that the client must provide. Maybe their implementation is WSDL-dependent and would be supplied by the gsoap code generation utilities - that I don't know; see the answer from @ChristianAmmer above or below.
Anyway, since I knew the symbols were nowhere supplied, and that null-terminated strings were probably acceptable here, I just supplied my own no-op implementation:
// gsoap-missing-symbols.cpp
// No-op stand-ins for the two callbacks; returning 0 (a null pointer)
// signals that no extra fault detail string is available.
extern "C" {
const char* soap_check_faultdetail() { return 0; }
const char* soap_check_faultsubcode() { return 0; }
}
This is a brute-force solution. If you follow this solution, you should maybe check for linker warnings in the future; maybe some mechanism (eg. from the gsoap code generator) will supply conflicting implementations later during development.
For later versions of gsoap, I believe these symbols are no longer used and can be dropped (or renamed), see soap_check_faultX in https://www.genivia.com/changelog.html.
