C/GCC Warnings - Write once and test everywhere?

I'm writing a command line program in ANSI C to parse a Quake 2 map file and report how many entities and textures are being used. My development machine is a MacBook. I'm testing on OS X Snow Leopard (32-bit), Windows XP (32-bit) and Vista (64-bit), and Ubuntu 9.10 (32-bit).
The code is flawless on OS X with GCC 4.2. The other platforms, not so flawless.
Visual Studio 2005 complained about an array declaration in the middle of the main() block (size info for the array isn't available until then) and insisted it be declared at the top. I fixed that by declaring a pointer at the top and writing a function to create the array.
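The rework looks roughly like the sketch below; the names (create_texture_list, textures) are hypothetical and not from the actual program, but it shows the C89-friendly pattern of declaring a pointer at the top of the block and allocating once the size is known:

    #include <stdlib.h>

    char **create_texture_list(size_t count)
    {
        return malloc(count * sizeof(char *));   /* caller must free() */
    }

    int main(void)
    {
        char **textures = NULL;   /* declared at the top, C89-style */
        size_t texture_count;

        /* ... parse the map header to learn texture_count ... */
        texture_count = 64;       /* placeholder value for this sketch */

        textures = create_texture_list(texture_count);
        if (textures == NULL)
            return EXIT_FAILURE;

        /* ... use the array ... */
        free(textures);
        return EXIT_SUCCESS;
    }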
Dev-C++ and GCC (3.4) on Windows have no complaints.
Cygwin and GCC (4.4) on Windows complained that an array subscript has type char. I added (int) casts to fix that.
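For reference, that warning (-Wchar-subscripts, enabled by -Wall) fires because a plain char may be signed. A minimal sketch with hypothetical names, where casting through unsigned char is the slightly safer form of the fix:

    /* counts[c]++;  would trigger "array subscript has type char" */
    void count_byte(int counts[256], char c)
    {
        counts[(int)(unsigned char)c]++;   /* cast before indexing */
    }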
Ubuntu and GCC (4.4) is complaining about ignoring the return value of fread, although I read elsewhere that it might be a bug in the way Ubuntu packaged GCC. In the context I'm using fread in, this one seems safe to ignore. The warning only appears with the -O3 flag.
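If you would rather silence the warning by actually checking the result, a minimal sketch (the function and variable names are hypothetical):

    #include <stdio.h>

    size_t read_block(FILE *fp, unsigned char *buf, size_t want)
    {
        size_t got = fread(buf, 1, want, fp);
        if (got != want && ferror(fp))
            perror("fread");      /* a real read error, not just EOF */
        return got;               /* caller decides what a short read means */
    }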
Except for Visual Studio 2005, all the compilers I tested with are some version of GCC. Chasing down all these errors and warnings is a serious pain in the butt. Until now, I've been using the following flags in my Makefile:
debug: -pedantic -Wall
release: -O3
Is there a set of GCC flags that I should be using to catch all the errors on my primary development machine? Or is write once and test everywhere a fact of life?

Irritatingly enough, the C dialect in Visual Studio (even the beta of Visual Studio 2010!) is old and doesn't support all C99 features; being able to mix declarations and executable statements is probably the most irritating omission of them all. The least evil option might be to compile as C++ instead on this platform, even if it requires some rearranging of your code to make it both valid C++ and valid C.
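One of the more common adjustments when feeding C code to a C++ compiler is casting the result of malloc; a hedged sketch (the function name is hypothetical):

    #include <stdlib.h>

    int *make_ints(size_t n)
    {
        /* valid C but rejected by C++:  int *p = malloc(n * sizeof *p); */
        int *p = (int *)malloc(n * sizeof *p);   /* the cast keeps both happy */
        return p;
    }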
Other than that, as Ken's comment said, "write-once-test-everywhere is a fact of life". Lint can be a good help (as Chris wrote), but to find all the incompatibilities in syntax and semantics you really need to compile and test your program on several compilers and systems. Doing this can actually help find errors and problems in your code, even if you don't intend to actually run the program on more than one system.

Get yourself a copy of Lint. Lint is a static analysis tool that will pretty much cover the full range of compiler errors and warnings, and then some. As someone who frequently writes code targeting different platforms and compilers, I find that making the code pass cleanly through Lint is a pretty good barometer for getting it to run across all compilers.
The best set of gcc flags to approximate lint is something like:
-ansi -pedantic -W -Wall -Wundef -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wcast-qual -Wwrite-strings -Weffc++
I also often use -Wno-long-long if I am going to be writing 64-bit code, because under -ansi/-pedantic many compilers complain that long long is not a standard type.
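Applied to the question's C sources, the invocation might look like the following (the file name is hypothetical, and -Weffc++ is left out because it only applies when compiling C++):

    gcc -ansi -pedantic -W -Wall -Wundef -Wstrict-prototypes -Wmissing-prototypes \
        -Wmissing-declarations -Wcast-qual -Wwrite-strings -c mapinfo.c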

Add -ansi to the mix. The canonical set is -ansi -pedantic -Wall.
That said, you will probably find other quirks with MSVC, because it's a different compiler altogether, with its own warnings - so you might have to tweak its options too.

You can save a lot of time by using -Wall and Lint, because they will help you understand your code better. Rework your code to minimize the safe-to-ignore warnings. You will be less likely to agonize over a hard-to-reproduce run-time failure, and whoever maintains your code will find it easier to make changes.
When you are using Visual Studio, explore the compiler options for Lint-like tools. I forget where they are, and they slow down your build, but they are helpful.

Related

Should I use gcc or cc when programming in C?

I searched a little bit, and one Google search was enough to discover the differences between the gcc and cc compilers, but I did not find the advantages of using one or the other to compile C programs.
Which compiler should I use, and why?
The compiler installed as part of Xcode on OS X is a recent version of clang, whose development is sponsored by Apple.
gcc is neither provided nor supported by Apple.
Unless you install gcc explicitly from one of its distributions, gcc is an alias for clang on OS X, just like cc.
The reason for this is to support packages that use gcc explicitly as the C compiler.
On your system, it does not matter which alias you use, the compiler invoked will be clang, which has a high degree of compatibility with gcc extensions but generates different code. Both are very advanced and dependable.

Cygwin: Linux on Windows? How to ensure that a program created on Windows works on Linux

We have a school project that requires our C programs to work on Linux and be C99 compatible. Since I am working on Windows 10, I installed Cygwin and assumed that if the code compiles in Cygwin, it will most certainly work on Linux. It works fine on Windows, and I tried to compile it in Cygwin and it works as well. So can I be sure that it will work on Linux and is C99 compatible? If not, why not? I am only using stdio and stdlib.
You can never be sure. Even though your program seems to work it may still contain bugs that invoke undefined behaviour.
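A classic illustration (hypothetical code, not from your project): reading one element past the end of an array is undefined behaviour, yet it often appears to work on one system and misbehaves on another.

    #include <stdio.h>

    int main(void)
    {
        int values[4] = {1, 2, 3, 4};
        int i, sum = 0;

        for (i = 0; i <= 4; i++)      /* bug: should be i < 4 */
            sum += values[i];         /* reads past the end on the last pass */

        printf("sum = %d\n", sum);
        return 0;
    }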
However, you can reduce the risk of malfunctions by using best practice, for example:
Heed compiler warnings
Turn up the compiler warnings. For example:
gcc -Wall -Wextra -pedantic -std=c99
This turns the warning level up fairly high. Compiler warnings are often indications of bugs, so fix them all. To make sure you do, add -Werror so the build fails until every warning has been addressed.
-std=c99 tells the compiler to use the rules of the C99 version of the C standard. The current version is C11 (-std=c11). This is an important flag; if it is omitted, older versions of gcc fall back to the older C89/C90 dialect.
Use a tool to detect invalid use of memory
Tools like Valgrind (Linux) will report common memory bugs like buffer overruns and memory leaks. I am not sure which tool is most often used on Windows, but there is a wide selection listed here.
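For example, a leak like the one below (a hypothetical program) runs without any visible problem but is reported immediately by valgrind --leak-check=full:

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *copy = malloc(32);
        if (copy == NULL)
            return 1;
        strcpy(copy, "leaked on purpose");
        return 0;                     /* bug: missing free(copy) */
    }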
If it is vanilla C99, then 99.9% of the time, if it works on Cygwin it will work on any Linux. I have never seen a case that did not work.
The reverse is not always true; I would estimate ~98%, as linking can cause headaches in some corner cases.

C - code compatibility

How can I make sure that my code is compatible with every gcc compiler set to the strictest level? Is there any testing tool, or do I need to test it manually somehow? Thanks.
Every gcc compiler is easy: just use an old one and it will compile for sure with the newer ones.
If you want it to compile with any compiler, it's a bit harder. As suggested, you might use the -ansi switch to disable gcc extensions.
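For instance, the hypothetical snippet below compiles silently with plain gcc but draws warnings under -ansi -pedantic, because it relies on extensions to C90 (C++-style comments and variable length arrays):

    int fill(int n)
    {
        int buffer[n];      /* ISO C90 forbids variable length arrays */
        // C++-style comments are not allowed in ISO C90 either
        buffer[0] = n;
        return buffer[0];
    }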

Using atomic operations in gcc 3.4.3

The built-in atomic operations were introduced in gcc 4.1.2. However, I am using gcc on OpenIndiana, which only has gcc 3.4.3. My question is: how can I use atomic operations with gcc 3.4.3? Moreover, I have tried to use gcc 4.6.1 on OpenIndiana, but it doesn't work; it complains about some runtime libraries. If anyone has successfully used it, kindly let me know.
I would suggest you upgrade your GCC compiler. GCC 3 is an ancient thing.
If you cannot install a newer version of GCC, you should try compiling a GCC 4.6.1 compiler from its source code. (Don't forget to compile it in a build tree outside of the source tree, and don't forget all the dependencies.)
You did not mention or explain why your compilation of GCC 4.6.1 failed. What runtime libraries did it complain about? Did you run ldconfig after installing it?
GCC has great inline assembly support, so you could just use __asm to make your own variant of the various atomic ops. It'll be specific to your target platform however, so you'll need some good macros to switch to the right versions.
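As a hedged, x86-only sketch of that approach (the function name is made up), an atomic fetch-and-add can be built on the lock-prefixed xadd instruction; it returns the value *ptr held before the addition:

    static inline int my_atomic_fetch_add(volatile int *ptr, int value)
    {
        int old = value;
        __asm__ __volatile__("lock; xaddl %0, %1"
                             : "+r" (old), "+m" (*ptr)
                             :
                             : "memory", "cc");
        return old;
    }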
To add to the existing answers: have you looked at the Spec Files Extra Repository? I have never used it myself, but it seems to offer a gcc 4.6 compiler package.
On Solaris, the alternative could be to fall back to the libc atomic_ops(3C) interfaces. These might or might not get inlined, but they are guaranteed to always be available (and to always behave the same way) no matter which compiler you use.
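Assuming the atomic_ops(3C) route, usage looks roughly like the sketch below; check the man page for the exact set of functions available, since the names here are from memory:

    #include <atomic.h>       /* Solaris/OpenIndiana libc */
    #include <inttypes.h>

    static uint32_t counter = 0;

    void bump(void)
    {
        atomic_inc_32(&counter);      /* atomic increment */
        atomic_add_32(&counter, 4);   /* atomic add */
    }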
Beyond that, I second the suggestion to upgrade your gcc and/or get the SunStudio 12.2 compilers (they're royalty-free; even if you only use them for testing, code quality tends to go up when it's made to work with more than one compiler). Yes, they will install and run on OpenSolaris-based distributions as well.

Which 4.x version of gcc should one use?

The product group I work for is currently using gcc 3.4.6 (we know it is ancient) for a large, low-level C code base, and we want to upgrade to a later version. We have seen performance benefits when testing different versions of gcc 4.x on all the hardware platforms we tested on. We are, however, very wary of C compiler bugs (for good reason, historically), and wonder if anyone has insight into which version we should upgrade to.
Are people using 4.3.2 for large code bases, and do they feel that it works fine?
The best quality control for gcc is the Linux kernel. GCC is the compiler of choice for basically all major open source C/C++ programs. A released GCC, especially one like 4.3.x, which ships in major Linux distros, should be pretty good.
GCC 4.3 also has better support for optimizations on newer CPUs.
When I migrated a project from GCC 3 to GCC 4, I ran several tests to ensure that behavior was the same before and after. Can you just run a set of (hopefully automated) tests to confirm the correct behavior? After all, you want the "correct" behavior, not necessarily the GCC 3 behavior.
I don't have a specific version for you, but why not have both a 4.x and 3.4.6 installed? Then you could try to keep the code compiling on both versions, and if you run across a show-stopping bug in 4, you have an exit strategy.
Use the latest one, but hunt down and understand each and every warning -Wall gives. For extra fun, there are more warning flags to frob. You do have an extensive suite of regression (and other) tests; run them all and check them.
GCC (particularly for C++, but also C) has changed quite a bit. It does much better code analysis and optimization, and it handles code that turns out to invoke undefined behaviour differently. So code that "worked fine" but really relied on some particular interpretation of invalid constructs will probably break, hopefully with the compiler emitting a warning or error, but there is no guarantee of such luck.
If you are interested in OpenMP then you will need to move to gcc 4.2 or greater. We are using 4.2.2 on a code base of around 5M lines and are not having any problems with it.
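For context, this is the sort of construct that needs the OpenMP support added in gcc 4.2 (build with -fopenmp); a minimal hedged sketch:

    #include <stdio.h>

    int main(void)
    {
        int i;
        double sum = 0.0;

        #pragma omp parallel for reduction(+:sum)
        for (i = 0; i < 1000000; i++)
            sum += 1.0 / (i + 1);

        printf("sum = %f\n", sum);
        return 0;
    }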
I can't say anything about 4.3.2, but my laptop is a Gentoo Linux system built with GCC 4.3.{0,1} (depending on when each package was built), and I haven't seen any problems. This is mostly just standard desktop use, though. If you have any weird code, your mileage may vary.
