Which free C compiler gives options for greater optimizations?

Can you please give me some comparison between C compilers especially with respect to optimization?

Actually there aren't many free compilers around. gcc is "the" free compiler and probably one of the best when it comes to optimisation, even when compared to proprietary compilers.
Some independent benchmarks are linked from here:
http://gcc.gnu.org/benchmarks/

I believe Intel allows you to use its ICC compilers under Linux for non-commercial development for free. ICC beats gcc and Visual Studio hands down when it comes to code generation for x86 and x86-64 (i.e. it typically generates faster code, and can do a decent job of auto-vectorization (SIMD) in some cases).

This is a hard question to answer, since you did not tell us what platform you are using, neither the hardware nor the OS...
But joemoe is right: gcc tends to excel in this field.
(As a side note: on some platforms there are commercial compilers that are better, but since you gain so much more than just the compiler, gcc is hard to beat...)

The Windows SDK is a free download. It includes current versions of the Visual C++ compilers, which do a very good job of optimisation.

Related

Benchmarks for C compiler optimization

What are the standard benchmarks for comparing the optimizers of various C compilers?
I'm particularly interested in benchmarks for ARM (or those that can be ported to ARM).
https://en.wikipedia.org/wiki/SPECint is mostly written in C, and is the industry standard benchmark for real hardware, computer-architecture theoretical research (e.g. a larger ROB or some cache difference in a simulated CPU), and for compiler developers to test proposed patches that change code-gen.
The C parts of SPECfp (https://en.wikipedia.org/wiki/SPECfp) are also good choices. Or for a compiler back-end optimizer, the choice of front-end language isn't very significant. The Fortran programs are fine too.
Related: Tricks of a Spec master is a paper that covers the different benchmarks. Maybe originally from a conference.
In this lightning round talk, I will cover at a high level the performance characteristics of these benchmarks in terms of optimizations that GCC does. For example, some benchmarks are classic floating point applications and benefit from SIMD (single instruction multiple data) instructions, while other benchmarks don't.
Wikipedia is out of date. SPECint/fp 2017 was a long time coming, but it was released in 2017 and is a significant improvement over 2006; e.g. it drops some benchmarks that were trivialized by clever compiler optimizations like loop inversion. (Some compilers over the years added what is basically pattern recognition to optimize the hot loop in libquantum, but they can't do that in general for other loops even when it would be safe. Apparently it can also be easily auto-parallelized.)
For testing a compiler, you might actually want code that aggressive optimization can find major simplifications in, so SPECcpu 2006 is a good choice. Just be aware of the issues with libquantum.
https://www.anandtech.com/show/10353/investigating-cavium-thunderx-48-arm-cores/12 describes gcc as a compiler that "does not try to "break" benchmarks (libquantum...)". But compilers like ICC and SunCC that CPU vendors use / used for SPEC submissions for their own hardware (Intel x86 and Sun UltraSPARC and later x86) are as aggressive as possible on SPEC benchmarks.
SPEC result submissions are required to include compiler version and options used (and OS tuning options), so you can hopefully replicate them.

Arm Neon Intrinsics vs hand assembly

https://web.archive.org/web/20170227190422/http://hilbert-space.de/?p=22
That site is quite dated, but it shows hand-written asm giving a much greater improvement than the intrinsics. I am wondering whether this is still true now, in 2012.
So has compiler optimization of intrinsics improved in the GNU cross compiler?
My experience is that the intrinsics haven't really been worth the trouble. It's too easy for the compiler to inject extra register unload/load steps between your intrinsics. The effort to get it to stop doing that is more complicated than just writing the stuff in raw NEON. I've seen this kind of stuff in pretty recent compilers (including clang 3.1).
At this level, I find you really need to control exactly what's happening. You can have all kinds of stalls if you do things in just barely the wrong order. Doing it in intrinsics feels like surgery with welder's gloves on. If the code is so performance critical that I need intrinsics at all, then intrinsics aren't good enough. Maybe others have different experiences here.
I've had to use NEON intrinsics in several projects for portability. The truth is that GCC doesn't generate good code from NEON intrinsics. This is not a weakness of using intrinsics, but of the GCC tools. The ARM compiler from Microsoft produces great code from NEON intrinsics and there is no need to use assembly language in that case. Portability and practicality will dictate which you should use. If you can handle writing assembly language then write asm. For my personal projects I prefer to write time-critical code in ASM so that I don't have to worry about a buggy/inferior compiler messing up my code.
Update: The Apple LLVM compiler falls in between GCC (worst) and Microsoft (best). It doesn't do great with instruction interleaving nor optimal register usage, but at least it generates reasonable code (unlike GCC in some situations).
Update2: The Apple LLVM compiler for ARMv8 has been improved dramatically. It now does a great job generating ARMv8 code from C and intrinsics.
So this question is four years old, now, and still shows up in search results...
In 2016 things are much better.
A lot of simple code that I've transcribed from assembly to intrinsics is now optimised better by the compilers than by me, because I'm too lazy to do the pipeline work (for how many different pipelines now?), while the compilers just need me to pass the right -mtune=.
For complex code where register allocation can get tight, GCC and Clang can both still produce code slower than handwritten by a factor of two... or three(ish). It's mostly down to register spills, so you should know from the structure of your code whether that's a risk.
But they both sometimes have disappointing accidents. I'd say that right now that's worth the risk (although I'm paid to take risk), and if you do get hit by something then file a bug. That way things will keep on getting better.
By now you even get auto-vectorization for the plain C code and the intrinsics are handled properly:
https://godbolt.org/z/AGHupq

Optimal Windows C tool stack

I am planning a bigger C-only project. It should run on both Linux and Windows. My question is, what is the optimal development stack (compiler, IDE) on Windows? Problem is, we would like to use C99 (if possible).
On Linux it's quite easy because usually combination GCC+VIM+GIT is optimal. But on Windows?
I am considering: Visual Studio 2010 (no C99 support), MinGW and the Intel C++ Compiler.
How do they compare with each other in terms of performance?
For IDE, I've happily used VC++, MonoDevelop, and Code::Blocks (the latter two are cross-platform, as a plus). C isn't a very difficult language to make an IDE for, so most anything will work and it'll just come down to personal preference.
For compiler... C is very quick to compile on all of them, so I guess by performance you mean of generated code?
In my experience, Intel C++ optimizes best if you're targeting Intel CPUs. MinGW GCC optimizes best for everything else. VC++ optimizes very well, but not quite as well as GCC or ICC. This is of course in the general sense only; I've had plenty of experiences where VC++ bests them both.
The VC++ compiler can integrate with some non-VC++ IDEs, like Code::Blocks. As you said, it lacks C99 support (though it does have stdint.h).
MinGW integrates with most IDEs, except VC++. Once upon a time its port of libstdc++ lacked support for wchar_t, making Unicode apps very difficult to write. I'm not sure if this has changed.
ICC integrates with the VC++ IDE, some non-VC++ IDEs, as well as supporting C99 and Linux, however it has been shown in the past to deliberately use sub-optimal code when used with non-Intel CPUs -- I'm not sure if this is still the case.
Agner Fog's Optimizing software in C++ provides a decent comparison of compilers, including their optimization capabilities.
Well, GCC is GCC. Performance is identical over different OSes, minus ABI and C stdlib differences that may impact performance.
This is an easy problem to solve: use GCC everywhere, meaning MinGW, or the more up-to-date mingw-w64 (which includes 32- and 64-bit compilation capabilities). It provides all (most, meaning 99.9%) of the Win32 API when you need it.
Note that although it's the same compiler, ABI is different for say, Windows x64 vs Linux x64 (in this case: the size of long), and you should ensure the code compiles and works on all platforms you intend to target, regularly.
Using GCC with -pedantic-errors -Wall -Wextra is a nice help for this (if you fix all the warnings!), but not perfect.
The Intel compiler will bring better performance, but is only free for personal use on Linux (you can't distribute binaries produced by the free version if I remember correctly), so if you want free tools, that's out. Visual C sucks at C99. It's a C89 compiler, and that isn't going to change soon.
Most development tools are available on Windows as well, see for example msysgit and vim.

Current Standard C Compiler?

I wanted to know what is the current standard C compiler being used by companies. I know of the following compilers and don't understand which one to use for learning purposes.
Turbo C
Borland C
GCC
DJGPP
I am learning C right now and referring to the K&R book.
Can anyone please guide me to which compiler to use?
GCC would be the standard, best supported and fastest open source compiler used by most (sane) people.
GCC is going to have the best support of the choices you've listed for the simple reason that it comes standard in GNU and is the target of Linux. It's very unlikely any organization would use the other three beyond possibly supporting some horrible legacy application.
Other C compilers you might look into include:
Clang: an up-and-comer, particularly for BSD and Mac OS X
Visual Studio Express: for Windows programming
Intel Compiler Suite: very high performance; costs money
Portland Group: another high-performance commercial compiler; used typically for supercomputers
PathScale: yet another commercial high-performance compiler
If you are starting to learn the language, Clang's much better diagnostics will help you.
To make your (job) applications tools section look better, GCC (and maybe Visual Studio) are good to have knowledge of.
GCC (which I use in those rare moments when I use C) or ICC (Intel C Compiler), though ICC is known for making code that runs slowly on AMD processors.
Depends on the platform you are using and planning to learn on or will do future development.
On Windows you can use Visual Studio Express C++, which supports standard ANSI C. Option two is Cygwin, a library and tool set that replicates much of what you would use on Linux or other Unix-style OSes (it uses GCC).
On the Mac you would want XCode which is the standard development tools including C compiler ( based on GCC ).
On many Unix type systems it will be cc or gcc depending on the OS vendor.
If you have the money some of the paid compilers like the Intel one are exceptional but likely won't be much help in learning the programming craft at this point.
If you use the Linux operating system, GCC is the best compiler. You can run each compilation step (preprocessing, compiling, assembling, linking) separately in GCC using command-line options, so you can analyze the compilation of your C source code step by step. I suggest going for the GNU C Compiler (GCC). You can also use the `cc` command; on many systems it's nothing but a symbolic link to gcc.
I can recommend OpenWatcom, which was once used to develop NetWare. It only supports IA-32, but does that well. It contains a basic IDE and a basic but competent profiler. Something for the real programmer :)
Then there is Pelles C which supports x86-64. It has a basic VC-like IDE but few support programs.
I like these two because the compilers are competent and you get going quickly without having to pore over manuals and wondering what the options mean.
If you are on Windows, use MinGW, or, like most have suggested, go with GCC on Linux.
Though of course, since it's command-line, you might find IDEs such as Dev-C++, Code::Blocks or Eclipse CDT useful to make your job simpler.
There is no standard, and each compiler and its libraries differ from one another.
gcc is best and free. GO FOR GNU!

What is wrong with using turbo C?

I always find that some people (a majority from India) are using turbo C.
I cannot find any reason to use such an outdated compiler...
But I don't know what reasons to give when trying to persuade them to use a modern compiler (gcc, msvc, ...).
Turbo C is a DOS only product. This means that it no longer runs "natively" on 64-bit versions of Windows, and must be run inside the XP compatibility penalty box.
While there are plenty of reasons not to use Turbo C (it's old, predates standards, generates 16-bit code, etc.), it's not valid to answer a question like "How do I do X in Turbo C?" with "Just use GCC". That would be like somebody asking "How do I do X with my 1992 Toyota?" and you saying "Just get a newer car".
People who are using Turbo C are probably doing so because it's a requirement, not because they don't know about anything better. Odds are it's for a programming class where the assignments they turn in have to work in that compiler. When I was grading C++ assignments, it didn't matter what compiler the students used, but they had to compile and run properly with the compiler I was using.
I would say support and standards compliance would be the two big issues for me.
Good luck even finding Borland/Inprise/Borland/Codegear/Embarcadero, or whatever they call themselves nowadays. Even more kudos if you can get them to admit these products exist (although I did at some point get them from the Borland museum on BDN).
Performance can be important but the vast majority of applications I write spend 90% of their time waiting for the user (I don't do genome sequencing, SETI analysis or protein folding - the market is pretty small).
Honestly, if I have the choice between two free products (where obviously money is not an issue), I'll always select the best (that would be GCC for me).
Turbo C generates 16-bit x86 code. Kiiinda nice when you're developing on a 16-bit x86 processor.
Been there. Done that.
The pragmatic reasons for changing are: gcc is under development, with bug-fixes. It deploys on modern operating systems and modern chips natively.
It was also my first compiler (4 years ago), though I switched to gcc soon enough, when I learned that it didn't follow the latest standards and relied on features that are considered deprecated or bad practice. Those were reasons enough for me to make the switch.
The most important reason to use a decent C compiler is performance. Since GCC optimizes code aggressively, the compiled programs can run tens of percent faster than before.
Turbo C is much simpler to configure and use, runs on old DOS machines, and is compact in size. I guess that is the reason.
However, it takes very little advantage of modern processors.
