Is it a good idea to recreate Win32's headers? - c

I'm finding myself doing more C/C++ code against Win32 lately, and coming from a C# background I've developed an obsession with "clean code" that is completely consistent, so moving away from the beautiful System.* namespace back to the mishmash of #defines that make up the Win32 API header files is a bit of a culture shock.
After reading through MSDN's alphabetical list of core Win32 functions I realised how simple Win32's API design actually is, and it's unfortunate that it's shrouded with all the cruft from the past 25 years, including many references to 16-bit programming that are completely irrelevant in today's 64-bit world.
I'm due to start a new C/C++ project soon, and I was thinking about how I could recreate Win32's headers on an as-needed basis. I could design it to be beautiful, and yet it would maintain 100% binary (and source) compatibility with existing programs (because the #defines would ultimately resolve to the same things).
I was wondering if anyone had attempted this in the past (Google turned up nothing), or if anyone wanted to dissuade me from it.
Another thing I thought of was that with a cleaner C Win32 API, it becomes possible to design a cleaner and easier-to-use C++ Win32 API wrapper on top of it, as there wouldn't be any namespace pollution from the old C Win32 items.
EDIT:
Just to clarify, I'm not doing this to improve compilation performance or for any kind of optimisation, I'm fully aware the compiler does away with everything that isn't used. My quest here is to have a Win32 header library that's a pleasure to work with (because I won't need to depress Caps-lock every time I use a function).
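For example, the kind of header I have in mind would look something like this (purely illustrative; the win_* names are ones I'd invent, and each one resolves to the real Win32 symbol underneath, so source and binary compatibility are preserved):
/* win32_clean.h -- illustrative sketch only */
#pragma once
#include <windows.h>

typedef HANDLE win_Handle;
typedef DWORD  win_Dword;

#define win_CreateFile   CreateFileW
#define win_CloseHandle  CloseHandle
#define win_GetLastError GetLastError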

Don't do this.
It may be possible, but it will take a long time and will probably lead to subtle bugs.
However, and more importantly, it will make your program utterly impossible for anyone other than you to maintain.

There's no point in doing this. Just because there's additional cruft doesn't mean it's compiled into the binary (anything unused will be optimized out). Furthermore, on the EXTREME off-chance that anything DOES change (I dunno, maybe WM_INPUT's number changes), it's just a lot easier to use the system headers. Besides, what's more intuitive? I think #include <windows.h> is a lot easier to understand than #include "a-windows-of-my-own.h".
Also, honestly, you should never need to even look at the contents of windows.h. Yeah, I've read it; yeah, it's ugly as sin; but it does what I need it to and I don't need to maintain it.
Probably the ONLY downside of using the real windows.h is that it MAY slow down compilation by a few milliseconds.

No. What's the point? Just include <windows.h>, and define a few macros like WIN32_LEAN_AND_MEAN, VC_EXTRALEAN, NOGDI, NOMINMAX, etc. to prune out the things you don't want/need to speed up your compile times.
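For example (which of the NO* macros make sense depends on what you actually use):
/* Define the pruning macros before the include so the preprocessor
   skips the parts of the SDK you don't need. */
#define WIN32_LEAN_AND_MEAN  /* drops less commonly used APIs (DDE, RPC, shell, winsock, ...) */
#define VC_EXTRALEAN         /* an even more aggressive, MFC-era trim */
#define NOGDI                /* skip the GDI declarations */
#define NOMINMAX             /* stop the min/max macros clashing with C++ std::min/std::max */
#include <windows.h>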

Although the Win32 headers might be considered "messy", you pretty much never have to (or want to) look inside them. All you need to know is documented in the Win32 SDK. The exact contents of the header files are an implementation detail.
There is a ton of stuff in there that would be time-consuming and unnecessarily finicky to replicate, particularly relating to different versions of the Win32 SDK.
I recommend:
#include <windows.h>

In my opinion, this is bad practice. Tidiness and brevity are achieved by keeping to standard practice as much as possible and leveraging as much as possible from the platform. You have to assume Microsoft has the ultimate expertise in its own platform, including aspects that go beyond what you know right now. In simple terms, it's their product and they know it best.
By rolling your own:
... you branch off from Microsoft's API, so Microsoft could no longer deliver updates to you through their standard channels
... you may introduce bugs due to your own hubris, feeling you've figured something out while you haven't
... you'd be wasting a lot of time for no tangible benefit (as the C headers don't carry any overhead into the compiled binary)
... you'd eventually create a project that's less elegant
The most elegant code is the code that devotes most of its lines to actual program logic and as few as possible to "housekeeping" (i.e. code not directly related to the task at hand). Don't fail to leverage the Platform SDK headers to make your project more elegant.

This has been attempted in the past.
In its include directory, MinGW contains its own version of windows.h. Presumably this exists to make the headers work with gcc. I don't know if it will work with a Microsoft compiler.

Related

Why does vim rely so heavily on compile-time feature configuration?

I understand that portability and customisation are among Vim's main goals. The abundance of comments in the manual about things working differently on various platforms does not surprise me. However, I don't understand why almost every feature can be disabled with a compile flag; what purpose does that serve? Wouldn't it be easier to disable features at runtime, with some kind of configuration?
As far as I understand (I tried diving into the Vim code, but didn't write any patches), it makes the code base much more complicated, and that's exactly what the Neovim developers are trying to remove. Why did the Vim developers follow this approach?
There are several forces for having many flags (some of which have already been mentioned in the comments):
portability: with quite different (and exotic, like DOS and Amiga) operating systems to support, different (GUI) libraries must be included, so there's a definite need for conditional compilation via #ifdef
on many Linux distributions, a tiny build of Vim serves as the default implementation for vi; that one should not have any of the extended features
it allows for utter flexibility: developers / packagers can mix and match freely (though I guess most will stick to the default tiny / big / huge feature bundles)
So, my (unofficial) guess is that a useful and necessary (see above) mechanism was applied to more and more features as Vim grew and matured. Initially, this is good (consistency, a single mechanism to configure). When problems started to appear, it was too late to retroactively switch to a (better?) approach (like runtime configuration); the (development and testing) effort would now be prohibitive.
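To make the mechanism concrete, here is a sketch (not actual Vim code; the FEAT_* name just mirrors Vim's naming convention) of why a compile-time flag can't simply become a runtime option:
/* vim_sketch.c -- features are compiled in or out with -DFEAT_...,
   so a "tiny" build simply does not contain the code at all. */
#include <stdio.h>

int main(void)
{
#ifdef FEAT_CLIPBOARD
    puts("clipboard support compiled in");
#else
    puts("tiny build: no clipboard support");
#endif
    return 0;
}
Compare cc vim_sketch.c with cc -DFEAT_CLIPBOARD vim_sketch.c: the two binaries don't just behave differently, they contain different code, which is what keeps the tiny vi-replacement build tiny and why the choice can't be deferred to a runtime configuration file.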

Porting Autodesk Animator Pro to be cross platform

A previous relevant question from me is here: Reverse Engineering old paint programs.
I have set up my base of operations here: http://animatorpro.org
wiki coming soon.
Okay, so now I have a 300,000 line legacy MSDOS codebase. It's sort of a "be careful what you wish for" situation. I am not an experienced C programmer. I'm not entirely inexperienced either, but for all intents and purposes I'm a noob to the language and in particular the intricacies of its libraries. I am especially ignorant of the vagaries of the differences between C programs written specifically for MSDOS and programs that are cross platform. However I have been studying this code base for over a year now, and this is what I know about Animator Pro:
Compilers and tools used:
Watcom C compiler
tcmake (make program from Turbo C)
386asm, a specialised assembler for the Phar Lap DOS extender
and of course, the Phar Lap DOS extender itself.
a selection of obscure DOS utilities
Much of the compilation seems to be driven by batch files. Though I have obtained copies of all these tools, I have not yet succeeded at compiling it (though I have compiled its older brother, Autodesk Animator Original).
It's got a plugin system, based on REX, that replicated DLLs before DLLs were available. The plugin system handles:
Video Drivers (with a plethora of included VESA drivers)
Input drivers (including Wacom tablets and keyboards)
Drawing Tools
Inks (like Photoshop's filters or blending modes)
Scripting Addons (essentially compiled scripts)
File formats
It's got its own script interpreter named POCO, based on the C language. The scripting language has enough power to do virtually all the things the plugin system can do, just slower.
Given this information, this is my development plan. Please criticise this. The source code is available in the link above, so you can easily, if you are so inclined, assess the situation yourself.
Compile with its original tools.
Switch to using DJGPP, and make the necessary changes to get it to compile with that, plus the original assembler.
Include the Allegro.cc "Game" library, and switch over as much functionality to that library as possible, perhaps by simply writing new video and input drivers that use the Allegro API. I'm thinking Allegro rather than SDL because there is a DOS version of Allegro, and, fascinatingly, one of its core functions is the ability to play Animator Pro's native format, FLIC.
Hopefully after 3, I will have eliminated most or all of the Assembler in the project. I say hopefully, because it's in an obscure dialect that doesn't assemble in any modern free assembler without significant modification. I have tried them all. Whatever is left gets converted to assemble in NASM, or to C code if I can define the assembler's actual function.
Switch the DOS extender from Phar Lap to HX DOS (http://www.japheth.de/HX.html), which promises to replicate as much of the Win32 API as possible. Then make all the necessary code changes for that to work.
Switch to the Win32 version of Allegro.cc, assuming that the Win32 version can run on top of HX DOS. Make any further necessary changes.
Modify the plugin system to use some kind of standard cross platform plugin library. What this would be, I have no idea. Maybe you can offer some suggestions? I talked to the developer who originally wrote the plugin system, and he said some of the things it does aren't possible on modern OS's because of segmentation restrictions. I'm not sure what this means, but I'm guessing it means all the plugins will need to be rewritten almost from scratch.
Magically, I get all the above done, and we can try to make it run on Windows, OS X, and Linux, whilst dealing with other cross-platform niggles like long file names, and things I haven't thought of.
Anyone got a problem with any of this? Is Allegro a good choice? If not, why not? What would you do about the plugin system? What would you do differently? Is this whole thing foolish, and should I just rewrite it from scratch, using the original as inspiration? (It would apparently take the original developer "about a month" to do that.)
One thing I haven't covered above is the text/font system. I'm not sure what to do about that: Animator Pro has its own custom font format, but it is also able to use PostScript Type 1 fonts and some other formats.
My biggest concern with your plan, in a nutshell: Your approach seems to be to attempt to keep the whole enormous thing working at all times, tweaking the environment ever-further away from DOS. During each tweak to the environment, that means you will have approximately a billion subtle assumptions that might have broken at once, none of which you necessarily understand yet. Untangling them all at once will be incredibly painful.
If I were doing the port, my approach would be to disable as much code as possible to get SOMETHING running in a modern environment, and bring the parts back online, one piece at a time. Write a simple test harness program that loads a display driver and draws some stuff, and compile it for DOS to make sure you understand the interface. Then write some C code that implements the same interface, but with Allegro (or SDL or SFML), and make that program work under Windows or Linux. When the output differs, you have a simple test case to work from.
Your entire job on this port is swapping out implementations of various interfaces and functions with completely new ones. This is a job that unit testing excels at. Don't write any new code without a test of some kind that runs on the old code under DOS! Make your potential problems as small and simple as you possibly can. Port assembly code instead of rewriting it only if you're reasonably confident that it will actually make your job easier (ie, algorithmic stuff that compiles fine with few tweaks under NASM). Don't bite off a bigger piece than you can comfortably fit in your brain at once.
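To sketch the kind of harness meant above (the display_driver interface here is hypothetical; the real Animator Pro driver structs will look different), the idea is that one tiny program exercises the interface, gets built once against the old DOS/VESA backend and once against a new Allegro/SDL-backed one, and the outputs are compared:
/* harness.c -- illustrative sketch of a driver test harness */
#include <stdint.h>
#include <stdio.h>

typedef struct display_driver {
    int  (*open)(int width, int height);
    void (*put_pixel)(int x, int y, uint8_t color);
    void (*close)(void);
} display_driver;

/* Supplied by whichever backend gets linked in (DOS/VESA or Allegro). */
extern display_driver *get_display_driver(void);

int main(void)
{
    display_driver *drv = get_display_driver();
    if (drv->open(320, 200) != 0) {
        fprintf(stderr, "could not open display\n");
        return 1;
    }
    for (int x = 0; x < 320; x++)    /* draw one horizontal line */
        drv->put_pixel(x, 100, 15);
    drv->close();
    return 0;
}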
I, for one, look forward to seeing your progress! I think what you're attempting to do is great. Thanks for doing it.
Hmmm, I might approach it by writing an OpenGL video "driver" for it. Today's machines are fast enough, with tons of RAM, that you could run all the pixel-specific algorithms on the main CPU into a back buffer and it would work. As the "generic" VGA driver just mapped the video buffer to a pointer, this would be a place to start. There was a zoom mode in the UI, so you can look at the pixels on a high-res display.
It is often very difficult to take an existing non-trivial code base that wasn't written with portability in mind (you mention a few of the obstacles) and then try to make it portable. There will be a lot of problems along the way. It is probably a better idea to start from scratch and rewrite the code, using the existing code as reference only. If you start from scratch you can leverage an existing portable UI solution, like Qt, in your new project.

How does one go about understanding GNU source code?

I'm really sorry if this sounds kinda dumb. I just finished reading K&R and I worked on some of the exercises. This summer, for my project, I'm thinking of re-implementing a linux utility to expand my understanding of C further so I downloaded the source for GNU tar and sed as they both seem interesting. However, I'm having trouble understanding where it starts, where's the main implementation, where all the weird macros came from, etc.
I have a lot of time so that's not really an issue. Am I supposed to familiarize myself with the GNU toolchain (ie. make, binutils, ..) first in order to understand the programs? Or maybe I should start with something a bit smaller (if there's such a thing) ?
I have little bit of experience with Java, C++ and python if that matters.
Thanks!
The GNU programs are big and complicated. The size of GNU Hello World shows that even the simplest GNU project needs a lot of code and configuration around it.
The autotools are hard to understand for a beginner, but you don't need to understand them to read the code. Even if you modify the code, most of the time you can simply run make to compile your changes.
To read code, you need a good editor (Vim, Emacs) or IDE (Eclipse) and some tools to navigate through the source. The tar project contains a src directory, which is a good place to start. A program always starts with the main function, so do
grep main *.c
or use your IDE to search for this function. It is in tar.c. Now, skip all the initialization stuff, until
/* Main command execution. */
There, you see a switch for subcommands. If you pass -x it does this, if you pass -c it does that, etc. This is the branching structure for those commands. If you want to know what these macros are, run
grep EXTRACT_SUBCOMMAND *.h
There you can see that they are listed in common.h.
Below EXTRACT_SUBCOMMAND you see something funny:
read_and (extract_archive);
The definition of read_and() (again obtained with grep):
read_and (void (*do_something) (void))
The single parameter is a function pointer like a callback, so read_and will supposedly read something and then call the function extract_archive. Again, grep on it and you will see this:
if (prepare_to_extract (current_stat_info.file_name, typeflag, &fun))
{
if (fun && (*fun) (current_stat_info.file_name, typeflag)
&& backup_option)
undo_last_backup ();
}
else
skip_member ();
Note that the real work happens when calling fun. fun is again a function pointer, which is set in prepare_to_extract. fun may point to extract_file, which does the actual writing.
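If the function-pointer indirection is new to you, here is the pattern in isolation (a stripped-down sketch, not the actual tar source):
/* callback_sketch.c -- the read_and() idea, reduced to its essentials */
#include <stdio.h>

static void extract_archive(void) { puts("extracting a member"); }
static void list_archive(void)    { puts("listing a member"); }

/* Read the archive, and hand each member to whatever the caller wants done. */
static void read_and(void (*do_something)(void))
{
    for (int member = 0; member < 3; member++)   /* pretend we read three members */
        do_something();
}

int main(void)
{
    read_and(extract_archive);   /* roughly what tar -x ends up doing */
    read_and(list_archive);      /* roughly what tar -t ends up doing */
    return 0;
}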
I hope this has walked you through a good deal of the process and shown you how I navigate through source code. Feel free to contact me if you have related questions.
The problem with programs like tar and sed is twofold (this is just my opinion, of course!). First of all, they're both really old. That means they've had multiple people maintain them over the years, with different coding styles and different personalities. For GNU utilities, it's usually pretty good, because they usually enforce a reasonably consistent coding style, but it's still an issue. The other problem is that they're unbelievably portable. Usually "portability" is seen as a good thing, but when taken to extremes, it means your codebase ends up full of little hacks and tricks to work around obscure bugs and corner cases in particular pieces of hardware and systems. And for programs as widely ported as tar and sed, that means there's a lot of corner cases and obscure hardware/compilers/OSes to take into account.
If you want to learn C, then I would say the best place to start is not trying to study code that others have written. Rather, try to write code yourself. If you really want to start with an existing codebase, choose one that's being actively maintained where you can see the changes that other people are making as they make them, follow along in the discussions on the mailing lists and so on.
With well-established programs like tar and sed, you see the result of the discussions that would've happened, but you can't see how software design decisions and changes are being made in real-time. That can only happen with actively-maintained software.
That's just my opinion of course, and you can take it with a grain of salt if you like :)
Why not download the source of the coreutils (http://ftp.gnu.org/gnu/coreutils/) and take a look at tools like yes? Less than 100 lines of C code and a fully functional, useful and really basic piece of GNU software.
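For a sense of scale, the core idea fits in a few lines (this is a sketch, not the actual GNU source, which also handles options, output buffering and error reporting):
/* yes_sketch.c -- print the argument (or "y") forever */
#include <stdio.h>

int main(int argc, char **argv)
{
    const char *word = (argc > 1) ? argv[1] : "y";
    for (;;) {
        if (puts(word) == EOF)   /* stop when the pipe closes or output fails */
            return 1;
    }
}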
GNU Hello is probably the smallest, simplest GNU program and is easy to understand.
I know it's sometimes a mess to navigate through C code, especially if you're not familiar with it. I suggest you use a tool that will help you browse through the functions, symbols, macros, etc. Then look for the main() function.
You need to familiarize yourself with the tools, of course, but you don't need to become an expert.
Learn how to use grep if you don't know it already and use it to search for the main function and everything else that interests you. You might also want to use code browsing tools like ctags or cscope which can also integrate with vim and emacs or use an IDE if you like that better.
I suggest using ctags or cscope for browsing. You can use them with vim/emacs. They are widely used in the open-source world.
They should be in the repository of every major linux distribution.
Making sense of some code which uses a lot of macros, utility functions, etc, can be hard. To better browse the code of a random C or C++ software, I suggest this approach, which is what I generally use:
Install Qt development tools and Qt Creator
Download the sources you want to inspect, and set them up for compilation (usually just ./configure for GNU stuff).
Run qmake -project in the root of the source directory, to generate a Qt .pro file for Qt Creator.
Open the .pro file in Qt Creator (do not use shadow build, when it asks).
Just to be safe, in Qt Creator Projects view, remove the default build steps. The .pro file is just for navigation inside Qt Creator.
Optional: set up custom build and run steps, if you want to build and run/debug under Qt Creator. Not needed for navigation only.
Use Qt Creator to browse the code. Note especially the locator (kb shortcut Ctrl+K) to find stuff by name, and "follow symbol under cursor" (kb shortcut F2), and "find usages" (kb shortcut Ctrl-Shift-U).
I had to take a look at "sed" just to see what the problem was; it shouldn't be that big. I looked and I see what the issue is, and I feel like Charlton Heston catching first sight of a broken statue on the beach. All of what I'm about to describe for "sed" might also apply to "tar". But I haven't looked at it (yet).
A lot of GNU code got seriously grunged up - to the point of unmaintainable morbid legacy - for reasons I don't know. I don't know exactly when it happened, maybe late 1990's or early 2000's, but it was like someone flipped a switch and suddenly, all the nice modular mostly self-contained code widgets got massively grunged with all sorts of extraneous entanglements having little or no connection to what the application itself was trying to do.
In your case, "sed": an entire library got (needlessly) dragged in with the application. This was the case at least as early as version 4.2 (the last version predating your query), probably before that - I'd have to check.
Another thing that got grunged up was the build system (again) to the point of unmaintainability.
So, you're really talking about legacy rescue here.
My advice ... which is generic for any codebase that's been around a long time ... is to dig as deep as you can and go back to its earliest forms first; and to branch out and look at other "sed"'s - like those in the UNIX archive.
https://www.tuhs.org/Archive/
or in the BSD archive:
https://github.com/freebsd
https://github.com/weiss/original-bsd
(the second one goes deeper into early BSD in its earlier commits.)
Many of the sed versions - but not all of them - may be found via the "mirrors" link under "Downloads" on the GNU sed page:
https://www.gnu.org/software/sed/
Version 1.18 is still intact. Version 1.17 is implicitly intact, since there is a 1.17-to-1.18 diff present there. Neither version has all the extra stuff piled on top of it. It's more representative of what GNU software looked like before becoming knotted up with all the entanglements.
It's actually pretty small - only 8863 lines for the *.c and *.h files, in all. Start with that.
For me the process of analysis of any codebase is destructive of the original and always entails a massive amount of refactoring and re-engineering; and simplification coming from just writing it better and more natively, while yet keeping or increasing its functionality. Almost always, it is written by people who only have a few years' experience (by which I mean: less than 20 years, for instance) and have thus not acquired full-fledged native fluency in the language, nor the breadth of background to be able to program well.
For this, if you do the same, it's strongly advised that you have some kind of test suite already in place or added. There's one in the version 4.2 software, for instance, though it may be stress-testing new capabilities added between 1.18 and 4.2. Just be aware of that. (So, it might require reducing the test suite to fit 1.18.) Every change you make has to be validated by whatever tests you have in your suite.
You need to have native fluency in the language ... or else the willingness and ability to acquire it by carrying out the exercise and others like it. If you don't have enough years behind you, you're going to hit a soft wall. The deeper you go, the harder it might be to move forward. That's an indication that you're not experienced enough yet, and that you don't have enough breadth. So, this exercise then becomes part of your learning experience, and you'll just have to plod through.
Because of how early the first versions date from, you will have to do some rewriting anyhow, just to bring it up to standard. Later versions can be used as a guide for this process. At a bare minimum, it should be brought up to C99, as this is virtually mandated as part of POSIX. In other words, you should at least be as far up to date as the present century!
Just the challenge of getting it to be functional will be exercise enough. You'll learn a lot of what's in it, just by doing that. The process of getting it to be operational is establishing a "baseline". Once you do that, you have your own version, and you can start with the "analysis".
Once a baseline is established, then you can proceed full throttle forward with refactoring and re-engineering. The test suite helps to provide cover against stumbles and inserted errors. You should keep all the versions that you have (re)made in a local repository so that you can jump back to earlier ones, in case you need to track down the sudden emergence of test failures or other bugs. Some bugs, you may find, were rooted all the way back in the beginning (thus: the discovery of hidden bugs).
After you have the baseline (re)written to your satisfaction, then you can proceed to layer in the subsequent versions. On GNU's archive, 1.18 jumps straight to 2.05. You'll have to make a "diff" between the two to see where all the changes were, and then graft them into your version of 1.18 to get your version of 2.05. This will help you better understand both the issues that the changes made addressed, and what changes were made.
At some point you're going to hit GNU's Grunge Wall. Version 2.05 jumped straight to 3.01 in GNU's historical archive. Some entanglements started slipping in with version 3.01. So, it's a soft wall we have here. But there's also an early test suite with 3.01, which you should use with 1.18, instead of 4.2's test suite.
When you hit the Grunge Wall, you'll see directly what the entanglements were, and you'll have to decide whether to go along for the ride or cast them aside. I can't tell you which direction is the rabbit hole, except that SED has been perfectly fine for a long time, most or all of it is what is listed in and mandated by the POSIX standard (even the current one), and what's there before version 3 serves that end.
I ran diffs. Between 2.05 and 3.01, the diff file is 5000 lines. Ok. That's (mostly) fine and is natural for code that's in development, but some of that may be coming from the soft Grunge Wall. Running a diff on 3.01 versus 4.2 yields a diff file that is over 60000 lines. You need only ask yourself: how can a program that's under 10000 lines - that abides by an international standard (POSIX) - be producing 60000 lines of differences? The answer is: that's what we call bloat. So, between 3.01 and 4.2, you're witnessing a problem that is very common to code bases: the rise of bloat.
So, that pretty much tells you which direction ("go along for the ride" versus "cast it aside") is the rabbit hole. I'd probably just stick with 3.01, and do a cursory review of the differences between 3.01 and 4.2 and of the change logs to get an overview of what the changes were, and just leave it at that, except maybe to find a different way to write in what they thought was necessary to change, if the reason for it was valid.
I've done legacy rescue before, before the term "legacy" was even in most people's vocabulary and am quick to recognize the hallmark signs of it. This is the kind of process one might go through.
We've seen it happen with some large codebases already. In effect, the superseding of X11 by Wayland was a massive exercise in legacy rescue. It's also possible that the ongoing superseding of GNU's gcc by clang may be considered an instance of that.

Why do you obfuscate your code?

Have you ever obfuscated your code before? Are there ever legitimate reasons to do so?
I have obfuscated my JavaScript. It made it smaller, thus reducing download times. In addition, since the code is handed to the client, my company didn't want them to be able to read it.
Yes, to make it harder to reverse engineer.
To ensure a job for life, of course (kidding).
This is pretty hilarious and educational: How to Write Unmaintainable Code.
It's called "Job Security". This is also the reason to use Perl -- no need to do obfuscation as separate task, hence higher productivity, without loss of job security.
Call it "security through obfuscability" if you will.
I don't believe making reverse engineering harder is a valid reason.
A good reason to obfuscate your code is to reduce the compiled footprint. For instance, J2ME applications need to be as small as possible. If you run your app through an obfuscator (and optimiser) then you can reduce the jar from a couple of MB to a few hundred KB.
The other point, nestled above, is that most obfuscators are also optimisers which can improve your application's performance.
Isn't this also used as security through obscurity? When your source code is publicly available (JavaScript etc.) you might want to at least make it somewhat harder to understand what is actually occurring on the client side.
Security is always full of compromises, but I think that security by obscurity is one of the least effective methods.
I believe all TV cable boxes will have their Java code obfuscated. This does make things harder to hack, and since the cable boxes will be in your home, they are theoretically hackable.
I'm not sure how much it will matter since the cable card will still control signal encryption and gets its authorization straight from the video source rather than the java code guide or java apps, but they are pretty dedicated to the concept.
By the way, it is not easy to trace exceptions thrown from an obfuscated stack! I actually memorized at one point that aH meant "Null Pointer Exception" for a particular build.
I remember creating a Windows Service for Online Backup application that was built in .NET. I could easily use either Visual Studio or tools like .NET Reflector to see the classes and the source code inside it.
I created a new Visual Studio test application and added the Windows Service reference to it. I double-clicked on the reference and I could see all the classes, namespaces, everything (not the source code, though). Anybody can figure out the internal workings of your modules by looking at the class names. In my case, one such class was FTPHandler, which clearly tells where the backups are going.
.NET Reflector goes beyond that by showing the actual code. It even has an option to Export the whole project so you get a VS project with all the classes and source code similar to what the developer had.
I think it makes sense to obfuscate, to make it at least harder if not impossible for someone to disassemble. I also think it makes sense for products with a large customer base where you do not want your competitors to know much about your products.
Looking at some of the code I wrote for my disk driver project makes me question what it means to be obfuscated.
((int8_t (*)( int32_t, void * )) hdd->_ctrl)( DISK_CMD_REQUEST, (void *) dr );
Or is that just system programming in C? Or should that line be written differently? Questions...
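For what it's worth, the same call is easier on the eyes with a typedef. The names below come from the snippet above; the typedef itself is just a suggested rewrite, assuming _ctrl is stored as a generic pointer that really points at a function taking (int32_t, void *) and returning int8_t:
typedef int8_t (*hdd_ctrl_fn)(int32_t cmd, void *arg);

hdd_ctrl_fn ctrl   = (hdd_ctrl_fn) hdd->_ctrl;          /* recover the real function type */
int8_t      status = ctrl(DISK_CMD_REQUEST, (void *) dr);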
Yes and no; I haven't delivered apps with a tool that was easily decompilable.
I did run something like obfuscators for old Basic and UCSD Pascal interpreters, but that was for a different reason, optimizing run time.
If I am delivering Java Swing apps to clients, I always obfuscate the class files before distribution.
You can never be too careful - I once pointed a decent Java decompiler (I used the JD Java Decompiler - http://www.djjavadecompiler.com/ ) at my class files and was rewarded with an almost perfect reproduction of the original code. That was rather unnerving, so I started obfuscating my production code ever since. I use Klassmaster myself (http://www.zelix.com/klassmaster/)
I have mostly obfuscated the code of my Android applications. I used the ProGuard tool to obfuscate the code.
When I worked on a C# project, our team used ArmDot. It's a licensing and obfuscation system.
Modern obfuscators are used not only to make the hacking process difficult. They are able to protect programs and games from cheating, check licenses/keys and even optimize code.
But I don't think it is necessary to use an obfuscator in every project.
It's most commonly done when you need to provide something in source form (usually due to the environment it's being built in, such as systems without shared libraries, especially if you as the seller don't have the exact system being built for), but you don't want the person you're giving it to to be able to modify or extend it significantly (or at all).
This used to be far more common than it is today. It also led to the (defunct?) Obfuscated C Contest.
A legal (though arguably not "legitimate") use might be to release "source" for an app you're linking with GPL code in obfuscated fashion. It's source, it can be modified, it's just very hard. That would be a more extreme version of releasing it without comments, or releasing with all whitespace trimmed, or (and this would be pushing the legal grounds probably) releasing assembler source generated from C (and perhaps hand-tweaked so you can say it's not just intermediate code).

C (or any) compiler's deterministic performance

Whilst working on a recent project, I was visited by a customer QA representative, who asked me a question that I hadn't really considered before:
How do you know that the compiler you are using generates machine code that matches the c code's functionality exactly and that the compiler is fully deterministic?
To this question I had absolutely no reply, as I have always taken the compiler for granted. It takes in code and spews out machine code. How can I go about testing that the compiler isn't actually adding functionality that I haven't asked for? Or, even more dangerously, implementing code in a slightly different manner to what I expect?
I am aware that this is perhaps not really an issue for everyone, and indeed the answer might just be... "you're over a barrel and deal with it". However, when working in an embedded environment, you trust your compiler implicitly. How can I prove to myself and QA that I am right in doing so?
You can apply that argument at any level: do you trust the third party libraries? do you trust the OS? do you trust the processor?
A good example of why this may be a valid concern of course, is how Ken Thompson put a backdoor into the original 'login' program ... and modified the C compiler so that even if you recompiled login you still got the backdoor. See this posting for more details.
Similar questions have been raised about encryption algorithms -- how do we know there isn't a backdoor in DES for the NSA to snoop through?
At the end of the day you have to decide if you trust the infrastructure you are building on enough not to worry about it; otherwise you have to start developing your own silicon chips!
For safety-critical embedded applications, certifying agencies require you to satisfy the "proven-in-use" requirement for the compiler. There are typically certain requirements (kind of like "hours of operation") that need to be met and proven by detailed documentation. However, most people either cannot or don't want to meet these requirements, because it can be very difficult, especially on your first project with a new target/compiler.
One other approach is basically to NOT trust the compiler's output at all. Any compiler-related and even language-related (Appendix G of the C90 standard, anyone?) deficiencies need to be covered by a strict set of static analysis, unit and coverage testing, in addition to the later functional testing.
A standard like MISRA-C can help to restrict the input to the compiler to a "safe" subset of the C language. Another approach is to restrict the input to the compiler to a subset of the language and test what the output for that entire subset is. If the application is built only from components of that subset, the compiler's output is assumed to be known. This usually goes by the name "qualification of the compiler".
The goal of all of this is to be able to answer the QA representative's question with "We don't just rely on determinism of the compiler but this is the way we prove it...".
You know by testing. When you test, you're testing both your code and the compiler.
You will find that the odds that you or the compiler writer have made an error are much smaller than the odds that you would make an error if you wrote the program in question in some assembly language.
There are compiler validation suites available.
The one I remember is "Perennial".
When I worked on a C compiler for an embedded SoC processor, we had to validate the compiler against this and two other validation suites (whose names I forget). Validating the compiler to a certain level of conformance to these test suites was part of the contract.
It all boils down to trust. Does your customer trust any compiler? Use that, or at least compare output code between yours and theirs.
If they don't trust any, is there a reference implementation for the language? Could you convince them to trust it? Then compare yours against the reference or use the reference.
This all assuming you actually verify the actual code you get from the vendor/provider and that you check the compiler has not been tampered with, which should be the first step.
Anyhow, this still leaves the question of how you would verify a compiler from scratch, without having any references. That certainly looks like a ton of work, and it requires a definition of the language, which is not always available; sometimes the definition is the compiler.
How do you know that the compiler you are using generates machine code that matches the c code's functionality exactly and that the compiler is fully deterministic?
You don't, that's why you test the resultant binary, and why you make sure to ship the same binary you tested with. And why when you make 'minor' software changes, you regression test to make sure none of the old functionality broke.
The only software I've certified is avionics. FAA certification isn't rigorous enough to prove the software works correctly, while at the same time it does force you to jump through a certain amount of hoops. The trick is to structure your 'process' so it improves quality as much as possible, with as little extraneous hoop-jumping as you can get away with. So anything that you know is worthless and won't actually find bugs, you can probably weasel out of. And anything you know you should do because it will find bugs that isn't explicitly asked for by the FAA, your best bet is to twist words until it sounds like you're giving the FAA/your QA people what they asked for.
This actually isn't as dishonest as I've made it sound, in general the FAA cares more about you being conscientious and confident that you're trying to do a good job, than about what exactly you do.
Some intellectual ammunition might be found in Crosstalk, a magazine for defense software engineers. This question is the kind of thing they spend many waking hours on. http://www.stsc.hill.af.mil/crosstalk/2006/08/index.html (If I can find my old notes from an old project, I'll be back here...)
You can never fully trust the compiler, even highly recommended ones. They could release an update that has a bug, and your code compiles the same. This problem is compounded when updating old code with the buggy compiler, doing testing and shipping out the goods only to have the customer ring you 3 months later with a problem.
It all comes back to testing, and if there is one thing I have learnt, it is to thoroughly test after any non-trivial change. If the problem seems impossible to find, have a look at the compiled assembler and check it's doing what it should be doing.
On several occasions I have found bugs in the compiler. One time there was a bug where 16-bit variables would get incremented without carry, but only if the 16-bit variable was part of an extern struct defined in a header file.
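Reconstructed for illustration only (the original names are long gone), this is roughly the shape of code that exposed that class of bug: on the buggy compiler, the low byte of ticks would wrap without carrying into the high byte, but only when the struct was declared extern in a shared header.
#include <stdint.h>

struct timer_state {
    uint16_t ticks;
};

extern struct timer_state counter;   /* declared in a header, defined elsewhere */

void timer_tick(void)
{
    counter.ticks++;   /* the 16-bit increment that was miscompiled */
}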
...you trust your compiler implicitly
You'll stop doing that the first time you come across a compiler bug. ;-)
But ultimately this is what testing is for. It doesn't matter to your test regime how the bug got in to your product in the first place, all that matters is that it didn't pass your extensive testing regime.
Well.. you can't simply say that you trust your compiler's output - particularly if you work with embedded code. It is not hard to find discrepancies between the code generated when compiling the very same code with different compilers. This is the case because the C standard itself is too loose. Many details can be implemented differently by different compilers without breaking the standard. How do we deal with this stuff? We avoid compiler dependent constructs whenever possible. We may deal with it by choosing a safer subset of C like Misra-C as previously mentioned by the user cschol. I seldom have to inspect the code generated by the compiler but that has also happened to me at times. But, ultimately, you are relying on your tests in order to make sure that the code behaves as intended.
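To illustrate the kind of compiler-dependent construct that gets avoided, here is a sketch of one such case and the rewrite a restricted subset (MISRA-C has rules in exactly this area) pushes you toward:
#include <stdint.h>

int32_t scale_unportable(int32_t x)
{
    /* For negative x, right-shifting a signed value is
       implementation-defined: two conforming compilers may
       legitimately give different answers. */
    return x >> 4;
}

uint32_t scale_portable(uint32_t x)
{
    /* Shifting an unsigned value is fully specified, so every
       conforming compiler must produce the same result. */
    return x >> 4;
}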
Is there a better option out there? Some people claim that there is. The other option is to write your code in SPARK/Ada. I have never written code in SPARK but my understanding is that you would still have to link it against routines written in C that would deal with the "bare metal" stuff. The beauty of SPARK/Ada is that you are absolutely guaranteed that the code generated by any compiler is always going to be the same. No ambiguity whatsoever. On top of that, the language allows you to annotate the code with explanations as to how the code is intended to behave. The SPARK toolset will use these annotations to formally prove that the code written does indeed do what the annotations have described. So I have been told that for critical systems, SPARK/Ada is a pretty good bet. I have never tried it myself though.
You don't know for sure that the compiler will do exactly what you expect. The reason is, of course, that a compiler is a piece of software, and is therefore susceptible to bugs.
Compiler writers have the advantage of working from a high quality spec, while the rest of us have to figure out what we're making as we go along. However, compiler specs also have bugs, and complex parts with subtle interactions. So, it's not exactly trivial to figure out what the compiler should be doing.
Still, once you decide what you think the language spec means, you can write a good, fast, automated test for every nuance. This is where compiler writing has a huge advantage over writing other kinds of software: in testing. Every bug becomes an automated test case, and the test suite can be very thorough. Compiler vendors have a lot more budget to invest in verifying the correctness of the compiler than you do (you already have a day job, right?).
What does this mean for you? It means that you need to be open to the possibilities of bugs in your compiler, but chances are you won't find any yourself.
I would pick a compiler vendor that is not likely to go out of business any time soon, that has a history of high quality in their compilers, and that has demonstrated their ability to service (patch) their products. Compilers seem to get more correct over time, so I'd choose one that's been around a decade or two.
Focus your attention on getting your code right. If it's clear and simple, then when you do hit a compiler bug, you won't have to think really hard to decide where the problem lies. Write good unit tests, which will ensure that your code does what you expect it to do.
Try unit testing.
If that's not enough, use different compilers and compare the results of your unit tests. Compare strace outputs, run your tests in a VM, keep a log of disk and network I/O, then compare those.
Or propose to write your own compiler and tell them what it's going to cost.
The most you can easily certify is that you are using an untampered compiler from provider X. If they do not trust provider X, it's their problem (if X is reasonably trustworthy). If they do not trust any compiler provider, then they are totally unreasonable.
Answering their question: I make sure I'm using an untampered compiler from X through these means. X is well reputed, plus I have a nice set of tests that show our application behaves as expected.
Everything else is starting to open the can of worms. You have to stop somewhere, as Rob says.
Sometimes you do get behavioural changes when you request aggressive levels of optimisation.
And optimisation and floating point numbers? Forget it!
For most software development (think desktop applications) the answer is probably that you don't know and don't care.
In safety-critical systems (think nuclear power plants and commercial avionics) you do care and regulatory agencies will require you to prove it. In my experience, you can do this one of two ways:
Use a qualified compiler, where "qualified" means that it has been verified according to the standards set out by the regulatory agency.
Perform object code analysis. Essentially, you compile a piece of reference code and then manually analyze the output to demonstrate that the compiler has not inserted any instructions that can't be traced back to your source code.
You get the one Dijkstra wrote.
Select a formally verified compiler, like the CompCert C compiler.
Changing the optimization level of the compiler will change the output.
Slight changes to a function may make the compiler inline or no longer inline a function.
Changes to the compiler (gcc versions for example) may change the output
Certain library functions may be intrinsic (i.e., emit optimized assembly inline) while most others are not.
The good news is that for most things it really doesn't matter that much. Where it does, you may want to consider assembly if it really matters (e.g., in an ISR).
If you are concerned about unexpected machine code which doesn't produce visible results, the only way is probably to contact the compiler vendor for certification of some sort which will satisfy your customer.
Otherwise you'll know it the same way you know about bugs in your own code: testing.
Machine code from modern compilers can be vastly different and totally incomprehensible for puny humans.
I think it's possible to reduce this problem to the Halting Problem somehow.
The most obvious problem is that if you use some kind of program to analyze the compiler and its determinism, how do you know that your program gets compiled correctly, and produces the correct result?
If you're using another, "safe" compiler, though, I'm not sure. What I am sure of is that writing a compiler from scratch would probably be an easier job.
Even a qualified or certified compiler can produce undesirable results. Keep your code simple and test, test, test. That, or walk through the machine code by hand while not allowing any human error. Plus there's the operating system or whatever environment you are running on (preferably no operating system, just your program).
This problem has been solved in mission-critical environments since software and compilers began, as many of the others who have responded also know. Each industry has its own rules, from certified compilers to programming style (you must always program this way, never use this or that or the other), lots of testing and peer review, verifying every execution path, etc.
If you are not in one of those industries, then you get what you get. A commercial program on a COTS operating system on COTS hardware. It will fail, that is a guarantee.
If you're worried about malicious bugs in the compiler, one recommendation (IIRC, an NSA requirement for some projects) is that the compiler binary predate the writing of the code. At least then you know that no one has added bugs targeted at your program.
