Any Tools to Catch Silly Mistakes in C Code?

I had a nasty typo that wasted my time and my colleague's time, it was something like this:
for (i = 0; i < blah; i++); // <- I had a semicolon here, that's the bug!
{
    // Some awesome logic here
}
First of all, it's very embarrassing; second, I should never repeat this. I'm relatively new to C. In Java, I guess I can use FindBugs to catch errors like these; what tool should I use for C code? Lint?

Yes, PC-Lint is probably the best tool available.

In addition to Lykathea's PC-Lint suggestion, you can also get better (or at least more) diagnostics if you bump up the warning level of the compiler: something like /W4 or -Wall.
Though I'm not sure if your particular problem would have been caught with this (MS VC doesn't seem to flag it even with all warnings enabled). I think that's because it's not an uncommon idiom for for loops to have an empty body, when all the work is done as side effects of the loop-control expressions.
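For what it's worth, newer compilers can catch this exact slip: clang's -Wempty-body diagnostic warns about a for loop whose body is an empty statement (GCC's flag of the same name has historically covered only if/else/do-while). A minimal sketch, assuming a reasonably recent clang:
/* stray_semicolon.c - compile with: clang -Wall -Wempty-body stray_semicolon.c
 * clang should report "for loop has empty body" on the marked line. */
#include <stdio.h>

int main(void)
{
    int i;
    for (i = 0; i < 10; i++); /* <- the stray semicolon: the loop body is empty */
    {
        printf("this block runs once, not ten times: i == %d\n", i);
    }
    return 0;
}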

A few things that have saved me in the past, from the top of my head:
Use if (3 == bla) rather than if (bla == 3): if you misspell it as (3 = bla), the compiler will complain, whereas (bla = 3) compiles silently (see the sketch after this list).
Use the all-warnings switch. Your compiler should warn you about empty statements like that.
Use assertions when you can and program defensively. Put good effort into making your program fail early, you will see the weaknesses that way.
Don't try to circumvent any safeguards the compiler or the OS have put in place. They are there for your ease of programming as well.
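To make the first and third tips concrete, a minimal sketch (the variable name is just illustrative):
#include <assert.h>

int main(void)
{
    int bla = 3;

    /* if (bla = 3) ...  compiles (at best with a warning): assigns 3, always true */
    /* if (3 = bla) ...  refuses to compile: lvalue required as left operand       */
    if (3 == bla)          /* the intended comparison */
    {
        assert(bla == 3);  /* program defensively: fail early and loudly */
    }
    return 0;
}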

Also look at Clang's static analyzer.

I would start by learning about splint and gdb. If you need something more advanced, build on these two tools. But they are a good start.

GCC has most of Lint's functionality built in via its warning flags.

Any good GUI programming environment ("IDE" - Integrated Development Environment) like Eclipse would generate a warning in a case like that.

A good syntax highlighter will make some cases like this more visible.

I would suggest seeing if you have the ability to enforce MISRA standards. They were written with great thought, and many of the rules are simple for a compiler to check. For example, a rule I use requires all NOP statements to be on their own line. This means that when you put a ; on the end of a loop statement, it will throw an error saying that it is not on its own line.

QA·C by Programming Research is another good static analysis tool for C.

In this (old) version of How to Shoot Yourself In the Foot, and in many other versions around the web, C is always the language that allows for the simplest procedure. When programming in C, you have to remember this and be careful. If you want protection, choose another language.
This saying is attributed to Bjarne Stroustrup (C++) himself. To (mis)quote:
"C makes it easy to shoot yourself in the foot"


Make tolower static

I need to make the standard library function tolower static instead of "public" scope.
I am compiling with MISRA C:2004, using IAR Embedded Workbench compiler. The compiler is declaring tolower as inline:
inline
int tolower(int _C)
{
    return isupper(_C) ? (_C + ('A' - 'a')) : _C;
}
I am getting the following error from the compiler:
Error[Li013]: the symbol "tolower" in RS232_Server.o is public but
is only needed by code in the same module - all declarations at
file scope should be static where possible (MISRA C 2004 Rule 8.10)
Here are my suggested solutions:
Use tolower in another module, in a dummy circumstance, so that it is needed by more than one module.
Implement the functionality without using tolower. This is an embedded system.
Add a "STATIC" macro, defined as empty by default, but which can be defined to static before the ctype.h header file is included.
I'm looking for a solution to the MISRA linker error. I would prefer to make the tolower function static only for the RS232_Server translation unit (If I make tolower static in the standard header file, it may affect other future projects.)
Edit 1:
Compiler is IAR Embedded Workbench 6.30 for ARM processor.
I'm using an ARM7TDMI processor in 32-bit mode (not Thumb mode).
The tolower function is used with a debug port.
Edit 2:
I'm also getting the error for _LocaleC_isupper and _LocaleC_tolower
Solution:
I notified the vendor of the issue, as recommended by Michael Burr.
I decided not to rewrite the library routine because of localization issues.
I implemented a function pointer in the main.c file as suggested by gbulmer; however, this will be heavily commented because it should be removed after IAR resolves the issue.
I'd suggest that you disable this particular MISRA check (you should be able to do that just for the RS232_Server translation unit) rather than use one of the workarounds you suggest. In my opinion the utility of rule 8.10 is pretty minimal, and jumping through the kinds of hoops in the suggested workarounds seems more likely to introduce higher risk than just disabling the rule. Keep in mind that the point of MISRA is to make C code less likely to have bugs.
Note that MISRA recognizes that "in some instances it may be necessary to deviate from the rules" and there is a documented "Deviation procedure" (section 4.3.2 in MISRA-C 2004).
If you won't or can't disable the rule for whatever reason, in my opinion you should probably just reimplement tolower()'s functionality in your own function, especially if you don't have to deal with locale support. It may also be worthwhile to open a support incident with IAR. You can prod them with rule 3.6, which says that "All libraries used in production code shall be written to comply with [MISRA-C]".
Who sells the MISRA linker? It seems to have an insane bug.
Can you work around it by taking its address, e.g. int (*foo)(int) = tolower; at file scope?
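A sketch of what that file-scope workaround might look like (the variable name is invented; comment it loudly, as the accepted solution says, so it can be deleted once IAR fixes the issue):
/* main.c */
#include <ctype.h>

/* WORKAROUND for linker error Li013 (MISRA rule 8.10): this unused
 * function pointer exports a second reference to tolower, so it is no
 * longer "only needed by code in the same module".
 * Remove once IAR resolves the issue. */
int (*tolower_misra_workaround)(int) = tolower;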
Edit: My rationale is:
a. that is the stupidest thing I've seen this decade, so it may be a bug, but
b. pushing it towards an edge case (a symbol having its name exported via a global) might shut it up.
For that to be correct behaviour, i.e. not a bug, it would have to be a MISRA error to include any library function once, like initialise_system_timer, initialise_watchdog_timer, ... , which just hurts my head.
Edit: Another thought. Again based on an assumption that this is an edge-case error.
Theory: maybe the compiler is both inlining the function and generating a standalone implementation of it. That standalone function is then (of course) never called, so the linker rules see an uncalled function.
GNU has options to prevent in-lining.
Can you do the same for that use of tolower? Does that change the error? You could do a test by
#define inline /* nothing */
before including ctype.h, then undef the macro.
The other test is to define:
#define inline static inline
before including ctype.h, which is the version of inline I expect.
EDIT2:
I think there is a bug which should be reported. I would imagine IAR have a workaround. I'd take their advice.
After a night's sleep, I strongly suspect the problem is inline int tolower() rather than static inline int tolower or plain int tolower, because that makes the most sense. Having a public function which is never called does seem like the symptom of a bug in a program.
Even with documentation, all the coding approaches have downsides.
I strongly support the OP: I would not change a standard header file. I think there are several reasons. For example, a future upgrade of the tool chain (which comes with a new set of headers) breaks an old app if it ever gets maintained. Or simply building the application on a different machine gives an error; merging two apparently correct applications may give an error; and so on. Nothing good is likely to come of that. -100
Use #define ... to make the error go away. I do not like using macros to change standard libraries. This seems like the seeds of a long-term bad idea. If in future another part of the program uses another function with a similar problem, the application of the 'fix' gets worse. Some inexperienced developer might get the job of maintaining the code, and 'learns' wrapping strange pieces of #define trickery around #include is 'normal' practice. It becomes company 'practice' to wrap #include <ctype.h> in weird macro workarounds, which remains years after it was fixed. -20
Use a command line compiler option to switch off inlining. If this works, and the semantics are correct (i.e. including the header and using the function in two source files does not lead to a multiply-defined function), then okay. I would expect it to lead to an error, but it is worth confirming as part of the bug report. It lays a frustrating trap for someone else who comes along in the future: if a different inline standard library function is used, and for some reason a person has to look at the generated code, it won't be inlined either. They might go a bit crazy wondering why inline is not honoured. I conjecture the reason they would be looking at generated code is that performance is critical; in my experience, people would spend a lot of time looking at code, baffled, before looking at the build script of a program which works. If suppressing inline is used as a fix, I feel it is better to do it everywhere than for one file; at least the semantics are consistent, and the high-level comment or documentation might get noticed. Anyone using the build scripts as a 'quick check' will get consistent behaviour, which might cause them to look at the documentation. -1 (no-inline everywhere) -3 (no-inline on one file)
Use tolower in a bogus way in a second source file. This has the small benefit that the build system is not 'infected' with extra compiler options. Also, it is a small file, which gives the opportunity to explain the error being worked around. I do not like this much, but I like it more than fiddling with standard headers. My current concern is that it might not work; it might produce two copies which the linker can't resolve. I do like it better than the weird 'take its address and see if the linker shuts up' approach (which I do think is an interesting way to test the edge cases). -2
Code your own tolower. I don't understand the wider context of the application. My reaction is not to code replacements for library functions because I am concerned about testing (and the unit tests which introduce even more code) and long term maintenance. I am even more nervous with character I/O because the application might need to become capable of handling a wider character set, for example UTF-8 encoding, and a user-defined tolower will tend to get out of synch. It does sound like a very specific application (debugging to a port), so I could cope. I don't like writing extra code to work around things which look like bugs, especially if it is commercial software. -5
Can you convert the error to a warning without losing all of the other checking? I still feel it is a bug in the toolchain, so I'd prefer it to be a warning, rather than silence so that there is a hook for the incident and some documentation, and there is less chance of another error creeping in. Otherwise go with switching off the error. +1 (Warning) 0 (Error)
Switching the error off seems to lose the 'corporate awareness' that (IMHO) IAR owes you an explanation, and longer term fix, but it seems much better than messing with the build system, writing macro-nastiness to futz with standard libraries, or writing your own code which increases your costs.
It may just be me, but I dislike writing code to work around a deficiency in a commercial product. It feels like this is where the vendor should be pleased to have the opportunity to justify its license cost. I remember Microsoft charged us for incidents, but if the problem was proven to be theirs, the incident and fix were free. The right thing to do seems to be give them a chance to earn the money. Products will have bugs, so silently working around it without giving them a chance to fix it seems less helpful too.
First of all, MISRA-C:2004 allows neither inline nor C99; the upcoming MISRA 2012 will allow them. If you try to run C99 or non-standard code through a MISRA-C:2004 static analyser, all bets are off. The MISRA checker should give you an error for the inline keyword.
I believe a MISRA-C compliant version of the code would look like:
static uint8_t tolower(uint8_t ch)
{
    return (uint8_t)(isupper(ch) ? (uint8_t)((uint8_t)(ch - 'A') + 'a')
                                 : ch);
}
Some comments on this: MISRA encourages char type for character literals, but at the same time warns against using the plain char type, as it has implementation-defined signedness. Therefore I use uint8_t instead. I believe it is plain dumb to assume that there exist ASCII tables with negative indices.
(_C + ('A' - 'a')) is most certainly not MISRA-compliant: as MISRA regards it, it contains two implicit type promotions. MISRA regards character literals as char rather than int, unlike the C standard.
Also, you have to typecast to underlying type after each expression. And because the ?: operator contains implicit type promotions, you must typecast the result of it to underlying type as well.
Since this turned out to be quite an unreadable mess, the best idea is to forget all about ?: entirely and rewrite the function. At the same time we can get rid of the unnecessary reliance on signed calculations.
static uint8_t tolower(uint8_t ch)
{
    if (isupper(ch))
    {
        ch = (uint8_t)(ch - 'A');
        ch = (uint8_t)(ch + 'a');
    }
    return ch;
}

Debugging C programs

Programming, in a sense, is easy. But bugs are something that always make more trouble. Can anyone help me with good debugging tricks and software for C?
From "The Elements of Programming Style" Brian Kernighan, 2nd edition, chapter 2:
Everyone knows that debugging is twice
as hard as writing a program in the
first place. So if you're as clever as
you can be when you write it, how will
you ever debug it?
So from that; don't be "too clever"!
But apart from that and the answers already given; use a debugger! That is your starting point tool-wise. You'd be amazed how many programmers struggle along without the aid of a debugger, and they are fools to do so.
But before you even get to the debugger, get your compiler to help you as much as possible: set the warning level to high, and set warnings as errors. A static analysis tool such as lint, PC-Lint, or QA-C would be even better.
Tools for debugging are all well and good and for some classes of error they will just point you straight to the problem. The best tip that I have for debugging is that you need to think about it in the right way. What works for me is the following:
The compiler probably isn't broken. I've been working with C for 25 years now and in all that time it's almost invariably something I'm doing wrong.
Read the error messages. Often I've looked back at the error message and in hindsight realized it was telling me exactly what was wrong.
Read the documentation. Make sure you aren't making assumptions about the language or library that aren't true.
Make a mental model of the problem. I ask myself what needs to be happening in my code in order for the results I'm seeing to occur. Then add debug statements, assertions, or just step through in the debugger (if you can) to see what is really happening.
Talk the problem through with someone else. Just describing it to a third party often results in a revelation about what might be happening.
Other people will have other ways of approaching debugging, but I find that if you have a structured approach to it, rather than flailing around changing stuff at random, you usually get there; and when you do, be prepared for the inevitable "Why didn't I see that straight away?!"
Best debugger for C
gdb
Best tools for memory leak checking:
Valgrind
The following are popular debugging tools.
Valgrind
Purify
Duma
Some very simple Tricks/Suggestions
-> Always check that nowhere in your code you dereference a wild/dangling pointer
Example 1)
int main(void)
{
    int *p;   /* p was never initialized: a wild pointer */
    *p = 10;  /* undefined behaviour (crash on most implementations) */
    return 0;
}
Example 2)
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *p = malloc(sizeof(int));
    /* do something with p */
    free(p);
    printf("%d", *p); /* undefined behaviour: use after free (crash on most implementations) */
    return 0;
}
-> Always initialize variables before using them
#include <stdio.h>

int main(void)
{
    int k;                        /* k was never initialized */
    for (int i = k; i < 10; ++i)  /* <- Ouch: i starts from garbage */
    {
        printf("%d", i);
    }
    return 0;
}
In addition to all the other suggestions (gdb, valgrind, all that), some simple rules when writing the code help a lot when debugging afterwards.
Always use types with the proper semantics: unsigned types (best size_t) for array indices and numbers that represent a cardinal, ptrdiff_t for pointer differences, off_t for file offsets, etc., and enum types for tags and case distinctions.
There is almost no need for the builtin types int, long, char or whatever; avoid them whenever possible. In particular, don't use char for arithmetic: the signedness problems with it are a plague. Use uint8_t or int8_t if you feel the need for such a thing.
Always initialize variables, all of them: integer, double, pointers, struct. It is not true that this is less efficient with a modern compiler; in most cases it will just be optimized away when not necessary. But especially pointer variables that are not properly initialized can produce spurious errors and make code hard to debug. If you have them initialized to NULL, your program will fail early, and your debugger will show you the place.
Compile with all warnings on, and don't finish tidying your code until the compiler doesn't give a single warning. Compilers are quite good at that nowadays; take advantage.
Compile with different optimization options, or even better with different versions of your compiler, or better still with completely different compilers on different platforms.
Use the assert macro. This forces you to think about your assumptions and also makes your code fail early if they are not fulfilled (see the sketch after this list).
Unit testing. Makes getting your software correct a lot easier.
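A small sketch of the assert point (the function and its names are invented for illustration): the assumptions are written down in the code, and the program stops at the first violation instead of producing garbage later.
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

static double average(const double *values, size_t count)
{
    assert(values != NULL);  /* document and enforce the assumptions... */
    assert(count > 0);       /* ...and fail early, right here, if violated */

    double sum = 0.0;
    for (size_t i = 0; i < count; i++)
    {
        sum += values[i];
    }
    return sum / (double)count;
}

int main(void)
{
    double xs[] = { 1.0, 2.0, 3.0 };
    printf("%f\n", average(xs, 3)); /* prints 2.000000 */
    return 0;
}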
gdb is a debugger for analysing your program.
Another technique is to use printf or logs.
Valgrind provides dynamic analysis of the executable.
Purify provides static and dynamic analysis. Sparrow and Prevent are some other tools that compete with Purify.
This can be separated into:
Prevention measures:
Use strict coding styles, don't make a mess
Use comments and code revisions
Use static code analysis tools
Use assertions where it's possible
Don't overcomplicate
Post-factum measures:
Use debugger/tracer
Use memory checking tools
Use regression testing
Use your brain
Off the top of my head, Valgrind.
You might also want to hone your debugging skills by reading the book Debugging by David Agans. Every programmer should read this early on in their career.
valgrind for memory problems if you're on Linux; use gdb/ddd on Linux as well. On Windows, a lot of Windows programmers don't seem to know about WinDbg; it is very useful but has a learning curve like gdb, and it is more powerful than the built-in debugger in Visual Studio. Learn to use assert: you will catch lots of stuff, and you can turn it off in release code if you so choose. Use a unit testing framework like Check, CUnit, etc. Always initialize your pointers, to NULL if nothing else. When you free a pointer, set it to NULL (see the sketch below); better you catch a segfault than your user. Pick a coding standard and stick to it; consistency will help you make fewer mistakes. Keep your functions small if at all possible; this will keep you from having braces nested 10 levels deep, which are logic nightmares. If compiling with gcc, use -Wall and -Wextra. Use the strn* functions instead of the str* functions; well worth the extra thinking they force you to do.
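A sketch of the free-then-NULL idiom mentioned above (the helper function is invented for illustration): a later accidental use fails fast on typical platforms instead of silently corrupting memory.
#include <stdlib.h>

/* Hypothetical helper: frees *pp and nulls out the caller's pointer. */
static void release_int(int **pp)
{
    free(*pp);
    *pp = NULL;  /* a later *pp dereference now faults immediately */
}

int main(void)
{
    int *p = malloc(sizeof *p);
    release_int(&p);
    /* printf("%d", *p);  <- would now be a clean NULL dereference
     *                       rather than a subtle use-after-free    */
    return 0;
}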

Large C macros. What's the benefit?

I've been working with a large codebase written primarily by programmers who no longer work at the company. One of the programmers apparently had a special place in his heart for very long macros. The only benefit I can see to using macros is being able to write functions that don't need to be passed in all their parameters (which is recommended against in a best practices guide I've read). Other than that I see no benefit over an inline function.
Some of the macros are so complicated I have a hard time imagining someone even writing them. I tried creating one in that spirit and it was a nightmare. Debugging is extremely difficult, as a macro collapses N+ lines of code into 1 in the debugger (e.g. there was a segfault somewhere in this large block of code. Good luck!). I had to actually pull the macro out and run it un-macro-tized to debug it. The only way I could see the person having written these is by automatically generating them out of code written in a function after he had debugged it (or by being smarter than me and writing it perfectly the first time, which is always possible I guess).
Am I missing something? Am I crazy? Are there debugging tricks I'm not aware of? Please fill me in. I would really like to hear from the macro-lovers in the audience. :)
To me the best use of macros is to compress code and reduce errors. The downside is obviously in debugging, so they have to be used with care.
I tend to think that if the resulting code isn't an order of magnitude smaller and less prone to errors (meaning the macros take care of some bookkeeping details) then it wasn't worth it.
In C++, many uses like this can be replaced with templates, but not all. A simple example of Macros that are useful are in the event handler macros of MFC -- without them, creating event tables would be much harder to get right and the code you'd have to write (and read) would be much more complex.
If the macros are extremely long, they probably make the code short but efficient. In effect, he might have used macros to explicitly inline code or remove decision points from the run-time code path.
It might be important to understand that, in the past, such optimizations weren't done by many compilers, and some things that we take for granted today, like fast function calls, weren't valid then.
To me, macros are evil. With all their side effects, and the fact that in C++ you can get the same performance gains with inline, they are not worth the risk.
For example, see this short macro:
#define max(a, b) ((a)>(b)?(a):(b))
then try this call:
max(i++, j++)
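To spell out why that call bites (a sketch; the values are just for illustration): the arguments are pasted in textually, so one of them is evaluated twice.
#include <stdio.h>

#define max(a, b) ((a)>(b)?(a):(b))

int main(void)
{
    int i = 1, j = 0;
    int m = max(i++, j++);
    /* expands to ((i++) > (j++) ? (i++) : (j++)): the comparison
     * evaluates both arguments, then the winning branch evaluates
     * i++ a second time, so i is incremented twice */
    printf("i=%d j=%d m=%d\n", i, j, m); /* prints i=3 j=1 m=2 */
    return 0;
}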
More. Say you have
#define PLANETS 8
#define SOCCER_MIDDLE_RIGHT 8
if an error is thrown, it will refer to '8', but not to either of its meaningful representations.
I only know of two reasons for doing what you describe.
First is to force functions to be inlined. This is pretty much pointless, since the inline keyword usually does the same thing, and function inlining is often a premature micro-optimization anyway.
Second is to simulate nested functions in C or C++. This is related to your "writing functions that don't need to be passed in all their parameters" but can actually be quite a bit more powerful than that. Walter Bright gives examples of where nested functions can be useful.
There are other reasons to use macros, such as using preprocessor-specific functionality (like including __FILE__ and __LINE__ in autogenerated error messages) or reducing boilerplate code in ways that functions and templates can't (the Boost.Preprocessor library excels here; see Boost.ScopeExit or this sample enum code for examples), but these reasons don't seem to apply to what you describe.
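A sketch of that __FILE__/__LINE__ point (the macro name is invented): only a macro can capture the caller's location, because it expands at the call site; a plain function would report its own.
#include <stdio.h>

/* Hypothetical logging macro: __FILE__ and __LINE__ expand to the
 * location of each use, which no ordinary C function could report. */
#define LOG_ERROR(msg) \
    fprintf(stderr, "%s:%d: error: %s\n", __FILE__, __LINE__, (msg))

int main(void)
{
    LOG_ERROR("something went wrong"); /* reports this file and line */
    return 0;
}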
Very long macros will have performance drawbacks, like increased compiled binary size, and there are certainly other reasons for not using them.
For the most problematic macros, I would consider running the code through the preprocessor and replacing the macro output with function calls (inline if possible) or straight LOC. If the macros exist for compatibility with other architectures/OSes, you might be stuck though.
Part of the benefit is code replication without the eventual maintenance cost - that is, instead of copying code elsewhere you create a macro from it and only have to edit it once...
Of course, you could also just make a method to be called but that is sort of more work... I'm against much macro use myself, just trying to present a potential rationale.
There are a number of good reasons to write macros in C.
Some of the most important are for creating configuration tables using x-macros (see the sketch below), for making function-like macros that can accept multiple parameter types as inputs, and for converting tables from human-readable/configurable/understandable values into the values the computer uses.
I can't really see a reason for people to write very long macros, except for the historic automatic function inlining.
I would say that when debugging complex macros, (when writing X macros etc) I tend to preprocess the source file and substitute the preprocessed file for the original.
This allows you to see the C code generated, and gives you real lines to work with in the debugger.
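For readers who haven't met x-macros, a minimal sketch (the table and names are invented for illustration): the table is written once and expanded twice, so the enum and its string names can never drift apart.
#include <stdio.h>

#define COLOR_TABLE \
    X(RED)          \
    X(GREEN)        \
    X(BLUE)

#define X(name) COLOR_##name,
enum color { COLOR_TABLE COLOR_COUNT };  /* COLOR_RED, COLOR_GREEN, ... */
#undef X

#define X(name) #name,
static const char *color_names[] = { COLOR_TABLE };  /* "RED", "GREEN", ... */
#undef X

int main(void)
{
    for (int c = 0; c < COLOR_COUNT; c++)
    {
        printf("%d = %s\n", c, color_names[c]);
    }
    return 0;
}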
I don't use macros at all. Inline functions serve every useful purpose a macro can. Macros allow you to do very weird and counterintuitive things, like splitting up identifiers (how would someone search for the identifier then?).
I have also worked on a product where a legacy programmer (who thankfully is long gone) also had a special love affair with macros. His 'custom' scripting language is the height of sloppiness. This was compounded by the fact that he wrote his C++ classes in C, meaning all class functions and variables were public. Anyway, he wrote almost everything in macros and variadic functions (another hideous monstrosity foisted on the world). So instead of writing a proper template class he would use a macro instead! He also resorted to macros to create factory classes, instead of normal code... His code is pretty much unmaintainable.
From what I have seen, macros can be used when they are small, are used declaratively, and don't contain moving parts like loops and other program-flow expressions. It's OK if the macro is one or at most two lines long and it declares an instance of something, something that won't break during runtime. Also, macros should not contain class definitions or function definitions. If the macro contains code that needs to be stepped into using a debugger, then the macro should be removed and replaced with something else.
They can also be useful for wrapping custom tracing/debugging functionality. For instance you want custom tracing in debug builds but not release builds.
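A sketch of such a debug-only tracing wrapper (NDEBUG is borrowed from the assert convention; a project might key off its own build flag instead):
#include <stdio.h>

#ifdef NDEBUG
#define TRACE(fmt, ...) ((void)0)  /* compiles away entirely in release builds */
#else
#define TRACE(fmt, ...) printf("TRACE: " fmt "\n", __VA_ARGS__)
#endif

int main(void)
{
    TRACE("entering main, build=%s", "debug");
    return 0;
}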
Anyway, when you are working in legacy code like that, just be sure to remove a bit of the macro mess at a time. If you keep it up, eventually you will remove them all and make life a bit easier for yourself. I have done this in the past with especially messy macros. What I do is turn on the compiler switch to have the preprocessor generate an output file. Then I raid that file, copy the code, re-indent it, and replace the macro with the generated code. Thank goodness for that compiler feature.
Some of the legacy code I've worked with used macros very extensively in the place of methods. The reasoning was that the computer/OS/runtime had an extremely small stack, so that stack overflows were a common problem. Using macros instead of methods meant that there were fewer methods on the stack.
Luckily, most of that code was obsolete, so it is (mostly) gone now.
C89 did not have inline functions. If using a compiler with extensions disabled (which is a desirable thing to do for several reasons), then the macro might be the only option.
Although C99 came out in 1999, there was resistance to it for a long time; commercial compiler vendors didn't feel it was worth their time to implement C99. Some (e.g. MS) still haven't. So for many companies it was not a viable practical decision to use C99 conforming mode, even up to today in the case of some compilers.
I have used C89 compilers that did have an extension for inline functions, but the extension was buggy (e.g. multiple definition errors when there should not be), things like that may dissuade a programmer from using inline functions.
Another thing is that the macro version effectively forces the function to actually be inlined. The C99 inline keyword is only a compiler hint, and the compiler may still decide to generate a single instance of the function code, linked like a non-inline function. (One compiler that I still use will do this if the function is not trivial and returns void.)

How to compare compilers

What pointers do you use to compare between compilers?
I'm told gcc is the best C compiler, is this true? If so, why?
I mean this generally, so you can state which compiler is more appropriate for which architecture.
(I hear icc would be more appropriate for Intel, for instance, but I don't know why)
Personally I intend to use AMD 64 bit, develop both in Linux and Windows, GUI and non GUI apps.
Um, dunno where you heard that gcc is the "best C compiler". It's simply the most ubiquitous and also a lot better than the native C compilers provided by most commercial UNIX vendors when gcc came about in the 1990s.
But what defines the "best"?
Time to compile code;
Size of compiled code;
Speed of compiled code;
Memory usage of compiled code;
Bugs and probability of seg faulting;
Support;
Community;
etc.
Different things matter to different people.
Here's one set of metrics comparing gcc to Intel's compiler and another comparison with clang. I'm sure you can find some comparisons to Microsoft's compiler too.
Generally speaking, people aren't all that concerned with the relative size or speed of a compiler (or even necessarily with the size or speed of its output, within a factor of two), but with whether it works at all (this was a real issue a decade or two ago), whether it supports the relevant standards, and whether it has any oddities/bugs/features you have to work around.
In general: first of all, the most important aspect of compiler quality is correctness. A compiler with bugs or unexpected behaviour can really wreck your day.
The quality of the resulting code, like speed, size and memory usage, is also at the top of the list.
The speed of compilation is another aspect, especially when compiling large projects.
One thing I find particularly important is error handling, the quality of messages you get when the compiler encounters stuff it can't (or won't) handle.
Correctness is the sine qua non.
I also like
To have a compiler that runs really fast (like lcc or ocamlc)
To have a compiler that produces really good code (like ocamlopt or MLton)
It's OK if they are two different compilers.
I hate having a compiler that makes programs break when a new version comes out. (Richard Stallman, phone your office.)
I know that the Intel and MS compilers have started doing code generation for SSE3/4 instructions and doing clever things like unrolling loops and supporting vectorisation in the compiler. Not sure GCC does this yet.
I always thought that error messages and warnings make a difference. Some compilers will make it unnecessarily difficult for you to understand what they are trying to say. Others are way more user-friendly. It's also nice when you can enable warnings without the compiler warning you endlessly about stuff it created itself.
Do you mean
best by speed of compiling
best by smallest code
best by fastest code?
You can create a test app, probably with some nasty code (that needs an intelligent optimizer) and use all compilers to test it.
Compare your benchmarks and use the one you like the most.
gcc is a horrible compiler. It has the BEST tech support perhaps because of its price, the number of users and the internet (and google for finding that help). But its output is average to below average at best as far as the quality of the machine code it generates.

Code Obfuscation?

So, I have a penchant for Easter Eggs... this dates back to me being part of the founding community of the Easter Egg Archive.
However, I also do a lot of open source programming.
What I want to know is, what do you think is the best way to SYSTEMATICALLY and METHODICALLY obfuscate code.
Examples in PHP/Python/C/C++ preferred, but in other languages is fine, if the methodology is explained properly.
Compile the code with full optimization. Completely strip the binary.
Use a decompiler on the code.
I can guarantee the result will be so utterly unreadable that you won't even be able to read it ;)
In that case, you should use/write an "obfuscator". A program that does the job for you.
The Salamander Obfuscator can be used to obfuscate .Net programs, but it is more to prevent decompilation, thus not exactly what you need.
A good place to learn about obfuscation in C is International Obfuscated C Code Contest
In the spirit of renaming symbols: overuse scope and visibility rules by naming different variables with the same name.
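A tiny sketch of that trick: three distinct variables, all named l (easily confused with 1, for bonus points), each shadowing the one outside it.
#include <stdio.h>

int main(void)
{
    int l = 1;
    {
        int l = 2;        /* shadows the outer l */
        {
            int l = 3;    /* shadows both */
            printf("%d", l);
        }
        printf("%d", l);
    }
    printf("%d\n", l);    /* prints 321 */
    return 0;
}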
The question is how to create seemingly non-obfuscated code in plain sight (open source) without it appearing to perform another function.
Some obvious methods:
remove comments and as much whitespace as you can without breaking things
join lines
rename variables and functions to be meaningless (preferably 1 character)
For systematic and methodical obfuscation of code, you cannot beat Perl. If you want something that compiles to a binary, there is always APL.
If you are targeting the .NET framework, put your Easter egg source code in a resource file as a binhex string. Then you can have one of your initialising routines fetch it, decode it, and compile it into memory. You can invoke it using reflection.
If you need help with the technical aspects of compiling into memory and calling into the resultant assembly, I can give you a library I wrote and a sample program that uses it.
You can use this technology to load plug-ins, which is a legit thing to do and reasonable in an initialiser.
