C - Protect Persistent Settings

I have a C program which has always used hard coded define statements for a few settings. Example...
#define TRIGGER_TIMEOUT 50000
This has worked just fine. Now, this setting is to become adjustable. Example...
g_ulTriggerTimeout = ReadEEPROM(TRIGGER_TIMEOUT_OFFSET);
If persistent storage (EEPROM) is detected, then the value will be read in and used. So, my safe literal value is now at risk of being corrupted (inadvertently overwritten). I need to make this variable a constant, but I also need to read in the initial value from EEPROM. How is this scenario typically solved?

This is IMHO one of those cases where bending the rules is ok.
const int g_ulTriggerTimeout; //declare somewhere making sure it is in a writable section (see comment below from undur)
/* later */
//just for this assignment make it modifiable
*((int*) &g_ulTriggerTimeout) = ReadEEPROM(TRIGGER_TIMEOUT_OFFSET);

I do not like bending the rules (invoking undefined behaviour) the way RedX's answer does. Therefore, I now give a C standard-compliant solution. The price is having a function instead of a constant variable.
int g_ulTriggerTimeout()
{
    static int done; // statically initialized to zero
    static int value;
    if( !done )
    {
        value = haveEEPROM ? ReadEEPROM(TRIGGER_TIMEOUT_OFFSET)
                           : TRIGGER_TIMEOUT;
        done = 1;
    }
    return value;
}
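A usage sketch (StartTimer is a hypothetical call site, not from the question): the only change for callers is a pair of parentheses, and the EEPROM read happens once, on first use:
StartTimer(g_ulTriggerTimeout()); /* was: StartTimer(g_ulTriggerTimeout); */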
The following was my first answer; it is, however, only valid in C++:
const int g_ulTriggerTimeout = haveEEPROM ? ReadEEPROM(TRIGGER_TIMEOUT_OFFSET)
                                          : TRIGGER_TIMEOUT;

You cannot (portably/with C).
If you have to modify the variable in your program (by reading it from EEPROM), you cannot lock it against modification by your program.
There might be platform specific means to achieve that though.

What are the possible values for a valid pointer?

Does the standard restrict possible memory addresses (which I would interpret as possible values for a pointer)? Can your code rely on some values to be never used and still be fully portable?
I just saw this in our code base (it's a C library), and I was wondering if it would be always OK. I'm not sure what the author meant by this, but it's obviously not just a check for possible null.
int LOG(const char* func, const char* file,
        int lineno, severity level, const char* msg)
{
    const unsigned long u_func = (unsigned long)func;
    const unsigned long u_file = (unsigned long)file;
    const unsigned long u_msg  = (unsigned long)msg;
    if(u_func < 0x400 || u_file < 0x400 || u_msg < 0x400 ||
       (unsigned)lineno > 10000 || (unsigned)level > LOG_DEBUG)
    {
        fprintf(stderr, "log function called with bad args");
        return -1;
    }
    //...
}
Another possible use case for this would be storing boolean flags inside a pointer, instead of in a separate member variable, as an optimization. I think the C++11 small string optimization does this, but I might be wrong.
EDIT:
If it's implementation defined, as some of you have mentioned, can you detect it at compile time?
Does the standard restrict possible memory addresses (which I would interpret as possible values for a pointer)?
Neither the C++ standard nor (to my knowledge) the C standard restricts possible memory addresses.
Can your code rely on some values to be never used and still be fully portable?
A program that unconditionally relies on such an implementation-defined (or standard-unspecified) detail would not be fully portable to all concrete and theoretical standard implementations.
However, using platform detection macros, it may be possible to make the program portable by relying on the detail conditionally only on systems where the detail is reliable.
P.S. Another thing that you cannot technically rely on: unsigned long is not guaranteed to be able to represent all pointer values (uintptr_t is).
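As an illustration of both points, a sketch that uses uintptr_t for the conversion and applies the low-address heuristic only behind platform-detection macros (the macro list is an assumption, not exhaustive):
#include <stddef.h>
#include <stdint.h> /* uintptr_t is optional in the standard, but near-universal */

static int looks_invalid(const void *p)
{
#if defined(__linux__) || defined(_WIN32)
    /* Assumption: on these platforms the lowest pages are never mapped
       for user data, so tiny addresses indicate a garbage pointer. */
    return (uintptr_t)p < 0x400;
#else
    return p == NULL; /* portable fallback: only null is detectable */
#endif
}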
The standard term is 'safely derived pointer'. It is implementation-defined whether you can use pointers that are not safely derived - for example, ones created from numeric constants - in your program.
You can check the pointer safety model with std::pointer_safety: https://en.cppreference.com/w/cpp/memory/gc/pointer_safety

initializer element not constant?

I am relatively new to C and am only learning pieces of it in order to publish a Pebble C/PebbleKitJS app to track buses. So far I have the data being processed on a Node server, and I am getting ready to have the data processed by a JS file. My one problem, however, lies within the C code.
This code processes data stored in a key dictionary sent from JS and assigns it to a variable for use below. By using #define var 9, I can successfully have the .high value set to 9. But with an int var, it fails and throws the error "initializer element not constant".
What does this error mean, and what exactly is the difference between static and constant if I don't define it? Apparently static vars don't return anything? Some help would be much appreciated.
UPDATE: The problem still isn't fixed. The following new error message occurs in addition to the initializer one: error: (near initialization for 's_data_points[0].high')
int key0_buffer;

void process_tuple(Tuple *t)
{
    //Get key
    int key = t->key;

    //Get integer value, if present
    int value = t->value->int32;

    //Get string value, if present
    char string_value[32];
    strcpy(string_value, t->value->cstring);

    //Decide what to do
    switch(key) {
    case key_0:
        //Location received
        key0_buffer = value;
        break;
    }
}

static WeatherAppDataPoint s_data_points[] = {
    {
        .city = "San Diego",
        .description = "surfboard :)",
        .icon = WEATHER_APP_ICON_GENERIC_WEATHER,
        .current = 110,
        .high = key0_buffer,
        .low = 9,
    },
};
Try this instead:
enum { key0_buffer = 9 };
C doesn't provide for runtime computations while initializing global variables. (The concept does exist as a C++ feature called "dynamic initialization.")
The execution model is that the compiler can store all the bytes of global variables in ROM, then copy any modifiable variables into RAM with a single memcpy. Assigning one global to another at startup would be more complicated than that model allows.
#define allows you to substitute the text 9, which is a constant expression.
Many frown upon using text substitutions to avoid variables as primitive, unnecessarily low-level, and potentially inefficient. In this case though, the results should be the same.
In C, enum constants have type int, so they are a suitable substitute. You're out of luck for other types, though.
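For instance (a sketch; DEFAULT_HIGH is an invented name): the enum covers the int case from the question, but a non-integer constant still needs the preprocessor or a literal:
enum { key0_buffer = 9 }; /* works: enum constants have type int */
#define DEFAULT_HIGH 9.5f /* non-int constants are where enum runs out */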
There is a key difference between the code in your function and the code lower down that defines the static variable. The function code is executable -- these lines are going to get run when the function is called.
Your static declaration of WeatherAppDataPoint is just telling the compiler to create the static variable. The initialization values you are placing in that declaration tell the compiler what value to initialize this data to. That is why they must be constant -- these are the values that get loaded in before anything gets executed.
The #define statement just tells the preprocessor to replace all instances of "var" with the string "9". It is literally the same as a cut and paste operation in a text editor. This gets done before the compiler ever sees the code. This is why you have no problems with that; the compiler sees a literal constant, just as if you'd manually typed the "9" directly into the source code.
In 'C', variables don't return things, they store them. Only functions return things. If you want to have an integer declared somewhere and then assign it to the value of your static variable, that is an actual executable line of code that will need to occur inside of a function somewhere (i.e. inside your process_tuple function). The lines of code below the static declaration aren't executed at run time; they just set up the initial state of the program and tell the compiler how big the variable is.
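Putting that together with the names from the question, a minimal sketch (the placeholder value is an assumption) is to initialize .high with a constant and overwrite it at run time once the tuple arrives:
static WeatherAppDataPoint s_data_points[] = {
    {
        /* ...other members as in the question... */
        .high = 0, /* constant placeholder; the real value is assigned below */
    },
};

/* inside process_tuple(), in the existing switch: */
case key_0:
    key0_buffer = value;
    s_data_points[0].high = key0_buffer; /* executable statement, so a non-constant value is fine */
    break;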

Why #defines instead of enums [duplicate]

Which one is better to use among the below statements in C?
static const int var = 5;
or
#define var 5
or
enum { var = 5 };
It depends on what you need the value for. You (and everyone else so far) omitted the third alternative:
static const int var = 5;
#define var 5
enum { var = 5 };
Ignoring issues about the choice of name, then:
If you need to pass a pointer around, you must use (1).
Since (2) is apparently an option, you don't need to pass pointers around.
Both (1) and (3) have a symbol in the debugger's symbol table - that makes debugging easier. It is more likely that (2) will not have a symbol, leaving you wondering what it is.
(1) cannot be used as a dimension for arrays at global scope; both (2) and (3) can.
(1) cannot be used as a dimension for static arrays at function scope; both (2) and (3) can.
Under C99, all of these can be used for local arrays. Technically, using (1) would imply the use of a VLA (variable-length array), though the dimension referenced by 'var' would of course be fixed at size 5.
(1) cannot be used in places like switch statements; both (2) and (3) can.
(1) cannot be used to initialize static variables; both (2) and (3) can.
(2) can change code that you didn't want changed because it is handled by the preprocessor; neither (1) nor (3) will have unexpected side effects like that.
You can detect whether (2) has been set in the preprocessor; neither (1) nor (3) allows that.
So, in most contexts, prefer the 'enum' over the alternatives. Otherwise, the first and last bullet points are likely to be the controlling factors — and you have to think harder if you need to satisfy both at once.
If you were asking about C++, then you'd use option (1) — the static const — every time.
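A compact illustration of the array and switch bullet points above (a sketch; the identifiers are invented):
enum { kMax = 5 };          /* (3): a genuine integer constant expression */
static const int cMax = 5;  /* (1): has an address, but is not a constant expression in C */

static int table[kMax];     /* OK; 'static int table[cMax];' does not compile in C */

int classify(int v)
{
    switch (v) {
    case kMax:              /* OK; 'case cMax:' is a constraint violation in C */
        return 1;
    default:
        return 0;
    }
}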
Generally speaking:
static const
Because it respects scope and is type-safe.
The only caveat I could see: if you want the variable to be possibly defined on the command line. There is still an alternative:
#ifdef VAR // Very bad name, not long enough, too general, etc..
static int const var = VAR;
#else
static int const var = 5; // default value
#endif
Whenever possible, instead of macros / ellipsis, use a type-safe alternative.
If you really NEED to go with a macro (for example, you want __FILE__ or __LINE__), then you'd better name your macro VERY carefully: in its naming convention, Boost recommends all upper-case, beginning with the name of the project (here BOOST_); while perusing the library you will notice this is (generally) followed by the name of the particular area (library) and then a meaningful name.
It generally makes for lengthy names :)
In C, specifically? In C the correct answer is: use #define (or, if appropriate, enum)
While it is beneficial to have the scoping and typing properties of a const object, in reality const objects in C (as opposed to C++) are not true constants and therefore are usually useless in most practical cases.
So, in C the choice should be determined by how you plan to use your constant. For example, you can't use a const int object as a case label (while a macro will work). You can't use a const int object as a bit-field width (while a macro will work). In C89/90 you can't use a const object to specify an array size (while a macro will work). Even in C99 you can't use a const object to specify an array size when you need a non-VLA array.
If this is important for you then it will determine your choice. Most of the time, you'll have no choice but to use #define in C. And don't forget another alternative, that produces true constants in C - enum.
In C++ const objects are true constants, so in C++ it is almost always better to prefer the const variant (no need for explicit static in C++ though).
The difference between static const and #define is that the former uses memory for storage and the latter does not. Secondly, you cannot take the address of a #define, whereas you can take the address of a static const. Which to use really depends on the circumstances; each is at its best under different circumstances. Please don't assume that one is always better than the other... :-)
If that were the case, Dennis Ritchie would have kept only the best one... hahaha... :-)
In C #define is much more popular. You can use those values for declaring array sizes for example:
#define MAXLEN 5

void foo(void) {
    int bar[MAXLEN];
}
ANSI C doesn't allow you to use static consts in this context as far as I know. In C++ you should avoid macros in these cases. You can write
const int maxlen = 5;

void foo() {
    int bar[maxlen];
}
and even leave out static because internal linkage is implied by const already [in C++ only].
Another drawback of const in C is that you can't use the value in initializing another const.
static int const NUMBER_OF_FINGERS_PER_HAND = 5;
static int const NUMBER_OF_HANDS = 2;

// initializer element is not constant, this does not work.
static int const NUMBER_OF_FINGERS = NUMBER_OF_FINGERS_PER_HAND
                                     * NUMBER_OF_HANDS;
Even this does not work with a const since the compiler does not see it as a constant:
static uint8_t const ARRAY_SIZE = 16;
static int8_t const lookup_table[ARRAY_SIZE] = {
    1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}; // ARRAY_SIZE not a constant!
I'd be happy to use typed const in these cases, otherwise...
If you can get away with it, static const has a lot of advantages. It obeys the normal scope principles, is visible in a debugger, and generally obeys the rules that variables obey.
However, at least in the original C standard, it isn't actually a constant. If you use #define var 5, you can write int foo[var]; as a declaration, but you can't do that (except as a compiler extension) with static const int var = 5;. This is not the case in C++, where the static const version can be used anywhere the #define version can, and I believe this is also the case with C99.
However, never name a #define constant with a lowercase name. It will override any possible use of that name until the end of the translation unit. Macro constants should be in what is effectively their own namespace, which is traditionally all capital letters, perhaps with a prefix.
#define var 5 will cause you trouble if you have things like mystruct.var.
For example,
struct mystruct {
    int var;
};

#define var 5

int main() {
    struct mystruct foo;
    foo.var = 1;
    return 0;
}
The preprocessor will replace it and the code won't compile. For this reason, traditional coding style suggests that all constant #defines use capital letters to avoid such conflicts.
It is ALWAYS preferable to use const instead of #define, because const is handled by the compiler while #define is handled by the preprocessor. It is as if #define itself were not part of the code (roughly speaking).
Example:
#define PI 3.1416
The symbolic name PI may never be seen by compilers; it may be removed by the preprocessor before the source code even gets to a compiler. As a result, the name PI may not get entered into the symbol table. This can be confusing if you get an error during compilation involving the use of the constant, because the error message may refer to 3.1416, not PI. If PI were defined in a header file you didn’t write, you’d have no idea where that 3.1416 came from.
This problem can also crop up in a symbolic debugger, because, again, the name you’re programming with may not be in the symbol table.
Solution:
const double PI = 3.1416; //or static const...
I wrote a quick test program to demonstrate one difference:
#include <stdio.h>

enum {ENUM_DEFINED=16};
enum {ENUM_DEFINED=32};

#define DEFINED_DEFINED 16
#define DEFINED_DEFINED 32

int main(int argc, char *argv[]) {
    printf("%d, %d\n", DEFINED_DEFINED, ENUM_DEFINED);
    return(0);
}
This compiles with these errors and warnings:
main.c:6:7: error: redefinition of enumerator 'ENUM_DEFINED'
enum {ENUM_DEFINED=32};
^
main.c:5:7: note: previous definition is here
enum {ENUM_DEFINED=16};
^
main.c:9:9: warning: 'DEFINED_DEFINED' macro redefined [-Wmacro-redefined]
#define DEFINED_DEFINED 32
^
main.c:8:9: note: previous definition is here
#define DEFINED_DEFINED 16
^
Note that enum gives an error when define gives a warning.
The definition
const int const_value = 5;
does not always define a constant value. Some compilers (for example tcc 0.9.26) just allocate memory identified with the name "const_value". Using the identifier "const_value" you can not modify this memory. But you still could modify the memory using another identifier:
const int const_value = 5;
int *mutable_value = (int*) &const_value;
*mutable_value = 3;
printf("%i", const_value); // The output may be 5 or 3, depending on the compiler.
This means the definition
#define CONST_VALUE 5
is the only way to define a constant value which can not be modified by any means.
Although the question was about integers, it's worth noting that #define and enums are useless if you need a constant structure or string. These are both usually passed to functions as pointers. (With strings it's required; with structures it's much more efficient.)
As for integers, if you're in an embedded environment with very limited memory, you might need to worry about where the constant is stored and how accesses to it are compiled. The compiler might add two consts at run time, but add two #defines at compile time. A #define constant may be converted into one or more MOV [immediate] instructions, which means the constant is effectively stored in program memory. A const constant will be stored in the .const section in data memory. In systems with a Harvard architecture, there could be differences in performance and memory usage, although they'd likely be small. They might matter for hard-core optimization of inner loops.
Don't think there's an answer for "which is always best" but, as Matthieu said
static const
is type-safe. My biggest pet peeve with #define, though, is that when debugging in Visual Studio, you cannot watch the constant. It gives an error that the symbol cannot be found.
Incidentally, an alternative to #define, which provides proper scoping but behaves like a "real" constant, is "enum". For example:
enum { number_ten = 10 };
In many cases, it's useful to define enumerated types and create variables of those types; if that is done, debuggers may be able to display variables according to their enumeration name.
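For instance, a sketch of that named-enum variant:
enum color { RED = 1, GREEN, BLUE };
enum color c = GREEN; /* a debugger can display 'GREEN' rather than 2 */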
One important caveat with doing that, however: in C++, enumerated types have limited compatibility with integers. For example, by default, one cannot perform arithmetic upon them. I find that to be a curious default behavior for enums; while it would have been nice to have a "strict enum" type, given the desire to have C++ generally compatible with C, I would think the default behavior of an "enum" type should be interchangeable with integers.
A simple difference:
At pre-processing time, the constant is replaced with its value.
So you cannot apply the address-of operator to a define, but you can apply the address-of operator to a variable.
As you would suppose, a define is faster than a static const.
For example, having:
#define mymax 100
you can not do printf("address of constant is %p",&mymax);.
But having
const int mymax_var = 100;
you can do printf("address of constant is %p",&mymax_var);.
To be more clear: the define is replaced by its value at the pre-processing stage, so no variable is stored in the program; there is just the code from the text segment of the program where the define was used.
However, for static const we have a variable that is allocated somewhere. For gcc, static const variables are allocated in the text segment of the program.
We looked at the produced assembler code on the MBF16X... Both variants result in the same code for arithmetic operations (ADD Immediate, for example).
So const int is preferred for the type checking, while #define is old style. It may be compiler-specific, so check your produced assembler code.
I am not sure if I am right, but in my opinion reading a #defined value is much faster than reading any normally declared variable (or const value).
It's because when the program is running and needs to use some normally declared variable, it needs to jump to the exact place in memory to get that variable.
In contrast, when it uses a #defined value, the program doesn't need to jump to any allocated memory; it just uses the value directly. If #define myValue 7 and the program uses myValue, it behaves exactly the same as if it just used 7.

Should useless type qualifiers on return types be used, for clarity?

Our static analysis tool complains about a "useless type qualifier on return type" when we have prototypes in header files such as:
const int foo();
We defined it this way because the function is returning a constant that will never change, thinking that the API seemed clearer with const in place.
I feel like this is similar to explicitly initializing global variables to zero for clarity, even though the C standard already states that all globals will be initialized to zero if not explicitly initialized. At the end of the day, it really doesn't matter. (But the static analysis tool doesn't complain about that.)
My question is, is there any reason that this could cause a problem? Should we ignore the errors generated by the tool, or should we placate the tool at the possible cost of a less clear and consistent API? (It returns other const char* constants that the tool doesn't have a problem with.)
It's usually better for your code to describe as accurately as possible what's going on. You're getting this warning because the const in const int foo(); is basically meaningless. The API only seems clearer if you don't know what the const keyword means. Don't overload meaning like that; static is bad enough as it is, and there's no reason to add the potential for more confusion.
const char * means something different than const int does, which is why your tool doesn't complain about it. The former is a pointer to a constant string, meaning any code calling the function returning that type shouldn't try to modify the contents of the string (it might be in ROM for example). In the latter case, the system has no way to enforce that you not make changes to the returned int, so the qualifier is meaningless. A closer parallel to the return types would be:
const int foo();
char * const foo2();
which will both cause your static analysis to give the warning - adding a const qualifier to a return value is a meaningless operation. It only makes sense when you have a reference parameter (or return type), like your const char * example.
In fact, I just made a little test program, and GCC even explicitly warns about this problem:
test.c:6: warning: type qualifiers ignored on function return type
So it's not just your static analysis program that's complaining.
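The little test program itself wasn't shown; a minimal reconstruction that reproduces the diagnostic (with gcc and warnings enabled - in current gcc the specific option is -Wignored-qualifiers, implied by -Wextra) might look like:
/* test.c */
const int foo(void) /* "warning: type qualifiers ignored on function return type" */
{
    return 42;
}

int main(void)
{
    return foo();
}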
You can use a different technique to illustrate your intent without making the tools unhappy.
#define CONST_RETURN
CONST_RETURN int foo();
You don't have a problem with const char * because that's declaring a pointer to constant chars, not a constant pointer.
Ignoring the const for now, foo() returns a value. You can do
int x = foo();
and assign the value returned by foo() to the variable x, in much the same way you can do
int x = 42;
to assign the value 42 to variable x.
But you cannot change the 42 ... or the value returned by foo(). Saying that the value returned from foo() cannot be changed, by applying the const keyword to the return type of foo(), accomplishes nothing.
Values cannot be const (or restrict, or volatile). Only objects can have type qualifiers.
Contrast with
const char *foo();
In this case, foo() returns a pointer to an object. The object pointed to by the value returned can be qualified const.
The int is returned by copy. It may be a copy of a const, but when it is assigned to something else, that something, by virtue of being assignable, cannot by definition be const.
The keyword const has specific semantics within the language, whereas here you are misusing it as essentially a comment. Rather than adding clarity, it rather suggests a misunderstanding of the language semantics.
const int foo() is very different from const char* foo(). const char* foo() returns a pointer to characters (usually a string) whose contents are not allowed to change. Think about the difference between:
const char* a = "Hello World";
and
const int b = 1;
a is still a variable and can be reassigned to point at other strings (whose contents can't change), whereas b cannot be modified at all. So
const char* foo();
const char* a = "Hello World\n";
a = foo();
is allowed but
const int bar();
const int b = 0;
b = bar();
is not allowed, even with the const declaration of bar().
Yes. I would advise writing code "explicitly", because it makes it clear to anyone (including yourself) when reading the code what you meant. You are writing code for other programmers to read, not to please the whims of the compiler and static analysis tools!
(However, you do have to be careful that any such "unnecessary code" does not cause different code to be generated!)
Some examples of explicit coding improving readability/maintainability:
I place brackets around portions of arithmetic expressions to explicitly specify what I want to happen. This makes it clear to any reader what I meant, and saves me having to worry about (or make any mistakes with) precedence rules:
int a = b + c * d / e + f; // Hard to read- need to know precedence
int a = b + ((c * d) / e) + f; // Easy to read- clear explicit calculations
In C++, if you override a virtual function, then in the derived class you can declare it without mentioning "virtual" at all. Anyone reading the code can't tell that it's a virtual function, which can be disastrously misleading! However, you can safely use the virtual keyword: virtual int MyFunc(), and this makes it clear to anyone reading your class header that this method is virtual. (This "C++ syntax bug" is fixed in C# by requiring the use of the "override" keyword in this case - more proof, if anyone needed it, that leaving out the "unnecessary virtual" is a really bad idea.)
These are both clear examples where adding "unnecessary" code will make the code more readable and less prone to bugs.

Variables defined and assigned at the same time

A coding style presentation that I attended recently at the office advocated that variables should NOT be assigned (to a default value) when they are defined. Instead, they should be assigned a default value just before their use.
So, something like
int a = 0;
should be frowned upon.
Obviously, an example of 'int' is simplistic but the same follows for other types also like pointers etc.
Further, it was also mentioned that the C99 compatible compilers now throw up a warning in the above mentioned case.
The above approach looks useful to me only for structures, i.e. you memset them only before use. This would be efficient if the structure is used (or filled) only in an error path.
For all other cases, I find defining and assigning to a default value a prudent exercise as I have encountered a lot of bugs because of un-initialized pointers both while writing and maintaining code. Further, I believe C++ via constructors also advocates the same approach i.e. define and assign.
I am wondering why (if at all) the C99 standard does not like defining and assigning together. Is there any considerable merit in doing what the coding style presentation advocated?
Usually I'd recommend initialising variables when they are defined if the value they should have is known, and leave variables uninitialised if the value isn't. Either way, put them as close to their use as scoping rules allow.
Instead, they should be assigned a default value just before their use.
Usually you shouldn't use a default value at all. In C99 you can mix code and declarations, so there's no point defining the variable before you assign a value to it. If you know the value it's supposed to take, then there is no point in having a default value.
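A sketch of that C99 style (read_count, cfg and process are hypothetical names): declare each variable at its point of first use, with the real value, so no placeholder default ever exists:
int n = read_count(cfg);    /* hypothetical: the value is known here, so declare here */
for (int i = 0; i < n; i++) /* C99: the loop counter is scoped to the loop */
    process(i);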
Further, it was also mentioned that the C99 compatible compilers now throw up a warning in the above mentioned case.
Not for the case you show - you don't get a warning for having int x = 0;. I strongly suspect that someone got this mixed up. Compilers warn if you use a variable without assigning a value to it, and if you have:
... some code ...
int x;
if ( a )
    x = 1;
else if ( b )
    x = 2;
// oops, forgot the last case else x = 3;
return x * y;
then you will get a warning that x may be used without being initialised, at least with gcc.
You won't get a warning if you assign a value to x before the if, but it is irrelevant whether the assignment is done as an initialiser or as a separate statement.
Unless you have a particular reason to assign the value twice for two of the branches, there's no point assigning the default value to x first, as doing so stops the compiler from warning you when you haven't covered every branch.
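For contrast, a sketch of how a default value silences that diagnostic:
int x = 0;   /* default value: every path now "initialises" x... */
if ( a )
    x = 1;
else if ( b )
    x = 2;
/* ...so the forgotten 'else x = 3;' is no longer caught by the compiler */
return x * y;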
There's no such requirement (or even guideline that I'm aware of) in C99, nor does the compiler warn you about it. It's simply a matter of style.
As far as coding style is concerned, I think you took things too literally. For example, your statement is right in the following case...
int i = 0;
for (; i < n; i++)
    do_something(i);
... or even in ...
int i = 1;
[some code follows here]
while (i < a)
    do_something(i);
... but there are other cases that, in my mind, are better handled with an early "declare and assign". Consider structures constructed on the stack or various OOP constructs, like in:
struct foo {
    int bar;
    void *private;
};

int my_callback(struct foo *foo)
{
    struct my_struct *my_struct = foo->private;

    [do something with my_struct]

    return 0;
}
Or like in (C99 struct initializers):
void do_something(int a, int b, int c)
{
    struct foo foo = {
        .a = a,
        .b = b + 1,
        .c = c / 2,
    };

    write_foo(&foo);
}
I sort of concur with the advice, even though I'm not altogether sure the standard says anything about it, and I very much doubt the bit about compiler warnings is true.
The thing is, modern compilers can and do detect the use of uninitialised variables. If you set your variables to default values at initialisation, you lose that detection. And default values can cause bugs too; certainly in the case of your example, int a = 0;. Who says 0 is an appropriate value for a?
In the 1990s, the advice would've been wrong. Nowadays, it's correct.
I find it highly useful to pre-assign some default data to variables so that I don't have to do (as many) null checks in code.
I have seen so many bugs due to uninitialized pointers that I have always advocated declaring each pointer with NULL_PTR and each primitive with some invalid/default value.
Since I work on RTOSes and high-performance but low-resource systems, it is possible that the compilers we use do not catch uninitialized usage. Though I doubt even modern compilers can be relied on 100%.
In large projects where macros are extensively used, I have seen rare scenarios where even Klocwork/Purify have failed to find uninitialized usage.
So I say stick with it as long as you are using plain old C/C++.
Modern languages like .NET can guarantee initialized variables, or give a compiler error for uninitialized variable usage. The following link does a performance analysis and finds that there is a 10-20% performance hit for .NET. The analysis is quite detailed and explained well.
http://www.codeproject.com/KB/dotnet/DontInitializeVariables.aspx
