What other definitions of NULL were there on older platforms? [duplicate] - c

This question already has answers here:
When was the NULL macro not 0?
(7 answers)
Closed 9 years ago.
Occasionally, one reads that older C compilers had definitions of NULL that were not 0 or (void *)0. My understanding of the C standard was that even if the platform's bit pattern for a null pointer is nonzero, an integer 0 cast to a pointer (either implicitly or explicitly) is still a null pointer, and is stored internally as the platform's null pointer bit pattern.
But for example, here it is written:
In some older C compilers, NULL is variously defined to some weird things, so you have to be more careful with it.
I remember reading this in various other places from time to time. Unless this is a persistent urban legend, what other definitions of NULL have been in use?

You are absolutely right: a null pointer constant (a constant integral zero, or such a zero cast to void *) is guaranteed to be converted to the appropriate internal representation of a null pointer for the target type. Which means that there's no need for any other definition of NULL; 0 or (void *) 0 will work everywhere.
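For instance, here is a minimal sketch (plain standard C, nothing platform-specific assumed) showing that both spellings of the null pointer constant produce the same null pointer:

#include <stdio.h>

int main(void)
{
    int *p = 0;           /* integer constant 0 converts to a null pointer */
    int *q = (void *)0;   /* the cast form of the null pointer constant    */
    /* Both compare equal to NULL even on a platform whose null pointer
       has a nonzero bit pattern. */
    if (p == q && p == NULL)
        printf("both are null pointers\n");
    return 0;
}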
However, at the same time, very early versions of the C language did not make such a guarantee and did not have a standard NULL macro. Assigning an integral value to a pointer variable made the pointer literally point to the address represented by that integral value. Assigning the constant 0 to a pointer simply made it point to address 0. If users wanted a reserved pointer value in their program, they had to manually choose a "throwaway" address to use for that purpose.
It is quite possible that the macro NULL came into informal usage before the standardization of the language and before the aforementioned zero-to-pointer conversion rule came into existence. At that time NULL would have had to be defined as an integral value representing that exact reserved address. E.g. a pre-standard C implementation that wanted to use address 0xBAADF00D for null pointers would define NULL as 0xBAADF00D. I can't confirm that, though, since I don't know when exactly the macro NULL first appeared "in the wild".

You're absolutely correct.
One of the "weird things" you might encounter is #define'ing NULL to ((void *)0), which would cause code that used NULL as anything but a pointer to fail; a short sketch follows the variations below.
Here are a few other variations:
http://www.tutorialspoint.com/c_standard_library/c_macro_null.htm
#define NULL ((char *)0)
or
#define NULL 0L
or
#define NULL 0
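To make the earlier point concrete, a small sketch; the commented-out lines are the ones that stop compiling once NULL expands to a void pointer (illustrative only):

#include <stdio.h>

/* Suppose a header did: #define NULL ((void *)0)
   Pointer uses still compile; non-pointer uses are diagnosed: */
int main(void)
{
    char *p = NULL;       /* fine: NULL in a pointer context             */
    /* char c = NULL; */  /* error: void * does not convert to char     */
    /* int  i = NULL; */  /* constraint violation: pointer into integer */
    printf("%p\n", (void *)p);
    return 0;
}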
It's also worth noting that C++11 introduces nullptr to provide a type-safe disambiguation of NULL, 0, etc. Refer to "Much Ado about Nothing", or http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2004/n1601.pdf

Related

Getting the offset of a variable inside a struct is based on the NULL pointer, but why?

I found a trick in a YouTube video explaining how you can get the offset of a struct member by using a NULL pointer. I understand the code snippet below (the casts, the ampersand, and so on), but I do not understand why this works with the NULL pointer. I thought that the NULL pointer could not point to anything, so I cannot mentally visualize how it works. Second, the NULL pointer is not always represented by the compiler as being 0; sometimes it is a non-zero value. But then how could this piece of code work correctly? Or wouldn't it work correctly anymore?
#include <stdio.h>

int main(void)
{
    /* Getting the offset of a variable inside a struct */
    typedef struct {
        int a;
        char b[23];
        float c;
    } MyStructType;

    unsigned offset = (unsigned)(&((MyStructType *)NULL)->c);
    printf("offset = %u\n", offset);
    return 0;
}
I found a trick in a YouTube video explaining how you can get the offset of a struct member by using a NULL pointer.
Well, at least you came here to ask about the random Internet advice you turned up. We're an Internet resource ourselves, of course, but I like to think that our structure and reputation gives you a basis for estimating the reliability of what we have to say.
I understand the code snippet below (the casts, the ampersand, and so on), but I do not understand why this works with the NULL pointer. I thought that the NULL pointer could not point to anything.
Yes, from the perspective of C semantics, a null pointer definitely does not point to anything, and NULL is a null pointer constant.
So I cannot mentally visualize how it works.
The (flawed) idea is that
NULL is equivalent to a pointer to address 0 in a flat address space (unsafe assumption);
((MyStructType * )NULL)->c designates the member c of an altogether hypothetical object of type MyStructType residing at that address (not supported by the standard);
applying the & operator yields the address that such a member would have if it in fact existed (not supported by the standard); and
converting the resulting address to an integer yields an address in the assumed flat address space, expressed in units the size of a C char (in no way guaranteed);
so that the resulting integer simultaneously represents both an absolute address and an offset (follows from the previous assumptions, because the supposed base address of the hypothetical structure is 0).
Second, the NULL pointer is not always represented by the compiler as being 0; sometimes it is a non-zero value.
Quite right, that is one of the flaws in the scheme presented.
But then how could this piece of code work correctly? Or wouldn't it work correctly anymore?
Although the Standard provides no basis to justify relying on the code to behave as advertised, that does not mean that it must necessarily fail. C implementations do need to be internally consistent about how they represent null pointers, and -- to a certain degree -- about how they convert between pointers and integers. It turns out to be fairly common that the code's assumptions about those things are in fact satisfied by implementations.
So in practice, the code does work with many C implementations. But it systematically produces the wrong answer with some others, and there may be some in which it produces the right answer some appreciable fraction of the time, but the wrong answer the rest of the time.
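For completeness, the portable way to get the offset is the standard offsetof macro from <stddef.h>; a minimal sketch using the struct from the question:

#include <stddef.h>
#include <stdio.h>

typedef struct {
    int a;
    char b[23];
    float c;
} MyStructType;

int main(void)
{
    /* offsetof avoids the null-pointer dereference entirely */
    printf("offset = %zu\n", offsetof(MyStructType, c));
    return 0;
}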
Note that this code actually has undefined behaviour. Dereferencing a NULL pointer is never allowed, even if no value is accessed, only the address (this was a root cause of a Linux kernel exploit).
Use offsetof instead for a safe alternative.
As to why it seems to work with a NULL pointer: it assumes that NULL is 0. Basically you could use any pointer and calculate:
MyStructType t;
unsigned off = (unsigned)(&(&t)->c) - (unsigned)&t;
if &t == 0, this becomes:
unsigned off = (unsigned)(&(0)->c) - 0;
Subtracting 0 is a no-op.
This code is platform-specific. It might cause undefined behaviour on one platform and work on another.
That's why the C standard requires every library to implement the offsetof macro, which could expand to code like dereferencing the NULL pointer; done inside the implementation, at least you can be sure the code will not crash on any platform:
#include <stddef.h>

typedef struct Struct
{
    double d;
} Struct;

size_t off = offsetof(Struct, d);   /* 0 here, since d is the first member */
This question reminds me of something seen more than 30 years ago:
#define XtOffset(p_type,field) \
((Cardinal) (((char *) (&(((p_type)NULL)->field))) - ((char *) NULL)))
#ifdef offsetof
#define XtOffsetOf(s_type,field) offsetof(s_type,field)
#else
#define XtOffsetOf(s_type,field) XtOffset(s_type*,field)
#endif
from xorg-libXt/include/X11/Intrinsic.h X11R4.
They took into account that a NULL pointer could be different from 0x0 and accounted for that in the definition of the XtOffsetOf macro.
This is a dirty hack and might not necessarily work.
(MyStructType *)NULL creates a null pointer. "Null pointer" and "null pointer constant" are two different terms: NULL is guaranteed to be a null pointer constant equivalent to 0, but the null pointer we obtain when casting it to another pointer type can have any implementation-defined representation.
So it happened to work by luck on your specific system; you could just as well have gotten some strange value.
The offsetof macro has been standard C since 1989, so maybe your Youtube hacker is still stuck in the early 1980s.

Why NULL is not predefined by the compiler

This issue bothered me for a while. I have never seen a different definition of NULL; it's always
#define NULL ((void *) 0)
Is there any architecture where NULL is defined differently, and if so, why doesn't the compiler declare this for us?
C 2011 Standard, online draft
6.3.2.3 Pointers
...
3 An integer constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant.66) If a null pointer constant is converted to a pointer type, the resulting pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function.
66) The macro NULL is defined in <stddef.h> (and other headers) as a null pointer constant; see 7.19.
The macro NULL is always defined as a zero-valued constant expression; it can be a naked 0, or 0 cast to void *, or some other integral expression that evaluates to 0. As far as your source code is concerned, NULL will always evaluate to 0.
Once the code has been translated, any occurrence of the null pointer constant (0, NULL, etc.) will be replaced with whatever the underlying architecture uses for a null pointer, which may or may not be 0-valued.
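A consequence worth a small sketch: zeroing a pointer's bytes is not the same operation as assigning it the constant 0. On a machine whose null pointer is not all-bits-zero, only the assignment is guaranteed to produce a null pointer:

#include <stdio.h>
#include <string.h>

int main(void)
{
    int *p = 0;               /* the constant 0 becomes the real null pointer */
    int *q;
    memset(&q, 0, sizeof q);  /* all-bits-zero object representation          */
    /* Both lines print "is" on common machines, but only the first
       comparison is guaranteed by the standard. */
    printf("p %s null\n", p == 0 ? "is" : "is not");
    printf("q %s null\n", q == 0 ? "is" : "is not");
    return 0;
}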
WhozCraig wrote these comments to a now-deleted answer, but it could be promoted to a full answer (and that's what I've done here). He notes:
Interesting note: AS/400 is a unique platform where any non-valid pointer is considered equivalent to NULL. The mechanics they employ to do this are simply amazing. "Valid" in this sense is any 128-bit pointer (the platform uses a 128-bit linear address space for everything) containing a "value" obtained by a known-trusted instruction set. Hard as it is to believe, int *p = (int *)1; if (p) { printf("foo"); } will not print "foo" on that platform. The value assigned to p is not source-trusted, and is thus considered "invalid" and thereby equivalent to NULL.
It's frankly startling how it works. Each 16-byte paragraph in the mapped virtual address space of a process has a corresponding "bit" in a process-wide bitmap. All pointers must reside on one of these paragraph boundaries. If the bit is "lit", the corresponding pointer was stored from a trusted source, otherwise it is invalid and equivalent to NULL. Calls to malloc, pointer math, etc, are all scrutinized in determining whether that bit gets lit or not. And as you can imagine, putting pointers in structures brings a whole new world of hurt on the idea of structure packing.
What this shows is that there are real platforms with interesting pointer properties.
There have been platforms where #define NULL ((void *)0) is not the usual definition; on some platforms it can be just 0, on others, 0L or 0ULL or other appropriate values as long as the compiler understands it. C++ does not like ((void *)0) as a definition; systems where the headers interwork with C++ may well not use the void pointer version.
I learned C on a machine where the representation for the char * address for a given memory location was different from the int * address for the same memory location. This was in the days before void *, but it meant that you had to have malloc() properly declared (char *malloc(); — no prototypes either), and you had to explicitly cast the return value to the correct type or you got core dumps. Be grateful for the C standard (though the machine in question, an ICL Perq — badged hardware from Three Rivers — was largely superseded by the time the standard was defined).
In the dark ages before ANSI C, the old K&R C had many different implementations on hardware that would be considered bizarre today. This was before the days of virtual memory, when machines were very "real". Not only were addresses of zero just fine on these machines; an address of zero could even be popular... I think it was CDC that sometimes stored the system constant zero at address zero (and strange things happened if it was set non-zero).
if ( NULL != ptr ) /* like this */
if ( ptr ) /* never like this */
The trick was finding an address you could safely use to indicate "nothing", since storing things at the end of memory was also popular, which ruled out 0xFFFF on some architectures. And those architectures tended to use word addresses rather than byte addresses.
I don't know the answer to this but I'm making a guess. In C you usually do a lot of mallocs, and consequently many tests for returned pointers. Since malloc returns void *, and in particular (void *)0 upon failure, NULL is a natural thing to define in order to test malloc's success. Since this is so essential, other library functions use NULL (or (void *)0) too, like fopen. Actually, everything that returns a pointer does.
Hence there is no reason to define this at the language level; it's just a special pointer value that can be returned by so many functions.

When did constant 0 in pointer context acquire its special status?

As you know, in standard modern C language the constant 0 value used in pointer context acts as a null-pointer constant, which gets converted to a platform-specific (and possibly even type-specific) null-pointer value.
Meanwhile, the early versions of the C language, such as the one described in the C Reference Manual, did not make much of a distinction between pointer and integer contexts, allowing one to freely compare and assign integers to pointers. If I am not mistaken, in that version of C the constant 0 had no special status, meaning that assigning the value of constant 0 to a pointer would simply make it point to physical address 0 (just like assigning the value 42 to a pointer would make it point to physical address 42).
In ANSI C things have changed significantly. Now assigning the constant 0 to a pointer will place some platform-specific null-pointer value into that pointer. Null-pointer value is not required to be represented by physical 0 value.
So, at what point in the history of C language did it change from one to another? Did K&R C already incorporate the higher-level concept of null-pointer with constant 0 given its special status? Or did the K&R C still guarantee physical assignment of integers to pointers even for constant 0?
It goes back to nearly the beginning of C (if not the very beginning). If you look on page 21 of the January 1974 C reference manual, it's more or less directly stated in some sample code:
/* is pointer null? */
if (p == 0) {
Going back still a bit further, to the ca. 1972-73 PDP-11/20 compiler, we find:
match(tree, table, nreg)
int tree[], table[]; {
extern opdope[], dcalc, notcompat;
int op, d1, d2, t1, t2, p1[], p2[];
char mp[];
if (tree==0)
return(0);
op = *tree;
At least if I'm reading this correctly, the if (tree==0) line is checking that tree is a non-null pointer before attempting to dereference it.
Unfortunately, Dennis says he can't be much more certain about the date than "1972-73".
There isn't much history of C before that. Nonetheless, there does seem to be a bit of history of 0 being treated as a null pointer. It looks to me like use of 0 as a null pointer is something that C "inherited" from Unix. The entry for exec in the November 1971 1st Edition Unix programmer's manual shows a pointer with the value 0 to signal the end of the list of arguments. According to Dennis' description, at this point "C was still to come."
Based on all this, I'd tentatively conclude that C treated 0 as a null pointer from the very beginning, or at least so early on that there's probably no longer any record of a version of the language that was otherwise.
I haven't been nearly as successful at tracking down documentation about the first point at which a null pointer might have had non-zero bits. From the viewpoint of the language, this has never been relevant. I suspect it happened fairly early on, but finding documentation to support that would be difficult. One of the earliest ports of C was to IBM System/360 mainframes, and although I can't find direct documentation of it, my guess would be that internally the null pointer value used on these machines was probably non-zero. I don't have the exact number handy, but I know that PL/I on these machines used a non-zero value for its equivalent of a null pointer; I'd guess that when they ported C to these machines, they probably used the same value.
See the C-faq question 5.4
As a matter of style, many programmers prefer not to have unadorned 0's scattered through their programs, some representing numbers and some representing pointers. Therefore, the preprocessor macro NULL is defined (by several headers, including <stdio.h> and <stddef.h>) as a null pointer constant, typically 0 or ((void *)0) (see also question 5.6). A programmer who wishes to make explicit the distinction between 0 the integer and 0 the null pointer constant can then use NULL whenever a null pointer is required.
Using NULL is a stylistic convention only; the preprocessor turns NULL back into 0 which is then recognized by the compiler, in pointer contexts, as before. In particular, a cast may still be necessary before NULL (as before 0) in a function call argument. The table under question 5.2 above applies for NULL as well as 0 (an unadorned NULL is equivalent to an unadorned 0).
NULL should be used only as a pointer constant; see question 5.9.
References: K&R1 Sec. 5.4 pp. 97-8
K&R2 Sec. 5.4 p. 102
ISO Sec. 7.1.6, Sec. 6.2.2.3
Rationale Sec. 4.1.5
H&S Sec. 5.3.2 p. 122, Sec. 11.1 p. 292
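The cast the FAQ mentions matters most in variadic calls, where no prototype tells the compiler that a pointer is expected. A minimal sketch, assuming a POSIX system for execl:

#include <unistd.h>

int main(void)
{
    /* The argument list must end with a null *pointer*, not a bare 0:
       execl is variadic, so no conversion to char * is implied. */
    execl("/bin/true", "true", (char *)NULL);
    return 1;  /* reached only if execl failed */
}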
What is this infamous null pointer anyways?
The language definition states that for each pointer type, there is a special value--the "null pointer"--which is distinguishable from all other pointer values and which is "guaranteed to compare unequal to a pointer to any object or function." That is, a null pointer points definitively nowhere; it is not the address of any object or function. The address-of operator & will never yield a null pointer, nor will a successful call to malloc. (malloc does return a null pointer when it fails, and this is a typical use of null pointers: as a "special" pointer value with some other meaning, usually "not allocated" or "not pointing anywhere yet.")
A null pointer is conceptually different from an uninitialized pointer. A null pointer is known not to point to any object or function; an uninitialized pointer might point anywhere. See also questions 1.30, 7.1, and 7.31.
As mentioned above, there is a null pointer for each pointer type, and the internal values of null pointers for different types may be different. Although programmers need not know the internal values, the compiler must always be informed which type of null pointer is required, so that it can make the distinction if necessary (see questions 5.2, 5.5, and 5.6).
References: K&R1 Sec. 5.4 pp. 97-8
K&R2 Sec. 5.4 p. 102
ISO Sec. 6.2.2.3
Rationale Sec. 3.2.2.3
H&S Sec. 5.3.2 pp. 121-3
Finally, only constant integral expressions with value 0 are guaranteed to indicate null pointers.
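That last point deserves a sketch: a zero that is not a constant expression gets no special treatment (compare FAQ question 5.18):

#include <stdio.h>

int main(void)
{
    int zero = 0;           /* a run-time value, not a constant expression  */
    int *p = 0;             /* guaranteed null pointer                      */
    int *q = (int *)zero;   /* implementation-defined, not guaranteed null  */
    printf("p null: %d\n", p == NULL);  /* always 1                         */
    printf("q null: %d\n", q == NULL);  /* 1 on most machines, no guarantee */
    return 0;
}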

Pointer comparisons in C. Are they signed or unsigned?

Hi I'm sure this must be a common question but I can't find the answer when I search for it. My question basically concerns two pointers. I want to compare their addresses and determine if one is bigger than the other. I would expect all addresses to be unsigned during comparison. Is this true, and does it vary between C89, C99 and C++? When I compile with gcc the comparison is unsigned.
If I have two pointers that I'm comparing like this:
char *a = (char *) 0x80000000; //-2147483648 or 2147483648 ?
char *b = (char *) 0x1;
Then a is greater. Is this guaranteed by a standard?
Edit to update on what I am trying to do. I have a situation where I would like to determine that if there's an arithmetic error it will not cause a pointer to go out of bounds. Right now I have the start address of the array and the end address. And if there's an error and the pointer calculation is wrong, and outside of the valid addresses of memory for the array, I would like to make sure no access violation occurs. I believe I can prevent this by comparing the suspect pointer, which has been returned by another function, and determining if it is within the acceptable range of the array. The question of negative and positive addresses has to do with whether I can make the comparisons, as discussed above in my original question.
I appreciate the answers so far. Based on my edit would you say that what I'm doing is undefined behavior in gcc and msvc? This is a program that will run on Microsoft Windows only.
Here's an oversimplified example:
char letters[26];
char *do_not_read = &letters[26];
char *suspect = somefunction_i_dont_control(letters, 26);
if ((suspect >= letters) && (suspect < do_not_read))
    printf("%c", *suspect);
Another edit, after reading AndreyT's answer it appears to be correct. Therefore I will do something like this:
#include <stdint.h>   /* for uintptr_t */

char letters[26];
uintptr_t begin = (uintptr_t)letters;
uintptr_t toofar = begin + sizeof(letters);
char *suspect = somefunction_i_dont_control(letters, 26);
if (((uintptr_t)suspect >= begin) && ((uintptr_t)suspect < toofar))
    printf("%c", *suspect);
Thanks everyone!
Pointer comparisons cannot be signed or unsigned. Pointers are not integers.
C language (as well as C++) defines relative pointer comparisons only for pointers that point into the same aggregate (struct or array). The ordering is natural: the pointer that points to an element with smaller index in an array is smaller. The pointer that points to a struct member declared earlier is smaller. That's it.
You can't legally compare arbitrary pointers in C/C++. The result of such comparison is not defined. If you are interested in comparing the numerical values of the addresses stored in the pointers, it is your responsibility to manually convert the pointers to integer values first. In that case, you will have to decide whether to use a signed or unsigned integer type (intptr_t or uintptr_t). Depending on which type you choose, the comparison will be "signed" or "unsigned".
The integer-to-pointer conversion is wholly implementation defined, so it depends on the implementation you are using.
That said, you are only allowed to relationally compare pointers that point to parts of the same object (basically, to subobjects of the same struct or elements of the same array). You aren't allowed to compare two pointers to arbitrary, wholly unrelated objects.
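A minimal sketch of that distinction; the commented-out comparison is the one the standards leave open:

#include <stdio.h>

int main(void)
{
    int a[10];
    int *p = &a[2];
    int *q = &a[7];
    int b = 0;

    if (p < q)   /* well-defined: both pointers point into the array a */
        printf("p precedes q\n");

    /* (&b < &a[0]) would be unspecified (C++) / undefined (C):
       b and a are unrelated objects. */
    return b;
}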
From a draft C++ Standard 5.9:
If two pointers p and q of the same type point to different objects that are not members of the same object or elements of the same array or to different functions, or if only one of them is null, the results of p<q, p>q, p<=q, and p>=q are unspecified.
So, if you cast numbers to pointers and compare them, C++ gives you unspecified results. If you take the address of elements you can validly compare, the results of comparison operations are specified independently of the signed-ness of the pointer types.
Note unspecified is not undefined: it's quite possible to compare pointers to different objects of the same type that aren't in the same structure or array, and you can expect some self-consistent result (otherwise it'd be impossible to use such pointers as keys in trees, or to sort a vector of such pointers, binary search the vector etc., where a consistent intuitive overall < ordering is needed).
Note that in very old standards the behaviour was undefined, as in the 2005 WG14/N1124 draft andrewdski links to under James McNellis's answer.
To complement the other answers, comparison between pointers that point to different objects depends on the standard.
In C99 (ISO/IEC 9899:1999 (E)), §6.5.8:
5 [...] In all other cases, the behavior is undefined.
In C++03 (ISO/IEC 14882:2003(E)), §5.9:
-Other pointer comparisons are unspecified.
I know several of the answers here say you cannot compare pointers unless they point to within the same structure, but that's a red herring and I'll try to explain why. One of your pointers points to the start of your array, the other to the end, so they are pointing to the same structure. A language lawyer could say that if your third pointer points outside of the object, the comparison is undefined, so x >= array.start might be true for all x. But this is no issue, since at the point of comparison C++ cannot know if the array isn't embedded in an even bigger structure. Furthermore, if your address space is linear, like it's bound to be these days, your pointer comparison will be implemented as an (un)signed integer comparison, since any other implementation would be slower. Even in the times of segments and offsets, (far) pointer comparison was implemented by first normalising the pointer and then comparing them as integers.
What this all boils down to then, is that if your compiler is okay, comparing the pointers without worrying about the signs should work, if all you care about is that the pointer points within the array, since the compiler should make the pointers signed or unsigned depending on which of the two boundaries a C++ object may straddle.
Different platforms behave differently in this matter, which is why C++ has to leave it up to the platform. There are even platforms in which both addresses near 0 and 80..00h are not mappable or already taken at process start-up. In that case, it doesn't matter, as long as you're consistent about it.
Sometimes this can cause compatibility issues. As an example, in Win32 pointers are unsigned. Now, it used to be the case that of the 4 GB address space only the lower half (more precisely 10000h ... 7FFFFFFFh, because of the NULL-Pointer Assignment Partition) was available to applications; high addresses were only available to the kernel. This caused some people to put addresses in signed variables, and their programs would keep working since the high bit was always 0. But then came the /3GB switch, which made almost 3 GB available to applications (more precisely 10000h ... BFFFFFFFh), and those applications would crash or behave erratically.
You explicitly state your program will be Windows-only, which uses unsigned pointers. However, maybe you'll change your mind in the future, and using intptr_t or uintptr_t is bad for portability. I also wonder if you should be doing this at all... if you're indexing into an array it might be safer to compare indices instead. Suppose, for example, that you have a 1 GB array at 1500000h ... 41500000h, consisting of 16,384 elements of 64 kB each. Suppose you accidentally look up index 80,000, clearly out of range. The pointer calculation wraps around 32 bits and yields 39D00000h, so your pointer check will allow it even though it shouldn't.
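To sketch the index-based alternative suggested above (somefunction_returning_index is hypothetical, standing in for a variant of the asker's somefunction_i_dont_control that reports a position instead of a pointer):

#include <stddef.h>
#include <stdio.h>

/* Hypothetical variant of the asker's function that returns an index. */
extern size_t somefunction_returning_index(char *buf, size_t n);

void check(char letters[26])
{
    size_t idx = somefunction_returning_index(letters, 26);
    if (idx < 26)                      /* a plain range check on the index */
        printf("%c", letters[idx]);   /* no pointer comparison needed      */
}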

Difficulty in NULL concept in C?

NULL in C programming: can anyone tell me how NULL is handled in C?
The output of this program is 3; how does that come about, given the use of NULL?
#include <stdio.h>

int main(void) {
    int i;
    static int count;
    for (i = NULL; i <= 5;) {
        count++;
        i += 2;
    }
    printf("%d\n", count);
    return 0;
}
For C, "NULL" is traditionally defined to be (void *)0; in other words, the null pointer constant spelled as a pointer. For C++, "NULL" is typically defined to be "0". The problem with NULL in C++ and C is that it's not type-safe: you can build bizarre constructs like the one you included in your code sample.
For C++, the language designers fixed this in C++0x by adding a new "nullptr" keyword which is implicitly convertible to any pointer type but which cannot be converted to an integer type.
NULL is just a macro defined in stdio.h (or a file that stdio.h includes). It can be defined as any of several spellings of zero.
If you run your code through the C preprocessor (usually cc -E) you can see what it translates to on your implementation:
int main(void){
    int i;
    static int count;
    for(i = ((void *)0); i <= 5;){
        count++;
        i += 2;
    }
    printf("%d\n", count);
    return 0;
}
which is not only an unnecessary use of NULL but is wildly un-idiomatic C code; more ordinary would be:
int main(){
    int i;
    int count = 0;
    for(i = 0; i <= 5; i += 2){
        count++;
    }
    printf("%d\n", count);
    return 0;
}
There's a whole chapter devoted to the Null pointer in the C FAQ that should answer all your questions.
5.1 What is this infamous null pointer, anyway?
5.2 How do I get a null pointer in my programs?
5.3 Is the abbreviated pointer comparison if(p) to test for non-null pointers valid? What if the internal representation for null pointers is nonzero?
5.4 What is NULL and how is it defined?
5.5 How should NULL be defined on a machine which uses a nonzero bit pattern as the internal representation of a null pointer?
5.6 If NULL were defined as follows: #define NULL ((char *)0). Wouldn't that make function calls which pass an uncast NULL work?
5.7 My vendor provides header files that #define NULL as 0L. Why?
5.8 Is NULL valid for pointers to functions?
5.9 If NULL and 0 are equivalent as null pointer constants, which should I use?
5.10 But wouldn't it be better to use NULL (rather than 0), in case the value of NULL changes, perhaps on a machine with nonzero internal null pointers?
5.11 I once used a compiler that wouldn't work unless NULL was used.
5.12 I use the preprocessor macro #define Nullptr(type) (type *)0 to help me build null pointers of the correct type.
5.13 This is strange. NULL is guaranteed to be 0, but the null pointer is not?
5.14 Why is there so much confusion surrounding null pointers? Why do these questions come up so often?
5.15 I'm confused. I just can't understand all this null pointer stuff.
5.16 Given all the confusion surrounding null pointers, wouldn't it be easier simply to require them to be represented internally by zeroes?
5.17 Seriously, have any actual machines really used nonzero null pointers, or different representations for pointers to different types?
5.18 Is a run-time integral value of 0, cast to a pointer, guaranteed to be a null pointer?
5.19 How can I access an interrupt vector located at the machine's location 0? If I set a pointer to 0, the compiler might translate it to some nonzero internal null pointer value.
5.20 What does a run-time "null pointer assignment" error mean? How can I track it down?
NULL has most of its meaning when dealing with pointers. When dealing with integers, you would be better off using simply zero.
Strictly speaking, NULL is simply the value zero with a fancy name, but the most important part about it is indeed its fancy name. It exists because it's less ambiguous to write int* p = NULL; than int* p = 0;. Since we know NULL is a pointer, we're sure that I really meant p to be a pointer.
So, when you deal with pointers and want a null pointer, use NULL. And when you deal with numbers and want the number 0, use 0. (In your example, you should use 0 instead of NULL.)
NULL should be synonymous with 0. It's more correctly used to indicate a null pointer.
In this case the code will actually be:
for (i = 0; i <= 5;)
{
    count++;
    i += 2;
}
NULL is defined as 0 by C. In your program, it counts from 0 to 5 (counting every 2nd number). That's why count is 3 (i = 0, 2, 4).
NULL shouldn't be used this way. NULL should be used with pointers.
The type of NULL and the type of i are different, but C forgives you :)
C is (mostly) not a "type-safe" language. I mean: it lets you mix types about which other languages would complain.
With C you can add chars to doubles, you can multiply chars and ints, ...
What you are doing is assigning a null pointer constant (of type void *) to an int. C allows that, but you need to be very careful when mixing types like this.
The result of assigning the null pointer constant to an int is the same as assigning 0 to it. So you start your loop with i being 0.
A static variable, in the absence of an initializer, is initialized to 0. This happens to the variable count in your program.
So, the loop goes first (count becomes 1) with i being 0, then i becomes 2.
Now the loop goes (count becomes 2) with i being 2, then i becomes 4.
Now the loop goes (count becomes 3) with i being 4, then i becomes 6.
And the loop terminates ... and you print the value of count.
Notes
to be Standard-compliant, main should be declared int main(void)
you should output a newline after every complete line ( printf("%d\n", count); )
and you should return 0; to indicate successful completion of your program to the Operating System
Don't confuse the idea of a NULL pointer in C with that of a nullable type or quantity in some dynamic languages. They are similar, but have distinct uses and meanings.
In C the concept of NULL is only used natively in the context of pointers, where it is a pointer to a known invalid memory location (often, but not always, represented by zero).
This is not (quite) the same as a type or variable which can represent non-setness. Note that types like this can be defined and managed by the user in C, but they are not supplied by the language; a minimal sketch follows.
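A minimal sketch of such a user-managed "nullable" value in C; maybe_int is a hypothetical type, not something the language supplies:

#include <stdio.h>

struct maybe_int {
    int is_set;   /* 0 plays the role of "null": no value present */
    int value;    /* meaningful only when is_set is nonzero       */
};

int main(void)
{
    struct maybe_int m = { 0, 0 };  /* starts out "unset" */
    m.is_set = 1;
    m.value = 42;
    if (m.is_set)
        printf("%d\n", m.value);
    return 0;
}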
