When I create a pointer to a certain struct, do I have to set it to NULL, then allocate it, then use it? And why?
No, you don't have to set it to NULL, but some consider it good practice as it gives a new pointer a value that makes it explicit it's not pointing at anything (yet).
If you are creating a pointer and then immediately assigning another value to it, then there's really not much value in setting it to NULL.
It is a good idea to set a pointer to NULL after you free the memory it was pointing to, though.
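A minimal sketch of both habits (the variable names are just illustrative):
#include <stdlib.h>

int main(void) {
    int *p = malloc(sizeof *p);   /* points at fresh storage, or is NULL on failure */
    if (p == NULL)
        return 1;
    *p = 42;                      /* safe to use while the allocation is live */
    free(p);                      /* storage released; p is now dangling */
    p = NULL;                     /* reset, so a later accidental use fails fast */
    if (p != NULL)                /* this test is meaningful again */
        *p = 7;                   /* skipped, instead of scribbling on freed memory */
    return 0;
}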
No, there is no requirement (as far as the language is concerned) to initialize a pointer variable to anything when declaring it. Thus
T* ptr;
is a valid declaration that introduces a variable named ptr with an indeterminate value. You can even use the variable in certain ways without first allocating anything or setting it to any specific value:
func(&ptr);
According to the C standard, not initializing an automatic storage variable leaves its value indeterminate.
You are definitely encouraged to set pointers to NULL whenever the alternative is the pointer having an indeterminate value. Use of a pointer with an indeterminate value is undefined behavior, a concept you might not have fully grasped yet if you're new to C and come from higher-level languages.
Undefined behavior means that anything can happen. It is part of the C standard because the philosophy of C is to let the programmer carry the burden of controlling things, instead of protecting him from his mistakes.
You want to avoid undefined behavior in your code: it is very hard to debug anything when any behavior can occur. It is not caught by the compiler, and your tests may always pass, leaving the bug unnoticed until it is very expensive to fix.
If you do something like this:
char *p;
if(!p) puts("Pointer is NULL");
You don't actually know whether that if condition will evaluate to true or false. This is extremely dangerous in large programs, where you may declare your variable and then use it somewhere very far away in space and time.
Same concept when you reference freed memory.
free(p);
printf("%s\n", p);
*p = 'x';
You don't actually know what you're going to get from those last two statements. You can't test it properly, and you cannot be sure of the outcome. Your code may seem to work, because freed memory is recycled even though it may have been altered... you could even overwrite memory, or corrupt other variables that share the same storage. So you still have a bug that sooner or later will manifest itself and could be very hard to debug.
If you set the pointer to NULL, you protect yourself from these dangerous situations: during your tests, your program will have deterministic, predictable behavior that fails fast and cheaply. You get a segfault, you catch the bug, you fix the bug. Clean and simple:
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char *q;
    if (!q) puts("This may or may not happen, who knows");
    q = malloc(10);
    free(q);
    printf("%s\n", q); // probably won't crash, but you still have a bug

    char *p = NULL;
    if (!p) puts("This is always going to happen");
    p = malloc(10);
    free(p);
    p = NULL;
    printf("%s\n", p); // this is always going to crash
    return 0;
}
So, when you declare a pointer, especially in large programs, you want it to be explicitly initialized or set to NULL. You never want it to hold an indeterminate value. I should say "unless you know what you are doing", but I prefer never.
No, and the initialization doesn't have to be to a null pointer at all. A very convenient idiom in modern C is to declare variables at their first use:
T * ptr = malloc(sizeof *ptr);
This saves you a lot of hassle remembering the type and whether or not a variable is already initialized. Only if you don't know where (or even whether) it is later initialized should you definitely initialize it to a null pointer.
So as a rule of thumb, always initialize variables to the appropriate value. The "proper 0 for the type" is always a good choice if you don't have a better one at hand; C is designed so that it works for all types.
Not initializing a variable is premature optimization in most cases. Only go through your variables when you see that there is a real performance bottleneck there. In particular, if there is an assignment inside the same function before any use of the initial value, any modern compiler will optimize the initialization out. Think about the correctness of your program first.
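For instance, in a sketch like the following, the defensive NULL store is dead (buf is overwritten before any read), so an optimizing compiler can simply remove it and the initialization costs nothing:
#include <stdlib.h>
#include <string.h>

void demo(size_t n) {
    char *buf = NULL;      /* defensive initializer */
    buf = malloc(n);       /* overwrites buf before any read, so the compiler
                              may eliminate the NULL store entirely */
    if (buf != NULL) {
        memset(buf, 0, n); /* use the buffer */
        free(buf);
    }
}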
You don't have to, unless you don't want the pointer to dangle for a while. And you know you should set it to NULL after freeing (defensive style).
You do not have to initialize to NULL. If you intend to allocate it immediately, you can skip it.
It can be useful for error handling, as in the following example. Without the NULL initializers, jumping to the end after a failed p allocation would reach free(q) while q is still uninitialized.
int func(void)
{
    char *p = NULL;
    char *q = NULL;

    p = malloc(100);
    if (!p)
        goto end;
    q = malloc(100);
    if (!q)
        goto end;
    /* Do something */
end:
    free(p);   /* free(NULL) is a no-op, so this is safe on every path */
    free(q);
    return 0;
}
Also, and this is my personal taste: always allocate structs with calloc. Strings may be allocated uninitialized with malloc.
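A sketch of that convention (the struct and the names are hypothetical; note that on common platforms calloc's all-bits-zero pattern gives null pointers and zero integers):
#include <stdlib.h>
#include <string.h>

struct node {
    struct node *next;
    char *name;
    int count;
};

struct node *make_node(const char *name) {
    struct node *n = calloc(1, sizeof *n);  /* every field starts zeroed/NULL */
    if (n == NULL)
        return NULL;
    n->name = malloc(strlen(name) + 1);     /* fully overwritten right away */
    if (n->name == NULL) {
        free(n);
        return NULL;
    }
    strcpy(n->name, name);
    return n;
}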
Whenever I read an article about dangling pointers, I cannot see the potential risk.
What is the issue in the code below if I do not initialize the pointer, or if I do not assign ptr to NULL after deallocating with free? Why does it become dangling, and what is the risk?
int main()
{
    int *ptr; // what is the risk here?
    ptr = (int *)malloc(sizeof(int));
    // After the free call below, ptr becomes a
    // dangling pointer
    free(ptr);
    //ptr = NULL; // what is the risk here if I do not assign NULL?
}
There is only a dangling pointer risk if you use a pointer while the data it points to is no longer valid. That is not the case in the code snippet you provided.
If you do use it while the data it points to is invalid (uninitialised before the malloc or dangling after the free), all bets are off, that's undefined behaviour (UB).
It's also UB if you dereference the NULL that malloc may return on failure; you should always check for that. And, because of that, setting the pointer to NULL after the memory it points to ceases to be valid will not help you at all: it's just as much UB to dereference NULL as it is to dereference memory that's been freed.
You can make your code a lot safer with something like:
int main(void) {
    int *ptr = malloc(sizeof(int));
    if (ptr == NULL) {
        puts("Could not allocate");
        return 1;
    }
    // safe to use pointer here.
    free(ptr);
    // but not here.
}
The risk is not in the code shown; the risk is in the code after maintenance, modification and reuse, or in "cargo-cult" copying of the anti-pattern. If, for example, you added a great deal of code between the uninitialised ptr declaration and the malloc(), it may no longer be clear that the pointer is not yet valid.
Dereferencing a null pointer "by accident" will normally be trapped as a run-time error, so likely to be detected during testing and development and reported at the point of error. Dereferencing a pointer that is a valid address but say refers to a block returned to the heap for re-use may have no immediate observable impact, but may later cause unrelated code to fail, possibly after deployment and in unpredictable ways - those bugs are very difficult to track down.
Dereferencing a pointer that simply has not been initialised has undefined behaviour: it may appear to work, it may trigger an exception, or it may cause the code to appear to fail at an unrelated location. Again, such bugs are hard to track down unless you were lucky enough for it to fail immediately - if it happened to be NULL or an invalid address triggering an exception, for example.
Besides that, there are no advantages to the "unsafe" code:
int *ptr; //what is risk here?
ptr = (int *)malloc(sizeof(int));
over both safer and simpler code thus:
int* ptr = malloc( sizeof(*ptr) ) ;
Note other improvements in this:
do not cast a void-pointer to the destination type,
use the sizeof the object pointed to (or a multiple thereof) rather than an explicit type.
Remember that in modern C it is no longer necessary to declare all variables at the start of a brace-block ({...}) - though you will still see that anti-pattern too - so there are few excuses for not initialising a pointer with a useful value on instantiation.
int main()
{
    int *ptr; //what is risk here?
    ptr = (int *)malloc(sizeof(int));
Here you have two risks. The first is that in a real project there could be a lot of other code between these two lines, and there is a chance that you (or somebody else who maintains this code) will try to dereference the pointer before it is initialised.
The second is that the return value of malloc is not checked, so if malloc fails, you will not know that it failed.
free(ptr);
//ptr = NULL; //what is risk here if i do not assign NULL?
When you call free() you don't really "free" the memory (you don't set all the bits in the allocated chunk to 0); you just mark that memory as "may be allocated again by future allocations". But your pointer still points to that chunk. So after some period of time the chunk could be handed out by another allocation, and new data will be stored there (the old data is overwritten). Your pointer then points to memory holding unrelated data, and you (or worse, somebody else who maintains your project) could use this pointer without knowing that it points to stale data.
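A sketch of that recycling in action (whether the chunk is actually reused is allocator-dependent, so this is for illustration only):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *p = malloc(16);
    if (p == NULL) return 1;
    strcpy(p, "important");
    free(p);                 /* chunk goes back to the allocator; p keeps the address */

    char *q = malloc(16);    /* may well receive the very same chunk */
    if (q == NULL) return 1;
    strcpy(q, "recycled");

    printf("%s\n", p);       /* undefined behaviour: p is dangling; if the chunk was
                                reused this prints "recycled", i.e. stale foreign data.
                                Setting p = NULL right after free() would turn this
                                into an immediate, debuggable crash instead. */
    free(q);
    return 0;
}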
So my programming teacher told us that if you don't use a pointer but would like to declare it, it is always better to initialize it with NULL. How does that prevent any errors if I don't even use it?
Or, if I am wrong, what are the benefits of it?
De-referencing NULL is likely (though not guaranteed) to cause a segmentation fault, immediately crashing your app and alerting you to the unsafe memory access you just performed.
Leaving your pointer uninitialized means it still holds whatever junk was left over from the previous user of that memory. It's entirely possible for that junk to be a pointer to a real memory region in your app. Dereferencing it is undefined behavior: it might cause a segfault, or it might not. The latter is the worst case, and the one you should fear: your app will just keep chugging along with whatever nonsense behavior resulted from it.
Here's a demonstration:
#include <stdio.h>
#include <stdlib.h>

void i_segfault() {
    fflush(stdout); // Flush whatever is left of STDOUT before we blow up
    int *i = NULL;
    printf("%i", *i); // Boom
}

void use_some_memory() {
    int *some_pointer = malloc(sizeof(int));
    *some_pointer = 123;
    printf(
        "I used the address %p to store a pointer to %p, which contains %i\n",
        &some_pointer, some_pointer, *some_pointer
    );
}

void i_dont_segfault() {
    int *my_new_pointer; // uninitialized
    printf(
        "I re-used the address %p, which still has a lingering value %p, which still points to %i\n",
        &my_new_pointer, my_new_pointer, *my_new_pointer
    );
}

int main(int argc, char *argv[]) {
    use_some_memory();
    i_dont_segfault();
    i_segfault();
}
There are at least two ways to make sure your program does not use an uninitialized pointer:
You can design and write your program carefully so that you are sure that program control never flows through a path that uses the pointer without initializing it first.
You can initialize the pointer to NULL.
The first can be hard, depending on the program, and humans keep making mistakes with it. The second is easy.
On the other hand, the second only solves one problem: It ensures that if you use the pointer without otherwise initializing it, it will have an assigned value. Further, in many systems, it ensures that if you attempt to use that value to access a pointed-to object, your program will crash rather than do something worse, like corrupt data and produce wrong results or erase valuable information.
That is a useful problem to solve, because it means the bug of failing to assign the desired value is likely to be caught during testing and, even if it is not, the damage it may do is likely to be limited. However, this does not solve the problem of ensuring that the pointer is assigned the desired value before it is used. So, like many of these coding recommendations, it is a useful tip that helps limit human errors, but it is not a complete solution.
If you don't initialize a variable (by mistake), the problem is that your program can behave even more anomalously than you'd expect. Imagine you have declared an enum with the values ONE, TWO and THREE, you forget to initialize a variable of that type, and at some point you include the following code:
switch(my_var) {
case ONE: /* do one */
    ...
    break;
case TWO: /* do two */
    ...
    break;
case THREE: /* do three */
    ...
    break;
}
and you can go nuts, because you assume that at least one of the possible values of the type must be matched... but that's simply not true: the variable has not been initialized, and its value need not even comply with the constraints imposed by the data type it represents.
Initializing a variable adds a couple of instructions to your code, but it saves a lot of nightmares when searching for errors.
In the case of an uninitialized pointer, things are even worse, as many programmers free() memory allocated from the heap based on the test
if (var) free(var);
and most probably an uninitialized automatic variable of pointer type will point somewhere non-null, and so will be handed to free().
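A sketch of that failure mode next to the NULL-initialized fix (fail_early just simulates a path that skips the allocation):
#include <stdlib.h>

void broken(int fail_early) {
    char *var;              /* indeterminate */
    if (!fail_early)
        var = malloc(100);
    /* ... */
    if (var) free(var);     /* UB on the fail_early path: var was never assigned */
}

void fixed(int fail_early) {
    char *var = NULL;       /* known state on every path */
    if (!fail_early)
        var = malloc(100);
    /* ... */
    free(var);              /* fine: free(NULL) is defined to be a no-op */
}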
In case you don't want to (or cannot) initialize a pointer with an address, I often hear people say that you should initialize it with NULL, and that it's a good practice.
You can find people saying something like that on SO as well, for example here.
Having worked on many C projects, I don't think it is a good practice, or at least not better than not initializing the pointer with anything.
One of my biggest reasons: initializing a pointer with NULL increases the chance of a null pointer dereference, which may crash the whole software, and that's terrible.
So, could you tell me the reasons if you say that it is a good practice, or do people just take it for granted (just like "you should always initialize a variable")?
Note that I tried to find a rule or recommendation for this in MISRA 2004 and found none.
Update:
So most of the comments and answers give the same main reason: the pointer could hold a random address before being used, so it's better to have it as NULL, because then you can figure out the problem faster.
To make my point clearer: I think this doesn't make sense nowadays in commercial C software, from a practical point of view. A used-but-unassigned pointer will be detected right away by most static analyzers. That's why I prefer to leave it uninitialized: if I initialize it with NULL, then when a developer forgets to assign it a "real" address, it passes the static analyzer and causes runtime problems with a null pointer.
You said
One of my biggest reasons: initializing a pointer with NULL increases the chance of a null pointer dereference, which may crash the whole software, and that's terrible.
I would argue the main reason is actually exactly this. If you don't initialize pointers to NULL, then when there is a dereferencing error it's going to be a lot harder to find the problem, because the pointer will not be set to NULL; it will most likely hold a garbage value that looks exactly like a valid pointer.
C has very little runtime error checking, but NULL is guaranteed not to refer to a valid address, so a runtime environment (typically an operating system) is able to trap any attempt to de-reference a NULL. The trap will identify the point at which the de-reference occurs, rather than the point where the program may eventually fail, making identification of the bug far easier.
Moreover, when debugging, an uninitialised pointer with random content may not be easily distinguishable from a valid pointer - it may refer to a plausible address, whereas NULL is always an invalid address.
If you de-reference an uninitialised pointer the result is non-deterministic - it may crash, it may not, but it will still be wrong.
If it does crash you cannot tell how or even when, since it may result in corruption of data, or reading of invalid data that may have no effect until that corrupted data is later used. The point of failure will not necessarily be the point of error.
So the purpose is that you will get deterministic failure, whereas without initialising, anything could happen - including nothing, leaving you with a latent undetected bug.
One of my biggest reasons: initializing a pointer with NULL increases the chance of a null pointer dereference, which may crash the whole software, and that's terrible.
Deterministic failure is not "terrible" - it increases your chance of finding the error during development, rather than having your users find it after deployment. What you are effectively suggesting is that it is better to leave the bugs in and hide them. The dereference of NULL is guaranteed to be trapped; de-referencing an uninitialised pointer is not.
That said, initialising with NULL should only be done if at the point of declaration you cannot directly assign an otherwise valid value. That is to say, for example:
char* x = malloc( y ) ;
is much preferable to:
char* x = NULL ;
...
x = malloc( y ) ;
which is in turn preferable to:
char* x ;
...
x = malloc( y ) ;
Note that I tried to find a rule or recommendation for this in MISRA 2004 and found none.
MISRA C:2004, 9.1 - All automatic variables shall have been assigned a value before being used.
That is to say, there is no guideline to initialise to NULL, simply that initialisation is required. As I said initialisation to NULL is not preferable to initialising to a valid pointer. Don't blindly follow the "must initialise to NULL advice", because the rule is simply "must initialise", and sometimes the appropriate initialisation value is NULL.
If you don't initialize the pointer, it can have any value, including possibly NULL. It's hard to imagine a scenario where having any value including NULL is preferable to definitely having a NULL value. The logic is that it's better to at least know its value than have its value unpredictably depend on who knows what, possibly resulting in the code behaving differently on different platforms, with different compilers, and so on.
I strongly disagree with any answer or argument based on the idea that you can reliably use a test for NULL to tell if a pointer is valid or not. You can set a pointer to NULL and then test it for NULL within a limited context where that is known to be safe. But there will always be contexts where more than one pointer points to the same thing and you cannot ensure that every possible pointer to an object will be set to NULL at the very place the object is freed. It is simply an essential C programming discipline to understand that a pointer may or may not point to a valid object depending on what is going on in the code.
One issue is that given a pointer variable p, there is no way defined by the C language to ask, "does this pointer point to valid memory or not?" The pointer might point to valid memory. It might point to memory that (once upon a time) was allocated by malloc, but that has since been freed (meaning that the pointer is invalid). It might be an uninitialized pointer, meaning that it's not even meaningful to ask where it points (although it is definitely invalid). But, again, there's no way to know.
So if you're bopping along in some far-off corner of a large program, and you want to say
if(p is valid) {
do something with p;
} else {
fprintf(stderr, "invalid pointer!\n");
}
you can't do this. Once again, the C language gives you no way of writing if(p is valid).
So that's where the rule to always initialize pointers to NULL comes in. If you adopt this rule and follow it faithfully, if you initialize every pointer to NULL or as a pointer to valid memory, if whenever you call free(p) you always immediately follow it with p = NULL;, then if you follow all these rules, you can achieve a decent way of asking "is p valid?", namely:
if(p != NULL) {
do something with p;
} else {
fprintf(stderr, "invalid pointer!\n");
}
And of course it's very common to use an abbreviation:
if(p) {
do something with p;
} else {
fprintf(stderr, "invalid pointer!\n");
}
Here most people would read if(p) as "if p is valid" or "if p is allocated".
Addendum: This answer has attracted some criticism, and I suppose that's because, to make a point, I wrote some unrealistic code which some people are reading more into than I'd intended. The idiom I'm advocating here is not so much valid pointers versus invalid pointers, but rather pointers I have allocated versus pointers I have not allocated (yet). No one writes code that simply detects and prints "invalid pointer!" as if to say "I don't know where this pointer points; it might be uninitialized or stale". The more realistic way of using the idiom is to do something like
/* ensure allocation before proceeding */
if(p == NULL)
p = malloc(...);
or
if(p == NULL) {
/* nothing to do */
return;
}
or
if(p == NULL) {
fprintf(stderr, "null pointer detected\n");
exit(0);
}
(And in all three cases the abbreviation if(!p) is popular as well.)
But, of course, if what you're trying to discriminate is pointers I have allocated versus pointers I have not allocated (yet), it is vital that you initialize all your un-allocated pointers with the explicit marker you're using to record that they're un-allocated, namely NULL.
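As a sketch of that discipline (the cache here is hypothetical), NULL serves as the explicit "not allocated yet" marker:
#include <stdlib.h>

static char *cache = NULL;      /* NULL = "not allocated yet" */

char *get_cache(size_t n) {
    if (cache == NULL)          /* reliable only because every un-allocated
                                   state is explicitly NULL */
        cache = malloc(n);
    return cache;               /* may still be NULL if malloc failed */
}

void drop_cache(void) {
    free(cache);
    cache = NULL;               /* restore the "not allocated" marker */
}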
One of my biggest reasons: initializing a pointer with NULL increases the chance of a null pointer dereference, which may crash the whole software, and that's terrible.
Which is why you add a check against NULL before using that pointer value:
if ( p ) // p != NULL
{
    // do something with p
}
NULL is a well-defined invalid pointer value, guaranteed to compare unequal to any object or function pointer value. It's a well-defined "nowhere" that's easy to check against.
Compare that to the indeterminate value that the uninitialized pointer¹ may have - most likely, it will also be an invalid pointer value that will lead to a crash as soon as you try to use it, but it's almost impossible to determine that beforehand. Is 0xfff78567abcd2220 a valid or invalid pointer value? How would you check that?
Obviously, you should do some analysis to see if an initialization is required. Is there a risk of that pointer being dereferenced before you assign a valid pointer value to it? If not, then you don't need to initialize it beforehand.
Since C99, the proper answer has been to defer instantiating a pointer (or any other type of object, really) until you have a valid value to initialize it with:
void foo( void )
{
    printf( "Gimme a length: " );
    int length;
    scanf( "%d", &length );
    char *buf = malloc( sizeof *buf * length );
    ...
}
ETA
I added a comment to Steve's answer that I think needs to be emphasized:
There's no way to determine if a pointer is valid - if you receive a pointer argument in a function like
void foo( int *ptr )
{
...
}
there is no test you can run on ptr to indicate that yes, it definitely points to an object within that object's lifetime and is safe to use.
By contrast, there is an easy, standard test to indicate that a pointer is definitely invalid and unsafe to use, and that's by checking that its value is NULL. So you can at least avoid using pointers that are definitely invalid with the
if ( p )
{
// do something with p
}
idiom.
Now, just because p isn't NULL doesn't automatically mean it's valid, but if you're consistent and disciplined about setting unused pointers to NULL, then the odds are pretty high that it is.
This is one of those areas where C doesn't protect you, and you have to devote non-trivial amounts of effort to make sure your code is safe and robust. Frankly, it's a pain in the ass more often than not. But being disciplined with using NULL for inactive pointers makes things a little easier.
Again, you have to do some analysis and think about how pointers are being used in your code. If you know you're going to set a pointer to valid value before it's ever read, then it's not critical to initialize it to anything in particular. If you have multiple pointers pointing to the same object, then you need to make sure if that object ever goes away that all those pointers are updated appropriately.
¹ This assumes that the pointer in question has auto storage duration - if the pointer is declared with the static keyword or at file scope, then it is implicitly initialized to NULL.
I'm new to C and haven't really grasped when C decides to free an object and when it decides to keep an object.
heap_t is pointer to a struct heap.
heap_t create_heap(){
    heap_t h_t = (heap_t)malloc(sizeof(heap));
    h_t->it = 0;
    h_t->len = 10;
    h_t->arr = (token_t)calloc(10, sizeof(token));
    //call below a couple of times to fill up arr
    app_heap(h_t, ENUM, "enum", 1);
    return h_t;
}
putting h_t through
int app_heap(heap_t h, enum symbol s, char* word, int line){
    int it = h->it;
    int len = h->len;
    if (it + 1 < len ){
        token temp;
        h->arr[it] = temp;
        h->arr[it].sym = s;
        h->arr[it].word = word;
        h->arr[it].line = line;
        h->it = it + 1;
        printf(h->arr[it].word);
        return 1;
    } else {
        h->len = len*2;
        h->arr = realloc(h->arr, len*2);
        return app_heap(h, s, word, line);
    }
}
Why does my h_t->arr fill up with junk and eventually I get a segmentation fault? How do I fix this? Any C coding tips/styles to avoid stuff like this?
First, to answer your question about the crash: I think the reason you are getting a segmentation fault is that you fail to multiply len by sizeof(token) in the call to realloc. You end up writing past the end of the block that has been allocated, eventually triggering a segfault.
As far as "deciding to free an object and when [...] to keep an object" goes, C does not decide any of it for you: it simply does it when you tell it to by calling free, without asking you any further questions. This "obedience" ends up costing you sometimes, because you can accidentally free something you still need. It is a good idea to NULL out the pointer, to improve your chance of catching the issue faster (unfortunately, this is not enough to eliminate the problem altogether, because of shared pointers).
free(h->arr);
h->arr = NULL;  /* doing this is a good practice */
To summarize, managing memory in C is a tedious task that requires a lot of thinking and discipline. You need to check the result of every allocation call to see if it has failed, and perform many auxiliary tasks when it does.
C does not "decide" anything, if you have allocated something yourself with an explicit call to e.g. malloc(), it will stay allocated until you free() it (or until the program terminates, typically).
I think this:
token temp;
h->arr[it] = temp;
h->arr[it].sym = s;
/* more accesses */
is very weird, the first two lines don't do anything sensible.
As pointed out by dasblinkenlight, you're failing to scale the re-allocation into bytes, which will cause dramatic shrinkage of the array when it tries to grow, and corrupt it totally.
You shouldn't cast the return values of malloc() and realloc(), in C.
Remember that realloc() might fail, in which case you will lose your pointer if you overwrite it like you do.
Lots of repetition in your code, i.e. realloc(h->arr, len*2) instead of realloc(h->arr, h->len * sizeof *h->arr) and so on.
Note how the last bullet point also fixes the realloc() scaling bug mentioned above.
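Putting those bullet points together, a sketch of the fixed growth step (with minimal stand-ins for the asker's types, whose real definitions live in their code):
#include <stdlib.h>

typedef struct { int sym; char *word; int line; } token;       /* stand-in */
typedef struct { int it; int len; token *arr; } heap, *heap_t; /* stand-in */

/* Grow h->arr safely: scale by the element size, and don't overwrite
   h->arr until realloc() is known to have succeeded. */
static int grow(heap_t h) {
    token *tmp = realloc(h->arr, sizeof *h->arr * (size_t)h->len * 2);
    if (tmp == NULL)
        return 0;           /* h->arr is untouched and still owned */
    h->arr = tmp;
    h->len *= 2;
    return 1;
}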
You're not reallocating to the proper size, the realloc statement needs to be:
realloc(h->arr, sizeof(token) * len*2);
^^^^^^^^^^^^
(Or perhaps better realloc(h->arr, sizeof *h->arr * h->h_len);)
In C, you are responsible for freeing the memory you allocate. You have to free() the memory you've malloc/calloc/realloc'ed when it's suitable to do so. The C runtime never frees anything, except when the program has terminated (some more esoteric systems might not release the memory even then).
Also, try to be consistent; the general form for allocating is always T *foo = malloc(sizeof *foo), and don't duplicate stuff.
e.g.
h_t->arr = (token_t)calloc(10, sizeof(token));
^^^^^^^^ ^^ ^^^^^^^^^^^^^
Don't cast the return value of malloc in C. It's unnecessary, and it might hide a serious compiler warning and bug if you forget to include stdlib.h.
The cast is to token_t but the sizeof applies to token; why are they different, and are they the same type as *h_t->arr?
You already have the magic value 10 stored; use h_t->len.
If you ever change the type of h_t->arr, you have to remember to change the sizeof(..)
So make this
h_t->arr = calloc(h_t->len, sizeof *h_t->arr);
Two main causes of dangling pointers in C are not assigning NULL to a pointer after freeing its allocated memory, and shared pointers.
There is a solution to the first problem: automatically nulling out the pointer.
void SaferFree(void *AFree[])
{
    free(AFree[0]);
    AFree[0] = NULL;
}
The caller, instead of calling
free(p);
will call
SaferFree(&p);
In respect to the second, harder-to-solve issue:
The rule of three says:
If you need to explicitly declare either the destructor, copy constructor or copy assignment operator yourself, you probably need to explicitly declare all three of them.
Sharing a pointer in C is simply copying it (copy assignment). That means that following the rule of three (or the general rule of zero) when programming in C obliges the programmer to supply a way to construct, and especially to destruct, such an assignment. That is possible, but it is not an easy task, especially since C does not supply a destructor that is implicitly activated as in C++.
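One common way to approximate that destructor by hand is manual reference counting; here is a minimal (non-thread-safe) sketch with hypothetical names:
#include <stdlib.h>

typedef struct {
    int refs;                            /* how many owners exist */
    char payload[64];
} shared_buf;

shared_buf *sb_new(void) {
    shared_buf *b = calloc(1, sizeof *b);
    if (b) b->refs = 1;
    return b;
}

shared_buf *sb_retain(shared_buf *b) {   /* "copy assignment": bump the count */
    if (b) b->refs++;
    return b;
}

void sb_release(shared_buf **bp) {       /* "destructor": drop one reference */
    if (bp && *bp) {
        if (--(*bp)->refs == 0)
            free(*bp);
        *bp = NULL;                      /* null out the caller's pointer, as above */
    }
}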
I have a char pointer which would be used to store a string. It is used later in the program.
I have declared and initialized like this:
char * p = NULL;
I am just wondering if this is good practice. I'm using gcc 4.3.3.
Yes, it's a good idea.
Google Code Style recommends initializing all your variables even if you don't need them right now. Initialize pointers with NULL, ints with 0 and floats with 0.0, just for better readability.
int i = 0;
double x = 0.0;
char* c = NULL;
You cannot store a string in a pointer.
Your definition of mgt_dev_name is good, but you need to point it somewhere with space for your string. Either malloc() that space or use a previously defined array of characters.
char *mgt_dev_name = NULL;
char data[4200];
/* ... */
mgt_dev_name = data;       /* use array */
/* ... */
mgt_dev_name = malloc(4200);
if (mgt_dev_name != NULL) {
    /* use malloc'd space */
    free(mgt_dev_name);
} else {
    /* error: not enough memory */
}
It is good practice to initialize all variables.
If you're asking whether it's necessary, or whether it's a good idea, to initialize the variable to NULL before you set it to something else later on: it's not necessary to initialize it to NULL, and it won't make any difference for the functionality of your program.
Note that in programming, it's important to understand every line of code - why it's there and what exactly it's doing. Don't do things without knowing what they mean or without understanding why you're doing them.
Another option is to not define the variable until the place in your code where you have access to its initial value. So rather than doing:
char *name = NULL;
...
name = initial_value;
I would change that to:
...
char *name = initial_value;
The compiler will then prevent you from referencing the variable in the part of the code where it has no value. Depending on the specifics of your code this may not always be possible (for example, when the initial value is set in an inner scope but the variable has a longer lifetime), but moving the definition as late as possible in the code prevents errors.
That said, this is only allowed starting with the C99 standard (it's also valid C++). To enable C99 features in gcc, you'll need to do either:
gcc -std=gnu99
or if you don't want gcc extensions to the standard:
gcc -std=c99
No, it is not a good practice, if I understood your context correctly.
If your code actually depends on mgt_dev_name having the initial value of a null pointer, then, of course, including the initializer in the declaration is a very good idea. I.e. if you'd have to do this anyway:
char *mgt_dev_name;
/* ... and soon after */
mgt_dev_name = NULL;
then it is always a better idea to use initialization instead of assignment
char *mgt_dev_name = NULL;
However, initialization is only good when you can initialize your object with a meaningful, useful value - a value that you will actually need. In the general case, this is only possible in languages that allow declarations at any point in the code, C99 and C++ being good examples of such languages. By the time you need your object, you usually already know the appropriate initializer for it, and so can easily come up with an elegant declaration with a good initializer.
In C89/90, on the other hand, declarations can only be placed at the beginning of a block. At that point, in the general case, you won't have meaningful initializers for all of your objects. Should you just initialize them with something, anything (like 0 or NULL) just to have them initialized? No!!! Never do meaningless things in your code. It will not improve anything, regardless of what various "style guides" might tell you. In reality, meaningless initialization might actually cover up bugs in your code, making them harder to discover and fix.
Note that even in C89/90 it is always beneficial to strive for better locality of declarations. I.e. a well-known good-practice guideline states: always make your variables as local as they can be. Don't pile up all your local object declarations at the very beginning of the function; rather, move them to the beginning of the smallest block that envelops the entire lifetime of the object as tightly as possible. Sometimes it might even be a good idea to introduce a fictive, otherwise unnecessary block just to improve the locality of declarations. Following this practice will help you provide good, useful initializers to your objects in many (if not most) cases. But some objects will remain uninitialized in C89/90 simply because you won't have a good initializer for them at the point of declaration. Don't try to initialize them with "something" just for the sake of having them initialized; this will achieve absolutely nothing good and might actually have negative consequences.
Note that some modern development tools (like MS Visual Studio 2005, for example) will catch run-time access to uninitialized variables in debug builds. I.e. these tools can help you detect situations when you access a variable before it has had a chance to acquire a meaningful value, indicating a bug in the code. But by performing unconditional premature initialization of your variables, you essentially kill that capability of the tool and sweep these bugs under the carpet.
This topic has already been discussed here:
http://www.velocityreviews.com/forums/t282290-how-to-initialize-a-char.html
It refers to C++, but it might be useful for you, too.
There are several good answers to this question, one of them has been accepted. I'm going to answer anyway in order to expand on practicalities.
Yes, it is good practice to initialize pointers to NULL, as well as set pointers to NULL after they are no longer needed (i.e. freed).
In either case, it's very practical to be able to test a pointer prior to dereferencing it. Let's say you have a structure that looks like this:
struct foo {
    int counter;
    unsigned char ch;
    char *context;
};
You then write an application that spawns several threads, all of which operate on a single allocated foo structure (safely) through the use of mutual exclusion.
Thread A gets a lock on foo, increments counter and checks for a value in ch. It does not find one, so it does not allocate (or modify) context. Instead, it stores a value in ch so that thread B can do this work instead.
Thread B sees that counter has been incremented and notes a value in ch, but isn't sure if thread A has done anything with context. If context was initialized as NULL, thread B no longer has to care what thread A did: it knows context is safe to dereference (if not NULL) or allocate (if NULL) without leaking.
Thread B does its business, thread A reads its context, frees it, then re-initializes it to NULL.
The same reasoning applies to global variables, without the use of threads. It's good to be able to test them in various functions prior to dereferencing them (or attempting to allocate them, thus causing a leak and undefined behavior in your program).
When it gets silly is when the scope of the pointer does not go beyond a single function. If you have a single function and can't keep track of the pointers within it, usually this means the function should be re-factored. However, there is nothing wrong with initializing a pointer in a single function, if only to keep uniform habits.
The only time I've ever seen an 'ugly' case of relying on an initialized pointer (before and after use) is in something like this:
void my_free(void **p)
{
    if (*p != NULL) {
        free(*p);
        *p = NULL;
    }
}
Not only is dereferencing a type-punned pointer frowned upon on strict platforms, the above code makes free() even more dangerous, because callers will have some delusion of safety. You can't rely on a practice 'wholesale' unless you are sure every operation is in agreement.
Probably a lot more information than you actually wanted.
Preferred styles:
in C: char * c = NULL;
in C++: char * c = 0;
My rationale is that if you don't initialize with NULL, and then forget to initialize altogether, the kinds of bugs you will get in your code when dereferencing are much more difficult to trace due to the potential garbage held in memory at that point. On the other hand, if you do initialize to NULL, most of the time you will only get a segmentation fault, which is better, considering the alternative.
Initializing variables even when you don't need them initialized right away is a good practice. Usually, we initialize pointers to NULL, ints to 0 and floats to 0.0 as a convention.
int* ptr = NULL;
int i = 0;
float r = 0.0;
It is always good to initialize pointer variables in C++ as shown below:
int *iPtr = nullptr;
char *cPtr = nullptr;
Initializing as above helps in conditions like the ones below, since nullptr is convertible to bool. Without it, the reads of iPtr and cPtr would be of uninitialized pointers, which is undefined behaviour (and at best your compiler will only emit a warning):
if(iPtr){
    //then do something.
}

if(cPtr){
    //then do something.
}