For example:
FILE *file_name;
file_name = fopen("some.txt", "r"); // some.txt doesn't exist
if (file_name != NULL)
    printf("nice");
fclose(file_name); // called even when fopen returned NULL
What happens in fclose?
Passing a NULL pointer to fclose triggers undefined behavior.
The fclose function is documented as a library function in section 7.21.5.1 of the C standard, and section 7.1.4p1 states the following regarding library functions:
Each of the following statements applies unless explicitly stated
otherwise in the detailed descriptions that follow: If an argument to
a function has an invalid value (such as a value outside the domain of
the function, or a pointer outside the address space of the program,
or a null pointer, or a pointer to non-modifiable storage when the
corresponding parameter is not const-qualified) or a type (after
promotion) not expected by a function with variable number of
arguments, the behavior is undefined.
Section 7.21.5.1 makes no explicit mention of a NULL pointer being passed to fclose, so the above statement applies.
The C standard does not define the behavior.1 Some implementations test the passed pointer, disregard a null pointer, and return either success or failure. Other implementations may crash. You should not pass a null pointer to fclose without a special purpose, such as aiding diagnosis of a bug or investigating how it affects a program's vulnerabilities.
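If you do not want to rely on any particular implementation's behavior, a defensive wrapper avoids the question entirely. This is a sketch; `fclose_safe` is our own name, not a standard function:

```c
#include <stdio.h>

/* Null-safe wrapper: checking for NULL first keeps the behavior
   defined regardless of what the underlying fclose would do. */
int fclose_safe(FILE *fp)
{
    if (fp == NULL)
        return EOF; /* report failure for a null stream */
    return fclose(fp);
}
```

With this, `fclose_safe(fopen("some.txt", "r"))` is well-defined even when the file does not exist.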
Footnote
1 The behavior is undefined because the specification for fclose in C 2018 7.21.5.1 specifies what fclose does when passed a pointer to a stream and does not specify what it does when passed a null pointer, and 7.1.4p1 says “… If an argument to a [standard library] function has an invalid value (such as… a null pointer…)…, the behavior is undefined.”
Related
Answers to this and this question say that function pointers of the form return-type (*pointer)() are pointers to a function taking an unspecified number of arguments, though the latter notes that such declarators are obsolescent as of C11.
On an i386 system with GCC, “extra” arguments passed in a call to an empty-parentheses-type’d function pointer are ignored, because of how stack frames work; e.g.,
/* test.c */
#include <stdio.h>
int foo(int arg) { return arg; }
int main(void)
{
int (*fp)() = foo;
printf("%d\n", fp(267239151, 42, (struct { int x, y, z; }){ 1, 2, 3 }));
return 0;
}
$ gcc -o test test.c && ./test
267239151
$
In which C standards are empty-parentheses function pointer declarations allowed? And where they are allowed, what are they specified to mean?
N1570 6.11.6:
The use of function declarators with empty parentheses (not
prototype-format parameter type declarators) is an obsolescent
feature.
This same wording appears in the 1990, 1999, and 2011 editions of the ISO C standard. There has been no change. The word obsolescent says that the feature may be removed in a future edition of the Standard, but so far the committee has not done so. (Function pointer declarations are just one of several contexts where function declarators can appear.)
The Introduction section of the C standard explains what obsolescent means:
Certain features are obsolescent, which means that they may be
considered for withdrawal in future revisions of this International
Standard. They are retained because of their widespread use, but their
use in new implementations (for implementation features) or new
programs (for language [6.11] or library features [7.31]) is
discouraged.
A call to a function declared with an old-style declarator is still required to pass the correct number and type(s) of arguments (after promotion) as defined by the function's actual definition. A call with incorrect arguments has undefined behavior, which means that the compiler is not required to diagnose the error; the burden is entirely on the programmer.
This is why prototypes were introduced, so that the compiler could check correctness of arguments.
On an i386 system with GCC, “extra” arguments passed in a call to an
empty-parentheses-type’d function pointer are ignored, because of how
stack frames work ...
Yes, that's well within the bounds of undefined behavior. The worst symptom of undefined behavior is having the program work exactly as you expect it to. It means that you have a bug that hasn't exhibited itself yet, and it will be difficult to track it down.
You should not depend on that unless you have a very good reason to do so.
If you change
int (*fp)() = foo;
to
int (*fp)(int) = foo;
the compiler will diagnose the incorrect call.
Any function declarator can have empty parentheses (unless it's a function declaration where there is already a non-void prototype in scope). This isn't deprecated, although it is "obsolescent".
In a function pointer, it means the pointer can point to a function with any argument list.
Note that when actually calling a function through the pointer, the arguments must be of correct type and number according to the function definition, otherwise the behaviour is undefined.
Although C allows you to declare a function (or pointer to function) with an empty parameter list, that does not change the fact that the function must be defined with a precise number of parameters, each with a precise type. [Note 1]
If the parameter declarations are not visible at a call site, the compiler obviously cannot perform the appropriate conversions on the provided arguments. It is, therefore, the programmer's responsibility to ensure that the correct number of arguments is passed, each with the correct type. For some parameter types this will not even be possible, because the compiler applies the default argument promotions to every argument. [Note 2]
Calling a function with an incorrect number of arguments or with an argument whose type is not compatible with the corresponding parameter type is Undefined Behaviour.
The fact that the visible declaration has an empty parameter list does not change the way the function is called. It just puts more burden on the programmer to ensure that the call is well-defined.
This is equally true of pointer to function declarations.
In short, the sample code in the question is Undefined Behaviour. It happens to "work" on certain platforms, but it is neither portable nor is it guaranteed to keep working if you recompile. So the only possible advice is "Don't do that."
If you want to create a function which can accept extra arguments, use a varargs declaration. (See open for an example.) But be aware of the limitations: the called function must have some way of knowing the precise number and types of the provided arguments.
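A minimal varargs sketch, where a hypothetical `sum_n` receives the argument count explicitly (since C gives the callee no other way to know it):

```c
#include <stdarg.h>

/* Varargs sketch: the first parameter tells the callee how many
   int arguments follow. */
int sum_n(int count, ...)
{
    va_list ap;
    int total = 0;
    va_start(ap, count);
    for (int i = 0; i < count; i++)
        total += va_arg(ap, int);
    va_end(ap);
    return total;
}
```

A call such as `sum_n(3, 1, 2, 3)` then yields 6.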
Notes
With the exception of varargs functions, whose prototypes end with .... But a declaration with an empty parameter list cannot be used to call a varargs function.
Integer types narrower than int are converted to int and float values to double.
I am trying to use vsnprintf to format log data on an embedded board (an ARM-based board).
Below is the code for my log print:
#define Max_log_len 1024
char logBuf[Max_log_len+1] = { 0 };
void printMessage(const char *Format, ...)
{
va_list logList;
va_start(logList, Format);
vsnprintf(logBuf, Max_log_len, Format, logList);
va_end(logList);
sendMessageto(logBuf);
}
If my data is NULL for %s string formatting, my program crashes in vsnprintf.
Below is a sample case:
char *dData = NULL;
printMessage("The Obtained data is [%s]",dData);
Whereas on Linux (my PC) this properly prints "The Obtained data is null", on my device it crashes.
Any help is appreciated.
The C standard from 1999 says:
7.1.4 Use of library functions
1 Each of the following statements applies unless explicitly stated
otherwise in the detailed descriptions that follow: If an argument to
a function has an invalid value (such as a value outside the domain of
the function, or a pointer outside the address space of the program,
or a null pointer,or a pointer to non-modifiable storage when the
corresponding parameter is not const-qualified) or a type (after
promotion) not expected by a function with variable number of
arguments, the behavior is undefined.
This is the case here: passing a null pointer for a %s conversion is undefined behavior. It is no surprise that an embedded C library chooses not to detect all possible error cases, in order to save memory.
I made a small program that sigsegved on strcasecmp, and had no idea why until I made this test case:
strcasecmp(NULL, "TEST");
which, when compiled, got me the following warning:
test.c:9:4: warning: null argument where non-null required (argument 1) [-Wnonnull]
However, man strcasecmp doesn't say anything about NULL arguments, could someone please explain how I could deduce this theoretically from reading documentation, as opposed to empirically writing test cases? Is it a deeper-rooted standard? Or maybe const char * doesn't have the right to be NULL, for some reason I don't know?
Nobody has actually explained why this is the case yet.
It is undefined behaviour. Why?
Each of the following statements applies unless explicitly stated otherwise
in the detailed descriptions that follow: If an argument to a function has an
invalid value (such as a value outside the domain of the function, or a
pointer outside the address space of the program, or a null pointer, or a
pointer to non-modifiable storage when the corresponding parameter is not
const-qualified) or a type (after promotion) not expected by a function with
variable number of arguments, the behavior is undefined.
(C standard, 7.1.4)
Since the description of strcasecmp never mentions NULL, a null pointer is outside the domain of the function. So the function is free to do as it pleases. Including crash.
(Note: my source here is this related answer: https://stackoverflow.com/a/12621203/1180785)
I think it's assumed you have a rough idea of the consequences. You will have a similar experience if you try this, which uses a non-NULL argument:
strcasecmp((const char *)1, "A string");
The string compare function strcasecmp requires 2 parameters, both pointing to valid strings. Since both are pointers, NULL can be passed syntactically, but the behavior is then undefined. However, strcasecmp could have been declared as:
int strcasecmp(const char *s1, const char *s2) __attribute__((nonnull));
This attribute makes the compiler produce a warning if either argument is a null pointer at the call site.
For one reason or another, I want to hand-roll a zeroing version of malloc(). To minimize algorithmic complexity, I want to write:
void * my_calloc(size_t size)
{
return memset(malloc(size), 0, size);
}
Is this well-defined when size == 0? It is fine to call malloc() with a zero size, but that allows it to return a null pointer. Will the subsequent invocation of memset be OK, or is this undefined behaviour and I need to add a conditional if (size)?
I would very much want to avoid redundant conditional checks!
Assume for the moment that malloc() doesn't fail. In reality there'll be a hand-rolled version of malloc() there, too, which will terminate on failure.
Something like this:
void * my_malloc(size_t size)
{
void * const p = malloc(size);
if (p || 0 == size) return p;
terminate();
}
Here is the glibc declaration:
extern void *memset (void *__s, int __c, size_t __n) __THROW __nonnull ((1));
The __nonnull shows that it expects the pointer to be non-null.
Here's what the C99 standard says about this:
7.1.4 "Use of library functions"
If an argument to a function has an invalid value (such as a value outside the domain of the function, or a pointer outside the address space of the program, or a null pointer, or a pointer to non-modifiable storage when the corresponding parameter is not const-qualified) or a type (after promotion) not expected by a function with variable number of arguments, the behavior is undefined.
7.21.1 "String function conventions" (remember that memset() is in string.h)
Where an argument declared as size_t n specifies the length of the array for a function, n can have the value zero on a call to that function. Unless explicitly stated otherwise in the description of a particular function in this subclause, pointer arguments on such a call shall still have valid values, as described in 7.1.4.
7.21.6.1 "The memset function"
The memset function copies the value of c (converted to an unsigned char) into each of the first n characters of the object pointed to by s.
So strictly speaking, since the standard specifies that s must point to an object, passing in a null pointer would be UB. Add the check (the cost compared to the malloc() will be vanishingly small). On the other hand, if you know the malloc() cannot fail (because you have a custom one that terminates), then obviously you don't need to perform the check before calling memset().
Edit Re:
I added this later: Assume that malloc() never fails. The question is only if size can be 0
I see. So you only want the call to be well-defined in the case where the pointer is null and the size is 0.
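Putting that advice together, a guarded version of the function from the question might look like this sketch:

```c
#include <stdlib.h>
#include <string.h>

/* Guarded version: memset is only reached with a non-null pointer,
   which covers both allocation failure and the size == 0 case where
   an implementation is permitted to return a null pointer. */
void *my_calloc(size_t size)
{
    void *p = malloc(size);
    if (p != NULL)
        memset(p, 0, size);
    return p;
}
```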
Referring to the POSIX docs
http://pubs.opengroup.org/onlinepubs/009695399/functions/memset.html
http://pubs.opengroup.org/onlinepubs/7908799/xsh/memset.html
No, it is not specified that it should be safe to call memset with a null pointer (if you called it with zero count or size... that'd be even more 'interesting', but also not specified).
Nothing about it is even mentioned in the 'informative' sections.
Note that the first link mentions
The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of IEEE Std 1003.1-2001 defers to the ISO C standard
Update: I can confirm that the ISO C99 standard (n1256.pdf) is as brief as the POSIX docs, and the C++11 spec just refers to the ANSI C standard for memset and friends. N1256 states:
The memset function copies the value of c (converted to an unsigned char) into each of the first n characters of the object pointed to by s.
and says nothing about the situation where s is null (but note that a null pointer does not point to an object).
I was just skimming the C99 standard, looking for something that I don't remember now, when I noticed that the pointer returned from the strerror() function (section 7.21.6.2) isn't const-qualified, even though the standard says:
The strerror function returns a pointer to the string, the contents of which are
locale-specific. The array pointed to shall not be modified by the program,
but may be overwritten by a subsequent call to the strerror function.
Is there an obvious reason for this function to return a modifiable string instead of something like:
char const * const strerror(int errnum);
or at the very least
char const * strerror(int errnum);
Same as for the type of string literals: it was already like that in C89, describing a practice dating back to before the introduction of const into the language. Changing it would make currently valid programs invalid.
The response about the static buffer is wrong; whether the returned pointer type is const or not has nothing to do with the buffer. The return type is entirely about API compatibility with historic code that does not use const, and there is no harm in it. Someone writing modern const-aware code will simply use the return value immediately or store it into a pointer-to-const variable.
This is probably so because many historical implementations use a static buffer into which they "print" the error string.