We recently took an exam where we got this question:
Consider this fragment of code:
FILE * pFile;
pFile = open ("myfile.txt","w+");
fprintf (pFile, "%f %s", 3.1416, "PI");
Which of the following statements are true?
A)The program generates error at compile time
B)The program generates error at runtime
C) . . .
D) . . .
We couldn't use a compiler, and the information written above is the only thing we had.
The correct answer ended up being B, error at runtime. Can someone explain to me thoroughly why that is?
I know the compiler generates a warning, but the point here is to understand why the compiler let us compile this code in the first place instead of giving us an error.
My guess is that open, even though it doesn't give back a pointer (which is an address), returns a file descriptor, which is still an int, so in the eyes of the compiler the syntax isn't wrong; but at runtime the program probably tries to access a memory address it shouldn't, which leads to an error.
p.s.
I know that the correct function should have been fopen, but that still doesn't explain why the failure shows up at runtime rather than at compile time.
This code has two specific issues:
The second parameter to open expects an int but a char * is passed instead
open returns an int but that value is assigned to a FILE *.
The compiler flags these as warnings instead of errors because the language allows a pointer to be converted to an integer, and an integer to be converted to a pointer. However, the result of doing so is implementation-defined, and such a conversion usually (as in this case) but not always indicates a problem.
Section 6.3.2.3 of the C standard describes these conversions:
5 An integer may be converted to any pointer type. Except as previously specified, the result is implementation-defined, might not be correctly aligned, might not point to an entity of the referenced type, and might be a trap representation.
6 Any pointer type may be converted to an integer type. Except as previously specified, the result is implementation-defined. If the result cannot be represented in the integer type, the behavior is undefined. The result need not be in the range of values of any integer type.
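For comparison, here is a minimal corrected sketch of the fragment using fopen, which returns a FILE * and takes a mode string (the error check is just good practice, not part of the original exam snippet):

#include <stdio.h>

int main(void) {
    FILE *pFile = fopen("myfile.txt", "w+");   /* fopen returns FILE *, matching the declaration */
    if (pFile != NULL) {                       /* fopen returns NULL on failure */
        fprintf(pFile, "%f %s", 3.1416, "PI");
        fclose(pFile);
    }
    return 0;
}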
Related
This is what I have written so far:
int *p;
p = (int*)malloc(sizeof(int[]));
Did I do something wrong?
I was expecting to have to write the size of the array, but even without it the program works, right?
int *p;
p = (int*)malloc(sizeof(int[]));
Did I do something wrong?
The code is not valid C. int[] is the type name of an incomplete type, and as such, it is not a valid operand of the sizeof operator. This violates a language constraint, so a conforming C implementation is required to emit a diagnostic when it processes the code presented.
I was expecting to have to write the size of the array, but even without it the program works, right?
If you are saying that a program containing the code presented compiles and runs successfully then that is surprising, but ultimately it means nothing. The program has undefined behavior as far as the C language specification is concerned, but that does not mean that a compiler must reject it (after emitting the required diagnostic), or that it must fail at runtime. Other than the diagnostic, the spec doesn't say anything about what will happen -- that's what "undefined behavior" means.
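For contrast, a minimal sketch of how the allocation is usually written, with an explicit element count (the count of 10 here is just an assumption for illustration):

#include <stdlib.h>

int main(void) {
    size_t count = 10;                      /* assumed number of elements */
    int *p = malloc(count * sizeof *p);     /* sizeof *p is a complete type: int */
    if (p == NULL)
        return 1;                           /* allocation failed */
    /* ... use p[0] through p[count - 1] ... */
    free(p);
    return 0;
}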
Specifically, my question is, given this macro:
#define FAKE_VAL(type) ((type)0)
...is there any value of type (including structures, function pointers, etc.) where FAKE_VAL(type) will cause a compile-time error?
I'm asking because I have a macro that takes a function pointer as an argument and needs to find the size of its return value. I know the types and number of arguments the function pointer takes, so I'm planning to write something like:
sizeof(fptr(FAKE_VAL(arg_type_1), FAKE_VAL(arg_type_2)))
arg_type_1 and 2 could be literally anything.
Of course there is.
struct fred (i.e. not a pointer) - how do you convert 0 (a scalar type) to a struct (a non-scalar type)?
Any literal value: FAKE_VAL("hi") expands to (("hi")0) - what would that even mean?
You can't cast an int to an array type, so
FAKE_VAL(int[5]);
will fail. Try it!
To give a more systematic answer: in C, casts are only allowed between arithmetic and pointer types (and to void). So using FAKE_VAL with any struct, union or array type leads to a compile-time error.
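As a quick sketch of that rule (struct fred is just an assumed example type), a conforming compiler should accept the first two declarations below and reject the ones left in comments:

#define FAKE_VAL(type) ((type)0)

struct fred { int x; };

int   ok_arith = sizeof(FAKE_VAL(int));      /* fine: cast to an arithmetic type */
void *ok_ptr   = FAKE_VAL(void *);           /* fine: cast to a pointer type     */
/* struct fred bad_struct = FAKE_VAL(struct fred);   constraint violation: cast to a struct type */
/* int bad_array = sizeof(FAKE_VAL(int[5]));         constraint violation: cast to an array type */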
A typecast is used to inform the compiler that the programmer has already considered the consequences of using the value in a different way than it was declared. This causes the compiler to switch off its checks, so with typecasts you won't get warnings/errors.
Yeah, it does type conversion from any type to the one named in the cast. My bad.
But this can cause segmentation faults/errors when the program is actually run.
I am new to programming. I was finding the square root of a number using the sqrt() function in C.
#include <stdio.h>

int main(void) {
    int n;
    scanf("%d", &n);
    printf("%d\n", sqrt(n));
    return 0;
}
When I enter a value of n = 5, I get some large negative number. Can anyone explain, please?
You've produced undefined behavior by passing the wrong type to printf: the %d format requires a matching argument of type int, but your argument has type double. You need %f (or %e or %g or %a) to print it. Also, there may be other problems, e.g. if you omitted #include <math.h>.
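For completeness, a minimal corrected sketch of the program, with <math.h> included and the format specifier fixed:

#include <math.h>    /* declares double sqrt(double) */
#include <stdio.h>

int main(void) {
    int n;
    if (scanf("%d", &n) == 1)
        printf("%f\n", sqrt(n));   /* %f matches the double returned by sqrt */
    return 0;
}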
As others have pointed out, the problem here is that the format specifier is wrong. You need to #include <math.h> to get the proper return type of sqrt(), then use a format specifier like %f. Also, turn up your compiler warnings until it tells you something was wrong here. -Wall -Wextra -pedantic -Wno-system-headers is a good choice.
I’m adding an answer, though, to provide historical background on why float variables get promoted to double in printf() argument lists, but not scanf(), since this confused people in the comments.
In the instruction set of the DEC PDP-11 computers on which C was originally developed, the float type existed only to save space, and a program needed to convert a float to double to do any calculations on it. In early versions of C, before ANSI function prototypes, all float arguments to a function were automatically promoted to double before being passed (and char to int). Originally this mapped better onto the hardware, and it also had the advantage of avoiding round-off and overflow errors in math on the shorter types. The convention also simplified writing functions that take a varying number of arguments of varying types, such as printf(). The caller could just pass anything in, the compiler would let it, and it was the called function's job to figure out at runtime what the argument list was supposed to be.
When C added function prototypes, these old rules were kept for backward compatibility only with legacy function declarations (extern double sqrt() rather than extern double sqrt(double), or the C11 generic equivalent). Since basically nobody writes functions that way any more, this is a historical curiosity, with one exception: a varargs function like int printf(const char*, ...); cannot be written in C with type checking of its variable arguments. (There is a way to do this in C++ using variadic templates.) The standards committee also did not want to break all existing code that printed a float, so those arguments are still promoted according to the old rules.
In scanf(), none of this applies because the storage arguments are passed by reference, and scanf() needs to be sure it’s writing the data in the same type as the variable that holds it. Argument-promotion never comes into play, because only pointers are ever passed.
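For example, because scanf receives pointers, it has to be told the exact stored type, so float and double use different conversion specifiers, while printf can use %f for both:

#include <stdio.h>

int main(void) {
    float  f;
    double d;
    scanf("%f", &f);            /* %f  expects a float *  */
    scanf("%lf", &d);           /* %lf expects a double * */
    printf("%f %f\n", f, d);    /* both are promoted to double when passed to printf */
    return 0;
}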
I met the same problem, and I wanted to get the answer as an int, so I used an explicit cast, like:
printf("%d\n", (int)sqrt(n));
This happens because the return type and parameter type of sqrt() are not specified (no prototype is in scope).
You can solve this either by including the header file:
#include <math.h>
or by explicitly declaring the return and parameter types yourself, like this:
double sqrt(double);
and also, as mentioned above, use the correct format specifier (e.g. %f).
I took the age variable out of the printf() call just to see what happens, then compiled it with make. It only throws warnings about more % conversions than data arguments and about the unused variable age, but no compile error. I then run the executable and it does run; only, every time I run it, it prints a different random integer. I'm wondering what causes this behavior?
#include <stdio.h>

int main(int argc, char *arg[]) {
    int age = 10;
    int height = 72;

    printf("I'm %d years old\n");
    printf("I'm %d inches tall\n", height);

    return 0;
}
As per the printf() specification, if there is an insufficient number of arguments for the format, the behavior is undefined.
So, your code
printf("I'm %d years old\n");
which is missing the required argument for %d, invokes UB and is not guaranteed to produce any valid result.
Cross reference, C11 standard, chapter §7.21.6.1
[..] If there are insufficient arguments for the format, the behavior is undefined. [..]
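The immediate fix is simply to supply the matching argument:

printf("I'm %d years old\n", age);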
According to the C Standard (7.21.6.1 The fprintf function - the same is valid for printf)
...If there are insufficient arguments for the format, the behavior is undefined. If the format is exhausted while arguments remain, the excess arguments are evaluated (as always) but are otherwise ignored.
On many platforms printf uses the cdecl calling convention, which passes arguments on the stack. If the format string tells the function that you passed an argument, a value will be pulled from the runtime stack, and if you never put your number there, that location will probably contain garbage data. So the value that gets printed is whatever arbitrary data happened to be there.
With only one exception I know of, the C Standard imposes no requirements with regard to any action which in some plausible implementations might be usefully trapped. It is not hard to imagine a C compiler passing a variadic function like printf an indication of what arguments it has passed, nor would it be hard to imagine an implementer thinking it could be useful to have the compiler trigger a trap if code tries to retrieve a variadic parameter of some type when the corresponding argument has some other type or doesn't exist at all. Because it could be useful to have compilers trap in such cases, and because the behavior of such a trap would be outside the jurisdiction of the Standard, the Standard imposes no requirements on what may or may not happen when a variadic function tries to retrieve arguments which weren't passed to it.
In practice, rather than letting variadic functions know how many arguments they've received, most compilers simply have conventions which describe a relationship between the location of the non-variadic argument and the locations of subsequent variadic arguments. The generated code won't know whether a function has received e.g. two arguments of type int, but it will know that each such argument, if it exists, will be stored in a certain place. On such a compiler, using excess format specifiers will generally result in the generated code looking at the places where additional arguments would have been stored had they existed. In many cases, this location will have been used for some other purpose and then abandoned, and may hold the last value stored there for that purpose, but there is generally no reason to expect anything in particular about the contents of abandoned memory.
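To illustrate that last point, here is a minimal sketch of a variadic function (sum_ints is a made-up example): the callee learns how many arguments to fetch only from what the caller tells it, here an explicit count, so asking for more arguments than were actually passed would just read whatever happens to sit in the next argument location.

#include <stdarg.h>
#include <stdio.h>

/* sum_ints: hypothetical variadic function; count is the only hint
   about how many int arguments follow. */
static int sum_ints(int count, ...)
{
    va_list ap;
    int total = 0;

    va_start(ap, count);
    for (int i = 0; i < count; i++)
        total += va_arg(ap, int);   /* fetching more than were passed would be UB */
    va_end(ap);

    return total;
}

int main(void)
{
    printf("%d\n", sum_ints(3, 1, 2, 3));   /* prints 6 */
    return 0;
}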
A couple of GCC versions ago, I could do neat things like this:
$ objcopy -I binary -O elf64-x86-64 -B i386 foo.png foo.png.o
... coupled by the following in C, as an example with SDL image loading:
extern void _binary_foo_png_start;
extern void _binary_foo_png_end;
SDL_Surface *image = IMG_Load_RW(SDL_RWFromMem(&_binary_foo_png_start, &_binary_foo_png_end));
Then I would link foo.png.o together with the object file from the C file and get an executable which neatly contained foo.png.
These days, I can still do that, but GCC warns me about it:
foo.c:57:19: warning: taking address of expression of type ‘void’
foo.c:57:44: warning: taking address of expression of type ‘void’
Clearly it still works, and as far as I can tell, it really does what it's supposed to. The symbols themselves have no well-defined type, and therefore it seems fitting to declare them as void. I mean, sure, I could just as well give them any other arbitrary type and it would work just as well, seeing as I only want their address anyway, but declaring them void seemed nicer than just making up some type.
So why has GCC suddenly decided to start warning me about this? Is there some other preferred way that this should be done?
It appears that at least the C11 standard disallows this:
6.3.2.1/1 An lvalue is an expression (with an object type other than void) that potentially designates an object.
If your expression is not an lvalue, you cannot take its address.
Validity of the declaration
extern void _binary_foo_png_start;
is questionable, because it arguably does not declare an object (an object cannot have type void). Two out of four C compilers I have tried accept it though. One of these compilers accepts &_binary_foo_png_start. A bug is filed.
On a historical note, it seems that it was once the intent to allow such constructs (which may explain why GCC used to accept it); some similar discussion can be found in DR 12. Keep in mind that the relevant definitions, such as that of lvalue, differ between C90, C99 and C11.
From ISO/IEC 9899:
6.3.2.2 void
1 The (nonexistent) value of a void expression (an expression that has type void) shall not be used in any way, and implicit or explicit conversions (except to void) shall not be applied to such an expression. If an expression of any other type is evaluated as a void expression, its value or designator is discarded. (A void expression is evaluated for its side effects.)
So, to your question of why they started warning about it:
Because they became able to detect this invalid usage of void.
The C99 specification guarantees that char has sizeof(char) == 1 (6.5.3.4) which also states "The sizeof operator yields the size (in bytes) of its operand" - thus char can be used as a type to represent bytes.
Given that PNG images are also arranged in bytes, it then follows that you should use char (or related: char* or arrays-of) to represent arbitrary binary data that is arranged in bytes (e.g. PNG images).
Thus, I'd change your extern void _binary_foo_png_start; to extern char _binary_foo_png_start; and maybe add a typedef char byte; statement to a shared header file.
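As a hedged sketch of how those char-typed symbols might then be used with the calls from the question (assuming the usual SDL_RWFromMem(void *, int) and IMG_Load_RW(SDL_RWops *, int freesrc) signatures; the header path may vary by installation):

#include <SDL_image.h>   /* header name may differ depending on the SDL/SDL_image setup */

extern char _binary_foo_png_start;
extern char _binary_foo_png_end;

SDL_Surface *load_embedded_png(void)
{
    int size = (int)(&_binary_foo_png_end - &_binary_foo_png_start);    /* byte count of the blob */
    return IMG_Load_RW(SDL_RWFromMem(&_binary_foo_png_start, size), 1); /* 1: free the RWops when done */
}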
To elaborate a little: a "byte" is the smallest directly addressable unit in memory. A byte is not guaranteed to be 8 bits (an octet); it could be larger. However, if a byte were larger than 8 bits, you could expect that in data-interchange scenarios imported data would simply have "empty bits", rather than being re-packed along new bit-level boundaries (so 10 bytes of data from an 8-bit-byte computer would still occupy 10 bytes on a 10-bit-byte machine, but use 100 bits rather than 80).