I'm quite often confused when coming back to C by the inability to create an array using the following initialisation pattern...
const int SOME_ARRAY_SIZE = 6;
const int myArray[SOME_ARRAY_SIZE];
My understanding of the problem is that the const qualifier does not guarantee const-ness but merely asserts that the value of SOME_ARRAY_SIZE will not change at runtime. But why can the compiler not assume that the value is constant at compile time? It says 6 right there in the source code...
I think I'm missing something core in my fundamental understanding of C. Somebody help me out here. :)
[UPDATE] After reading a bit more around C99 and variable length arrays I think I understand this a bit better. What I was trying to create was a variable length array - const does not create a compile-time constant but rather a runtime constant. Therefore I was declaring a variable length array, which is only valid in C99 at function/block scope. A variable length array at file scope is impossible, as the compiler cannot assign a fixed memory address to an unbounded array.[/UPDATE]
Well, in C++ the semantics are a bit different. In C++ your code would work fine. You must distinguish between two things: const and constant expressions. const simply means, as you described, that the value is read-only. A constant expression, on the other hand, means the value is known at compile time and is a compile-time constant. The semantics of const in C are always of the first kind. Constant expressions in C are built from literals (and enumeration constants), never from const objects; that's why #define is used for this kind of thing.
In C++ however, any const object initialized with a constant expression is in itself a constant expression.
I don't know exactly WHY this is so in C; it's just the way it is.
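For illustration, a minimal sketch (using the names from the question) of the two idioms that do give you a compile-time constant in C:
#define SOME_ARRAY_SIZE 6                /* the preprocessor substitutes 6 before compilation */
enum { OTHER_ARRAY_SIZE = 6 };           /* an enumeration constant is an integer constant expression */

const int myArray[SOME_ARRAY_SIZE];      /* fine, even at file scope */
const int otherArray[OTHER_ARRAY_SIZE];  /* also fine */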
The problem is that the language syntax demands an integer constant expression between the [ ]. SOME_ARRAY_SIZE is still a variable (even if you told the compiler nobody is allowed to vary it!).
The const keyword is basically a read-only indication. It does not really guarantee that the underlying value cannot change through other means, even though that is the case in your example.
When it comes to pointers, this is more clear:
void foo(int const * p)
{
if (*p == 100)
{
bar();
/* Here, the compiler can not assume that *p is 100 */
}
}
In this case, a compiler should not accept the code in your example, as it requires the array size to be constant. If it did accept it, the user could later run into trouble when porting the code to a stricter compiler.
You can do this in C99, and some compilers prior to C99 also had support for this as an extension to C89 (e.g. gcc). If you're stuck with an old compiler that doesn't have C99 support though (e.g. MSVC), then you'll have to do it the old-school way and use a #define for the array size.
Note that the above comments apply only to such declarations at local scope (i.e. automatic variables). C99 still doesn't allow such declarations at global scope.
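A minimal sketch of what that looks like in practice, assuming a C99 compiler (for example gcc -std=c99):
#include <stdio.h>

#define FILE_SCOPE_SIZE 6              /* file scope still needs a real constant */
static int fixed[FILE_SCOPE_SIZE];

static void demo(int n)
{
    const int local_size = n;          /* a run-time value is fine here */
    int vla[local_size];               /* C99 variable length array, block scope only */
    printf("fixed: %zu bytes, vla: %zu bytes\n", sizeof fixed, sizeof vla);
}

int main(void)
{
    demo(6);
    return 0;
}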
I just did a very quick test in Xcode with an Objective-C file I currently had open on my machine and put this in the .m file:
const int arrs = 6;
const int arr[arrs];
This compiles without any issues.
Related
I am new to programming and, while learning about dynamic typing in Python, a doubt about "static typing" arose. I tried out this code (assigning a string to an integer variable which was previously declared) and printed the variable with printf(var_name), and it gives output; can anyone explain this concept?
#include<stdio.h>
#include<conio.h>
void main()
{
int i = 20 ;
i = "hello";
printf(i);
}
Aside from the fact that your question might be a duplicate, let me add something missing from the otherwise read-worthy answer https://stackoverflow.com/a/430414/3537677
C is strongly/statically typed but weakly checked
This is one of the biggest core language features that sets C apart from other languages like C++ (which people often mistake for simply "C with classes").
Meaning that although C has a strong type system in the sense that it needs and uses types to know the sizes of objects at compile time, the C language does not use its type system to check for this kind of misuse. So compilers are not mandated to reject such code, because it is accepted C code; modern compilers will issue a warning, though.
C compilers only enforce "their type system" for the mentioned size management. Meaning, if you just type int i = 42;, this variable has so-called automatic storage duration, or what many people more or less correctly call "the stack". It means the compiler will take care of getting space for the variable and cleaning it up. If it cannot know the size of something but needs it, then it will indeed generate an error. But this can be circumvented by doing things at run-time and by using types that carry no type information whatsoever, i.e. pointers and void* aka void pointers.
Regarding your code
Your code seems to be written for an old, non-standard C compiler, judging by the #include<conio.h> and the void-returning main. With a few modifications one can compile your code, but by calling printf with an illegal format string you are causing so-called undefined behaviour (UB), meaning it might work on your machine but crash on mine.
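For comparison, here is a sketch of the snippet with such modifications applied (standard header, int-returning main, a pointer for the string, and a format string that matches the arguments); this is what the compiler accepts without complaint once the types line up:
#include <stdio.h>

int main(void)
{
    int i = 20;
    const char *s = "hello";   /* a string needs a pointer type, not an int */
    printf("%d %s\n", i, s);   /* format string matches the argument types */
    return 0;
}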
Is there a way to check if a variable has been initialized or not in C?
Consider the following example,
int main(){
int a = 3, b = 7, c;
if ( a > b )
c = a-b;
// Now, how can I check if the variable "c" has some value or not
// I don't want check like this,
// if ( isalpha(c) ) or if ( isdigit(c) )
// or anything similar to like that
}
In other words, does C have some function like defined in Perl? In Perl, I can simply do if (defined c), which checks whether the variable is defined or not, and it would return false for the above example. How can I achieve the same in C?
C does not have this ability. You have two main options:
A sentinel value
For example, if you know that the value of c will never be negative, then initialize it to -1 and test for that.
Add another variable
Create another variable bool we_set_c_to_something = false; and then set it to true when you write to c.
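A minimal sketch of both options applied to the example from the question:
#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    int a = 3, b = 7;

    /* Option 1: sentinel value (works only if -1 can never be a real result) */
    int c = -1;

    /* Option 2: a separate flag that records whether c was written */
    bool c_is_set = false;

    if (a > b) {
        c = a - b;
        c_is_set = true;
    }

    if (c_is_set)
        printf("c = %d\n", c);
    else
        printf("c was never set (sentinel is still %d)\n", c);

    return 0;
}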
C is a compiled language which doesn't support runtime variable binding, while Perl is an interpreted language which supports dynamic typing. So you can check whether a variable is defined in Perl, but not in C.
When you declare a variable in C with int c;, this variable c is defined but without initialization. The declaration and definition are in one statement.
Whether a variable is defined in C is not something the code writer has to check. The compiler does it for you. When you compile and link your C code, the compiler checks all variables' definitions. An error is raised and the compiling or linking process stops if undefined variables are found in your code.
Hope this helps you see the difference.
Wrong question. You're not asking whether the variable is defined. If the variable is not defined then compilation fails. Look up the difference between "declaration" and "definition". In the case of those local variables, you have defined the variable c.
What you're looking for is initialisation. Many compilers will warn you about using variables before they're initialised, but if you persist in running that code then the assumption is that you know better than the compiler. And at that point it's your problem. :) Some languages (e.g. Perl) have an extra flag that travels along with a variable to say whether it's been initialised or not, and they hide from you that there's this extra flag hanging around which you may or may not need. If you want this in C, you need to code it yourself.
Since C++ allows operator overloading, it's relatively easy to implement this in C++. Boost provides an "optional" template which does it, or you could roll your own if you want a coding exercise. C doesn't have the concept of operator overloading though (hell, the concept didn't really exist, and the compilers of the day probably couldn't have supported it anyway), so you get what you get.
Perl is a special case because it rolls the two together, but C doesn't. It's entirely possible in C to have variables which are defined but not initialised. Indeed there are a lot of cases where we want that to be the case, particularly when you start doing low-level access to memory for drivers and stuff like that.
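If you want that Perl-style flag in C you have to code it yourself, for example by bundling the flag with the value; a rough sketch (the struct and helper names are made up for illustration):
#include <stdbool.h>

/* A hand-rolled "optional int": the defined flag travels with the value */
struct maybe_int {
    bool defined;
    int  value;
};

static struct maybe_int maybe_none(void)  { return (struct maybe_int){ false, 0 }; }
static struct maybe_int maybe_some(int v) { return (struct maybe_int){ true,  v }; }

/* usage:
   struct maybe_int c = maybe_none();
   if (a > b) c = maybe_some(a - b);
   if (c.defined) ... use c.value ... */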
Why is it allowed to change a const variable using a pointer to it with memcpy?
This code:
const int i=5;
int j = 0;
memcpy(&j, &i, sizeof(int));
printf("Source: i = %d, dest: j = %d\n", i,j);
j = 100;
memcpy(&i, &j, sizeof(int));
printf("Source: j = %d, dest: i = %d\n", j,i);
return 0;
compiled with just a warning:
warning: passing argument 1 of ‘memcpy’ discards ‘const’ qualifier
from pointer target type [enabled by default]
But it ran just fine, and changed the value of a const variable.
An attempt to modify the value of a const-qualified variable leads to undefined behavior in C. You should not rely on your results, since anything can happen.
C11 (n1570), § 6.7.3 Type qualifiers
If an attempt is made to modify an object defined with a const-qualified type through use
of an lvalue with non-const-qualified type, the behavior is undefined.
Nothing forces the compiler to produce a diagnostic message.
In fact, this qualifier does not have an enormous effect on the machine code. A const-qualified variable does not necessarily reside in a read-only data segment (obviously not in your implementation, although it could be different on another one).
The compiler can't tell easily what a pointer is pointing to in a given function. It is possible with some static analysis tools, which perform pointer-analysis. However, it is difficult to implement, and it would be stupid to put it in the standard.
The question asks why. Here's why:
This is allowed because once you have a pointer to a memory address, the language does not know what it points to. It could be a variable, part of a struct, the heap or the stack, or anything. So it cannot prevent you from writing to it. Direct memory access is always unsafe and to be avoided if there's another way of doing it.
The const stops you modifying the value of a const with an assignment (or increment, etc.). This kind of mutation is the only operation it can guarantee you won't be able to perform on a const.
Another way to look at this is the division between the static context (i.e. compile time) and the runtime context. When you compile a piece of code which may, for example, make an assignment to a variable, the language can say "that's not allowed, it's const", and that is a compilation error. After this, the code is compiled into an executable and the fact that it is a const is lost. Variable declarations (and the rest of the language) are written as input to the compiler. Once it is compiled, the source code isn't relevant. You can do a logical proof in your compiler to say that consts aren't changed. The compiled program runs, and we know at compile time that we have created a program that doesn't break the rules.
When you introduce pointers, you have behaviour that can be defined at run-time. The code that you wrote is now irrelevant, and you can [attempt to] do what you want. The fact that pointers are typed (allowing pointer arithmetic, interpreting the memory at the end of a pointer as a particular type) means that the language gives you some help, but it can't prevent you from doing anything. It can make no guarantees, as you can point a pointer anywhere. The compiler can't stop you breaking the rules at run-time with code that uses pointers.
That said, pointers are the way we get dynamic behaviour and data structures, and are necessary for all but the most trivial code.
(The above is subject to lots of caveats, i.e. code heuristics, more sophisticated static analysis, but is broadly true of a vanilla compiler.)
The reason why is that the C language allows any object pointer type to be implicitly converted to/from the type void*. It is designed that way because void pointers are used for generic programming.
So a C compiler is not allowed to stop your code from compiling, even though the program invokes undefined behavior in this case. A good compiler will however give a warning as soon as you implicitly try to cast away a const qualifier.
C++ has "stronger typing" than C, meaning that it would require an explicit cast of the pointer type for this code to compile. This is one flaw of the C language that C++ actually fixed.
While 'officially' it's undefined, in practice on a typical implementation it is quite predictable - you will change the value of the const variable. Which raises the question of why it's const to begin with.
This question already has answers here: "static const" vs "#define" vs "enum" (17 answers)
In many programs a #define serves the same purpose as a constant. For example.
#define FIELD_WIDTH 10
const int fieldWidth = 10;
I commonly see the first form preferred over the other, relying on the pre-processor to handle what is basically an application decision. Is there a reason for this tradition?
There is a very solid reason for this: const in C does not mean something is constant. It just means a variable is read-only.
In places where the compiler requires a true constant (such as array sizes for non-VLA arrays), using a const variable such as fieldWidth is just not possible.
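A short sketch of what that means in practice, using the names from the question:
#define FIELD_WIDTH 10
const int fieldWidth = 10;

char a[FIELD_WIDTH];        /* fine: FIELD_WIDTH expands to the constant 10 */
/* char b[fieldWidth]; */   /* error at file scope: fieldWidth is not a constant expression */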
They're different.
const is just a qualifier, which says that a variable cannot be changed at runtime. But all other features of the variable persist: it has allocated storage, and this storage may be addressed. So code does not just treat it as a literal, but refers to the variable by accessing the specified memory location (except if it is static const, then it can be optimized away), and loading its value at runtime. And as a const variable has allocated storage, if you add it to a header and include it in several C sources, you'll get a "multiple symbol definition" linkage error unless you mark it as extern. And in this case the compiler can't optimize code against its actual value (unless global optimization is on).
#define simply substitutes a name with its value. Furthermore, a #define'd constant may be used in the preprocessor: you can use it with #ifdef to do conditional compilation based on its value, or use the stringizing operator # to get a string with its value. And as the compiler knows its value at compile time it may optimize code based on that value.
For example:
#define SCALE 1
...
scaled_x = x * SCALE;
When SCALE is defined as 1 the compiler can eliminate the multiplication as it knows that x * 1 == x, but if SCALE is an (extern) const, it will need to generate code to fetch the value and perform the multiplication because the value will not be known until the linking stage. (extern is needed to use the constant from several source files.)
A closer equivalent to using #define is using enumerations:
enum dummy_enum {
constant_value = 10010
};
But this is restricted to integer values and doesn't have advantages of #define, so it is not widely used.
const is useful when you need to import a constant value from some library where it was compiled in. Or if it is used with pointers. Or if it is an array of constant values accessed through a variable index value. Otherwise, const has no advantages over #define.
The reason is that most of the time, you want a constant, not a const-qualified variable. The two are not remotely the same in the C language. For example, variables are not valid as part of initializers for static-storage-duration objects, nor as non-VLA array dimensions (for example the size of an array in a structure, or of any array pre-C99).
Expanding on R's answer a little bit: fieldWidth is not a constant expression; it's a const-qualified variable. Its value is not established until run-time, so it cannot be used where a compile-time constant expression is required (such as in an array declaration, or a case label in a switch statement, etc.).
Compare with the macro FIELD_WIDTH, which after preprocessing expands to the constant expression 10; this value is known at compile time, so it can be used for array dimensions, case labels, etc.
To add to R.'s and Bart's answers: there is only one way to define symbolic compile-time constants in C: enumeration type constants. The standard requires that these are of type int. I personally would write your example as
enum { fieldWidth = 10 };
But I guess that taste differs much among C programmers about that.
Although a const int will not always be appropriate, an enum will usually work as a substitute for the #define if you are defining something to be an integral value. This is actually my preference in such a case.
enum { FIELD_WIDTH = 16384 };
char buf[FIELD_WIDTH];
In C++ this is a huge advantage as you can scope your enum in a class or namespace, whereas you cannot scope a #define.
In C you don't have namespaces and cannot scope an enum inside a struct, and I'm not even sure you get the type safety, so I cannot actually see any major advantage, although maybe some C programmer will point it out to me.
According to K&R (2nd edition, page 211), the "const and volatile properties are new with the ANSI standard". This may imply that really old pre-ANSI code did not have these keywords at all and it really is just a matter of tradition.
Moreover, it says that a compiler should detect attempts to change const variables but, other than that, it may ignore these qualifiers. I think this means that some compilers may not optimize a const variable into an immediate value in the machine code (as #define allows), and this might cost additional time for memory accesses and affect performance.
Some C compilers will store all const variables in the binary, which, if you are preparing a large list of coefficients, can use up a tremendous amount of space in the embedded world.
Conversely: using const allows flashing over an existing program to alter specific parameters.
The best way to define numeric constants in C is using enum. Read the corresponding chapter of K&R's The C Programming Language, page 39.
In C, shall I prefer constants over defines? I've been reading a lot of code lately, and all of the examples make heavy use of defines.
No, in general you should not use const-qualified objects in C to create named constants. In order to create a named constant in C you should use either macros (#define) or enums. In fact, the C language has no constants in the sense that you seem to imply. (C is significantly different from C++ in this regard.)
In the C language the notions of constant and constant expression are defined very differently from C++. In C, constant means a literal value, like 123. Here are some examples of constants in C:
123
34.58
'x'
Constants in C can be used to build constant expressions. However, since const-qualified objects of any type are not constants in C, they cannot be used in constant expressions, and, consequently, you cannot use const-qualified objects where constant expressions are required.
For example, the following is not a constant
const int C = 123; /* C is not a constant!!! */
and since the above C is not a constant, it cannot be used to declare an array type in file scope
typedef int TArray[C]; /* ERROR: constant expression required */
It cannot be used as a case label
switch (i) {
case C: ; /* ERROR: constant expression required */
}
It cannot be used as bit-field width
struct S {
int f : C; /* ERROR: constant expression required */
};
It cannot be used as an initializer for an object with static storage duration
static int i = C; /* ERROR: constant expression required */
It cannot be used as an enum initializer
enum {
E = C /* ERROR: constant expression required */
};
i.e. it cannot be used anywhere a constant is required.
This might seem counter-intuitive, but this is how the C language is defined.
This is why you see these numerous #define-s in the code you are working with. Again, in the C language const-qualified objects have very limited use. They are basically completely useless as "constants", which is why in C you are basically forced to use #define or enums to declare true constants.
Of course, in situations when a const-qualified object works for you, i.e. it does what you want it to do, it is indeed superior to macros in many ways, since it is scoped and typed. You should probably prefer such objects where applicable, however in general case you'll have to take into account the above limitations.
Constants should be preferred over defines. There are several advantages:
Type safety. While C is a weakly typed language, using a define loses all of the type safety that would allow the compiler to pick up problems for you.
Ease of debugging. You can change the value of constants through the debugger, while defines are automatically changed in the code by the pre-processor to the actual value, meaning that if you want to change the value for test/debugging purposes, you need to re-compile.
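A small sketch of the difference both points rest on (the names here are illustrative): a const is a typed object with an address that a debugger can watch, while a #define vanishes into the surrounding text before the compiler ever sees it.
#include <stdio.h>

#define FIELD_WIDTH_MACRO 10              /* pure text: no type, no address, invisible to a debugger */
static const int field_width_const = 10;  /* a typed int object with an address */

int main(void)
{
    /* &field_width_const exists, so a debugger can watch it (and often change it);
       there is nothing to take the address of for the macro */
    printf("const lives at %p and holds %d\n",
           (const void *)&field_width_const, field_width_const);
    printf("macro expands to %d\n", FIELD_WIDTH_MACRO);
    return 0;
}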
Maybe I have been using them wrong but, at least in gcc, you can't use constants in case statements.
const int A=12;
switch (argc) {
case A:
break;
}
Though this question is specific to C, I guess it is good to know this:
#include<stdio.h>
int main() {
const int CON = 123;
int* A = &CON;
(*A)++;
printf("%d\n", CON); // 124 in C
}
works in C, but not in C++
One of the reasons to use #define is to avoid such things messing up your code, especially if it is a mix of C and C++.
A lot of people here are giving you "C++ style" advice. Some even say the C++ arguments apply to C. That may be a fair point. (Whether it is or not feels kind of subjective.) The people who say const sometimes means something different in the two languages are also correct.
But these are mostly minor points and personally, I think in truth there is relatively minor consequence to going either way. It's a matter of style, and I think different groups of people will give you different answers.
In terms of common usage, historical usage, and most common style, in C, it's a lot more typical to see #define. Using C++isms in C code can come off as strange to a certain narrow segment of C coders. (Including me, so that's where my biases lie.)
But I'm surprised no one has suggested a middle ground solution, that "feels right" in both languages: if it fits into a group of integer constants, use an enum.
#define can be used for many purposes (it is very loose) and should be avoided if you can substitute it with const, which defines a variable you can do a lot more with.
In cases like the ones below, #define has to be used:
conditional compilation directives
textual substitution into your source lines
code macros
An example where you have to use #define over const is when you have a version number, say 3, and you want version 4 to include some code that is not available in version 3:
#define VERSION 4
...
#if VERSION==4
................
#endif
Defines have been part of the language longer than constants, so a lot of older code will use them, because defines were the only way to get the job done when the code was written. For more recent code it may simply be a matter of programmer habit.
Constants have a type as well as a value, so they would be preferred when it makes sense for your value to have a type, but not when it is typeless (or polymorphic).
If it's something that isn't determined programmatically, I use #define. For example, if I want all of my UI objects to have the same space between them, I might use #define kGUISpace 20.
Apart from the excellent reasons given by AndreyT for using DEFINEs rather than constants in "C" code, there is another, more pragmatic reason for using DEFINEs.
DEFINEs are easy to define and use in (.h) header files, which is where any experienced C coder would expect to find constants defined. Defining consts in header files is not quite so easy -- it's more code to avoid duplicate definitions, etc.
Also, the "typesafe" arguments are mostly moot: most compilers will pick up glaring errors such as assigning a string to an int, or will "do the right thing" on a slight mismatch such as assigning an integer to a float.
Macros (defines) can be used by the pre-processor and at compile time, constants cannot.
You can do compile-time checks to make sure a macro is within a valid range (and #error or #fatal if it isn't). You can use default values for a macro if it hasn't already been defined. You can use a macro in the size of an array.
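A small sketch of those preprocessor-only tricks (the macro name is illustrative):
/* provide a default if the build system did not define it */
#ifndef BUFFER_SIZE
#define BUFFER_SIZE 128
#endif

/* compile-time range check: none of this is possible with a const int */
#if BUFFER_SIZE < 16 || BUFFER_SIZE > 4096
#error "BUFFER_SIZE must be between 16 and 4096"
#endif

static char buffer[BUFFER_SIZE];     /* and it can size an array anywhere */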
A compiler can optimize with macros better than it can with constants:
const int SIZE_A = 15;
#define SIZE_B 15
for (i = 0; i < SIZE_A + 1; ++i); // if not optimized may load A and add 1 on each pass
for (i = 0; i < SIZE_B + 1; ++i); // compiler will replace "SIZE_B + 1" with 16
Most of my work is with embedded processors that don't have amazing optimizing compilers. Maybe gcc will treat SIZE_A like a macro at some optimization level.