Type during array initialization - C

If in my program, I have this:
int arr[some_number];
What is the type of some_number?
Integer?
Unsigned integer?
Automatically determined (long, unsigned long, etc.)?
This might be a hypothetical question (assuming I can allocate as much memory as needed at compile time); I am just curious to know whether the type of some_number is always int.
EDIT:
In case my language is not clear: on a system where sizeof(int) is 2 bytes and I define an array like int arr[65537], will 65537 overflow so that it is effectively int arr[-1]?

some_number must either be an actual positive integer, as in:
int arr[1024]
or it can be a macro which resolves to a positive integer:
#define some_number 1024
int arr[some_number]
Since the interpretation is done at compile time and no program variable is involved, there is no "type".
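
A small sketch of the compile-time forms this can take; the enum constant is my addition, not something from the answer above:

#define some_number 1024
enum { other_number = 512 };   /* an enum constant is also a true constant */

int arr1[some_number];         /* the macro expands to 1024 before compilation */
int arr2[other_number];
int arr3[4 * 256];             /* any integer constant expression works */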

By default in C, the type of a number is int. You can use the suffix u to make it an unsigned int, the suffix l to make it a long (and, since C99, the suffix ll to make it a long long, i.e. an integer of at least 64 bits).
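
A minimal sketch of those suffixes in action; the sizes printed are typical for a 64-bit platform, not guaranteed by the standard:

#include <stdio.h>

int main(void)
{
    printf("%zu\n", sizeof 10);   /* int: typically 4 bytes */
    printf("%zu\n", sizeof 10u);  /* unsigned int: typically 4 bytes */
    printf("%zu\n", sizeof 10l);  /* long: 4 or 8 bytes */
    printf("%zu\n", sizeof 10ll); /* long long: at least 64 bits */
    return 0;
}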

The type of an expression doesn't depend on the context, so the type of some_number (some_expression, actually) will be the same as if it were used outside the array definition. A separate question is what types of expressions are allowed to designate the array size.
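
To illustrate the point: the size expression keeps whatever type it would have in any other context, and several integer types are acceptable there:

int a[10];   /* 10 has type int */
int b[10u];  /* 10u has type unsigned int */
int c[10L];  /* 10L has type long */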

I think it is of type size_t, which can differ from one platform to another. It's unsigned, that's for sure.

some_number must be a true constant; it can't be a variable.
For example:
int main(void)
{
    int a = 1;
    int kk[a] = {1};
}
would result in an error saying
variable-sized object may not be initialized
However, we also cannot use
int const a;
as the array size, because a const variable is not considered a true constant in C.
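
A minimal sketch of that limitation, assuming a C99 compiler with VLA support:

#include <stdio.h>

int main(void)
{
    const int n = 5;
    int arr[n];              /* accepted, but only as a VLA: n is not a constant expression */
    /* static int brr[n]; */ /* error: an array with static storage needs a true constant */
    printf("%zu\n", sizeof arr);
    return 0;
}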

Supposing this is a definition inside a function, some_number can be any integral expression with a strictly positive value. If it is non-constant, the beast is called a variable-length array, VLA.
If you place the definition at file scope or make the array static, the size has to evaluate to something that is constant.
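
A short VLA sketch under those rules (C99; support became optional in C11):

#include <stdio.h>

int main(void)
{
    int n;
    if (scanf("%d", &n) != 1 || n <= 0)
        return 1;            /* a non-positive VLA size is undefined behavior */
    int vla[n];              /* size fixed at run time */
    for (int i = 0; i < n; i++)
        vla[i] = i;
    printf("%d\n", vla[n - 1]);
    return 0;
}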

Related

Can I store the value sizeof yields, which has type size_t, in an unsigned int object?

sizeof is a standard C operator.
sizeof yields the size (in bytes) of its operand, with type size_t. Quoting ISO/IEC 9899:2018 (C18), 6.5.3.4/5 (phrases surrounded by -- are my addition, for clarification of context):
The value of the result of both operators -- (sizeof and _Alignof) -- is implementation-defined, and its type (an unsigned integer type) is size_t, defined in <stddef.h> (and other headers).
Implicitly, if I want my program to be standard-conforming and want to use sizeof, I need to include one of the header files in which size_t is defined, because the value it yields is of type size_t and I want to store that value in an appropriate object.
Of course, in any program that is not a toy program I would need at least one of these headers anyway, but in a simple program I have to explicitly include them even though I do not otherwise need them.
Can I use an unsigned int object to store the size_t value sizeof yields without an explicit cast?
Like for example:
char a[19];
unsigned int b = sizeof(a);
I compiled that with gcc with the -Wall and -Werror option flags, but it did not have anything to complain about.
But is that standard-conform?
It is permissible, but it is your responsibility to ensure that there will not be an overflow when storing a value of type size_t in an object of type unsigned int. For unsigned integer types, overflow is well-defined behavior.
However, it is bad programming style to use types that were not designed to store values of a wider integer type. This can be a source of hidden bugs.
Usually the type size_t is an alias for the type unsigned long. On some 64-bit systems, unsigned long has the same size as unsigned long long, i.e. 8 bytes, instead of the 4 bytes an unsigned int typically occupies.
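
A quick check you can run to compare the two widths; the comments show what a typical 64-bit platform prints, which is an assumption, not a guarantee:

#include <stdio.h>
#include <stddef.h>

int main(void)
{
    printf("unsigned int: %zu bytes\n", sizeof(unsigned int)); /* typically 4 */
    printf("size_t:       %zu bytes\n", sizeof(size_t));       /* typically 8 on 64-bit */
    return 0;
}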
It is totally conformant, though if you have a very large object (typically 4GB or larger) its size may not fit into an unsigned int. Otherwise there is nothing to worry about.
Having said that, your question and this answer probably have more characters than you would save by not including an appropriate header in a lifetime worth of toy programs.
This is allowed. It is an implicit conversion ("as if by assignment"). See the section labelled "Integer conversions":
A value of any integer type can be implicitly converted to any other integer type. Except where covered by promotions and boolean conversions above, the rules are:
if the target type can represent the value, the value is unchanged
otherwise, if the target type is unsigned, the value 2^b, where b is the number of bits in the target type, is repeatedly subtracted from or added to the source value until the result fits in the target type. In other words, unsigned integers implement modulo arithmetic.
In other words, this is always defined behaviour, but if the size is too big to fit in an unsigned int, it will be truncated.
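
A sketch of that truncation, assuming a 64-bit size_t and a 32-bit unsigned int:

#include <stdio.h>
#include <stddef.h>

int main(void)
{
    size_t big = ((size_t)1 << 32) + 5; /* requires a 64-bit size_t */
    unsigned int truncated = big;       /* implicit conversion, reduced modulo 2^32 */
    printf("%u\n", truncated);          /* prints 5 */
    return 0;
}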
In principle it is OK. An unsigned int can handle almost any sizeof result, except for artificially constructed exotic things.
P.S. I have seen code similar to yours even in Linux kernel modules.

Why doesn't my attempt to define a 'natural' type with typedef work?

I tried to define a 'natural' type, like this:
typedef unsigned int nat;
But if I define a nat variable, that variable behaves like an ordinary int:
nat natural_index;
natural_index = 10; // That's what I want.
natural_index = -10; // Still a valid option.
In summary, I wanted to know why the compiler does not show a message like
"-10 is not an unsigned int", and what I could do to define a 'natural' type.
Extra information: I printed the variable natural_index with printf, and the value '-10' was printed. I expected at least some positive number (not exactly 10).
C doesn't support what you are trying to do, on two different levels.
First, a typedef in C does not create a new, distinct type; it just creates a shorthand name for the original type. Thus, after
typedef unsigned int nat;
the declaration
nat natural_index;
is 100% equivalent to
unsigned int natural_index;
(What's the point of typedef, then? It's most useful when the "underlying type" might differ based on the target architecture; for instance, the standard typedef uint64_t might be shorthand for unsigned long or unsigned long long depending on the architecture.)
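
A tiny demonstration of this first point: the alias and the original type are fully interchangeable:

#include <stdio.h>

typedef unsigned int nat;

int main(void)
{
    nat x = 5;
    unsigned int *p = &x; /* no cast, no warning: nat and unsigned int are one type */
    printf("%u\n", *p);
    return 0;
}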
Second, C has no mechanism for changing whether or not arithmetic expressions perform implicit conversions. In
natural_index = -10;
the assignment operator will convert the negative number -10 (with type int) to a large unsigned number (namely (UINT_MAX - 10) + 1, which is probably, but not necessarily, 4,294,967,286) in the process, and there is no way to disable that.
Your options are to work in a language that actually supports this kind of thing (e.g. Ada, Haskell, ML) or to write a "linting" program that parses C itself and enforces whatever rules you want it to enforce (existing examples are lint and sparse).
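
A sketch of that conversion in action; the exact value printed depends on UINT_MAX on your platform:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned int natural_index = -10; /* converted silently; no diagnostic required */
    printf("%u\n", natural_index);    /* prints UINT_MAX - 9, e.g. 4294967286 */
    printf("%u\n", UINT_MAX - 9);     /* same value */
    return 0;
}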
You forgot to separate the expression statements with semicolons (;):
natural_index = 10;
natural_index = -10;
-10 is converted to a large unsigned number during the evaluation of the assignment operator =.
6.5.16.1p2 (Simple assignment) from ISO9899 says:
In simple assignment (=), the value of the right operand is converted to the type of the assignment expression.
When you use the storage-class specifier typedef, you tell the parser to add an alias identifier nat for the type unsigned int to its environment (or to the parse tree it outputs). nat will then be evaluated to a type in declarations, so when you declare the object with identifier natural_index, the lvalue associated with this object will have the type unsigned int.

Why did the gcc compiler allow assigning a value without giving a data type?

Actually, I had mistakenly typed the statement below, but the compiler allowed me to execute it without throwing any error. My code is:
unsigned i=3;
Why did gcc allow assigning a value without giving a data type? Is that the way gcc works?
From the C11 standard, chapter §6.7.2 (Type specifiers), the list of type specifiers appears as:
...
— int, signed, or signed int
— unsigned, or unsigned int
...
and the "Semantics" says,
Each of the comma-separated multisets designates the same type,...
So, basically, unsigned and unsigned int refer to the same type and can be considered interchangeable.
The same logic applies to int, signed and signed int.
So, to answer your question,
Why did the gcc compiler allow assigning a value without giving a data type?
unsigned itself is a type specifier, which is the same as unsigned int. So, essentially, the data type is not missing here.
Declaring the variable unsigned is the same as declaring it unsigned int in C. See the Wikipedia article on C data types to learn more.
So, gcc treats it correctly and compiles fine.
As summarized by this page, signed, unsigned, short and long all implicitly declare an int, unless otherwise specified (e.g. long double).
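
A sketch of those equivalences; each line declares exactly the same type as its spelled-out form in the comment:

unsigned i = 3; /* same as: unsigned int i = 3; */
signed j = -3;  /* same as: signed int j = -3; i.e. plain int */
long k = 3;     /* same as: long int k = 3; */
short m = 3;    /* same as: short int m = 3; */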

Declaration of array size as a[N+1]

The code defines an array like this:
#define N 100
long int mid[N+1]
Is mid[N+1] here the same as mid[100+1], i.e. mid[101]?
Also, I want to know: can we declare an array of 2 elements as int n[1+1]?
Starting with the second question: yes, you can declare something like mid[2+1], because you are declaring an array whose size is a constant expression (evaluating to 3), not a variable.
That brings us to the first question. Yes, it's the same. At an early phase of the compilation, the preprocessor takes all the #define definitions in the code and expands them to the defined value or expression, so mid[N+1] turns literally into mid[100+1].
Note that the N here is a defined value and not a variable. You can't declare mid[N+1] if N is a variable (not until C99, I think).
#define N 100
long int mid[N+1]
That's perfectly valid (apart from the missing semicolon), and equivalent to
long int mid[101];
The length of an array can be any integer constant expression with a positive value. It doesn't have to be an integer constant (literal).
Similarly,
int n[1+1];
is equivalent to
int n[2];
(At block scope, an array defined without the static keyword can have a variable length, which can be specified by any integer expression. (If the value of the expression is not positive, the behavior is undefined.) Variable length arrays are not permitted in C90; they were introduced in C99, and support for them was made optional by C11, so not all compilers support them.)
You can declare an array like this; the array size should be positive.
#define N 100
long int mid[N+1]
If you want to use an array of length N+1, you can also allocate it with malloc, e.g.:
#include <stdlib.h>

#define N 100
long int *mid;

int main(void)
{
    mid = malloc((N + 1) * sizeof(long int));
    if (mid == NULL)
        return 1;
    /* use mid[0] through mid[N] */
    free(mid);
    return 0;
}
This gives you access to an array mid of 101 elements.

Do compilers implicitly convert int types?

If a value is set to an int, e.g. 2, does the compiler convert the int to the size it needs, e.g. int8_t or uint16_t, etc.?
Not in vanilla C, no. The compiler can't possibly know what you meant if you do not tell it.
If you write
int value = 2;
the type is, by default, signed int. What the compiler does then really depends on the platform, but it has to guarantee that int's size is not less than short int's and not greater than long int's.
For constants it may be true; often the reverse conversion, small to large, is also done: byte to int, for example.
It's somewhat dependent on the implementation and the optimization techniques used by the compiler, and on the data alignment requirements of the architecture/OS.
The compiler first looks at the context of the expression, learning what type it expects. The context can be, for example:
Left hand side of an assignment
Expected argument type provided by the function header or operator definition
It then evaluates the expression, inserting implicit type conversions as needed (type coercion). It performs:
Promotion
Truncation
Rounding
In situations where all the bits matter, you need to be extremely careful about what you write: types, operators, and order.
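
A sketch of why types and order matter, assuming a 32-bit int and the <stdint.h> fixed-width types:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    uint8_t a = 200, b = 100;
    uint8_t sum = a + b; /* a and b promote to int; the result 300 is truncated to 44 */
    printf("%u\n", (unsigned)sum);

    uint32_t x = 1000000;
    uint64_t wide = (uint64_t)x * x; /* widen first: the full 10^12 result */
    uint64_t late = x * x;           /* 32-bit multiply wraps before widening */
    printf("%" PRIu64 " vs %" PRIu64 "\n", wide, late);
    return 0;
}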
Integer literals are values of type int by default. When you assign an integer value to a short or char using the = operator, the int value will be converted to short or char. The compiler may detect this conversion and, as an optimization, convert the integer value at compile time.
short a = 50; //50 is an int; it will be implicitly converted to short. The compiler may convert 50 to short at compile time.
int b = 60;
short c = b; //the value of b will be converted to short and assigned to c.
short d = b + 70; //b + 70 is a sum of ints and the result will be an int that will be converted to short.
int8_t and uint16_t are fixed-width types standardized in C99's <stdint.h> (the exact-width types are optional, but present on virtually all implementations). They are typically defined as something like:
typedef signed char int8_t;
typedef unsigned short uint16_t;
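
A small sketch of the conversions these fixed-width types perform:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int8_t small = 2;   /* the int literal 2 is converted to int8_t */
    uint16_t u = 70000; /* 70000 does not fit; wraps modulo 65536 to 4464 */
    printf("%d %u\n", small, (unsigned)u);
    return 0;
}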

Resources