What are the default types of integer constants? - C

I read somewhere that floating-point literals like 1.2 are double by default, not float.
So what is the default type of an integer literal like 6: is it short, int, or long?

The type of integer literals given in base 10 is the first type in the following list in which their value can fit:
int
long int
long long int
For octal and hexadecimal literals, unsigned types will be considered as well, in the following order:
int
unsigned int
long int
unsigned long int
long long int
unsigned long long int
You can specify a u suffix to force unsigned types, an l suffix to force long or long long, or an ll suffix to force long long.
Reference: C99, 6.4.4.1p5
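A quick way to see which type a constant actually gets is C11's _Generic selection; the following is just a minimal sketch added for illustration (the results in the comments assume a typical platform with 32-bit int):

#include <stdio.h>

/* Maps an expression to the name of its type (requires C11 _Generic). */
#define TYPE_NAME(x) _Generic((x),                      \
    int: "int",                                         \
    unsigned int: "unsigned int",                       \
    long int: "long int",                               \
    unsigned long int: "unsigned long int",             \
    long long int: "long long int",                     \
    unsigned long long int: "unsigned long long int",   \
    default: "something else")

int main(void)
{
    printf("6          -> %s\n", TYPE_NAME(6));          /* int */
    printf("6u         -> %s\n", TYPE_NAME(6u));         /* unsigned int */
    printf("6l         -> %s\n", TYPE_NAME(6l));         /* long int */
    printf("6ll        -> %s\n", TYPE_NAME(6ll));        /* long long int */
    printf("4294967295 -> %s\n", TYPE_NAME(4294967295)); /* long int if long is 64-bit,
                                                            long long int if long is 32-bit:
                                                            unsigned types are never chosen
                                                            for unsuffixed decimal constants */
    printf("0xFFFFFFFF -> %s\n", TYPE_NAME(0xFFFFFFFF)); /* unsigned int with 32-bit int */
    return 0;
}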

In case someone is interested:
C11 §6.4.4.1/5:
The type of an integer constant is the first of the corresponding list in which its value can
be represented.
---------------------------------------------------------------------------
Suffix          Decimal Constant           Octal/Hexadecimal Constant
---------------------------------------------------------------------------
none            int                        int
                long int                   unsigned int
                long long int              long int
                                           unsigned long int
                                           long long int
                                           unsigned long long int
---------------------------------------------------------------------------
u or U          unsigned int               unsigned int
                unsigned long int          unsigned long int
                unsigned long long int     unsigned long long int
---------------------------------------------------------------------------
l or L          long int                   long int
                long long int              unsigned long int
                                           long long int
                                           unsigned long long int
---------------------------------------------------------------------------
Both u or U     unsigned long int          unsigned long int
and l or L      unsigned long long int     unsigned long long int
---------------------------------------------------------------------------
ll or LL        long long int              long long int
                                           unsigned long long int
---------------------------------------------------------------------------
Both u or U     unsigned long long int     unsigned long long int
and ll or LL
---------------------------------------------------------------------------
As for the prefix §6.4.4.1/3:
A decimal constant begins with a nonzero digit and consists of a sequence of decimal digits. An octal constant consists of the prefix 0 optionally followed by a sequence of the digits 0 through 7 only. A hexadecimal constant consists of the prefix 0x or 0X followed by a sequence of the decimal digits and the letters a (or A) through f (or F) with values 10 through 15 respectively.

There are three kinds of integer literals (or integer constants, in the standard's terminology): decimal, octal, and hexadecimal, and the rules differ slightly between them. For your specific example, 6 is an int. In general, a decimal constant without a suffix (u, U, l, L, ll, LL) gets whichever type can first represent its value, which is covered in the draft C99 standard, section 6.4.4.1 Integer constants, paragraph 5, which says:
The type of an integer constant is the first of the corresponding list in which its value can be represented.
so for a decimal literal without a suffix the types would be the first of:
int
long int
long long int
and for octal and hex the types would be the first of:
int
unsigned int
long int
unsigned long int
long long int
unsigned long long int

Related

macro constants with and without "u" at the end of a number

What is the difference in usage between
#define CONSTANT_1 (256u)
#define CONSTANT_2 (0XFFFFu)
and
#define CONSTANT_1 (256)
#define CONSTANT_2 (0XFFFF)
When do I really need to add u, and what problems do we get into if we don't?
I am most interested in example expressions where one usage can go wrong where the other does not.
The trailing u makes the constant have unsigned type. For the examples given, this is probably unnecessary and may have surprising consequences:
#include <stdio.h>

#define CONSTANT_1 (256u)

int main() {
    if (CONSTANT_1 > -1) {
        printf("expected this\n");
    } else {
        printf("but got this instead!\n");
    }
    return 0;
}
The reason for this surprising result is that the comparison is performed using unsigned arithmetic, -1 being implicitly converted to unsigned int with value UINT_MAX. Enabling extra warnings will save the day on modern compilers (-Wall -Wextra for gcc and clang, plus -Werror to make them fatal).
256u has type unsigned int whereas 256 has type int. The other example is more subtle: 0xFFFFu has type unsigned int, and 0xFFFF has type int, except on systems where int has only 16 bits, in which case it has type unsigned int.
Some industry standards such as MISRA-C mandate such constant typing, a counterproductive recommendation in my humble opinion.
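To address the "where can it go wrong" part for CONSTANT_2 as well, here is a small sketch of my own (the _PLAIN/_U macro names are mine, and it assumes 32-bit int):

#include <stdio.h>

#define CONSTANT_2_PLAIN (0XFFFF)   /* int: 65535 fits in a 32-bit int */
#define CONSTANT_2_U     (0XFFFFu)  /* unsigned int */

int main() {
    /* Plain version: an ordinary signed comparison, true as expected. */
    printf("%d\n", CONSTANT_2_PLAIN > -1);   /* prints 1 */

    /* Suffixed version: -1 is converted to UINT_MAX, so the test is false. */
    printf("%d\n", CONSTANT_2_U > -1);       /* prints 0 */
    return 0;
}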
The u indicates that the decimal constant is unsigned.
Without it, because the value fits in the range of a signed int, it will be taken as a signed one.
Quoting C11, chapter 6.4.4.1, Integer constants
The type of an integer constant is the first of the corresponding list in which
its value can be represented.
Suffix          Decimal Constant           Octal or Hexadecimal Constant
---------------------------------------------------------------------------
none            int                        int
                long int                   unsigned int
                long long int              long int
                                           unsigned long int
                                           long long int
                                           unsigned long long int
---------------------------------------------------------------------------
u or U          unsigned int               unsigned int
                unsigned long int          unsigned long int
                unsigned long long int     unsigned long long int

Wrong answer in C for long number calculations

long int k=(long int)(2000*2000*2000);
The above calculation is giving me the wrong answer in C. What is wrong?
If a C integer constant fits in an int, it is of type int. So your expression is evaluated as:
long int k = (long int)((int)2000*(int)2000*(int)2000);
If int isn't large enough to hold the result of the multiplication, you'll get a signed integer overflow and undefined behavior. So if long is large enough to hold the result, you should write:
long k = 2000L * 2000L * 2000L;
The L suffix forces the type of the literal to long (long is equivalent to long int).
But on many platforms even long is only a 32-bit type, so you have to use long long, which is guaranteed to be at least 64 bits wide:
long long k = 2000LL * 2000LL * 2000LL;
The LL suffix forces the type of the literal to long long.
2000 is of type int, so 2000*2000*2000 is also of type int.
Assuming a 32-bit int (which is actually more than the standard requires, since the standard only requires int to represent values up to 32767), the maximum representable value is 2,147,483,647 (commas inserted for readability), which is less than 8,000,000,000.
You will probably want to do the calculation as 2000LL*2000*2000, which takes advantage of multiplication associating left to right: each plain 2000 is converted to long long int before it is multiplied. Your variable will also need to be of type long long int if you want a guarantee of being able to store the result.
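A minimal sketch of that suggestion (mine, not from the original answer); only the leftmost constant carries the LL suffix, and left-to-right evaluation does the rest:

#include <stdio.h>

int main(void)
{
    /* 2000LL * 2000 converts the second operand to long long int, and the
       long long intermediate result then converts the third 2000 as well. */
    long long int k = 2000LL * 2000 * 2000;
    printf("%lld\n", k);   /* prints 8000000000 */
    return 0;
}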
Holt's answer is the correct one; I am just leaving this here as a caveat!
You could try to use:
long long int
instead of
long int
However, in my local machine, it has no effect:
#include <stdio.h>

int main(void)
{
    long int k = (long int)(2000*2000*2000);
    printf("With long int, I am getting: %ld\n", k);

    long long int n = 2000*2000*2000;
    printf("With long long int, I am getting: %lld\n", n);

    return 0;
}
Output:
With long int, I am getting: -589934592
With long long int, I am getting: -589934592
Warnings:
../main.c:6:36: warning: integer overflow in expression [-Woverflow]
long int k=(long int)(2000*2000*2000);
^
../main.c:9:32: warning: integer overflow in expression [-Woverflow]
long long int n = 2000*2000*2000;
Even this:
unsigned long long int n = 2000*2000*2000;
printf("With long long int, I am getting: %llu\n", n);
will overflow too, because the right-hand side is still computed entirely in int before the result is converted to unsigned long long.
There are two problems in your code:
long int is (on most architectures) not enough to store 8e9.
When you write 2000 * 2000 * 2000, the operations are performed using plain int, so int * int * int = int; you compute an (already overflowed) int and only then cast it to long int.
You need to use long long int and tell the compiler that the constants are long long int as well:
long long int k = 2000LL*2000LL*2000LL;
Notice the extra LL after 2000 saying "It's 2000, but as a long long int!".
You can't just multiply the values together as ordinary precision integers and then cast the result to a higher precision, because the result has already overflowed at that point. Instead, the operands need to be higher precision integers before they're multiplied. Try the following:
#include <stdio.h>

int main(void)
{
    long long int n = (long long int)2000*(long long int)2000*(long long int)2000;
    printf("long long int operands: %lld\n", n);
    return 0;
}
On my machine, this gives:
long long int operands: 8000000000

Cast implied unsigned integer type?

I'm new to C and have seen code such as (unsigned)b
Does that imply that b will be an unsigned int? Or what type does it imply?
b will be whatever type it was to begin with; that doesn't change.
(unsigned)b will evaluate as whatever value b is, but that is subject to the cast to unsigned, which is synonymous with unsigned int.
How that happens depends entirely on the type of b to begin with, whether that type is convertible to unsigned int, and whether the value contained therein falls unscathed between 0...UINT_MAX or not.
Ex:
#include <stdio.h>

void foo(unsigned int x)
{
    printf("%u\n", x);
}

int main()
{
    int b = 100;
    foo((unsigned)b);   // will print 100

    char c = 'a';
    foo((unsigned)c);   // will print 97 (assuming ASCII)

    short s = -10;      // will print 4294967286 for 32-bit int types
                        // (see below for why)
    foo((unsigned)s);

    // won't compile, not convertible
    //struct X { int val; } x;
    //foo((unsigned)x);
}
The only part of this that may raise your eyebrow is the third example. When a value of a type convertible to an unsigned type is outside the unsigned type's range (for example, negative values are never directly representable in any unsigned target type), the value is converted by repeatedly adding one more than the maximum value representable by the unsigned target to the out-of-range value until it falls within the valid range of the unsigned type.
In other words, because -10 is not within the valid representation of unsigned int, UINT_MAX+1 is added to -10 repeatedly (only takes once in this case) until the result is within 0...UINT_MAX.
Hope that helps.
unsigned is short for unsigned int
signed is short for signed int
long is short for long int
long long is short for long long int
short is short for short int
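If you want to convince yourself that these are genuinely the same types and not just lookalikes, here is a small C11 sketch (mine, for illustration):

#include <stdio.h>

int main(void)
{
    /* "unsigned" and "unsigned int" name the same type, so _Generic
       picks the unsigned int association for a variable declared unsigned. */
    unsigned x = 42;
    printf("%s\n", _Generic(x, unsigned int: "unsigned int", default: "other"));
    /* prints: unsigned int */
    return 0;
}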

Question about C datatype and constant

Greetings!
I was experimenting with the C language until I encountered something very strange.
I was not able to explain the result shown below.
The Code:
#include <stdio.h>

int main(void)
{
    int num = 4294967295U;
    printf("%u\n", num);
    return 0;
}
The Question:
1.) As you can see, I created an int, which can hold numbers between -2147483648 and 2147483647.
2.) When I assign the value 4294967295 to this variable, the IDE shows me a warning message during compilation because the value overflows.
3.) Out of curiosity I added a U (unsigned) suffix to the number, and when I recompiled, the compiler did not return any warning message.
4.) I did further experiments by changing the U (unsigned) to L (long) and LL (long long). As expected, the warning message still persists for these two, but not after I change it to UL (unsigned long) or ULL (unsigned long long).
5.) Why is this happening?
The Warning Message (for step 2):
warning #2073: Overflow in converting constant expression from 'long long int' to 'int'.
The Warning Message (for step 4, L & LL):
warning #2073: Overflow in converting constant expression from 'long long int' to 'long int'.
And lastly, thanks for reading my question; your teaching and advice are much appreciated.
As per the ISO C99 standard, section 6.4.4.1 (Integer Constants), subsection Semantics, the type of an integer constant is the first type of the following table where the value can be represented:
Suffix          Decimal Constant           Octal or Hexadecimal Constant
---------------------------------------------------------------------------
none            int                        int
                long int                   unsigned int
                long long int              long int
                                           unsigned long int
                                           long long int
                                           unsigned long long int
---------------------------------------------------------------------------
u or U          unsigned int               unsigned int
                unsigned long int          unsigned long int
                unsigned long long int     unsigned long long int
---------------------------------------------------------------------------
l or L          long int                   long int
                long long int              unsigned long int
                                           long long int
                                           unsigned long long int
---------------------------------------------------------------------------
both u or U     unsigned long int          unsigned long int
and l or L      unsigned long long int     unsigned long long int
---------------------------------------------------------------------------
ll or LL        long long int              long long int
                                           unsigned long long int
---------------------------------------------------------------------------
both u or U     unsigned long long int     unsigned long long int
and ll or LL
Particular implementations can have extended integer types that follow the same pattern as above.
By default, the compiler treats an unsuffixed decimal constant as signed. When you give it 4294967295, that number doesn't fit in a 4-byte int (or a 4-byte long), so the constant gets type long long int. Assigning it to an int then requires a lossy conversion (long long, AKA 8-byte, to int, AKA 4-byte), so the compiler gives you a warning.
However, when you type 4294967295U, it knows you want an unsigned integer. That number fits into a 4-byte unsigned integer, so the constant has type unsigned int and no width-narrowing conversion is necessary. (You're not losing any bits by converting from unsigned int to int, just reinterpreting them.)
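A short sketch (mine) of what actually ends up in the variable in each case, assuming 32-bit int and the usual two's-complement behaviour, neither of which the standard strictly requires for the conversion back to int:

#include <stdio.h>

int main(void)
{
    /* Decimal constant: does not fit in int, so it is a wider signed type
       (long long int on this compiler, per the warning text); narrowing it
       to int is what triggers the warning. */
    int a = 4294967295;

    /* Unsigned constant: type unsigned int, same width as int, so there is
       no narrowing; the conversion to int is implementation-defined and
       typically just reinterprets the bits. */
    int b = 4294967295U;

    printf("%d %d\n", a, b);   /* typically prints: -1 -1 */
    return 0;
}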

Unsigned hexadecimal constant in C?

Does C treat hexadecimal constants (e.g. 0x23FE) as signed or unsigned int?
The number itself is always interpreted as a non-negative number. Hexadecimal constants don't have a sign or any inherent way to express a negative number. The type of the constant is the first one of these that can represent its value:
int
unsigned int
long int
unsigned long int
long long int
unsigned long long int
It treats them as int literals (basically, as signed int!), as long as the value fits in an int. To write an unsigned literal just add u at the end:
0x23FEu
According to cppreference, the type of the hexadecimal literal is the first type in the following list in which the value can fit.
int
unsigned int
long int
unsigned long int
long long int (since C99)
unsigned long long int (since C99)
So it depends on how big your number is. If your number is no greater than INT_MAX, it is of type int. If it is greater than INT_MAX but no greater than UINT_MAX, it is of type unsigned int, and so forth.
Since 0x23FE is smaller than INT_MAX (which is 0x7FFF or greater), it is of type int.
If you want it to be unsigned, add a u at the end of the number: 0x23FEu.
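One more illustration (my own sketch, assuming 32-bit int): a hexadecimal constant is never negative by itself, and once it exceeds INT_MAX it quietly becomes unsigned, which can make negation behave unexpectedly:

#include <stdio.h>

int main(void)
{
    /* 0x23FE fits in int, so it is signed and negation works as expected. */
    printf("%d\n", -0x23FE > 0);        /* prints 0 */

    /* 0x80000000 does not fit in a 32-bit int, so it has type unsigned int;
       the minus sign is applied afterwards and wraps around instead of
       producing INT_MIN. */
    printf("%d\n", -0x80000000 > 0);    /* prints 1, perhaps surprisingly */
    return 0;
}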
