I just wanted to know: why do we need uint64_t, which on my system is just a typedef of unsigned long, when unsigned long is available anyway? Is it only to make the name shorter, or is there another reason?
The reason should be pretty obvious: it's a guarantee that the type is exactly 64 bits wide. There is no such guarantee for unsigned long.
The fact that on your system it's a typedef of unsigned long just means that on your system unsigned long happens to be 64 bits. That's not true in general, but the typedef can be varied (by the compiler implementors) so that uint64_t keeps its guarantee everywhere.
It's not always a typedef for unsigned long, because unsigned long is not universally 64 bits wide. For example, on x64 Windows, unsigned long is 32 bits; only unsigned long long is 64 bits.
Because uint64_t means a type of exactly 64 bits, whereas unsigned long can be 32 bits or wider.
On a system where unsigned long is 32 bits, it would be used as the typedef for uint32_t, and unsigned long long would be used as the typedef for uint64_t.
Yes, it does make the name shorter, but that's not the only reason to use it. The main motivation for using these typedef'd data types is to make code more robust and portable across platforms.
Sometimes unsigned long may not be 64 bits, but a definition of uint64_t will always guarantee exactly 64 bits. The typedef behind uint64_t can [read as: will] be varied across platforms so that the type is always 64 bits wide.
Example:
on systems where long is 32 bits, uint64_t will be a typedef for unsigned long long
on systems where long is 64 bits, uint64_t will be a typedef for unsigned long
It's not only to make the name shorter, but also to make it more descriptive.
When you see uint64_t in code, you can immediately read it as an unsigned integer type of exactly 64 bits.
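As a minimal sketch of why this matters in practice (the 1ULL << 40 value is just an arbitrary illustration), printing a uint64_t portably requires the PRIu64 macro from <inttypes.h>, precisely because the underlying type differs between platforms:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint64_t big = 1ULL << 40;             /* always fits: uint64_t is exactly 64 bits */
    unsigned long ul = (unsigned long)big; /* silently truncates to 0 where unsigned long is 32 bits */

    printf("%" PRIu64 "\n", big);          /* PRIu64 expands to the correct specifier per platform */
    printf("%lu\n", ul);
    return 0;
}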
Related
What's the difference between long long and long? And neither of them works with 12-digit numbers (600851475143); am I forgetting something?
#include <iostream>
using namespace std;
int main(){
long long a = 600851475143;
}
Going by the standard, all that's guaranteed is:
int must be at least 16 bits
long must be at least 32 bits
long long must be at least 64 bits
On major 32-bit platforms:
int is 32 bits
long is 32 bits as well
long long is 64 bits
On major 64-bit platforms:
int is 32 bits
long is either 32 or 64 bits
long long is 64 bits as well
If you need a specific integer size for a particular application, rather than trusting the compiler to pick the size you want, #include <stdint.h> (or <cstdint>) so you can use these types (a quick size-check sketch follows the lists below):
int8_t and uint8_t
int16_t and uint16_t
int32_t and uint32_t
int64_t and uint64_t
You may also be interested in #include <stddef.h> (or <cstddef>):
size_t
ptrdiff_t
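As referenced above, here is a quick size check (a sketch; the output varies per platform and compiler):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

int main(void)
{
    /* %zu is the C99 format specifier for size_t */
    printf("int32_t  : %zu bytes\n", sizeof(int32_t));
    printf("int64_t  : %zu bytes\n", sizeof(int64_t));
    printf("size_t   : %zu bytes\n", sizeof(size_t));
    printf("ptrdiff_t: %zu bytes\n", sizeof(ptrdiff_t));
    return 0;
}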
long long does not exist in C++98/C++03, but it does exist in C99 and in C++0x (now C++11).
long is guaranteed at least 32 bits.
long long is guaranteed at least 64 bits.
To elaborate on @ildjarn's comment:
And they both don't work with 12 digit numbers (600851475143), am I forgetting something?
The compiler determines the type of the literal 600851475143 without considering the variable you're assigning it to or initializing with it. Written without a suffix, it is treated as an int-family literal, and it won't fit in an int.
Use 600851475143LL to get a literal of type long long.
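Putting that together, a minimal sketch (printing with %lld, the matching specifier for long long in C99):

#include <stdio.h>

int main(void)
{
    long long a = 600851475143LL; /* LL suffix gives the literal type long long */
    printf("%lld\n", a);
    return 0;
}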
Your C++ compiler supports long long; it is guaranteed to be at least 64 bits by the C99 standard (that's a C standard, not a C++ standard). See the limits header shipped with Visual C++ to get the ranges on your system.
Recommendation
For new programs, it is recommended that one use only bool, char, int, and double, until a circumstance arises where one of the other types is needed.
http://www.somacon.com/p111.php
Depends on your compiler. long long is at least 64 bits and should handle 12 digits. It looks like in your case the literal is being treated as a long, and hence can't hold 12 digits.
The standard gives the following minimum bit widths for standard unsigned types:
unsigned char >= 8
unsigned short >= 16
unsigned int >= 16
unsigned long >= 32
unsigned long long >= 64
(Implicitly, by specifying minimum maximal values).
Does that imply the following equivalencies?
unsigned char == uint_fast8_t
unsigned short == uint_fast16_t
unsigned int == uint_fast16_t
unsigned long == uint_fast32_t
unsigned long long == uint_fast64_t
No, because the sizes of the default "primitive" data types are picked by the compiler to be convenient for the given system. Convenient means easy to work with for various reasons: integer range, the sizes of the other integer types, backwards compatibility, and so on. They are not necessarily picked to be the fastest possible.
For example, in practice unsigned int is 32 bits on a 32-bit system, but also 32 bits on a 64-bit system. On the 64-bit system, however, uint_fast32_t could be 64 bits.
Just look at the most commonly used sizes in practice for unsigned char versus uint_fast8_t:

Data bus   unsigned char   uint_fast8_t
8 bit      8 bit           8 bit
16 bit     8 bit           16 bit
32 bit     8 bit           32 bit
64 bit     8 bit           32/64 bit
This is so because, for convenience, we need a byte type to work with. Yet an optimizing compiler may very well place this unsigned char at an aligned address and read it with a 32-bit access, so it can optimize regardless of the declared size of the type.
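A quick way to see the difference on your own system (a sketch; the numbers vary per platform and C library):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* On x86-64 glibc, for example, uint_fast8_t stays 1 byte while
       uint_fast16_t and uint_fast32_t widen to 8 bytes (see the glibc
       typedefs quoted in the next answer). */
    printf("unsigned char : %zu\n", sizeof(unsigned char));
    printf("uint_fast8_t  : %zu\n", sizeof(uint_fast8_t));
    printf("unsigned int  : %zu\n", sizeof(unsigned int));
    printf("uint_fast32_t : %zu\n", sizeof(uint_fast32_t));
    return 0;
}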
I don't know if this is an answer to the question, but (at least in glibc) int_fastN_t and uint_fastN_t are typedef'd depending on the word size:
/* Fast types. */
/* Signed. */
typedef signed char int_fast8_t;
#if __WORDSIZE == 64
typedef long int int_fast16_t;
typedef long int int_fast32_t;
typedef long int int_fast64_t;
#else
typedef int int_fast16_t;
typedef int int_fast32_t;
__extension__
typedef long long int int_fast64_t;
#endif
/* Unsigned. */
typedef unsigned char uint_fast8_t;
#if __WORDSIZE == 64
typedef unsigned long int uint_fast16_t;
typedef unsigned long int uint_fast32_t;
typedef unsigned long int uint_fast64_t;
#else
typedef unsigned int uint_fast16_t;
typedef unsigned int uint_fast32_t;
__extension__
typedef unsigned long long int uint_fast64_t;
#endif
Basically you're right, but not de jure.
int is meant to be the "natural" integer type for the system. In practice it is usually 32 bits on both 32-bit and 64-bit systems, and 16 bits on small systems, because 64-bit ints would break too many interfaces. So int is essentially int_fast32_t.
However, that's not guaranteed, and the smaller 8- and 16-bit types might well have faster 32-bit equivalents. In practice, though, using int will likely give you the fastest code.
I'm trying to check some homework answers about overflow in two's complement addition, subtraction, etc., and I'm wondering if I can specify the size of a data type. For instance, what happens when I try to assign -128 or -256 to a 7-bit unsigned int?
On further reading I see you want bit sizes that are not the normal ones, such as 7 bits or 9 bits.
You can achieve this using bit-fields:
struct bits9
{
int x : 9;
};
Now you can use the type bits9, which has a single field x that is only 9 bits in size.
struct bits9 myValue;
myValue.x = 123;
For an arbitrarily sized value, you can use bit-fields in structs. For example, for a 7-bit value:
struct something {
unsigned char field:7;
unsigned char padding:1;
};
struct something value;
value.field = -128;
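Putting it together, here is a runnable sketch of the overflow experiment from the question (I use unsigned int as the bit-field base type, which is the strictly portable choice; the stored value is reduced modulo 2^7):

#include <stdio.h>

struct something {
    unsigned int field : 7;   /* 7-bit unsigned field, range 0..127 */
    unsigned int padding : 1;
};

int main(void)
{
    struct something value;
    value.field = -128;                    /* stored modulo 128: -128 mod 128 == 0 */
    printf("%u\n", (unsigned)value.field); /* prints 0 */
    value.field = 130;                     /* 130 mod 128 == 2 */
    printf("%u\n", (unsigned)value.field); /* prints 2 */
    return 0;
}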
The smallest size you can have is char, which is at least an 8-bit integer; you can have unsigned and signed chars. Take a look at the stdint.h header: it defines integer types for you in a platform-independent way. Also, there is no such thing as a 7-bit integer among the built-in types.
Using built-in types you have things like:
char value1; // 8 bits
short value2; // 16 bits
long value3; // 32 bits
long long value4; // 64 bits
Note this is the case with Microsoft's compiler on Windows. The C standard does not specify exact widths other than "this one must be at least as big as this other one" etc.
If you only care about a specific platform you can print out the sizes of your types and use those once you have figured them out.
Alternatively you can use stdint.h, which is part of the C99 standard. It has the width in the type names to make things clear:
int8_t value1; // 8 bits
int16_t value2; // 16 bits
int32_t value3; // 32 bits
int64_t value4; // 64 bits
I have an issue when compiling with an ARM cross-compiler and a variable of type unsigned long long.
The variable represents a partition size (~256 GB). I expect it to be stored as 64 bits, but when printing it using %lld, or even when printing it in gigabytes (value/(1024*1024*1024)), I only ever see 32 bits of the real value.
Does anyone know why the compiler stores it as 32 bits?
Edit:
My mistake; the value is set in C using the following calculation:
partition_size = st.f_bsize * st.f_frsize;
struct statvfs { unsigned long int f_bsize; unsigned long int f_frsize; ... };
The issue is that f_bsize and f_frsize are only 32 bits each, and the compiler does not automatically promote them to 64 bits before the multiplication. Casting solved this issue for me.
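For reference, a minimal sketch of the cast fix described above (the partition_size helper function is illustrative; f_bsize and f_frsize are the struct statvfs fields from the question):

#include <sys/statvfs.h>

unsigned long long partition_size(const struct statvfs *st)
{
    /* Cast one operand first so the multiplication happens in 64 bits.
       Without the cast, two 32-bit unsigned longs are multiplied in
       32 bits and the high bits are silently lost. */
    return (unsigned long long)st->f_bsize * st->f_frsize;
}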
The code below prints the whole 64 bits. Try printing your value with %llu:
#include <stdio.h>

int main(void)
{
    unsigned long long num = 4611111275421987987ULL; /* ULL suffix avoids relying on the literal's default type */
    printf("%llu\n", num);
    return 0;
}
In the MPLAB IDE, what are the sizes of the data types (int, unsigned int, float, unsigned float, char, ...)?
This is hard without knowing for which CPU you want to compile code. Assuming e.g. Microchip's C18 compiler for the PIC18, the User Guide states the following fundamental type sizes:
TYPE                 SIZE     RANGE
char                 8 bits   -128 to 127
signed char          8 bits   -128 to 127
unsigned char        8 bits   0 to 255
int                  16 bits  -32,768 to 32,767
unsigned int         16 bits  0 to 65,535
short                16 bits  -32,768 to 32,767
unsigned short       16 bits  0 to 65,535
short long           24 bits  -8,388,608 to 8,388,607
unsigned short long  24 bits  0 to 16,777,215
long                 32 bits  -2,147,483,648 to 2,147,483,647
unsigned long        32 bits  0 to 4,294,967,295
Note that this includes some types (short long) that are not standard in C.
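If you want to verify the sizes on your own target, here is a small sketch (assuming your toolchain ships <limits.h> and a working printf):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("char bits: %d\n", CHAR_BIT);
    printf("int      : %u bytes, max %d\n", (unsigned)sizeof(int), INT_MAX);
    printf("long     : %u bytes, max %ld\n", (unsigned)sizeof(long), LONG_MAX);
    return 0;
}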
The sizes of int, long, etc. are not fixed by the standard across all compilers. For this reason, it is advisable to use the fixed-width types from this header:
#include <stdint.h>
To adapt these types for your own purposes, you can define aliases like this:
typedef uint8_t BYTE;
typedef uint16_t WORD;
typedef uint32_t LONG;
Then you just use these to define your variables. This method usually keeps the definitions in an integer.h file that is included wherever needed.
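For illustration, such an integer.h could look like this (the file name comes from the answer above; the include-guard macro is just an assumption):

/* integer.h -- project-wide fixed-width aliases */
#ifndef INTEGER_H
#define INTEGER_H

#include <stdint.h>

typedef uint8_t  BYTE;
typedef uint16_t WORD;
typedef uint32_t LONG;

#endif /* INTEGER_H */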
The implementation of the integer data types differs across the MPLAB XC compilers; the size tables are in the respective compiler user guides:
Data types for 8-bit devices: XC8 compiler
Data types for 16-bit devices: XC16 compiler
Data types for 32-bit devices: XC32 compiler
I would be wary of such generalizations. MPLAB is just an IDE; it supports many different chips. Microchip has 8-bit controllers like the PIC18F, as well as 16-bit and 32-bit controllers. The data types for each may be different and have serious implications for performance: on the 8-bit chips, the 16- and 32-bit data types may be emulated in software, which isn't always what you want.
#include <stdint.h>
long x;
These two things helped me get through ;)
The rest of the info has already been shared by the other folks.