Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
From Python Numeric Types:
Integers have unlimited precision.
This test
#include <stdio.h>
int main(void) {
printf("%zu bytes", sizeof(long long));
return 0;
}
gives me 8 bytes or 64 bits under Linux.
How is this implemented in CPython? (This was answered in the comments.)
What happens when an integer exceeds long long in the implementation?
How big is the speed difference in arithmetic between integers that fit in 8 bytes and those that do not?
What if I have a bigger number in Python (one that does not fit in 8 bytes)?
Python will adapt and always store the exact number, with no approximation, even for a very big integer (this is not true of other types, such as float).
How is it stored in the system, literally? Like a few smaller integers?
That's CPython's implementation. You can find it in the source code here: svn.python.org/projects/python/trunk/Objects/longobject.c (thanks @samgak)
Does it hugely slow down the arithmetic on this number?
Yes. As in other languages, arithmetic becomes slower once the number grows beyond the native word size, e.g. 2^32 on 32-bit systems. How much slower is implementation dependent.
What does Python do, when it encounters such number?
Huge integers are stored in a different way and all arithmetic is adapted to fit.
Is there any difference between Python 2 and Python 3 behaviour, apart from Python 2 storing such values as long and appending L to the string representation?
Python 2 and 3 should have the same high-level behaviour.
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
What would the sizeof operator return for the int data type on a 16-bit processor system?
I'm thinking it would be 2 bytes, since that's the largest int that can be represented on such a system.
This answer is about C, not C++. They are two different languages, so it may or may not apply to C++.
The only thing the standard says about the size is that it must be at least 16 bits. It has nothing to do with the hardware: a compiler may use 16-bit ints on a 32-bit system. Compiler writers typically optimize for particular hardware for obvious reasons, but they are not required to.
An int must be able to hold all values in the range [-32767, 32767], although [-32768, 32767] is common on 16-bit systems that use two's complement representation, which almost all modern systems do.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
My question is pretty simple. I am a newbie in the microcontroller world and am trying to understand the use of hexadecimal versus decimal naming conventions. I have seen a lot of C code; some programmers use decimal:
#define TEST_BUTTON_PORT 1
#define TEST_BUTTON_BIT 19
while others use hexadecimal:
#define IOCON_FUNC0 0x0
#define IOCON_FUNC1 0x1
Is there any important reason for the different conventions, or is it just programmer choice?
The purpose of hex is to make binary numbers easier to work with, since raw binary is very hard for humans to read. Some examples of when hexadecimal is used:
Describing binary numbers and binary representations.
Dealing with hardware addresses.
Doing bit-wise arithmetic.
Declaring bit masks/bit fields.
Dealing with any form of raw data, such as memory dumps, machine code or data protocols.
An exception to this is, oddly, when specifying the number of bits to shift, which is almost always written in decimal. If you wish to set bit 19, you would usually write:
PORT |= 1 << 19;
This assumes bits are numbered from 0 to n.
I suppose this is because decimal is more convenient when enumerating things, such as bit/pin numbers. (And manufacturers of MCUs etc. usually enumerate pins in decimal.)
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
For example: can we increase the size of a long long variable to store values of a bigger size (more than 64 bits)?
If you are talking about compiler implementation: yes. The C standard doesn't impose any upper bound, only minimums (like char being 8 bits or more) and constraints on relative sizes (like long not being shorter than int). So you can set the size to anything in your own compiler (or in a backend to an existing compiler), or provide command line switches to select it at compile time.
If you are asking from an application programmer's perspective: no, I don't know of a compiler which supports this. It would have the complication that you would also need all libraries compiled with the custom integer sizes, because if library code expects a long to be 64 bits and you call it with a 128-bit long, there will be trouble. At the machine code level there are no C types, just raw bytes in registers and memory, and the machine code has to handle them consistently everywhere in the application and the libraries.
Perhaps you should ask a question about what you actually want to achieve; there is probably a solution. Use a bigint library? Use a compiler-specific non-standard large integer type? Use a struct or an array with several integers in it (a bigint library basically does this under the hood)? Use floating point? Use ASCII text numbers?
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
For example, for int in C: when I add (2^31 - 1) + 1, the output is -2^31, and then -2^31 + 1 gives -(2^31 - 1). By what procedure does the computer perform such operations?
In C, when the sum of two int values exceeds the range of int, the result is undefined behavior. Code should not rely on a specific result; no particular procedure runs in such a case.
When the mathematical sum of two unsigned values exceeds the range of unsigned, UINT_MAX + 1 is subtracted to bring the result into range.
I believe this is inherent in how the computer represents the numbers, a system called two's complement. It isn't something you should rely on, though, because it won't be portable: if another computer system uses a different mechanism for representing numbers, who knows what result you'd get!
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
Hello everyone. I was going through a few programming questions and encountered one strange thing. The problem asked me to implement some logic, which is not relevant here and which I am not asking about, but the question involved integers and said I should keep in consideration that the integer values would be less than 10000000000.
My doubt is what data type should be used to store values of such a range. Let's assume that some C program is used in a banking application involving huge numbers of this magnitude. How do we store such huge numbers? Note: I thought even the type long long would not be able to store such a huge number, so how do we store it?
If possible, use int64_t, defined in the standard library header stdint.h, instead of long long. It holds 64-bit integers for sure: the largest representable number is 2**63 - 1, which is 9223372036854775807 (about 9e18), so it can hold 10000000000.
You could imagine the value being stored in a database, where numeric precision can be specified for data columns. But it's more likely that the particular value is specified to force you to think about how the algorithm or code would handle numeric overflow.
By the way, many systems use a 64-bit long long, which can hold the value you mention. Here's a great site to experiment with numbers and gain an intuitive feel for this:
http://www.wolframalpha.com/input/?i=2%5E64