My question is pretty simple. I am a newbie in the microcontroller world and I am trying to understand the use of hexadecimal versus decimal naming conventions. I have seen a lot of C code; some programmers use the decimal convention:
#define TEST_BUTTON_PORT 1
#define TEST_BUTTON_BIT 19
while others use the hexadecimal way:
#define IOCON_FUNC0 0x0
#define IOCON_FUNC1 0x1
Is there any important reason for the different conventions, or is it just the programmer's choice?
The purpose of hex is to ease the use of binary numbers, since binary is very hard for humans to read. Some examples of when hexadecimal is used:
Describing binary numbers and binary representations.
Dealing with hardware addresses.
Doing bit-wise arithmetic.
Declaring bit masks/bit fields (see the sketch after this list).
Dealing with any form of raw data, such as memory dumps, machine code or data protocols.
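As a hedged illustration of the bit-mask case, here is a minimal sketch; the register name and mask values are invented for the example, not taken from any real MCU:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical peripheral configuration masks: written in hex, the bit
   pattern is visible at a glance, which is the point of using hexadecimal. */
#define UART_ENABLE_MASK   0x01u   /* binary 0000 0001 */
#define UART_PARITY_MASK   0x06u   /* binary 0000 0110 */
#define UART_STOPBITS_MASK 0x08u   /* binary 0000 1000 */

int main(void)
{
    uint8_t config = 0;                    /* stand-in for a real hardware register */

    config |= UART_ENABLE_MASK;            /* set the enable bit          */
    config &= (uint8_t)~UART_PARITY_MASK;  /* clear both parity bits      */
    config ^= UART_STOPBITS_MASK;          /* toggle the stop-bit setting */

    printf("config = 0x%02X\n", config);   /* prints 0x09 */
    return 0;
}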
An exception to this is, oddly, when specifying the number of bits to shift. This is almost always done in decimal notation. If you wish to set bit 19 you would usually do it by writing:
PORT |= 1 << 19;
This assumes bits are numbered from 0 to n.
I suppose this is because decimal format is more convenient when enumerating things, such as bit/pin numbers. (And manufacturers of MCUs etc usually enumerate pins with decimal notation.)
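To tie the two notations together, a minimal sketch (PORT here is just a plain variable standing in for a real register): the decimal shift count and the hexadecimal mask describe the same bit.

#include <stdint.h>
#include <stdio.h>

#define TEST_BUTTON_BIT  19             /* bit number, in decimal       */
#define TEST_BUTTON_MASK 0x00080000u    /* the same bit, as a hex mask  */

int main(void)
{
    uint32_t PORT = 0;                  /* stand-in for a real port register */

    PORT |= 1u << TEST_BUTTON_BIT;      /* set bit 19 with a decimal shift count */
    printf("%d\n", (PORT & TEST_BUTTON_MASK) != 0);  /* prints 1: the same bit is set */
    return 0;
}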
What would the sizeof operator return for the int data type on a 16-bit processor system?
I'm thinking it would be 2 bytes, since that is the largest int that can be represented natively on such a system.
This answer is about C and not C++. They are two different languages. It may or may not be applicable to C++.
The only thing the standard says about the size is that it should be at least 16 bits. It has nothing to do with the hardware. A compiler may use 16-bit ints on a 32-bit system. The hardware does not dictate this. Compiler writers typically optimize for certain hardware for obvious reasons, but they are not required to.
An int should be able to hold all values in the range [-32767, 32767], although [-32768, 32767] is common on 16-bit systems that use two's complement representation, which almost all modern systems do.
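A minimal sketch of how to check what a particular compiler actually chose; the printed values are implementation-defined, and the standard only guarantees the minimum range mentioned above:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* sizeof(int) and the limits below are implementation-defined;
       the standard only guarantees INT_MAX >= 32767 and INT_MIN <= -32767. */
    printf("sizeof(int) = %zu bytes\n", sizeof(int));
    printf("INT_MIN = %d, INT_MAX = %d\n", INT_MIN, INT_MAX);
    return 0;
}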
I have a problem in C: I am not allowed to use floats, as the microcontroller the code will be flashed to does not support that data type. Now all my results are being rounded off to integers, as expected. How do I handle this case?
Some brief research points to bit-wise operations such as left shift and right shift. I know what these operations are, but I do not know how to use them to achieve what I want.
Another possibility is the Q number format.
You will get some results if you google "Q number format" or some variations.
It is often used for DSP-related topics in C. Here is another blog post that explains that number format, and here is an example implementation of Q numbers in C.
In general you can say that Q numbers represent fractional values (in the simplest case, numbers between -1 and 1) without using floating point arithmetic.
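As a hedged illustration, a minimal sketch assuming a Q15 format (a 16-bit integer with 15 fractional bits); the names are invented for the example:

#include <stdint.h>
#include <stdio.h>

/* Q15: a 16-bit integer with 15 fractional bits, i.e. value = raw / 32768.
   The representable range is roughly [-1, 1). */
typedef int16_t q15_t;

#define Q15_ONE_HALF ((q15_t)16384)          /* 0.5 in Q15: 0.5 * 32768 */

static q15_t q15_mul(q15_t a, q15_t b)
{
    /* Multiply in 32 bits, then shift right by 15 to drop the extra
       fractional bits; only integer arithmetic is involved. */
    return (q15_t)(((int32_t)a * (int32_t)b) >> 15);
}

int main(void)
{
    q15_t quarter = q15_mul(Q15_ONE_HALF, Q15_ONE_HALF);
    /* quarter is 8192, which is 0.25 * 32768, i.e. 0.25 in Q15. */
    printf("raw = %d (expected 8192)\n", quarter);
    return 0;
}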
Normally a microcontroller doesn't have a floating point unit; everything works with integers. But it's up to you which unit you use for your integers.
For example:
100 could be 100 cm or 1.00 m
1000 could be 100.0 cm or 1.000 m, and so on.
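A minimal sketch of that scaling idea, assuming the chosen unit is millimetres; the names are invented for the example:

#include <stdint.h>
#include <stdio.h>

/* Store lengths as integer millimetres: 1000 means 1.000 m or 100.0 cm. */
typedef int32_t length_mm_t;

int main(void)
{
    length_mm_t a = 1500;                /* 1.500 m */
    length_mm_t b =  250;                /* 0.250 m */
    length_mm_t sum = a + b;             /* plain integer addition */

    /* Split into whole metres and the millimetre remainder for printing. */
    printf("%ld.%03ld m\n", (long)(sum / 1000), (long)(sum % 1000));
    return 0;
}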
Please have a look at the description:
electronic.stackexchange
From Python Numeric Types:
Integers have unlimited precision.
This test
#include <stdio.h>
int main(void) {
    printf("%zu bytes", sizeof(long long));
    return 0;
}
gives me 8 bytes or 64 bits under Linux.
How is this implemented in CPython? (This was answered in the comment section.)
What happens when the integer exceeds long long in the implementation?
How big is the speed difference in arithmetic between pre-8-byte and post-8-byte integers?
What if I have a bigger number in Python (not fitting into 8 bytes)?
Python will adapt and always store the correct number, with no approximation, even for very big integers (but this is not true of other types).
How is it stored in the system, literally? Like a few smaller integers?
That's Python's implementation. You can find it in the source code here: svn.python.org/projects/python/trunk/Objects/longobject.c (thanks to samgak).
Does it hugely slow down arithmetic on this number?
Yes. As in other languages, when the number becomes bigger than e.g. 2^32 on a 32-bit system, the arithmetic becomes slower. How much slower is implementation dependent.
What does Python do, when it encounters such number?
Huge integers are stored in a different way and all arithmetic is adapted to fit.
Is there any difference in Python's 2 & 3 behavior except storing as long in Python 2 and appending L to string representation?
Python 2 and 3 should have the same high level behaviour.
For example: can we increase the size of a long long variable to store a value of a bigger size (more than 64 bits)?
If you are talking about compiler implementation: Yes, I think the C standard doesn't impose any upper bound, only minimums (like char is 8 bits or more) and limits on relative sizes (like, long can't be shorter than int). So you can set the size to anything in your own compiler (or a backend to an existing compiler), or provide command line switches to select it at compile time.
If you are asking from an application programmer's perspective: no, I don't know of a compiler which would support this. It would have the complication that you would also need all libraries compiled with the custom integer type sizes, because if library code expects a long to be 64 bits and you call it with a 128-bit long, there'll be trouble. At the machine code level there is no C type; there are just raw bytes in registers and memory, and the machine code has to handle them consistently everywhere in the application and the libraries.
Perhaps you should ask a question about what you actually want to achieve; there is probably a solution. Use a bigint library? Use a compiler-specific non-standard large integer type? Use a struct or an array with several integers in it (a bigint library basically does this under the hood)? Use floating point? Use ASCII text numbers?
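As one hedged illustration of the "compiler-specific non-standard large integer type" option: GCC and Clang offer __int128 on 64-bit targets. A minimal sketch, assuming such a compiler:

#include <stdio.h>

int main(void)
{
    /* __int128 is a GCC/Clang extension, not part of standard C, and it is
       typically only available on 64-bit targets. */
    unsigned __int128 x = (unsigned __int128)1 << 100;  /* needs more than 64 bits */

    /* printf has no conversion specifier for __int128, so print the halves. */
    printf("high = %llu, low = %llu\n",
           (unsigned long long)(x >> 64),
           (unsigned long long)x);
    return 0;
}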
I am looking for a fast checksum algorithm that produces a 1-byte checksum.
I checked CRC8 and Adler8, but I do not fully understand the samples.
Also different CRC8 implementations give different results.
In any case, I do not need anything that fancy.
CRC's are based on a type of finite field math, using polynomials with 1 bit coefficients (math modulo 2). An 8 bit CRC is the result of treating data as a very long polynomial dividend with 1 bit coefficients and dividing it by a 9 bit polynomial divisor, which produces an 8 bit remainder. Since 1 bit coefficients are used, add or subtract effectively become exclusive or. You don't really need to understand finite field math to implement a CRC, just use a lookup table or use an algorithm to generate the CRC.
You could just add up all the bytes into a 1 byte sum, and use that as a checksum. The advantage of a CRC is that if bytes are missing or out of order, it has a better chance of detecting that.
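A minimal sketch of both approaches, assuming the common CRC-8 polynomial 0x07 with no reflection; the choice of polynomial, initial value and reflection is exactly why different CRC8 implementations give different results:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC-8: polynomial 0x07, initial value 0x00, MSB first. */
static uint8_t crc8(const uint8_t *data, size_t len)
{
    uint8_t crc = 0x00;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++) {
            /* If the top bit is set, "subtract" (xor) the polynomial. */
            crc = (crc & 0x80u) ? (uint8_t)((crc << 1) ^ 0x07u) : (uint8_t)(crc << 1);
        }
    }
    return crc;
}

/* Plain additive checksum: add all bytes and let the sum wrap modulo 256. */
static uint8_t sum8(const uint8_t *data, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += data[i];
    return sum;
}

int main(void)
{
    const uint8_t msg[] = { '1', '2', '3', '4', '5', '6', '7', '8', '9' };
    printf("crc8 = 0x%02X\n", crc8(msg, sizeof msg));   /* 0xF4 with this polynomial */
    printf("sum8 = 0x%02X\n", sum8(msg, sizeof msg));
    return 0;
}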