I cannot understand what is meant by the following operation in embedded C:
NVIC_ICPR |= 1 << (vector_number%32);
From the reference manual, I found that
Vector number — the value stored on the stack when an interrupt is serviced.
IRQ number — non-core interrupt source count, which is the vector number minus 16.
But why is it reduced modulo 32?
It is basically a register with 32 bits in it.
This register clears the pending state of one or more interrupts within a group of 32. Each bit represents an interrupt number from IRQ0 to IRQ31 (vector numbers 16 to 47).
Writing 1 removes the pending state; writing 0 has no effect.
An important point is that you should use it like this:
NVIC_ICPR |= 1U << (vector_number%32);
This ensures the arithmetic is done as unsigned int, which saves you from the undefined behaviour that arises when vector_number % 32 == 31 (shifting 1 into the sign bit of a signed int, as chux pointed out).
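Putting the pieces together, here is a minimal sketch of how the vector number selects a register and a bit. The ICPR bank is modelled as a plain array; on real hardware these registers live at fixed memory-mapped addresses, so the array and the function name are stand-ins:

```c
#include <stdint.h>

/* Model of the NVIC ICPR bank: one 32-bit register per group of 32 IRQs.
   On real hardware these live at fixed addresses; the array is a stand-in. */
static uint32_t NVIC_ICPR[8];

/* Clear the pending state of the interrupt with the given vector number.
   IRQ number = vector number - 16; register index = irq / 32; bit = irq % 32.
   A plain write is enough, since writing 0 to the other bits has no effect. */
static void clear_pending(uint32_t vector_number)
{
    uint32_t irq = vector_number - 16u;       /* core exceptions occupy vectors 0..15 */
    NVIC_ICPR[irq / 32u] = 1u << (irq % 32u); /* 1u keeps the shift in unsigned arithmetic */
}
```

For example, vector number 47 is IRQ31, so it lands in the first ICPR register at bit 31; vector number 48 is IRQ32, which rolls over to bit 0 of the next register — that is exactly what the modulo buys you.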
Related
My watchdog timer has a default value of 0x0FFFFF and I want to write a 2-byte variable (u2 compare) into it. What happens when I assign the value simply like this:
wdt_register = compare;
What happens to the most significant byte of the register?
Register definition: it's a 3-byte register made up of the 8-bit registers H, M and L. The 4 most significant bits of H are unused, so it's actually a 20-bit register. The datasheet names the whole thing WDTCR_20.
My question is: what happens when I assign a value to the register with this line (just an example of a 2-byte value written to a 3-byte register)?
WDTCR_20 = 0x1234;
Your WDT is a so-called special function register. In hardware, it may end up being three bytes, or it could be four bytes, some of which are fixed/read-only/unused. Your compiler's implementation of the write is itself implementation-dependent if the SFR is declared in a particular way that makes the compiler emit SFR-specific write instructions.
This effectively makes the result of the assignment implementation-dependent; the high eight bits might be discarded, might set some other microarchitectural flags, or might cause a trap/crash if they aren't set to a specific (likely all-zeros) value. It depends on the processor's datasheet (since you didn't mention a processor/toolchain, we don't know exactly).
For example, the AVR-based atmega328p datasheet shows an example of such a register:
In this case, the one-byte register is actually only three bits, effectively (bits 7..3 are fixed to zero on read and ignored on write, and could very well have no physical flip-flop or SRAM cell associated with them).
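As a concrete sketch of the "high bits discarded" case described above (the register model, the function name, and the masking behaviour are assumptions — your part's datasheet is authoritative):

```c
#include <stdint.h>

/* Model of a 20-bit WDTCR_20-style register: bits 31..20 are not
   implemented, so a write simply drops them. That is one common SFR
   behaviour; other parts may trap or require a key value in the
   high bits instead. */
static uint32_t wdt_backing;

static void wdtcr20_write(uint32_t value)
{
    wdt_backing = value & 0x000FFFFFu;  /* keep only the 20 implemented bits */
}
```

Under this model, writing 0x1234 stores 0x1234 unchanged, while any bits above bit 19 silently vanish.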
I am writing a C program which has bit masking in it. What do the macros below define?
What do the operations <<, >>, | and & do?
1. #define SINE_PHASEREG_BASE (0x3 << 14)
2. #define IOPORT_MODE_MUX_MASK (0x7 << 0) /*!< MUX bits mask */
3. #define IOPORT_MODE_MUX_D ( 3 << 0) /*!< MUX function D */
These are C macros that perform bit shift operations.
Numbers and hardware registers are represented by bits at the lowest level.
Some C basics:
Macros are like functions, but instead of calling the function, the C preprocessor replaces the text which calls the macro with the macro's body.
For example, I can create a simple C macro to add 1 to a number as follows:
#define ADD_ONE(x) ((x)+1)
And then I can compute the value of a number plus one as follows:
int I = ADD_ONE(5);
This will get replaced by the C preprocessor as:
int I = ((5)+1);
Then the compiler compiles this into the equivalent of:
int I = 6;
Notice that this "call" to ADD_ONE is done at compile time, not run time since ADD_ONE is a macro, and not a function.
Lines 1 to 3 are C macros, and they replace the text where they are called, prior to the code being compiled. Sometimes this is awesome, and sometimes it does things you don't expect. If you stick to the basics, they can be very useful, but experts can make code dance with these things.
Numbers are represented by binary numbers, with the rightmost bit
(the least significant bit aka b0) representing the value 0 if the
bit is zero, or 1 if it is a one, or b0*(2^0).
Why the complicated way of expressing zero or one? Because the other bits use a similar formula: bit 1 represents either zero or two, b1*(2^1).
In general, bit n represents bn*(2^n).
So if you have an int x set to 5, then:
x = 5 = 4+1 = 1*2^2 + 0*2^1 + 1*2^0 = 101 in binary.
What's a bit shift operation?
It's how computers shift bits left or right. Numerically, shifting the bits left by one is the same as multiplying by two, while shifting right by one is integer division by 2.
The operator << shifts bits left, and >> shifts bits right. The | operator is a bitwise OR operator, and & is a bitwise AND operator. For a great introduction to bitwise operators, refer to this excellent answer.
So if x is 5 (101b), then x<<1 is 1010b, which equals 8+0+2+0 = 10, the same as x*2.
Why should you care?
Because these macros are performing bit shift operations! So you need to understand how numbers are represented in binary to understand it.
Let's look at what those macros do!
SINE_CONTREG_BASE (0x1 << 13)
This takes the number one and shifts it left 13 times, so when this macro is used, it is replaced with the text (0x1 << 13) by the preprocessor and compiled as the constant value 8192 (which is 1 * 2^13). So this macro is a way of documenting that the 14th bit of the SINE_CON register is important enough to have a macro defining the value of this bit.
SINE_PHASEREG_BASE (0x3 << 14)
Similarly, this is used to represent a two-bit binary bit field in the SINE_PHASE register that can be found in bits 15 and 14 (notice that 3 is 11b).
IOPORT_MODE_MUX_MASK
This is saying that the IOPORT_MODE_MUX field occupies the first three bits of that register, and a MASK is a value that can be used to extract those three bits using a bitwise AND on the register's value. To set the field, one uses a bitwise OR to set the hardware bits in that register.
IOPORT_MODE_MUX_D
IOPORT_MODE_MUX_D is the value 3 (11b), which selects function D when written into that same 3-bit field. You can use this macro together with the mask to extract or set the field accordingly.
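A sketch of how the mask and value macros are typically combined in a read-modify-write (the `ioport_mode` variable and the helper names stand in for the real hardware register and driver code):

```c
#include <stdint.h>

#define IOPORT_MODE_MUX_MASK (0x7u << 0)  /* the 3-bit MUX field */
#define IOPORT_MODE_MUX_D    (3u << 0)    /* value selecting function D */

static uint32_t ioport_mode;  /* stand-in for the hardware register */

/* Select MUX function D without disturbing the other bits. */
static void select_mux_d(void)
{
    ioport_mode &= ~IOPORT_MODE_MUX_MASK;  /* clear the field first */
    ioport_mode |= IOPORT_MODE_MUX_D;      /* then OR in the new value */
}

/* Read back just the MUX field. */
static uint32_t current_mux(void)
{
    return ioport_mode & IOPORT_MODE_MUX_MASK;
}
```

The clear-then-OR sequence matters: ORing alone cannot turn a field bit off, so a previously selected function could leak into the new value.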
I know that many had similar questions over here about converting from/to two's complement format and I tried many of them but nothing seems to help in my case.
Well, I'm working on an embedded project that involves writing/reading registers of a slave device over SPI. The register concerned here is a 22-bit position register that stores the uStep value in two's complement format and it ranges from -2^21 to +2^21 -1. The problem is when I read the register, I get a big integer that has nothing to do with the actual value.
Example:
After sending a command to the slave to move 4000 steps (forward/positive), I read the position register and I get exactly 4000. However, if I send a reverse move command, say -1, and then read the register, the value I get is something like 4292928. I believe it's the negative offset of the register as the two's complement has no zero. I have no problem sending a negative integer to the device to move x number of steps, however, getting the actual negative integer from the value retrieved is something else.
I know that this involves two's complement, but the question is: how do I get the actual negative integer out of that strange value? I mean, if I moved the device -4000 steps, what do I have to do to get the exact value of the negative steps moved so far from my register?
You need to sign-extend bit 21 through the bits to the left.
For negative values when bit 21 is set, you can do this by ORring the value with 0xFFC00000.
For positive values, when bit 21 is clear, you can ensure the upper bits are zero by ANDing the value with 0x003FFFFF.
The solutions by Clifford and Weather Vane assume the target machine is two's-complement. This is very likely true, but a solution that removes this dependency is:
static const int32_t sign_bit = 0x00200000;
int32_t pos_count = (getPosRegisterValue() ^ sign_bit) - sign_bit;
It has the additional advantage of being branch-free.
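Wrapped in a helper (the function name and the 22-bit mask on the raw value are my additions), the trick looks like this:

```c
#include <stdint.h>

#define SIGN_BIT_22 ((int32_t)0x00200000)  /* bit 21 of the 22-bit field */

/* Portable sign extension of a 22-bit two's-complement value:
   the XOR flips bit 21, the subtraction restores it with sign.
   Every intermediate value fits in int32_t, so there is no
   implementation-defined conversion involved. */
static int32_t sign_extend_22(uint32_t raw)
{
    raw &= 0x003FFFFFu;  /* keep only the 22 register bits */
    return (int32_t)(raw ^ (uint32_t)SIGN_BIT_22) - SIGN_BIT_22;
}
```

With this, a register reading of 0x3FFFFF (all 22 bits set) decodes to -1, and the original 4000-step reading passes through unchanged.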
The simplest method perhaps is simply to shift the position value left by 10 bits and assign it to an int32_t. You will then have a 32-bit value with the position scaled up by 2^10 (1024): 32-bit resolution, but 10-bit granularity. Normally this shouldn't matter, since the position units are entirely arbitrary in any case and can be converted to real-world units if necessary, taking the scaling into account:
int32_t pos_count = (int32_t)(getPosRegisterValue() << 10) ;
Where getPosRegisterValue() returns a uint32_t.
If you do however want to retain 22 bit resolution then it is simply a case of dividing the value by 1024:
int32_t pos_count = (int32_t)(getPosRegisterValue() << 10) / 1024 ;
Both solutions rely on the implementation-defined behaviour of converting a uint32_t value not representable in an int32_t; but on a two's complement machine any plausible implementation will not modify the bit pattern, and the result will be as required.
Another perhaps less elegant solution also retaining 22 bit resolution and single bit granularity is:
int32_t pos_count = getPosRegisterValue() ;

// If 22-bit sign bit set...
if( (pos_count & 0x00200000) != 0)
{
    // Sign-extend to 32 bit
    pos_count |= 0xFFC00000 ;
}
It would be wise perhaps to wrap the solution in a function to isolate any implementation-defined behaviour:
int32_t posCount()
{
    return (int32_t)(getPosRegisterValue() << 10) / 1024 ;
}
When I write the following program and use the GNU C++ compiler, the output is 1, which I think is due to a rotation operation performed by the compiler.
#include <iostream>

int main()
{
    int a = 1;
    std::cout << (a << 32) << std::endl;
    return 0;
}
But logically, as it's said that the bits are lost if they overflow the bit width, the output should be 0. What is happening?
The code is on ideone, http://ideone.com/VPTwj.
This is caused by a combination of undefined behaviour in C and the fact that code generated for IA-32 processors has a 5-bit mask applied to the shift count. This means that on IA-32 processors, the range of a shift count is only 0-31. 1
From The C programming language 2
The result is undefined if the right operand is negative, or greater than or equal to the number of bits in the left expression’s type.
From IA-32 Intel Architecture Software Developer’s Manual 3
The 8086 does not mask the shift count. However, all other IA-32 processors (starting with the Intel 286 processor) do mask the shift count to 5 bits, resulting in a maximum count of 31. This masking is done in all operating modes (including the virtual-8086 mode) to reduce the maximum execution time of the instructions.
1 http://codeyarns.com/2004/12/20/c-shift-operator-mayhem/
2 A7.8 Shift Operators, Appendix A. Reference Manual, The C Programming Language
3 SAL/SAR/SHL/SHR – Shift, Chapter 4. Instruction Set Reference, IA-32 Intel Architecture Software Developer’s Manual
In C++, a shift is only well-defined if you shift a value by fewer steps than the width of the type. If int is 32 bits, then only 0 up to and including 31 steps is well-defined.
So, why is this?
If you take a look at the underlying hardware that performs the shift, if it only has to look at the lower five bits of the count (in the 32-bit case), it can be implemented using fewer logic gates than if it has to inspect every bit of the value.
Answer to question in comment
C and C++ are designed to run as fast as possible on any available hardware. Today, the generated code is simply a 'shift' instruction, regardless of how the underlying hardware handles values outside the specified range. If the language had specified how shift should behave, the generated code would have to check that the shift count is in range before performing the shift. Typically this would yield three instructions (compare, branch, shift). (Admittedly, in this case it would not be necessary, as the shift count is known.)
It's undefined behaviour according to the C++ standard:
The value of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits are zero-filled. If E1 has an unsigned type, the value of the result is E1 × 2^E2, reduced modulo one more than the maximum value representable in the result type. Otherwise, if E1 has a signed type and non-negative value, and E1 × 2^E2 is representable in the result type, then that is the resulting value; otherwise, the behavior is undefined.
The answers of Lindydancer and 6502 explain why (on some machines) it happens to be a 1 that is being printed (although the behavior of the operation is undefined). I am adding the details in case they aren't obvious.
I am assuming that (like me) you are running the program on an Intel processor. GCC generates these assembly instructions for the shift operation:
movl $32, %ecx
sall %cl, %eax
On the topic of sall and other shift operations, page 624 in the Instruction Set Reference Manual says:
The 8086 does not mask the shift count. However, all other Intel Architecture processors (starting with the Intel 286 processor) do mask the shift count to five bits, resulting in a maximum count of 31. This masking is done in all operating modes (including the virtual-8086 mode) to reduce the maximum execution time of the instructions.
Since the lower 5 bits of 32 are zero, then 1 << 32 is equivalent to 1 << 0, which is 1.
Experimenting with larger numbers, we would predict that
cout << (a << 32) << " " << (a << 33) << " " << (a << 34) << "\n";
would print 1 2 4, and indeed that is what is happening on my machine.
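The masking the manual describes can be reproduced in well-defined C by reducing the count modulo 32 yourself, rather than relying on the undefined behaviour (the helper name is mine, chosen for illustration):

```c
#include <stdint.h>

/* Emulates IA-32 shift-count masking with well-defined C:
   the hardware only looks at the low 5 bits of the count,
   so a count of 32 behaves like a count of 0. */
static uint32_t shl_ia32(uint32_t value, unsigned count)
{
    return value << (count & 31u);
}
```

This reproduces the 1 2 4 pattern observed above for counts 32, 33 and 34, but as a documented property of the function rather than an accident of the target processor.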
It doesn't work as expected because you're expecting too much.
In the case of x86 the hardware doesn't care about shift operations where the counter is bigger than the size of the register (see for example SHL instruction description on x86 reference documentation for an explanation).
The C++ standard didn't want to impose an extra cost by telling what to do in these cases because generated code would have been forced to add extra checks and logic for every parametric shift.
With this freedom implementers of compilers can generate just one assembly instruction without any test or branch.
A more "useful" and "logical" approach would have been, for example, to make (x << y) equivalent to (x >> -y), and to handle large counts with a logical and consistent behavior.
However, this would have required much slower handling of bit shifting, so the choice was to do what the hardware does, leaving programmers to write their own functions for the edge cases.
Given that different hardware does different things in these cases, what the standard says is basically "Whatever happens when you do strange things, just don't blame C++, it's your fault", translated into legalese.
Shifting a 32-bit variable by 32 or more bits is undefined behavior and may cause the compiler to make demons fly out of your nose.
Seriously, most of the time the output will be 0 (if int is 32 bits or less) since you're shifting the 1 until it drops off again and nothing but 0 is left. But the compiler may optimize it to do whatever it likes.
See the excellent LLVM blog entry What Every C Programmer Should Know About Undefined Behavior, a must-read for every C developer.
Since you are bit-shifting an int by 32 bits, you'll get: warning C4293: '<<' : shift count negative or too big, undefined behavior in VS. This means that you're shifting beyond the width of the integer and the answer could be ANYTHING, because it is undefined behavior.
You could try the following. This actually gives the output as 0 after 32 left shifts.
#include <iostream>
#include <cstdio>
using namespace std;

int main()
{
    int a = 1;
    a <<= 31;
    cout << (a <<= 1);
    return 0;
}
I had the same problem and this worked for me:
f = ((long long)1 << (i-1));
Where i can be any integer greater than 32. The 1 has to be a 64-bit integer for the shifting to work.
Try using 1LL << 60. Here LL stands for long long. You can now shift by up to 62 bits.
I am currently programming a 10-bit ADC inside a 32-bit ARM9-based microcontroller in C. This 10-bit ADC saves the digitised analog value in a 10-bit register named "ADC_DATA_REG" that uses bits 9-0 (LSB). I have to read this register's value and compare it to a 32-bit constant named "CONST". My attempt looked like this, but it isn't working. What am I missing here? Should I use shift operations? This is my first time dealing with this, so any example will be welcome.
The code below has been edited following the comments and answers and is still not working. I also added a while statement which checks whether the ADC_INT_STATUS flag is raised before reading ADC_DATA_REG. The flag indicates an interrupt which is pending as soon as the ADC finishes conversion and data is ready to read from ADC_DATA_REG. It turns out data remains 0 even after assigning the value of ADC_DATA_REG to it, which is why my LED is always on. It also means I got an interrupt and there should be data in ADC_DATA_REG, yet it seems there isn't...
#define CONST 0x1FF

unsigned int data = 0;

while (!(ADC_INT_STATUS_REG & ADC_INT_STATUS))
    data = ADC_DATA_REG;

if ((data & 0x3FF) > CONST) {
    //code to turn off the LED
}
else {
    //code to turn on the LED
}
You don't write how ADC_DATA_REG fetches the 10-bit value, but I assume it is just a read from some IO address. In that case the read returns 32 bits, of which only the lower 10 are valid (or interesting). The other 22 bits can be anything (e.g. status bits, rubbish, ...), so before you proceed with the data you should zero the upper 22 bits.
In case the 10-bit value is signed, you should also perform a sign extension and correct your data type (I know the port IO is unsigned, but maybe the 10-bit value the ADC returns isn't). Then your comparison should work.
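A minimal sketch of that advice, assuming the sample is unsigned (the function name is mine, and the register read is modelled as a plain 32-bit value passed in):

```c
#include <stdint.h>

/* Extract the 10-bit conversion result from a raw 32-bit register read.
   Bits 9..0 hold the sample; the other 22 bits may contain anything
   (status bits, rubbish, ...), so they are masked off before use. */
static uint32_t adc_sample(uint32_t raw_reg)
{
    return raw_reg & 0x3FFu;
}
```

The comparison in the question then becomes `if (adc_sample(ADC_DATA_REG) > CONST)`, which is immune to whatever the upper bits of the register read happen to contain.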