C ARM programming

I have two questions about writing to registers for ARM programming in C.
1st: I'm trying to write to the Application Interrupt and Reset Control Register, or AIRCR. It's a 32-bit register. I need to write the value 0x5FA to bits 16 to 31 (the key field of the register). I also need to modify some other bits, but individually, i.e. one bit at a time (0 or 1). I know how to do this when it's only one bit, using *iser0 |= 0UL << 2; for example. But my question is: how can I write to a multi-bit part of the register, namely AIRCR[31:16], while still being able to manipulate the other bits?
2nd: This is not my main issue, but when I compile and run my C program, its return value is not 0. Is this abnormal? What has been my mistake?
#include <stdint.h>
typedef __int32 int32_t;
typedef unsigned __int32 uint32_t;
int main()
{
    //Multi drive register
    uint32_t* muer = (uint32_t*) 0x400E0E50UL;
    *muer |= 1UL << 8;
    uint32_t* mudr = (uint32_t*) 0x400E0E54UL;
    *mudr |= 0UL << 8;
    //Pio controller register
    uint32_t* per = (uint32_t*) 0x400E0E00UL;
    *per |= 1UL << 8;
    uint32_t* pdr = (uint32_t*) 0x400E0E04UL;
    *pdr |= 0UL << 8;
    //output register
    uint32_t* oer = (uint32_t*) 0x400E0E10UL;
    *oer |= 0UL << 8;
    uint32_t* odr = (uint32_t*) 0x400E0E14UL;
    *odr |= 1UL << 8;
    //edge select
    uint32_t* esr = (uint32_t*) 0x400E0EC0;
    *esr |= 1UL << 8;
    //level select
    uint32_t* lsr = (uint32_t*) 0x400E0EC4;
    *lsr |= 0UL << 8;
    //Rising edge
    uint32_t* rehlsr = (uint32_t*) 0x400E0ED4;
    *rehlsr |= 1UL << 8;
    //Falling edge
    uint32_t* fellsr = (uint32_t*) 0x400E0ED8;
    *fellsr |= 0UL << 8;
    //Interrupt set-enable register
    uint32_t* iser0 = (uint32_t*) 0xE000E100;
    *iser0 |= 1UL << 11;
    //Interrupt clear-enable register
    uint32_t* icer0 = (uint32_t*) 0xE000E180;
    *icer0 |= 0UL << 11;
    //AIRCR
    uint32_t* aircr = (uint32_t*) 0xFA050000;
    //VECTKEY
    *aircr |= 0x5FA << 16;
    //ENDIANNESS
    *iser0 |= 0UL << 15;
    //PRIGROUP
    *iser0 |= 5UL << 8;
    //SYSRESETREQ
    *iser0 |= 0UL << 2;
    //VECTCLRACTIVE
    *iser0 |= 0UL << 1;
    //VECTRESET
    *iser0 |= 0UL;
}
This is my code.
Update:
I understood that I cannot do |= 0UL; instead, I should use &= 1 << bit.
I tried this for my other code, but it still doesn't return 0, and surprisingly, it takes 10 seconds to compile.
#include <stdint.h>
int main()
{
    //Pull up register
    volatile uint32_t* puer = (uint32_t*) 0x400E0E64UL;
    *puer &= 1 << 8;
    volatile uint32_t* pudr = (uint32_t*) 0x400E0E60UL;
    *pudr |= 1UL << 8;
    //Multi drive register
    volatile uint32_t* muer = (uint32_t*) 0x400E0E50UL;
    *muer &= 1 << 8;
    volatile uint32_t* mudr = (uint32_t*) 0x400E0E54UL;
    *mudr |= 1UL << 8;
    //Pio controller register
    volatile uint32_t* per = (uint32_t*) 0x400E0E00UL;
    *per |= 1UL << 8;
    volatile uint32_t* pdr = (uint32_t*) 0x400E0E04UL;
    *pdr &= 1 << 8;
    //ABSR register
    volatile uint32_t* absr = (uint32_t*) 0x400E0E70UL;
    *absr &= 1 << 8;
    //output register
    volatile uint32_t* oer = (uint32_t*) 0x400E0E10UL;
    *oer |= 1UL << 8;
    volatile uint32_t* odr = (uint32_t*) 0x400E0E14UL;
    *odr &= 1 << 8;
}
Update:
I have not connected my microcontroller to my PC. Could one possible issue be that these addresses don't exist on my machine?

In order to clear a single bit of a variable, you don't use *reg |= 0 << bit (ORing with zero does nothing) but instead *reg &= ~(1 << bit). If you want to manipulate more than one bit, you essentially have to erase the relevant part first and then overwrite it with your desired pattern. (Note that register is a C keyword, so the pointer is called reg here.)
This, for example, will clear bits 4 to 6 and then overwrite them with the contents of value:
*reg &= ~(0x7 << 4);
*reg |= (value & 0x7) << 4;
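Applying the same erase-then-set pattern to the AIRCR from your first question looks like the sketch below. Two facts from the ARM documentation matter here: on Cortex-M parts the register lives at address 0xE000ED0C (the 0xFA050000 in your code is the VECTKEYSTAT pattern the register reads back, not its address), and any write is ignored unless the key 0x05FA is written to bits 31:16 in the same access:
volatile uint32_t* aircr = (volatile uint32_t*) 0xE000ED0CUL;
uint32_t val = *aircr;
val &= ~(0xFFFFUL << 16);   /* erase the key field, bits 31:16        */
val &= ~(7UL << 8);         /* erase PRIGROUP, bits 10:8              */
val |= (0x5FAUL << 16)      /* writing the key makes the store valid  */
     | (5UL << 8);          /* PRIGROUP = 5                           */
*aircr = val;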
As for your second question:
Never ever try to run code that accesses raw memory locations like that on a PC. This is something you only do to access memory-mapped registers on a microcontroller. Your PC doesn't have those registers, and even if it did, your operating system hides the actual physical memory from you anyway. That is also why your program's return value isn't 0: the operating system kills the process when it touches those addresses, so main() never returns normally.
DISCLAIMER:
As pointed out by Lundin, using bitfields is compiler/architecture-dependent and therefore non-portable. If you want to use them, you will have to check your compiler's documentation to make sure they work as expected. When in doubt, prefer the solution above. Continue reading at your own risk ;-)
With that out of the way, what you could also do, in order to avoid the bit manipulation is something like this:
union
{
    struct
    {
        uint32_t vectreset     : 1;
        uint32_t vectclractive : 1;
        uint32_t sysresetreq   : 1;
        uint32_t               : 5;
        uint32_t prigroup      : 3;
        uint32_t               : 4;
        uint32_t endianness    : 1;
        uint32_t vectkey       : 16; /* a named bit-field may not have width 0 */
    };
    uint32_t u32;
} volatile *aircr = (void*) 0xE000ED0C; /* documented AIRCR address on Cortex-M */
And then in your code, access the bitfield like this: (*aircr).vectkey = 0x5FA;

Related

STM32F031 - Code runs in line-by-line debugging but not otherwise. Some functions only run if defined as macros

I am writing code for the STM32F031K6T6 MCU using Keil uVision. I started a new project, selected the chip, configured the run-time environment, and set the C/C++ options for the target accordingly.
I initialized the clock and configured the Flash registers for the appropriate latency. I tested the frequency using MCO and it seems correct. I also initialized some GPIOs, the UART, and the SysTick. The peripheral registers are modified as expected, as seen in the System View for the respective peripheral in debugging mode.
The problem is that some functions, such as the functions for sending and receiving data via UART and some functions that use GPIO ports, only work in debugging mode when I run the code line by line. If I click the run button, the code gets stuck and the chip stops responding. I still see the VAL and CURRENT registers of the SysTick updating.
This is an example of a function that works:
void System_Clock_init(void){
    FLASH->ACR &= ~FLASH_ACR_LATENCY;
    FLASH->ACR |= FLASH_ACR_LATENCY | 0x01;
    RCC->CR |= RCC_CR_HSION;
    while((RCC->CR & RCC_CR_HSIRDY) == 0);
    RCC->CR &= ~RCC_CR_HSITRIM;
    RCC->CR |= 16UL << 3;
    RCC->CR &= ~RCC_CR_PLLON;
    while((RCC->CR & RCC_CR_PLLRDY) == RCC_CR_PLLRDY);
    RCC->CFGR &= ~RCC_CFGR_PLLSRC;
    RCC->CFGR |= 10UL << 18;
    RCC->CFGR &= ~RCC_CFGR_HPRE;
    RCC->CFGR &= ~RCC_CFGR_PPRE;
    RCC->CR |= RCC_CR_PLLON;
    while((RCC->CR & RCC_CR_PLLRDY) == 0);
    RCC->CFGR &= ~RCC_CFGR_SW;
    RCC->CFGR |= RCC_CFGR_SW_PLL;
    while((RCC->CFGR & RCC_CFGR_SWS) != RCC_CFGR_SWS_PLL);
}
This is an example of a function that doesn’t work:
void UV_LED_Driver(uint32_t d){
    for(uint32_t i = 0; i < 16; i++){
        if(d & (((uint32_t)0x8000) >> i)){
            SDI2_ON;
        }
        else {
            SDI2_OFF;
        }
        CLK2
    }
    LATCH2
}
The macros used in the function above are defined as below:
// CLK2 -> PA5
// LE2 -> PA4
// SDI2 -> PA6
#define CLK2_OFF GPIOA->ODR |= (1UL << 5)
#define CLK2_ON GPIOA->ODR &= ~(1UL << 5)
#define LE2_OFF GPIOA->ODR |= (1UL << 4)
#define LE2_ON GPIOA->ODR &= ~(1UL << 4)
#define SDI2_ON GPIOA->ODR &= ~(1UL << 6)
#define SDI2_OFF GPIOA->ODR |= (1UL << 6)
#define CLK2 {CLK2_ON; us_Delay(1); CLK2_OFF;}
#define LATCH2 {LE2_ON; us_Delay(1); LE2_OFF;}
The GPIO pins used in the function above are initialized as follows:
// CLK2 -> PA5
// LE2 -> PA4
// SDI2 -> PA6
void UV_LED_Driver_Init(void){
    RCC->AHBENR |= RCC_AHBENR_GPIOAEN;
    GPIOA->MODER &= ~((3UL << 8) | (3UL << 10) | (3UL << 12));
    GPIOA->MODER |= ((1UL << 8) | (1UL << 10) | (1UL << 12));
    GPIOA->OTYPER &= ~(0x70UL);
    GPIOA->PUPDR &= ~((1UL << 8) | (1UL << 10) | (1UL << 12));
    GPIOA->OSPEEDR &= ~((3UL << 8) | (3UL << 10) | (3UL << 12));
    GPIOA->OSPEEDR |= ((1UL << 8) | (1UL << 10) | (1UL << 12));
    GPIOA->ODR |= (0x70UL);
}
And the us_Delay() function is based on SysTick. These are defined as:
static uint32_t usDelay = 0;
void SysTick_init(uint32_t ticks){
    SysTick->CTRL = 0;
    SysTick->LOAD = ticks - 1;
    NVIC_SetPriority(SysTick_IRQn, (1 << __NVIC_PRIO_BITS) - 1);
    SysTick->VAL = 0;
    SysTick->CTRL |= SysTick_CTRL_CLKSOURCE_Msk;
    SysTick->CTRL |= SysTick_CTRL_TICKINT_Msk;
    SysTick->CTRL |= SysTick_CTRL_ENABLE_Msk;
}
void SysTick_Handler(void){
    if(usDelay > 0){
        usDelay--;
    }
}
void us_Delay(uint32_t us){
    usDelay = us;
    while(usDelay != 0);
}
Now, this is the same UV_LED_Driver(uint32_t d) function defined as a macro (this runs as expected):
#define UV_LED_DRIVER(d) {for(int i = 0; i<16; i++){if(d&(0x000F>>i)){SDI2_ON;}else {SDI2_OFF;}CLK2}LATCH2}
This is the main():
#include <stm32f031x6.h>
#include "clock.h"
#include "LED_Driver.h"
#include "UART.h"
int main(void){
    System_Clock_init();
    Color_LED_Driver_Init();
    UV_LED_Driver_Init();
    Nucleo_Green_LED_Init();
    UART_init();
    SysTick_init(47);
    //MCO_Init(); // Check PIN 18 (PA8) for the frequency of the MCO using an Oscilloscope
    while(1){
        UV_LED_DRIVER(~(0x0000)) // This runs well
        //UV_LED_Driver((uint32_t)~(0x0000)); // If I run this line,
        //the debugger gets stuck here. It works if I run line-by-line.
        ms_Delay(100);
        UV_LED_DRIVER(~(0xFFFF)) // This runs well
        //UV_LED_Driver((uint32_t)~(0xFFFF)); // If I run this line,
        //the debugger gets stuck here. It works if I run line-by-line.
        ms_Delay(100);
    }
}
Interestingly, if I define the functions as macros, they behave as desired. I finally tested the code on an STM32F429ZIT chip and it worked well, given the needed modifications to the initialization of the main clock and the GPIO.
Has anyone ever experienced anything similar, or happens to know what could be causing this issue? I know that I could work around this issue using CubeMX, but I would like to find out what is causing the problem.
Thank you.
I asked the same question on the ST Community forum and the user waclawek.jan answered it. The problem was that I was calling the SysTick interrupt too often, not leaving any time for main() to run. To fix the code, I just called the SysTick_init() function passing 479 as the argument instead of 47.
Thank you!
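For reference, a sketch of why the original value starved main(), given the 48 MHz core clock that System_Clock_init() sets up (HSI/2 multiplied by 12 in the PLL):
/* SysTick->LOAD = ticks - 1, so SysTick_init(47) gives a 47-cycle period:
   roughly 1 us at 48 MHz. On a Cortex-M0, exception entry and return alone
   cost about 16 + 12 cycles, so the handler eats nearly the whole period
   and execution never gets back to main(). A ~10 us tick leaves plenty of
   headroom, at the cost of us_Delay() now counting in ~10 us units. */
SysTick_init(479);   /* 479-cycle period, roughly 10 us at 48 MHz */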

How to make changes to msr 0x199 from EFI stick?

I have a MacBookPro11,3 without a battery. When the battery is removed, the firmware throttles the CPU to half speed. In Windows I can override this using ThrottleStop to turn off BD PROCHOT and set the multiplier to 25. I want to do this from EFI so that boot and updates run at normal speed.
Based on the source for rEFInd, which updates the 0x3A register, I wrote this program; while BD PROCHOT is disabled correctly after booting into Windows, the multiplier is not set.
#include "../include/tiano_includes.h"
static VOID DisablePROCHOT(VOID)
{
UINT32 msr = 0x1FC;
UINT32 low_bits = 0, high_bits = 0;
__asm__ volatile ("rdmsr" : "=a" (low_bits), "=d" (high_bits) : "c" (msr));
// lowest bit is BD PROCHOT
low_bits &= ~(1 << 0);
__asm__ volatile ("wrmsr" : : "c" (msr), "a" (low_bits), "d" (high_bits));
} // VOID DisablePROCHOT()
static VOID SetMultiplier25(VOID)
{
UINT32 msr = 0x199;
UINT32 low_bits = 0, high_bits = 0;
__asm__ volatile ("rdmsr" : "=a" (low_bits), "=d" (high_bits) : "c" (msr));
// second lowest byte is multiplier
// 25 is .... xxxxxxxx 00011001 xxxxxxxx
low_bits |= 1 << 8;
low_bits &= ~(1 << 9);
low_bits &= ~(1 << 10);
low_bits |= 1 << 11;
low_bits |= 1 << 12;
low_bits &= ~(1 << 13);
low_bits &= ~(1 << 14);
low_bits &= ~(1 << 15);
__asm__ volatile ("wrmsr" : : "c" (msr), "a" (low_bits), "d" (high_bits));
} // VOID SetMultiplier25()
EFI_STATUS
EFIAPI
efi_main (
IN EFI_HANDLE ImageHandle,
IN EFI_SYSTEM_TABLE *SystemTable
)
{
DisablePROCHOT();
SetMultiplier25();
return EFI_SUCCESS;
}
Reading the registers back with rdmsr from EFI appears to show that both are set correctly. However, when booted into Windows, bit 0 of 0x1FC is correctly switched off, but the multiplier stored in 0x199 is unchanged from the default of 12 when I expect it to be 25.
Default values: these are the values after a standard boot into Windows (from RWEverything).
Results after calling the program: the program was called from the EFI shell before calling the Windows boot loader bootmgfw.efi. 0x1FC is updated, 0x199 is not.
Updating 0x199 with RWEverything from within Windows changes the multiplier correctly, so I'm fairly sure it is the correct register.
As this is my first EFI (or C) program, I may have overlooked something trivial.
You have to create a loop and change the processor affinity each time through the loop, then do a wrmsr for each thread (CPU1, CPU2, CPU3, CPU4). In Windows you use this function:
https://learn.microsoft.com/en-us/windows/desktop/api/winbase/nf-winbase-setthreadaffinitymask
As soon as you boot up, Windows changes the values in MSR 0x199, so seeing what values are in MSR 0x199 after you boot up does not prove anything.
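An EFI program has no SetThreadAffinityMask, but the PI MP Services protocol can run a function on every logical processor. A rough sketch, assuming EDK II/Tiano headers, the usual gBS boot-services pointer, and a firmware that publishes EFI_MP_SERVICES_PROTOCOL (error handling omitted):
#include <Protocol/MpService.h>

static VOID EFIAPI WrmsrOnThisCpu(IN VOID *Context)
{
    SetMultiplier25();   // the function from the question
}

static VOID SetMultiplierAllCpus(VOID)
{
    EFI_MP_SERVICES_PROTOCOL *Mp;
    if (EFI_ERROR(gBS->LocateProtocol(&gEfiMpServiceProtocolGuid,
                                      NULL, (VOID **)&Mp)))
        return;                        // protocol not available
    SetMultiplier25();                 // bootstrap processor first
    Mp->StartupAllAPs(Mp, WrmsrOnThisCpu, FALSE,  // run on all APs in parallel
                      NULL, 0, NULL, NULL);       // block until done, no timeout
}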
To simplify things, you can do this in SetMultiplier:
low_bits = 0x1900;
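If you would rather preserve the bits outside the multiplier byte, the eight single-bit operations in the question also collapse into one masked assignment:
low_bits = (low_bits & ~0xFF00u) | (25u << 8);   /* 25 = 0x19 into bits 15:8 */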

byte order using GCC struct bit packing

I am using GCC struct bit fields in an attempt to interpret 8-byte CAN message data. I wrote a small program as an example of one possible message layout; the code and the comments should describe my problem. I assigned the 8 bytes so that all 5 signals should equal 1. As the output on an Intel PC shows, that is hardly the case. All the CAN data I deal with is big-endian, and the fact that the signals are almost never packed 8-bit aligned makes htonl() and friends useless in this case. Does anyone know of a solution?
#include <stdio.h>
#include <netinet/in.h>

typedef union
{
    unsigned char data[8];
    struct {
        unsigned int signal1 : 32;
        unsigned int signal2 : 6;
        unsigned int signal3 : 16;
        unsigned int signal4 : 8;
        unsigned int signal5 : 2;
    } __attribute__((__packed__));
} _message1;

int main()
{
    _message1 message1;
    unsigned char incoming_data[8]; //This is how this message would come in from a CAN bus for all signals == 1
    incoming_data[0] = 0x00;
    incoming_data[1] = 0x00;
    incoming_data[2] = 0x00;
    incoming_data[3] = 0x01; //bit 1 of signal 1
    incoming_data[4] = 0x04; //bit 1 of signal 2
    incoming_data[5] = 0x00;
    incoming_data[6] = 0x04; //bit 1 of signal 3
    incoming_data[7] = 0x05; //bit 1 of signal 4 and signal 5
    for(int i = 0; i < 8; ++i){
        message1.data[i] = incoming_data[i];
    }
    printf("signal1 = %x\n", message1.signal1);
    printf("signal2 = %x\n", message1.signal2);
    printf("signal3 = %x\n", message1.signal3);
    printf("signal4 = %x\n", message1.signal4);
    printf("signal5 = %x\n", message1.signal5);
}
Because struct packing order varies between compilers and architectures, the best option is to use a helper function to pack/unpack the binary data instead.
For example:
#include <stdint.h>   /* for uint32_t / uint64_t */

static inline void message1_unpack(uint32_t *fields,
                                   const unsigned char *buffer)
{
    const uint64_t data = (((uint64_t)buffer[0]) << 56)
                        | (((uint64_t)buffer[1]) << 48)
                        | (((uint64_t)buffer[2]) << 40)
                        | (((uint64_t)buffer[3]) << 32)
                        | (((uint64_t)buffer[4]) << 24)
                        | (((uint64_t)buffer[5]) << 16)
                        | (((uint64_t)buffer[6]) << 8)
                        |  ((uint64_t)buffer[7]);
    fields[0] = data >> 32;            /* Bits 32..63 */
    fields[1] = (data >> 26) & 0x3F;   /* Bits 26..31 */
    fields[2] = (data >> 10) & 0xFFFF; /* Bits 10..25 */
    fields[3] = (data >> 2) & 0xFF;    /* Bits 2..9   */
    fields[4] = data & 0x03;           /* Bits 0..1   */
}
Note that because the consecutive bytes are interpreted as a single unsigned integer (in big-endian byte order), the above will be perfectly portable.
Instead of an array of fields, you could use a structure, of course; but it does not need to have any resemblance to the on-the-wire structure at all. However, if you have several different structures to unpack, an array of (maximum-width) fields usually turns out to be easier and more robust.
All sane compilers will optimize the above code just fine. In particular, GCC with -O2 does a very good job.
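As a quick sanity check against the question's data, a short driver (assuming message1_unpack() above is in scope) reproduces the expected all-ones result:
unsigned char incoming_data[8] = { 0x00, 0x00, 0x00, 0x01,
                                   0x04, 0x00, 0x04, 0x05 };
uint32_t fields[5];
message1_unpack(fields, incoming_data);
for (int i = 0; i < 5; ++i)
    printf("signal%d = %x\n", i + 1, (unsigned)fields[i]);  /* prints 1 five times */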
The inverse, packing those same fields to a buffer, is very similar:
static inline void message1_pack(unsigned char *buffer,
                                 const uint32_t *fields)
{
    const uint64_t data = (((uint64_t)(fields[0]         )) << 32)
                        | (((uint64_t)(fields[1] & 0x3F  )) << 26)
                        | (((uint64_t)(fields[2] & 0xFFFF)) << 10)
                        | (((uint64_t)(fields[3] & 0xFF  )) << 2)
                        | ( (uint64_t)(fields[4] & 0x03  ) );
    buffer[0] = data >> 56;
    buffer[1] = data >> 48;
    buffer[2] = data >> 40;
    buffer[3] = data >> 32;
    buffer[4] = data >> 24;
    buffer[5] = data >> 16;
    buffer[6] = data >> 8;
    buffer[7] = data;
}
Note that the masks define the field lengths (0x03 = 0b11 (2 bits), 0x3F = 0b111111 (6 bits), 0xFF = 0b11111111 (8 bits), 0xFFFF = 0b1111111111111111 (16 bits)), and the shift amount depends on the bit position of the least significant bit in each field.
To verify that such functions work, pack, unpack, repack, and re-unpack a buffer that should contain all zeros except for a single field of all ones, and check that the data stays intact over the two round trips. This usually suffices to detect the typical bugs (wrong bit shift amounts, typos in the masks); see the harness sketched below.
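A minimal harness along those lines, assuming the two functions above are in scope (the field widths 32, 6, 16, 8, 2 come from the message layout):
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    static const unsigned width[5] = { 32, 6, 16, 8, 2 };
    uint32_t fields[5], check[5];
    unsigned char buf[8], buf2[8];
    for (int f = 0; f < 5; f++) {
        memset(fields, 0, sizeof fields);
        /* all ones in field f, zeros elsewhere */
        fields[f] = (width[f] < 32) ? ((1u << width[f]) - 1u) : 0xFFFFFFFFu;
        message1_pack(buf, fields);     /* pack ...        */
        message1_unpack(check, buf);    /* ... unpack ...  */
        message1_pack(buf2, check);     /* ... and repack  */
        if (memcmp(fields, check, sizeof fields) || memcmp(buf, buf2, sizeof buf))
            printf("field %d: MISMATCH\n", f);
        else
            printf("field %d: ok\n", f);
    }
    return 0;
}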
Note that documentation will be key to ensure the code remains maintainable. I'd personally add comment blocks before each of the above functions, similar to
/* message1_unpack(): Unpack 8-byte message to 5 fields:
     field[0]: Foobar. Bits 32..63.
     field[1]: Buzz.   Bits 26..31.
     field[2]: Wahwah. Bits 10..25.
     field[3]: Cheez.  Bits 2..9.
     field[4]: Blop.   Bits 0..1.
*/
with the field "names" reflecting their names in documentation.

AVR ATMega328P ADC channel selection issue

I'm tinkering around with an ATmega328P right now and wanted to read an analogue value from a pin through the ADC and simply output the value on 4 LEDs. Really simple:
#define F_CPU 20000000UL
#include <avr/io.h>
#include <avr/interrupt.h>
#include <util/delay.h>
#define BRIGHTNESS_PIN 2
#define ADC_SAMPLES 5
void init_adc()
{
    //set ADC VRef to AVCC
    ADMUX |= (1 << REFS0);
    //enable ADC and set pre-scaler to 128
    ADCSRA = (1 << ADPS0) | (1 << ADPS1) | (1 << ADPS2) | (1 << ADEN);
}

uint16_t read_adc(unsigned char channel)
{
    //clear lower 4 bits of ADMUX and select ADC channel according to argument
    ADMUX &= (0xF0);
    ADMUX |= (channel & 0x0F); //set channel, limit channel selection to lower 4 bits
    //start ADC conversion
    ADCSRA |= (1 << ADSC);
    //wait for conversion to finish
    while(!(ADCSRA & (1 << ADIF)));
    ADCSRA |= (1 << ADIF); //reset as required
    return ADC;
}
int main(void)
{
    uint32_t brightness_total;
    uint16_t brightness = 0;
    uint32_t i = 0;
    init_adc();
    sei();
    while (1)
    {
        //reset LED pins
        PORTB &= ~(1 << PINB0);
        PORTD &= ~(1 << PIND7);
        PORTD &= ~(1 << PIND6);
        PORTD &= ~(1 << PIND5);
        PORTB |= (1 << PINB1); //just blink
        read_adc(BRIGHTNESS_PIN); //first throw-away read
        //read n sample values from the ADC and average them out
        brightness_total = 0;
        for(i = 0; i < ADC_SAMPLES; ++i)
        {
            brightness_total += read_adc(BRIGHTNESS_PIN);
        }
        brightness = brightness_total / ADC_SAMPLES;
        //set pins for LEDs depending on read value.
        if(brightness > 768)
        {
            PORTB |= (1 << PINB0);
            PORTD |= (1 << PIND7);
            PORTD |= (1 << PIND6);
            PORTD |= (1 << PIND5);
        }
        else if (brightness <= 768 && brightness > 512)
        {
            PORTB |= (1 << PINB0);
            PORTD |= (1 << PIND7);
            PORTD |= (1 << PIND6);
        }
        else if (brightness <= 512 && brightness > 256)
        {
            PORTB |= (1 << PINB0);
            PORTD |= (1 << PIND7);
        }
        else if (brightness <= 256 && brightness >= 64)
        {
            PORTB |= (1 << PINB0);
        }
        _delay_ms(500);
        PORTB &= ~(1 << PINB1); //just blink
        _delay_ms(500);
    }
}
This works kind of fine, except for the channel selection. When I select a channel, that channel works fine; but independently of the selected channel, channel 0 also always reads and converts. What I mean by that is: if I plug the cable into the selected channel's pin, it reads the values correctly. When I plug it into any other channel's pin, it obviously doesn't, except for ADC0. No matter what channel I set, not only does that channel read but so does ADC0.
Why is that, and how do I fix it?
I already checked my PCB for solder bridges, but there are none, and I would also expect somewhat different behaviour if there were.
Also, ADC4 and ADC5 don't seem to convert properly either. Any idea why that is? The only clue I found in the datasheet is that those two use digital power, while all the other ADC channels use analogue power. What's the difference, why does it matter, and why does it not correctly convert my analogue signal?
Both ARef and AVCC are connected according to the datasheet, with the exception that the inductor for ARef is missing.
I think what is happening is that
ADMUX &= (0xF0);
is setting the channel to 0, and
ADMUX |= (channel & 0x0F);
is then setting the channel to the one you want. You're then taking a reading and throwing the result away, which should mean that the initial channel selection of 0 doesn't matter.
However, when you then take an actual reading, you set the channel again, because you use read_adc to take the reading. So you never really throw a reading away.
What I would do is replace your ADMUX setting commands with:
ADMUX = (ADMUX & 0xF0) | (channel & 0x0F); /* keep the reference bits, replace only the channel */
Then move this into a separate function called something like set_adc_channel(int channel). Include a throw-away read in that function, then remove the ADMUX setting from your read_adc function: just start a conversion and get the result.
Also note that since you only ever use one channel, you could move the channel setting into init_adc(). I assume it's in a separate function so that you can later read more than one channel.
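Here is a rough sketch of that split, reusing the register operations already in the question (the function names are illustrative only):
void set_adc_channel(unsigned char channel)
{
    //keep REFS1:0 and ADLAR, replace only the channel bits
    ADMUX = (ADMUX & 0xF0) | (channel & 0x0F);
    //one throw-away conversion so the new channel settles
    ADCSRA |= (1 << ADSC);
    while(!(ADCSRA & (1 << ADIF)));
    ADCSRA |= (1 << ADIF);   //ADIF is cleared by writing a 1
    (void)ADC;
}

uint16_t read_adc(void)
{
    //channel is already selected; just convert and read
    ADCSRA |= (1 << ADSC);
    while(!(ADCSRA & (1 << ADIF)));
    ADCSRA |= (1 << ADIF);
    return ADC;
}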
I hope that's clear. Let me know if not.
EDIT: As you stated, ADIF really is reset by writing a logic 1.
I've just tested your read_adc function and it is working for me (if you don't mind the Arduino mixture):
uint16_t read_adc(unsigned char channel)
{
    //clear lower 4 bits of ADMUX and select ADC channel according to argument
    ADMUX &= (0xF0);
    ADMUX |= (channel & 0x0F); //set channel, limit channel selection to lower 4 bits
    //start ADC conversion
    ADCSRA |= (1 << ADSC);
    //wait for conversion to finish
    while(!(ADCSRA & (1 << ADIF)));
    ADCSRA |= (1 << ADIF); //reset as required
    return ADC;
}

void setup() {
    Serial.begin(57600);
    //set ADC VRef to AVCC
    ADMUX |= (1 << REFS0);
    //enable ADC and set pre-scaler to 128
    ADCSRA = (1 << ADPS0) | (1 << ADPS1) | (1 << ADPS2) | (1 << ADEN);
    pinMode(A0, INPUT_PULLUP);
    pinMode(A1, INPUT_PULLUP);
    pinMode(A2, INPUT_PULLUP);
    pinMode(A3, INPUT_PULLUP);
}

void loop() {
    Serial.println(read_adc(0));
    Serial.println(read_adc(1));
    Serial.println(read_adc(2));
    Serial.println(read_adc(3));
    delay(1000);
}
When I connect one of these channels to the 3.3 V pin, it reads about 713 on that channel, while the other channels are pulled up to levels of about 1017.

Execution / Timing difference depending upon statement styles

How would the two statement styles below differ with respect to timing / execution?
I am working on an AT91SAM7X512 device.
We were able to resolve a troublesome bug by changing the assignment style as shown below.
I am using IAR Embedded Workbench version 4.41A. Is this happening due to some compiler directive or some other reason?
AT91C_BASE_PIOA->PIO_PER |= (((unsigned int)1<<12) | ((unsigned int)1<<13));
AT91C_BASE_PIOA->PIO_ODR |= (((unsigned int)1<<12) | ((unsigned int)1<<13));
AT91C_BASE_PIOA->PIO_IFER |= (((unsigned int)1<<12) | ((unsigned int)1<<13));
MARK1.occurrence = 0;
MARK2.occurrence = 0;
AT91C_BASE_PIOA->PIO_PER |= ((unsigned int)1<<12);
AT91C_BASE_PIOA->PIO_ODR |= ((unsigned int)1<<12);
AT91C_BASE_PIOA->PIO_IFER |= ((unsigned int)1<<12);
MARK1.occurrence = 0;
AT91C_BASE_PIOA->PIO_PER |= ((unsigned int)1<<13);
AT91C_BASE_PIOA->PIO_ODR |= ((unsigned int)1<<13);
AT91C_BASE_PIOA->PIO_IFER |= ((unsigned int)1<<13);
MARK2.occurrence = 0;
Would this have anything to do with the way the stack is handled or the number of instructions? I am comparatively new to processors and need help with this.
