I am new to the STM32F107 processor. I have to read the input value from an external source, a balance (weighing scale). The balance is external to the board that contains the processor and communicates with it via PA4.
What do I have to do to read the analogue input from the balance?
Here is my first attempt to read the input from the balance.
I use this function to setup the ADC:
void ADC_Configuration(void) {
ADC_InitTypeDef ADC_InitStructure;
/* PCLK2 is the APB2 clock */
/* ADCCLK = PCLK2/6 = 72/6 = 12MHz*/
RCC_ADCCLKConfig(RCC_PCLK2_Div6);
/* Enable ADC1 clock so that we can talk to it */
RCC_APB2PeriphClockCmd(RCC_APB2Periph_ADC1, ENABLE);
/* Put everything back to power-on defaults */
ADC_DeInit(ADC1);
/* ADC1 Configuration ------------------------------------------------------*/
/* ADC1 and ADC2 operate independently */
ADC_InitStructure.ADC_Mode = ADC_Mode_Independent;
/* Disable the scan conversion so we do one at a time */
ADC_InitStructure.ADC_ScanConvMode = DISABLE;
/* Don't do continuous conversions - do them on demand */
ADC_InitStructure.ADC_ContinuousConvMode = DISABLE;
/* Start conversion by software, not an external trigger */
ADC_InitStructure.ADC_ExternalTrigConv = ADC_ExternalTrigConv_None;
/* Conversions are 12 bit - put them in the lower 12 bits of the result */
ADC_InitStructure.ADC_DataAlign = ADC_DataAlign_Right;
/* Say how many channels would be used by the sequencer */
ADC_InitStructure.ADC_NbrOfChannel = 1;
/* Now do the setup */
ADC_Init(ADC1, &ADC_InitStructure);
/* Enable ADC1 */
ADC_Cmd(ADC1, ENABLE);
/* Enable ADC1 reset calibration register */
ADC_ResetCalibration(ADC1);
/* Check the end of ADC1 reset calibration register */
while(ADC_GetResetCalibrationStatus(ADC1));
/* Start ADC1 calibration */
ADC_StartCalibration(ADC1);
/* Check the end of ADC1 calibration */
while(ADC_GetCalibrationStatus(ADC1));
}
And I use this function to get the input:
u16 readADC1(u8 channel) {
ADC_RegularChannelConfig(ADC1, channel, 1, ADC_SampleTime_1Cycles5);
// Start the conversion
ADC_SoftwareStartConvCmd(ADC1, ENABLE);
// Wait until conversion completion
while(ADC_GetFlagStatus(ADC1, ADC_FLAG_EOC) == RESET);
// Get the conversion value
return ADC_GetConversionValue(ADC1);
}
The problem is that in N measurements of the same weight, I get N different results.
For example, the weight is 70kg and the output of readADC1(ADC_Channel_4) is 715, 760, 748, 711, 759, etc.
What am I doing wrong?
The balance has a load cell inside it, which generates an analog voltage. The processor inside the balance is apparently not being used (I assume this, since not many details are given in the question).
The STM32F107 has an on-chip ADC (analog-to-digital converter). Connect the output of the load cell (the analog signal coming from the balance) to an analog input pin of the STM32F107, and configure the ADC to sample and convert the signal to digital (use the example code as a reference when writing the software).
PA4 is multiplexed with ADC12_IN4 (an analogue input that can itself be mapped to channel 4 on either ADC1 or ADC2).
Programming the ADC, selecting the correct peripheral clocking and mapping multiplexed pins on the STM32 is somewhat involved, so I strongly suggest that you utilise the STM32F10x Standard Peripheral Library, which provides an API for all STM32F10x peripherals as well as numerous examples of how to use the library, including ADC examples.
The ADC itself may be polled, interrupt driven, use DMA, and be software triggered or free-running, self-clocked or clocked from a timer peripheral. The options are numerous, only some combinations are covered by the example code, but it is a good place to start nonetheless. To understand all the options and how to use them with the Standard Peripheral Library, you will need a fairly thorough understanding of the Reference Manual.
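For the software-triggered, polled case, the readADC1() in the question is already essentially the right shape. As a rough sketch (the helper names are mine, and the 239.5-cycle sample time and the 16-sample average are arbitrary illustrative choices, not requirements), a polled conversion of channel 4 with a longer sample time, averaged over a few readings, could look like this with the Standard Peripheral Library:
/* Sketch: read ADC1 channel 4 (PA4) once, then average several readings.
   Assumes ADC1 has already been configured and calibrated as in the question. */
static u16 adc1_read_once(u8 channel)
{
    /* A longer sample time gives the sample-and-hold capacitor more time to settle. */
    ADC_RegularChannelConfig(ADC1, channel, 1, ADC_SampleTime_239Cycles5);
    ADC_SoftwareStartConvCmd(ADC1, ENABLE);
    while (ADC_GetFlagStatus(ADC1, ADC_FLAG_EOC) == RESET)
        ;
    return ADC_GetConversionValue(ADC1);
}

static u16 adc1_read_averaged(u8 channel)
{
    u32 sum = 0;
    int i;
    for (i = 0; i < 16; i++)      /* 16 samples, then take the mean */
        sum += adc1_read_once(channel);
    return (u16)(sum / 16);
}

/* Usage: u16 raw = adc1_read_averaged(ADC_Channel_4); */
A longer sample time, some averaging, and a reasonably low source impedance at the pin are the usual first things to try when repeated conversions of a static signal jitter by tens of counts, as in the question.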
Another resource you may find useful is STM's MicroXplorer. This allows visual configuration and allocation of multiplexed pins and generates source code you can use directly.
Furthermore you may need some hardware signal conditioning at the input to ensure that the input is within the valid and tolerable range of the ADC input.
I have been trying to get SPI master transmit to work using DMA and STM32 LL drivers, on STM32G030C8.
I did get the SPI to work with LL drivers without DMA, so I believe that at least my wiring is correct.
What I have done:
Set SPI to use DMA in CubeMX by setting the SPI1_TX request to DMA1 channel 1
Set up the transmit in code:
main.c
#include "main.h"
#include "dma.h"
#include "gpio.h"
#include "spi.h"
uint8_t test_data[8] = {0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF};
void SystemClock_Config(void);
int main(void) {
HAL_Init();
SystemClock_Config();
MX_GPIO_Init();
MX_SPI1_Init();
MX_DMA_Init();
while (1) {
LL_DMA_ConfigAddresses(DMA1, LL_DMA_CHANNEL_1, (uint32_t)(&test_data[0]),
(uint32_t)LL_SPI_DMA_GetRegAddr(SPI1),
LL_DMA_GetDataTransferDirection(DMA1, LL_DMA_CHANNEL_1));
LL_DMA_SetDataLength(DMA1, LL_DMA_CHANNEL_1, 8);
LL_SPI_EnableDMAReq_TX(SPI1);
LL_SPI_Enable(SPI1);
LL_DMA_EnableChannel(DMA1, LL_DMA_CHANNEL_1);
HAL_Delay(1000);
HAL_GPIO_TogglePin(STATUS_LED_GPIO_Port, STATUS_LED_Pin);
}
}
spi.c:
#include "spi.h"
void MX_SPI1_Init(void)
{
LL_SPI_InitTypeDef SPI_InitStruct = {0};
LL_GPIO_InitTypeDef GPIO_InitStruct = {0};
GPIO_InitStruct.Pin = LL_GPIO_PIN_1;
GPIO_InitStruct.Mode = LL_GPIO_MODE_ALTERNATE;
GPIO_InitStruct.Speed = LL_GPIO_SPEED_FREQ_LOW;
GPIO_InitStruct.OutputType = LL_GPIO_OUTPUT_PUSHPULL;
GPIO_InitStruct.Pull = LL_GPIO_PULL_NO;
GPIO_InitStruct.Alternate = LL_GPIO_AF_0;
LL_GPIO_Init(GPIOA, &GPIO_InitStruct);
GPIO_InitStruct.Pin = LL_GPIO_PIN_2;
GPIO_InitStruct.Mode = LL_GPIO_MODE_ALTERNATE;
GPIO_InitStruct.Speed = LL_GPIO_SPEED_FREQ_LOW;
GPIO_InitStruct.OutputType = LL_GPIO_OUTPUT_PUSHPULL;
GPIO_InitStruct.Pull = LL_GPIO_PULL_NO;
GPIO_InitStruct.Alternate = LL_GPIO_AF_0;
LL_GPIO_Init(GPIOA, &GPIO_InitStruct);
LL_DMA_SetPeriphRequest(DMA1, LL_DMA_CHANNEL_1, LL_DMAMUX_REQ_SPI1_TX);
LL_DMA_SetDataTransferDirection(DMA1, LL_DMA_CHANNEL_1, LL_DMA_DIRECTION_MEMORY_TO_PERIPH);
LL_DMA_SetChannelPriorityLevel(DMA1, LL_DMA_CHANNEL_1, LL_DMA_PRIORITY_LOW);
LL_DMA_SetMode(DMA1, LL_DMA_CHANNEL_1, LL_DMA_MODE_NORMAL);
LL_DMA_SetPeriphIncMode(DMA1, LL_DMA_CHANNEL_1, LL_DMA_PERIPH_NOINCREMENT);
LL_DMA_SetMemoryIncMode(DMA1, LL_DMA_CHANNEL_1, LL_DMA_MEMORY_INCREMENT);
LL_DMA_SetPeriphSize(DMA1, LL_DMA_CHANNEL_1, LL_DMA_PDATAALIGN_BYTE);
LL_DMA_SetMemorySize(DMA1, LL_DMA_CHANNEL_1, LL_DMA_MDATAALIGN_BYTE);
NVIC_SetPriority(SPI1_IRQn, 0);
NVIC_EnableIRQ(SPI1_IRQn);
SPI_InitStruct.TransferDirection = LL_SPI_FULL_DUPLEX;
SPI_InitStruct.Mode = LL_SPI_MODE_MASTER;
SPI_InitStruct.DataWidth = LL_SPI_DATAWIDTH_8BIT;
SPI_InitStruct.ClockPolarity = LL_SPI_POLARITY_LOW;
SPI_InitStruct.ClockPhase = LL_SPI_PHASE_1EDGE;
SPI_InitStruct.NSS = LL_SPI_NSS_SOFT;
SPI_InitStruct.BaudRate = LL_SPI_BAUDRATEPRESCALER_DIV4;
SPI_InitStruct.BitOrder = LL_SPI_MSB_FIRST;
SPI_InitStruct.CRCCalculation = LL_SPI_CRCCALCULATION_DISABLE;
SPI_InitStruct.CRCPoly = 7;
LL_SPI_Init(SPI1, &SPI_InitStruct);
LL_SPI_SetStandard(SPI1, LL_SPI_PROTOCOL_MOTOROLA);
LL_SPI_DisableNSSPulseMgt(SPI1);
}
dma.c:
#include "dma.h"
void MX_DMA_Init(void)
{
LL_AHB1_GRP1_EnableClock(LL_AHB1_GRP1_PERIPH_DMA1);
NVIC_SetPriority(DMA1_Channel1_IRQn, 0);
NVIC_EnableIRQ(DMA1_Channel1_IRQn);
}
from the reference manual I found the following steps for DMA configuration:
Channel configuration procedure
The following sequence is needed to configure a DMA channel x:
1. Set the peripheral register address in the DMA_CPARx register. The data is moved from/to this address to/from the memory after the peripheral event, or after the channel is enabled in memory-to-memory mode.
2. Set the memory address in the DMA_CMARx register. The data is written to/read from the memory after the peripheral event or after the channel is enabled in memory-to-memory mode.
3. Configure the total number of data to transfer in the DMA_CNDTRx register. After each data transfer, this value is decremented.
4. Configure the parameters listed below in the DMA_CCRx register:
– the channel priority
– the data transfer direction
– the circular mode
– the peripheral and memory incremented mode
– the peripheral and memory data size
– the interrupt enable at half and/or full transfer and/or transfer error
5. Activate the channel by setting the EN bit in the DMA_CCRx register.
A channel, as soon as enabled, may serve any DMA request from the peripheral connected
to this channel, or may start a memory-to-memory block transfer.
To my understanding, steps 1, 2, 3 and 5 are done in main.c, and step 4 in spi.c.
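As a cross-check, here is a minimal sketch (my own annotation, not generated code) of how those five steps line up with the LL calls used above, assuming DMA1 channel 1 feeding SPI1 TX:
/* Steps 1 and 2: peripheral and memory addresses (done in main.c) */
LL_DMA_ConfigAddresses(DMA1, LL_DMA_CHANNEL_1,
                       (uint32_t)&test_data[0],                /* memory address     -> DMA_CMARx */
                       (uint32_t)LL_SPI_DMA_GetRegAddr(SPI1),  /* peripheral address -> DMA_CPARx */
                       LL_DMA_DIRECTION_MEMORY_TO_PERIPH);
/* Step 3: number of data items to transfer (done in main.c) */
LL_DMA_SetDataLength(DMA1, LL_DMA_CHANNEL_1, 8);               /* DMA_CNDTRx */
/* Step 4: priority, direction, increment modes, data sizes (done in MX_SPI1_Init in spi.c) */
LL_DMA_SetDataTransferDirection(DMA1, LL_DMA_CHANNEL_1, LL_DMA_DIRECTION_MEMORY_TO_PERIPH);
LL_DMA_SetMemoryIncMode(DMA1, LL_DMA_CHANNEL_1, LL_DMA_MEMORY_INCREMENT);
LL_DMA_SetMemorySize(DMA1, LL_DMA_CHANNEL_1, LL_DMA_MDATAALIGN_BYTE);
LL_DMA_SetPeriphSize(DMA1, LL_DMA_CHANNEL_1, LL_DMA_PDATAALIGN_BYTE);
/* Step 5: enable the channel (done in main.c) */
LL_DMA_EnableChannel(DMA1, LL_DMA_CHANNEL_1);                  /* EN bit in DMA_CCRx */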
Also in the reference manual about SPI and DMA:
A DMA access is requested when the TXE or RXNE enable bit in the SPIx_CR2 register is
set. Separate requests must be issued to the Tx and Rx buffers.
-In transmission, a DMA request is issued each time TXE is set to 1. The DMA then
writes to the SPIx_DR register
and
When starting communication using DMA, to prevent DMA channel management raising
error events, these steps must be followed in order:
1. Enable DMA Rx buffer in the RXDMAEN bit in the SPI_CR2 register, if DMA Rx is used.
2. Enable DMA streams for Tx and Rx in DMA registers, if the streams are used.
3. Enable DMA Tx buffer in the TXDMAEN bit in the SPI_CR2 register, if DMA Tx is used.
4. Enable the SPI by setting the SPE bit.
In my understanding I have done all the steps, but I cannot see anything with my oscilloscope attached to SPI1 lines.
I must be missing something (or something is done in wrong order) but I cannot figure out what is wrong.
In some other questions the problem has been that the chosen DMA channel was wrong and did not support SPI, but on this MCU, if I understand correctly, the DMAMUX takes care of that and any request should be routable to any DMA channel (this is configured in spi.c)?
EDIT:
Reading flags from SPI and DMA:
LL_SPI_IsActiveFlag_BSY(SPI1) returns 0
LL_SPI_IsEnabledDMAReq_TX(SPI1) returns 1
LL_SPI_IsEnabled(SPI1) returns 1
LL_DMA_IsEnabledChannel(DMA1, LL_DMA_CHANNEL_1) returns 1
LL_DMA_IsActiveFlag_TE1(DMA1) returns 0
LL_SPI_IsActiveFlag_TXE(SPI1) returns 1
So, everything seems to be enabled, no errors but no data is transferred!
Any help is appreciated!
Thanks!
After some time debugging I found there is a bug in the STM32CubeMX code generator (this also seems to have been reported already: https://community.st.com/s/question/0D53W00001HJ3EhSAL/wrong-initialization-sequence-with-cubemx-in-cubeide-when-using-i2s-with-dma?t=1641156995520&searchQuery).
When SPI and DMA are both selected, the generator first initializes SPI and then DMA:
MX_SPIx_Init();
MX_DMA_Init();
The problem is that the SPI initialization tries to set DMA registers, but the DMA clock is not yet enabled so the values are not saved.
This is also the reason I got it working with USART, the USART initialization came after the DMA Initialization.
So the simple solution is to move the DMA initialization before the other peripherals.
But since this code is regenerated automatically every time you make changes in CubeMX, the reordering will be lost.
To get around this, I used CubeMX to disable the automatic initialization calls for these peripherals (Project Manager tab, Advanced Settings).
I then called the initialization functions manually in the user code section, so it now looks like this:
/* Initialize all configured peripherals */
MX_GPIO_Init();
MX_DMA_Init();
/* USER CODE BEGIN 2 */
MX_SPI1_Init();
MX_USART1_UART_Init();
/* USER CODE END 2 */
I am trying to implement my own SPI communication from an FPGA to an STM32, in which the FPGA serves as the master and generates the chip enable and clock for the communication. The FPGA transmits data on the rising edge and receives data on the falling edge; my FPGA code works properly.
On the STM32 side I capture this master clock with interrupts and receive data on its rising edge and transmit on its falling edge, but the communication stops working properly if I increase the clock speed beyond 250 kHz.
As I understand it the STM32 runs at 168 MHz (I have set the clock configuration accordingly), so handling a 1 MHz interrupt should not be a big problem. Can anyone guide me on how to handle this high-speed clock on the STM32?
My code is written below
/*
* Project name:
EXTI_interrupt (EXTI interrupt test)
* Copyright:
(c) Mikroelektronika, 2011.
* Revision History:
20111226:
- Initial release;
* Description:
This code demonstrates how to use External Interrupt on PD10.
PD10 is external interrupt pin for click1 socket.
receive data from mosi line in each rising edge.
* Test configuration:
MCU: STM32F407VG
http://www.st.com/st-web-ui/static/active/en/resource/technical/document/datasheet/DM00037051.pdf
dev.board: EasyMX PRO for STM32
http://www.mikroe.com/easymx-pro/stm32/
Oscillator: HSI-PLL, 140.000MHz
Ext. Modules: -
SW: mikroC PRO for ARM
http://www.mikroe.com/mikroc/arm/
* NOTES:
receive 32 bit data from mosi line in each rising edge
*/
//D10 clk
//D2 ss
//C0 MOSI
//C1 FLAG
int read=0;
int flag_int=0;
int val=0;
int rec_data[32];
int index_rec=0;
int display_index=0;
int flag_dint=0;
void ExtInt() iv IVT_INT_EXTI15_10 ics ICS_AUTO {
EXTI_PR.B10 = 1; // clear flag
flag_int=1; //Flag on interrupt
}
void main() {
TFT_Init_ILI9340();                                  // Initialise the TFT display
GPIO_Digital_Input(&GPIOD_BASE, _GPIO_PINMASK_10);   // PD10 as digital input (clock from FPGA)
GPIO_Digital_Output(&GPIOD_BASE, _GPIO_PINMASK_13);  // PD13 as digital output
GPIO_Digital_Output(&GPIOD_BASE, _GPIO_PINMASK_12);  // PD12 as digital output
GPIO_Digital_Output(&GPIOD_BASE, _GPIO_PINMASK_14);  // PD14 as digital output
GPIO_Digital_Output(&GPIOD_BASE, _GPIO_PINMASK_15);  // PD15 as digital output
GPIO_Digital_Input(&GPIOA_IDR, _GPIO_PINMASK_0);     // PA0 as digital input (button)
GPIO_Digital_Input(&GPIOC_IDR, _GPIO_PINMASK_0);     // PC0 as digital input (MOSI)
GPIO_Digital_Input(&GPIOC_IDR, _GPIO_PINMASK_2);     // PC2 as digital input
GPIO_Digital_Output(&GPIOC_IDR, _GPIO_PINMASK_1);    // PC1 as digital output (flag to FPGA)
// Interrupt registers
SYSCFGEN_bit = 1;                // Enable clock for alternate pin functions
SYSCFG_EXTICR3 = 0x00000300;     // Map external interrupt line 10 to PD10
EXTI_RTSR = 0x00000000;          // No trigger on the rising edge
EXTI_FTSR = 0x00000400;          // Trigger on the falling edge (line 10 = PD10)
EXTI_IMR |= 0x00000400;          // Unmask EXTI line 10
//NVIC_IntEnable(IVT_INT_EXTI15_10); // Enable External interrupt
while(1)
{
//interrupt is not enabled until I push the button
if((GPIOD_ODR.B2==0)&&(flag_dint==0))
{ if (Button(&GPIOA_IDR, 0, 1, 1))
{
Delay_ms(100);
GPIOC_ODR.B1=1; //Status for FPGA
NVIC_IntEnable(IVT_INT_EXTI15_10); // Enable External interrupt
}
}
if(flag_int==1)
{
//functionality on rising edge
flag_int=0;
if(index_rec<31)
{
//display data on led
GPIOD_ODR.B13= GPIOC_IDR.B0;
//save data in an array
rec_data[index_rec]= GPIOC_IDR.B0;
//read data
index_rec=index_rec+1;
}
else
{
flag_dint=1;
NVIC_IntDisable(IVT_INT_EXTI15_10);
}
} // Infinite loop
}
}
Without getting into your code specifics (see PeterJ_01's comment), the clock-rate problem can be explained by a misunderstanding of throughput in your assumptions.
You assume that because your STM device has a clock of 168 MHz it can sustain the same throughput of interrupts, which you seem to have conservatively relaxed to 1 MHz.
However, the interrupt throughput it can support is given by the inverse of the time it takes the device to process each interrupt. This time includes both the time the processor takes to enter the service routine (i.e. detect the interrupt, interrupt the current code and resolve from the vector table where to jump to) plus the time taken to execute the service routine itself.
Let's be super optimistic and say that entering the routine takes 1 cycle and the routine itself takes 3 (2 for the flags you set and 1 for the jump out of the routine). That gives 4 cycles, which at 168 MHz is 23.81 ns; taking the inverse gives 42 MHz. The same figure can be obtained by dividing the maximum frequency you could achieve (168 MHz) by the number of cycles spent processing.
Hence our really optimistic bound is 42 MHz, but realistically it will be lower. For a more accurate estimate you should measure your implementation's timing and dig into your device's documentation for interrupt response times.
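For a slightly less optimistic, but still purely illustrative, version of that back-of-envelope calculation (all cycle counts below are assumptions, not measurements for this part):
#include <stdint.h>
#include <stdio.h>

/* Back-of-envelope interrupt throughput estimate with assumed cycle counts. */
int main(void)
{
    const uint32_t f_cpu = 168000000u;  /* core clock in Hz                    */
    const uint32_t entry = 12u;         /* assumed exception-entry cycles      */
    const uint32_t exit_ = 10u;         /* assumed exception-return cycles     */
    const uint32_t body  = 40u;         /* assumed cycles for the EXTI handler */

    const uint32_t per_interrupt = entry + body + exit_;
    printf("max sustainable interrupt rate ~ %lu Hz\n",
           (unsigned long)(f_cpu / per_interrupt));  /* roughly 2.7 MHz with these guesses */
    return 0;
}
Whether the real handler fits within such a budget is exactly what measuring your implementation, as suggested above, would tell you.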
I'm using an STM32 Nucleo-F401RE board with FreeRTOS. I've used FreeRTOS a bit recently, but I have never really used I2C.
Trying to setup a simple LCD display using the I2C drivers that CubeMX generates.
So far it only sends about half the data I request to send.
I've tested it with the simple "show firmware" command and this works. So I can verify that the I2C instance is setup correctly.
/* I2C1 init function */
static void MX_I2C1_Init(void)
{
hi2c1.Instance = I2C1;
hi2c1.Init.ClockSpeed = 100000;
hi2c1.Init.DutyCycle = I2C_DUTYCYCLE_2;
hi2c1.Init.OwnAddress1 = 0;
hi2c1.Init.AddressingMode = I2C_ADDRESSINGMODE_7BIT;
hi2c1.Init.DualAddressMode = I2C_DUALADDRESS_DISABLE;
hi2c1.Init.OwnAddress2 = 0;
hi2c1.Init.GeneralCallMode = I2C_GENERALCALL_DISABLE;
hi2c1.Init.NoStretchMode = I2C_NOSTRETCH_DISABLE;
if (HAL_I2C_Init(&hi2c1) != HAL_OK)
{
Error_Handler();
}
}
It has internal pull-up resistors set as well.
/* vdebugTask function */
void vdebugTask(void const * argument)
{
/* USER CODE BEGIN vdebugTask */
/* Infinite loop */
for(;;)
{
char hi[6];
hi[0] = 'h';
hi[1] = 'e';
hi[2] = 'y';
hi[3] = 'a';
hi[4] = 'h';
hi[5] = '!';
HAL_I2C_Master_Transmit_DMA(&hi2c1, (0x50), hi, 6);
vTaskDelay(10);
}
/* USER CODE END vdebugTask */
}
This is the code I am trying to run; I have not changed the HAL function at all. I don't think it could be any simpler than this, yet this is all that happens.
I followed the timing constraints in the data sheet for the LCD, and the CubeMX software didn't warn or state anywhere that their I2C drivers had any special requirements. Am I doing something wrong with the program?
I have also tried using the non-DMA blocking mode polling transfer function that was also created by CubeMX
HAL_I2C_Master_Transmit(&hi2c1, (0x50), hi, 6, 1000);
This is even worse and just continuously spams the screen with unintelligible text.
The solution was to turn down the I2C clock in the initialization block. Although the STM could handle it, the data sheet for the LCD stated it could handle only up to 10kHz.
For the DMA there is an IRQ that must be enabled and setup in the CubeMX software that will enable DMA TX/RX lines.
Note that the clock must still adhere to the hardware limitations. I assume that roughly 20% below the maximum stated in the data sheet will suffice; for my LCD this means I will be using 80kHz.
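In practice that is a one-field change in the CubeMX-generated MX_I2C1_Init() shown in the question; a sketch (the actual value must come from your display's data sheet, the number below is only a placeholder):
/* In MX_I2C1_Init(): choose an I2C clock the LCD can actually follow. */
hi2c1.Init.ClockSpeed = 10000;   /* was 100000 in the generated code; use your LCD's limit minus some margin */
Since CubeMX regenerates this function, the cleaner route is to change the speed in CubeMX's I2C configuration panel rather than editing the generated code.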
First go to the peripheral configuration in CubeMX, and then click on DMA to set up a DMA request. I've only selected TX, as I don't care about an RX DMA from the LCD.
Working on a simple ADC project reading voltages from a potential divider on a number of channels. I am using the STM32f103rb on a commercial and well made PCB. I am reading pins on PORTA and the following pins work and return a voltage:
PA0, PA1, PA4, PA5, PA6, PA7.
However, the following pins do not work and return a raw value of approximately 2000-2700:
PA8, PA11 and PA12.
The nature of the project and the fact that the PCB is of a fixed design means that we are stuck with these pin selections. The datasheet is quite specific about these pins being usable as AIN. All set-up and config is as per standard STM32, taken from basic example code and modified for our purposes. The included code is a debugging version we made in an attempt to find the cause but to no avail.
Voltage at the pins has been measured and is correct for the type of voltage divider.
Any help would be greatly appreciated on this.
/* Includes ------------------------------------------------------------------*/
#include "main.h"
#include "stdio.h"
#include "stdlib.h"
// Standard STM peripheral config functions
void RCC_Configuration(void);
void GPIO_Configuration(void);
// ADC config - STM example code
void NEW_ADC_Configuration(void);
// Function to read ADC channel and output value
u16 NEW_readADC1(u8 channel);
// Variables
double voltage_su; // Variable to store supply voltage
double voltage7;
double voltage8;
//*****************************************************************************
// Main program
int main(void)
{
// Initialise peripheral modules
RCC_Configuration();
GPIO_Configuration();
NEW_ADC_Configuration();
// Infinite loop
while (1)
{
// Get value of supply voltage and convert
voltage_su = (((3.3 * NEW_readADC1(ADC_Channel_12)) / 4095));
}
}
//*****************************************************************************
// STM RCC config
void RCC_Configuration(void)
{
RCC_APB2PeriphClockCmd(RCC_APB2Periph_GPIOC, ENABLE); // Connect PORTC
RCC_APB2PeriphClockCmd(RCC_APB2Periph_GPIOB, ENABLE); // Connect PORTB...
RCC_APB2PeriphClockCmd(RCC_APB2Periph_GPIOA, ENABLE); // Connect PORTA...
}
//*****************************************************************************
// STM GPIO config
void GPIO_Configuration(void)
{
GPIO_InitTypeDef GPIO_InitStructure; // Value for setting up the pins
// Sup
GPIO_InitStructure.GPIO_Pin = GPIO_Pin_12;
GPIO_InitStructure.GPIO_Mode = GPIO_Mode_AIN;
GPIO_InitStructure.GPIO_Speed = GPIO_Speed_50MHz;
GPIO_Init(GPIOA, &GPIO_InitStructure);
}
//*****************************************************************************
void NEW_ADC_Configuration(void)
{
GPIO_InitTypeDef GPIO_InitStructure;
ADC_InitTypeDef ADC_InitStructure; // Variable used to set up the ADC
RCC_ADCCLKConfig(RCC_PCLK2_Div4); // Set ADC clock to /4
// Enable ADC 1 so we can use it
RCC_APB2PeriphClockCmd(RCC_APB2Periph_ADC1, ENABLE);
// Connect port A to the peripheral clock
RCC_APB2PeriphClockCmd(RCC_APB2Periph_GPIOA, ENABLE);
// Restore to defaults
ADC_DeInit(ADC1);
/* ADC1 Configuration ----------------------------------------------------*/
ADC_InitStructure.ADC_Mode = ADC_Mode_Independent;
// Scan 1 at a time
ADC_InitStructure.ADC_ScanConvMode = DISABLE;
// No continuous conversions - do them on demand
ADC_InitStructure.ADC_ContinuousConvMode = DISABLE;
// Start conversion on software request - not by external trigger
ADC_InitStructure.ADC_ExternalTrigConv = ADC_ExternalTrigConv_None;
// Conversions are 12 bit - put them in the lower 12 bits of the result
ADC_InitStructure.ADC_DataAlign = ADC_DataAlign_Right;
// How many channels are to be used by the sequencer
ADC_InitStructure.ADC_NbrOfChannel = 1;
// Setup ADC
ADC_Init(ADC1, &ADC_InitStructure);
// Enable ADC 1
ADC_Cmd(ADC1, ENABLE);
// Enable ADC1 reset calibration register
ADC_ResetCalibration(ADC1);
// Check end of reset calib reg
while(ADC_GetResetCalibrationStatus(ADC1));
//Start ADC1 calib
ADC_StartCalibration(ADC1);
// Check the end of ADC1 calib
while(ADC_GetCalibrationStatus(ADC1));
}
//*****************************************************************************
// Function to return the value of ADC ch
u16 NEW_readADC1(u8 channel)
{
// Config the channel to be sampled
ADC_RegularChannelConfig(ADC1, channel, 1, ADC_SampleTime_239Cycles5);
// Start the conversion
ADC_SoftwareStartConvCmd(ADC1, ENABLE);
// Wait until conversion complete
while(ADC_GetFlagStatus(ADC1, ADC_FLAG_EOC) == RESET);
// Get the conversion value
return ADC_GetConversionValue(ADC1);
}
//*****************************************************************************
I have never worked with this chip, but after looking at the hardware data sheet for 5 minutes I came up with the following naive conclusions:
The chip has 2 ADCs with 8 channels each, a total of 16.
PA0, PA1, PA4, PA5, PA6, PA7 sounds as if they will belong to the first ADC.
I then assume that PA8, PA11 and PA12 belong to the second ADC, called ADC2 by ST.
Your code seems to only ever mention and initialize ADC1. No trace of ADC2.
Maybe these pins don't work because you haven't even initialized or activated the ADC they belong to?
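If that guess were right, the missing piece would be clocking and initialising ADC2 alongside ADC1, roughly like this with the Standard Peripheral Library (an untested sketch that simply mirrors the ADC1 setup already in the question):
/* Sketch only: bring up ADC2 the same way ADC1 is brought up above. */
RCC_APB2PeriphClockCmd(RCC_APB2Periph_ADC2, ENABLE);   // Clock the second ADC
ADC_DeInit(ADC2);
ADC_Init(ADC2, &ADC_InitStructure);                    // Same settings as for ADC1
ADC_Cmd(ADC2, ENABLE);
ADC_ResetCalibration(ADC2);
while(ADC_GetResetCalibrationStatus(ADC2));
ADC_StartCalibration(ADC2);
while(ADC_GetCalibrationStatus(ADC2));
// ...and read it with ADC_RegularChannelConfig(ADC2, channel, 1, ...) and so on.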
Also, you should check the data sheet for contention due to those pins being used as "Alternate Function".
Typically, they will list what needs to be done/disconnected in order to utilize the pins without contention (e.g. disconnect some resistor)
I have an ARM STM32F107 chip. I'm porting a project from IAR to GCC.
IAR provides the following functions to enable and disable interrupts:
#define __disable_interrupt() ...
#define __enable_interrupt() ...
How do I enable / disable interrupt for my chip using GCC?
When developing for the STM32, RM0008 is your best friend. From Section 10.2.4 on page 199:
To generate the interrupt, the interrupt line should be configured and
enabled. This is done by programming the two trigger registers with
the desired edge detection and by enabling the interrupt request by
writing a ‘1’ to the corresponding bit in the interrupt mask register.
So you need to set the appropriate mask bits in the appropriate registers. For external interrupts, that's the EXTI_IMR and EXTI_EMR registers. There are many others.
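As a concrete illustration (line 0 and a rising-edge trigger chosen arbitrarily), with the CMSIS device header that quoted sequence comes down to something like:
/* Unmask EXTI line 0 with a rising-edge trigger (register and bit names per RM0008 / stm32f10x.h) */
EXTI->RTSR |= EXTI_RTSR_TR0;   /* trigger on the rising edge             */
EXTI->IMR  |= EXTI_IMR_MR0;    /* unmask the interrupt on line 0         */
NVIC_EnableIRQ(EXTI0_IRQn);    /* and enable the corresponding NVIC line */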
I can't answer for ARM but the same function in Coldfire boils down to setting/clearing the Interrupt Priority Level masking register in the CPU. Setting it to the highest number disables/ignores all but non-maskable, setting it to 0 enables all (YMMV).
It's worth noting that it's handy to read back the value when "disabling" and restore it when "enabling", so that nested disable/enable pairs don't break each other:
ipl = DisableInts(); // Remember what the IPL was
<"Risky" code happens here>
EnableInts(ipl); // Restore value
This is useful when twiddling interrupt masks, which may cause spurious interrupts, or doing stuff that shouldn't be interrupted.
Functions come out as:
uint8 DisableInts(void)
{
return(asm_set_ipl(7));
}
uint8 EnableInts(uint8 ipl)
{
return(asm_set_ipl(ipl));
}
Both of which map to this asm:
asm_set_ipl:
_asm_set_ipl:
/* Modified for CW7.2! */
link A6,#-8
movem.l D6-D7,(SP)
move.l D0,D6 /* save argument */
move.w SR,D7 /* current sr */
move.l D7,D0 /* prepare return value */
andi.l #0x0700,D0 /* mask out IPL */
lsr.l #8,D0 /* IPL */
andi.l #0x07,D6 /* least significant three bits */
lsl.l #8,D6 /* move over to make mask */
andi.l #0x0000F8FF,D7 /* zero out current IPL */
or.l D6,D7 /* place new IPL in sr */
move.w D7,SR
movem.l (SP),D6-D7
//lea 8(SP),SP
unlk A6
rts
The ARM documentation says that __enable_irq() compiles to "CPSIE I", which clears the PRIMASK interrupt mask (enabling interrupts); __disable_irq() compiles to "CPSID I", which sets the mask.
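For the GCC/CMSIS side of the original question, the usual equivalents are the CMSIS intrinsics. A minimal sketch that also mirrors the save-and-restore idiom from the Coldfire answer above (the function name is mine):
#include "stm32f10x.h"   /* device header; pulls in core_cm3.h and the CMSIS intrinsics */

void critical_section_example(void)
{
    uint32_t primask = __get_PRIMASK();  /* remember whether interrupts were already masked */
    __disable_irq();                     /* CPSID i: set PRIMASK, mask configurable interrupts */

    /* ... code that must not be interrupted ... */

    __set_PRIMASK(primask);              /* restore the previous state (like restoring the Coldfire IPL) */
}
If no nesting is needed, plain __disable_irq() and __enable_irq() are enough.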