STM32 SPI LL DMA Transmit (C)

I have been trying to get SPI master transmit working using DMA and the STM32 LL drivers on an STM32G030C8.
I did get the SPI working with the LL drivers without DMA, so I believe that at least my wiring is correct.
What I have done:
Set SPI to use DMA in CubeMX by setting the SPI1_TX request to DMA1 channel 1.
Set up the transmit in code:
main.c
#include "main.h"
#include "dma.h"
#include "gpio.h"
#include "spi.h"
uint8_t test_data[8] = {0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF};
void SystemClock_Config(void);
int main(void) {
    HAL_Init();
    SystemClock_Config();
    MX_GPIO_Init();
    MX_SPI1_Init();
    MX_DMA_Init();

    while (1) {
        LL_DMA_ConfigAddresses(DMA1, LL_DMA_CHANNEL_1, (uint32_t)(&test_data[0]),
                               (uint32_t)LL_SPI_DMA_GetRegAddr(SPI1),
                               LL_DMA_GetDataTransferDirection(DMA1, LL_DMA_CHANNEL_1));
        LL_DMA_SetDataLength(DMA1, LL_DMA_CHANNEL_1, 8);
        LL_SPI_EnableDMAReq_TX(SPI1);
        LL_SPI_Enable(SPI1);
        LL_DMA_EnableChannel(DMA1, LL_DMA_CHANNEL_1);
        HAL_Delay(1000);
        HAL_GPIO_TogglePin(STATUS_LED_GPIO_Port, STATUS_LED_Pin);
    }
}
spi.c:
#include "spi.h"
void MX_SPI1_Init(void)
{
    LL_SPI_InitTypeDef SPI_InitStruct = {0};
    LL_GPIO_InitTypeDef GPIO_InitStruct = {0};

    /* SPI1 GPIO configuration: PA1 and PA2 as AF0 */
    GPIO_InitStruct.Pin = LL_GPIO_PIN_1;
    GPIO_InitStruct.Mode = LL_GPIO_MODE_ALTERNATE;
    GPIO_InitStruct.Speed = LL_GPIO_SPEED_FREQ_LOW;
    GPIO_InitStruct.OutputType = LL_GPIO_OUTPUT_PUSHPULL;
    GPIO_InitStruct.Pull = LL_GPIO_PULL_NO;
    GPIO_InitStruct.Alternate = LL_GPIO_AF_0;
    LL_GPIO_Init(GPIOA, &GPIO_InitStruct);

    GPIO_InitStruct.Pin = LL_GPIO_PIN_2;
    GPIO_InitStruct.Mode = LL_GPIO_MODE_ALTERNATE;
    GPIO_InitStruct.Speed = LL_GPIO_SPEED_FREQ_LOW;
    GPIO_InitStruct.OutputType = LL_GPIO_OUTPUT_PUSHPULL;
    GPIO_InitStruct.Pull = LL_GPIO_PULL_NO;
    GPIO_InitStruct.Alternate = LL_GPIO_AF_0;
    LL_GPIO_Init(GPIOA, &GPIO_InitStruct);

    /* SPI1 DMA configuration (DMAMUX request + channel parameters) */
    LL_DMA_SetPeriphRequest(DMA1, LL_DMA_CHANNEL_1, LL_DMAMUX_REQ_SPI1_TX);
    LL_DMA_SetDataTransferDirection(DMA1, LL_DMA_CHANNEL_1, LL_DMA_DIRECTION_MEMORY_TO_PERIPH);
    LL_DMA_SetChannelPriorityLevel(DMA1, LL_DMA_CHANNEL_1, LL_DMA_PRIORITY_LOW);
    LL_DMA_SetMode(DMA1, LL_DMA_CHANNEL_1, LL_DMA_MODE_NORMAL);
    LL_DMA_SetPeriphIncMode(DMA1, LL_DMA_CHANNEL_1, LL_DMA_PERIPH_NOINCREMENT);
    LL_DMA_SetMemoryIncMode(DMA1, LL_DMA_CHANNEL_1, LL_DMA_MEMORY_INCREMENT);
    LL_DMA_SetPeriphSize(DMA1, LL_DMA_CHANNEL_1, LL_DMA_PDATAALIGN_BYTE);
    LL_DMA_SetMemorySize(DMA1, LL_DMA_CHANNEL_1, LL_DMA_MDATAALIGN_BYTE);

    NVIC_SetPriority(SPI1_IRQn, 0);
    NVIC_EnableIRQ(SPI1_IRQn);

    /* SPI1 parameters */
    SPI_InitStruct.TransferDirection = LL_SPI_FULL_DUPLEX;
    SPI_InitStruct.Mode = LL_SPI_MODE_MASTER;
    SPI_InitStruct.DataWidth = LL_SPI_DATAWIDTH_8BIT;
    SPI_InitStruct.ClockPolarity = LL_SPI_POLARITY_LOW;
    SPI_InitStruct.ClockPhase = LL_SPI_PHASE_1EDGE;
    SPI_InitStruct.NSS = LL_SPI_NSS_SOFT;
    SPI_InitStruct.BaudRate = LL_SPI_BAUDRATEPRESCALER_DIV4;
    SPI_InitStruct.BitOrder = LL_SPI_MSB_FIRST;
    SPI_InitStruct.CRCCalculation = LL_SPI_CRCCALCULATION_DISABLE;
    SPI_InitStruct.CRCPoly = 7;
    LL_SPI_Init(SPI1, &SPI_InitStruct);
    LL_SPI_SetStandard(SPI1, LL_SPI_PROTOCOL_MOTOROLA);
    LL_SPI_DisableNSSPulseMgt(SPI1);
}
dma.c:
#include "dma.h"
void MX_DMA_Init(void)
{
    LL_AHB1_GRP1_EnableClock(LL_AHB1_GRP1_PERIPH_DMA1);
    NVIC_SetPriority(DMA1_Channel1_IRQn, 0);
    NVIC_EnableIRQ(DMA1_Channel1_IRQn);
}
From the reference manual I found the following steps for DMA configuration:

Channel configuration procedure
The following sequence is needed to configure a DMA channel x:
1. Set the peripheral register address in the DMA_CPARx register. The data is moved from/to this address to/from the memory after the peripheral event, or after the channel is enabled in memory-to-memory mode.
2. Set the memory address in the DMA_CMARx register. The data is written to/read from the memory after the peripheral event or after the channel is enabled in memory-to-memory mode.
3. Configure the total number of data to transfer in the DMA_CNDTRx register. After each data transfer, this value is decremented.
4. Configure the parameters listed below in the DMA_CCRx register:
– the channel priority
– the data transfer direction
– the circular mode
– the peripheral and memory incremented mode
– the peripheral and memory data size
– the interrupt enable at half and/or full transfer and/or transfer error
5. Activate the channel by setting the EN bit in the DMA_CCRx register.
A channel, as soon as enabled, may serve any DMA request from the peripheral connected to this channel, or may start a memory-to-memory block transfer.

To my understanding, steps 1, 2, 3 and 5 are done in main.c, and step 4 already in spi.c.
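The steps above can be lined up against the LL calls already in the question (a sketch, not new code; the register mapping in the comments follows the reference manual, assuming DMA1 channel 1 and SPI1_TX):

```c
/* RM channel configuration procedure expressed with the question's LL calls: */
LL_DMA_ConfigAddresses(DMA1, LL_DMA_CHANNEL_1,
                       (uint32_t)&test_data[0],               /* step 2: memory address -> CMAR1     */
                       (uint32_t)LL_SPI_DMA_GetRegAddr(SPI1), /* step 1: peripheral address -> CPAR1 */
                       LL_DMA_DIRECTION_MEMORY_TO_PERIPH);
LL_DMA_SetDataLength(DMA1, LL_DMA_CHANNEL_1, 8);              /* step 3: transfer count -> CNDTR1    */
/* step 4: the CCR1 fields (priority, direction, increment modes, data sizes)
 *         are the LL_DMA_Set...() calls in MX_SPI1_Init() */
LL_DMA_EnableChannel(DMA1, LL_DMA_CHANNEL_1);                 /* step 5: set the EN bit              */
```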
Also in the reference manual, about SPI and DMA:

A DMA access is requested when the TXE or RXNE enable bit in the SPIx_CR2 register is set. Separate requests must be issued to the Tx and Rx buffers.
- In transmission, a DMA request is issued each time TXE is set to 1. The DMA then writes to the SPIx_DR register.

and

When starting communication using DMA, to prevent DMA channel management raising error events, these steps must be followed in order:
1. Enable DMA Rx buffer in the RXDMAEN bit in the SPI_CR2 register, if DMA Rx is used.
2. Enable DMA streams for Tx and Rx in DMA registers, if the streams are used.
3. Enable DMA Tx buffer in the TXDMAEN bit in the SPI_CR2 register, if DMA Tx is used.
4. Enable the SPI by setting the SPE bit.
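In LL terms, that startup order for a TX-only transfer would look like this (a sketch; it assumes the DMA1 channel 1 / SPI1 setup from the question — note the manual enables the DMA channel before setting TXDMAEN, while the loop in main.c sets TXDMAEN first):

```c
/* Startup order from the reference manual, TX-only (RXDMAEN step skipped): */
LL_DMA_EnableChannel(DMA1, LL_DMA_CHANNEL_1);  /* 2. enable the DMA channel */
LL_SPI_EnableDMAReq_TX(SPI1);                  /* 3. TXDMAEN in SPI_CR2     */
LL_SPI_Enable(SPI1);                           /* 4. SPE                    */
```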
In my understanding I have done all the steps, but I cannot see anything on the SPI1 lines with my oscilloscope.
I must be missing something (or doing something in the wrong order), but I cannot figure out what.
In some other questions the problem has been that the chosen DMA channel did not support SPI, but on this MCU, if I understood correctly, the DMAMUX handles that and any request signal can be routed to any DMA channel (configured in spi.c).
EDIT:
Reading flags from SPI and DMA:
LL_SPI_IsActiveFlag_BSY(SPI1) returns 0
LL_SPI_IsEnabledDMAReq_TX(SPI1) returns 1
LL_SPI_IsEnabled(SPI1) returns 1
LL_DMA_IsEnabledChannel(DMA1, LL_DMA_CHANNEL_1) returns 1
LL_DMA_IsActiveFlag_TE1(DMA1) returns 0
LL_SPI_IsActiveFlag_TXE(SPI1) returns 1
So, everything seems to be enabled and there are no errors, but no data is transferred!
Any help is appreciated!
Thanks!

After some time debugging I found there is a bug in the STM32CubeMX code generator (this also seems to have been reported already: https://community.st.com/s/question/0D53W00001HJ3EhSAL/wrong-initialization-sequence-with-cubemx-in-cubeide-when-using-i2s-with-dma?t=1641156995520&searchQuery).
When selecting SPI and DMA, the generator first initializes SPI and then DMA:
MX_SPIx_Init();
MX_DMA_Init();
The problem is that the SPI initialization tries to set DMA registers, but the DMA clock is not yet enabled, so the values are not saved.
This is also the reason I got it working with USART: the USART initialization happened to come after the DMA initialization.
So the simple solution is to move the DMA initialization before the other peripherals.
BUT since this code is regenerated every time you make changes in CubeMX, the change would be lost.
To get around this, I used CubeMX to disable the automatic initialization calls in the Project Manager tab, under Advanced Settings, and then called the initialization functions manually in the user code segment, so it now looks like this:
/* Initialize all configured peripherals */
MX_GPIO_Init();
MX_DMA_Init();
/* USER CODE BEGIN 2 */
MX_SPI1_Init();
MX_USART1_UART_Init();
/* USER CODE END 2 */

Related

STM32L0 Freeze on setting NVIC/GPIO

I'm working with an STM32L073RZ MCU running Mbed OS 5.11.2. Eventually I aim to get this working in a very low-power mode (STOP mode) that will be woken by either an RTC interrupt or an interrupt from a peripheral device on pin PA_0 (WAKEUP_PIN_1). At the moment I am simply attempting to set up PA_0 as an interrupt using the STM32 HAL API. Please see my code below:
#include "mbed.h"
#define LOG(...) pc.printf(__VA_ARGS__); pc.printf("\r\n");
DigitalOut led(LED1);
Serial pc(USBTX, USBRX);
void HAL_GPIO_EXTI_Callback(uint16_t GPIO_Pin) {
    led = !led;
}

int main()
{
    pc.baud(9600);
    led = 1;

    // GPIO SETUP
    LOG("Enabling GPIO port A clock");
    __HAL_RCC_GPIOA_CLK_ENABLE();
    GPIO_InitTypeDef GPIO_InitStruct;
    GPIO_InitStruct.Pin = GPIO_PIN_0;
    GPIO_InitStruct.Mode = GPIO_MODE_IT_RISING;
    GPIO_InitStruct.Pull = GPIO_NOPULL;
    LOG("Initialising PA_0");
    HAL_GPIO_Init(GPIOA, &GPIO_InitStruct);

    // NVIC SETUP
    LOG("Setting IRQ Priority");
    HAL_NVIC_SetPriority(EXTI0_1_IRQn, 0, 1); // Priorities can be 0, 1, 2, 3 with lowest being highest prio
    LOG("Enabling IRQ");
    HAL_NVIC_EnableIRQ(EXTI0_1_IRQn);

    LOG("Going into STOP mode");
    HAL_PWR_EnterSTOPMode(PWR_LOWPOWERREGULATOR_ON, PWR_STOPENTRY_WFE);
}
As you can see, the code is broken into two sections: GPIO setup and NVIC setup. My issue is as follows:
If I perform GPIO setup before NVIC setup then the program seems to hang on HAL_NVIC_SetPriority(), however, if I perform NVIC setup before GPIO setup then the code seems to hang on HAL_NVIC_EnableIRQ().
I am completely stumped as to what is causing this. Any insight is greatly appreciated.
You don't need to do this manually. As long as you run Mbed OS in tickless mode (set the MBED_TICKLESS=1 macro in your mbed_app.json), the MCU will automatically enter stop mode whenever all threads are idle. If you want to wake up, you can either use an InterruptIn on the pin or use a LowPowerTicker.
If you're looking for the absolute lowest power modes, you can use the standby feature (without RAM retention) for which there's a library here: stm32-standby-rtc-wakeup.

STM32F407 USART1 : Clearing USART_FLAG_TC requires pgm to be halted before actually clearing the bit

When initializing USART1 on an STM32F407 I have run into a problem when enabling the TC interrupt. The TC flag in SR is set ('1') as soon as the USART clock (RCC) is enabled, and clearing the flag before enabling the TC interrupt has become a headache for me, because a TC interrupt is thrown as soon as USART_IT_TC is enabled; i.e. this is a startup problem.
Writing 0 to the flag in the status register (SR) when initializing the USART, before enabling interrupts, should clear the flag. But it does not when I run the program straight through; however (!!!), if I set a breakpoint and step through the code where SR is cleared, it works fine and the TC flag is set to 0.
Therefore I always get a TC interrupt even before transmitting anything if I run the code without a breakpoint, and then another one after transmitting a character, naturally. But when I was using a breakpoint where the TC flag is cleared, I got only one TC IRQ, and that was when actually transmitting something.
Inside the IRQ handler I also clear the flags by writing 0 to the SR register, and this works without any breakpoint. Both transmitting and receiving data work well after this.
So a timing issue was on my mind, but adding even a one-second delay did not change the behavior, and then it should not work to clear the SR register later either. Or? Do USARTs require settling time after enabling the RCC clock before initializing the USART?
void Console_init(long baud)
{
    GPIO_InitTypeDef GPIO_InitStruct;
    USART_InitTypeDef USART_InitStruct;
    NVIC_InitTypeDef NVIC_InitStruct;

    /* Init clock domains */
    RCC_AHB1PeriphClockCmd(RCC_AHB1Periph_GPIOA, ENABLE);
    RCC_APB2PeriphClockCmd(RCC_APB2Periph_USART1, ENABLE);

    /* Set alternate functions */
    GPIO_PinAFConfig(GPIOA, GPIO_PinSource9, GPIO_AF_USART1);
    GPIO_PinAFConfig(GPIOA, GPIO_PinSource10, GPIO_AF_USART1);

    /* Init GPIO pins */
    GPIO_StructInit(&GPIO_InitStruct);
    GPIO_InitStruct.GPIO_Pin = GPIO_Pin_9 | GPIO_Pin_10;
    GPIO_InitStruct.GPIO_Mode = GPIO_Mode_AF;
    GPIO_InitStruct.GPIO_OType = GPIO_OType_PP;
    GPIO_InitStruct.GPIO_PuPd = GPIO_PuPd_UP;
    GPIO_InitStruct.GPIO_Speed = GPIO_Speed_25MHz;
    GPIO_Init(GPIOA, &GPIO_InitStruct);

    /* Configure UART setup */
    USART_StructInit(&USART_InitStruct);
    USART_InitStruct.USART_BaudRate = baud;
    USART_InitStruct.USART_HardwareFlowControl = USART_HardwareFlowControl_None;
    USART_InitStruct.USART_Mode = USART_Mode_Tx | USART_Mode_Rx;
    USART_InitStruct.USART_Parity = USART_Parity_No;
    USART_InitStruct.USART_StopBits = USART_StopBits_1;
    USART_InitStruct.USART_WordLength = USART_WordLength_8b;
    USART_Init(USART1, &USART_InitStruct);

    /* Enable global interrupts for USART */
    NVIC_InitStruct.NVIC_IRQChannel = USART1_IRQn;
    NVIC_InitStruct.NVIC_IRQChannelCmd = ENABLE;
    NVIC_InitStruct.NVIC_IRQChannelPreemptionPriority = 0;
    NVIC_InitStruct.NVIC_IRQChannelSubPriority = 1;
    NVIC_Init(&NVIC_InitStruct);

    /* Clear old interrupt flags and enable required IRQs */
    USART1->SR = 0;

    /* Enable USART */
    USART_Cmd(USART1, ENABLE);
}

void console_transmit(void)
{
    USART_ITConfig(USART1, USART_IT_TC, ENABLE);
    USART1->DR = 0x55;
}

void USART1_IRQHandler(void)
{
    if (USART1->SR & USART_FLAG_TC)
    {
        USART_ITConfig(USART1, USART_IT_TC, DISABLE);
    }
    USART1->SR = 0;
}
With this code I get two interrupts on TC: one as soon as I enable TC, and one after the character is transmitted. This is the code I am using now while trying to understand the issue.
Note: I have also tried clearing SR inside console_transmit before enabling TC, but to no avail. It is almost as if it has to be cleared inside the interrupt handler.
I think this is a conceptual misunderstanding.
A transmit interrupt does not tell you that something has been transmitted. It tells you that there is space in the buffer and it is OK to push data to the UART. Of course at startup the UART buffer is empty, so you get an interrupt as soon as you enable it. It is not a matter of timing; it is expected behavior.
Such a design makes the transmit flow very straightforward:
- the mainline code prepares the buffer and enables the interrupt,
- the ISR moves data to the UART until either the UART is full or there is no more data to send, and
- once there is no more data to send, the ISR disables the interrupt.

STM Nucleo I2C not sending all data

Edit:
The solution was to turn down the I2C clock in the initialization block. Although the STM could handle it, the data sheet for the LCD stated it could handle only up to 10kHz.
For the DMA there is an IRQ that must be enabled and setup in the CubeMX software that will enable DMA TX/RX lines.
I'm using an STM32 Nucleo-F401RE board with FreeRTOS. I've used FreeRTOS a bit recently, but I have never really used I2C.
I'm trying to set up a simple LCD display using the I2C drivers that CubeMX generates.
So far it only sends about half the data I request to send.
I've tested it with the simple "show firmware" command and this works. So I can verify that the I2C instance is set up correctly.
/* I2C1 init function */
static void MX_I2C1_Init(void)
{
    hi2c1.Instance = I2C1;
    hi2c1.Init.ClockSpeed = 100000;
    hi2c1.Init.DutyCycle = I2C_DUTYCYCLE_2;
    hi2c1.Init.OwnAddress1 = 0;
    hi2c1.Init.AddressingMode = I2C_ADDRESSINGMODE_7BIT;
    hi2c1.Init.DualAddressMode = I2C_DUALADDRESS_DISABLE;
    hi2c1.Init.OwnAddress2 = 0;
    hi2c1.Init.GeneralCallMode = I2C_GENERALCALL_DISABLE;
    hi2c1.Init.NoStretchMode = I2C_NOSTRETCH_DISABLE;
    if (HAL_I2C_Init(&hi2c1) != HAL_OK)
    {
        Error_Handler();
    }
}
It has internal pull-up resistors set as well.
/* vdebugTask function */
void vdebugTask(void const * argument)
{
    /* USER CODE BEGIN vdebugTask */
    /* Infinite loop */
    for(;;)
    {
        char hi[6];
        hi[0] = 'h';
        hi[1] = 'e';
        hi[2] = 'y';
        hi[3] = 'a';
        hi[4] = 'h';
        hi[5] = '!';
        HAL_I2C_Master_Transmit_DMA(&hi2c1, (0x50), hi, 6);
        vTaskDelay(10);
    }
    /* USER CODE END vdebugTask */
}
This is the code I am trying to run; I have not changed the HAL function at all. I don't think it could be simpler than this, yet this is all that happens.
I followed the timing constraints in the data sheet for the LCD, and the CubeMX software didn't warn or state anywhere that its I2C drivers had any special requirements. Am I doing something wrong in the program?
I have also tried using the non-DMA, blocking-mode polling transfer function that was also generated by CubeMX:
HAL_I2C_Master_Transmit(&hi2c1, (0x50), hi, 6, 1000);
This is even worse and just continuously spams the screen with unintelligible text.
The solution was to turn down the I2C clock in the initialization block. Although the STM could handle it, the data sheet for the LCD stated it could handle only up to 10kHz.
For the DMA there is an IRQ that must be enabled and setup in the CubeMX software that will enable DMA TX/RX lines.
Note that the clock still must adhere to the hardware limitations. I assume that roughly 20% less than maximum stated in the data sheet will suffice. For my LCD this means that I am going to be using 80kHz.
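As a sketch, the fix is a one-line change in the generated MX_I2C1_Init() shown above (the exact value depends on your display's data sheet; both 10 kHz and 80 kHz are mentioned in this answer):

```c
/* In MX_I2C1_Init(): lower the I2C bus clock to what the LCD tolerates. */
hi2c1.Init.ClockSpeed = 10000;  /* was 100000; use your LCD's rated maximum */
```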
First go to the Configuration tab, then click on DMA to set up a DMA request. I've only selected TX, as I don't care about an RX DMA from the LCD.

Analogue input pins PA8, PA11 and PA12 not working on STM32F103RB

Working on a simple ADC project reading voltages from a potential divider on a number of channels. I am using the STM32F103RB on a commercial and well-made PCB. I am reading pins on PORTA, and the following pins work and return a voltage:
PA0, PA1, PA4, PA5, PA6, PA7.
However, the following pins do not work and return a raw value of around 2000-2700 approx:
PA8, PA11 and PA12.
The nature of the project and the fact that the PCB is of a fixed design mean that we are stuck with these pin selections. The datasheet is quite specific about these pins being usable as AIN. All setup and config is standard STM32, taken from basic example code and modified for our purposes. The included code is a debugging version we made in an attempt to find the cause, but to no avail.
Voltage at the pins has been measured and is correct for the type of voltage divider.
Any help would be greatly appreciated.
/* Includes ------------------------------------------------------------------*/
#include "main.h"
#include "stdio.h"
#include "stdlib.h"
// Standard STM peripheral config functions
void RCC_Configuration(void);
void GPIO_Configuration(void);
// ADC config - STM example code
void NEW_ADC_Configuration(void);
// Function to read ADC channel and output value
u16 NEW_readADC1(u8 channel);
// Variables
double voltage_su; // Variable to store supply voltage
double voltage7;
double voltage8;
//*****************************************************************************
// Main program
int main(void)
{
    // Initialise peripheral modules
    RCC_Configuration();
    GPIO_Configuration();
    NEW_ADC_Configuration();

    // Infinite loop
    while (1)
    {
        // Get value of supply voltage and convert
        voltage_su = (((3.3 * NEW_readADC1(ADC_Channel_12)) / 4095));
    }
}
//*****************************************************************************
// STM RCC config
void RCC_Configuration(void)
{
    RCC_APB2PeriphClockCmd(RCC_APB2Periph_GPIOC, ENABLE); // Connect PORTC
    RCC_APB2PeriphClockCmd(RCC_APB2Periph_GPIOB, ENABLE); // Connect PORTB...
    RCC_APB2PeriphClockCmd(RCC_APB2Periph_GPIOA, ENABLE); // Connect PORTA...
}
//*****************************************************************************
// STM GPIO config
void GPIO_Configuration(void)
{
    GPIO_InitTypeDef GPIO_InitStructure; // Values for setting up the pins

    // Supply voltage pin
    GPIO_InitStructure.GPIO_Pin = GPIO_Pin_12;
    GPIO_InitStructure.GPIO_Mode = GPIO_Mode_AIN;
    GPIO_InitStructure.GPIO_Speed = GPIO_Speed_50MHz;
    GPIO_Init(GPIOA, &GPIO_InitStructure);
}
//*****************************************************************************
void NEW_ADC_Configuration(void)
{
    GPIO_InitTypeDef GPIO_InitStructure;
    ADC_InitTypeDef ADC_InitStructure; // Variable used to set up the ADC

    RCC_ADCCLKConfig(RCC_PCLK2_Div4); // Set ADC clock to PCLK2/4
    // Enable ADC1 so we can use it
    RCC_APB2PeriphClockCmd(RCC_APB2Periph_ADC1, ENABLE);
    // Connect port A to the peripheral clock
    RCC_APB2PeriphClockCmd(RCC_APB2Periph_GPIOA, ENABLE);
    // Restore to defaults
    ADC_DeInit(ADC1);

    /* ADC1 Configuration ----------------------------------------------------*/
    ADC_InitStructure.ADC_Mode = ADC_Mode_Independent;
    // Scan one channel at a time
    ADC_InitStructure.ADC_ScanConvMode = DISABLE;
    // No continuous conversions - do them on demand
    ADC_InitStructure.ADC_ContinuousConvMode = DISABLE;
    // Start conversion on software request - not by external trigger
    ADC_InitStructure.ADC_ExternalTrigConv = ADC_ExternalTrigConv_None;
    // Conversions are 12 bit - put them in the lower 12 bits of the result
    ADC_InitStructure.ADC_DataAlign = ADC_DataAlign_Right;
    // How many channels are to be used by the sequencer
    ADC_InitStructure.ADC_NbrOfChannel = 1;
    // Set up the ADC
    ADC_Init(ADC1, &ADC_InitStructure);

    // Enable ADC1
    ADC_Cmd(ADC1, ENABLE);
    // Enable ADC1 reset calibration register
    ADC_ResetCalibration(ADC1);
    // Check end of reset calibration register
    while(ADC_GetResetCalibrationStatus(ADC1));
    // Start ADC1 calibration
    ADC_StartCalibration(ADC1);
    // Check the end of ADC1 calibration
    while(ADC_GetCalibrationStatus(ADC1));
}
//*****************************************************************************
// Function to return the value of an ADC channel
u16 NEW_readADC1(u8 channel)
{
    // Configure the channel to be sampled
    ADC_RegularChannelConfig(ADC1, channel, 1, ADC_SampleTime_239Cycles5);
    // Start the conversion
    ADC_SoftwareStartConvCmd(ADC1, ENABLE);
    // Wait until conversion complete
    while(ADC_GetFlagStatus(ADC1, ADC_FLAG_EOC) == RESET);
    // Get the conversion value
    return ADC_GetConversionValue(ADC1);
}
//*****************************************************************************
I have never worked with this chip, but after looking at the hardware data sheet for 5 minutes I came up with the following naive conclusions:
The chip has 2 ADCs with 8 channels each, a total of 16.
PA0, PA1, PA4, PA5, PA6, PA7 sound as if they belong to the first ADC.
I then assume that PA8, PA11 and PA12 belong to the second ADC, called ADC2 by ST.
Your code seems to only ever mention and initialize ADC1; there is no trace of ADC2.
Maybe these pins don't work because you haven't initialized or activated the ADC they belong to?
Also, you should check the data sheet for contention due to those pins being used as "alternate function".
Typically, it will list what needs to be done/disconnected in order to utilize the pins without contention (e.g. disconnect some resistor).

read input value from external device STM32F107

I am new to the STM32F107 processor. I have to read the input value from an external source, which is a balance (weighing scale). The balance is external to the board that contains the processor and communicates with it via PA4.
What do I have to do to read the analogue input from the balance?
Here is my first attempt to read the input from the balance.
I use this function to set up the ADC:
void ADC_Configuration(void) {
    ADC_InitTypeDef ADC_InitStructure;

    /* PCLK2 is the APB2 clock */
    /* ADCCLK = PCLK2/6 = 72/6 = 12MHz */
    RCC_ADCCLKConfig(RCC_PCLK2_Div6);

    /* Enable ADC1 clock so that we can talk to it */
    RCC_APB2PeriphClockCmd(RCC_APB2Periph_ADC1, ENABLE);
    /* Put everything back to power-on defaults */
    ADC_DeInit(ADC1);

    /* ADC1 Configuration ------------------------------------------------------*/
    /* ADC1 and ADC2 operate independently */
    ADC_InitStructure.ADC_Mode = ADC_Mode_Independent;
    /* Disable the scan conversion so we do one at a time */
    ADC_InitStructure.ADC_ScanConvMode = DISABLE;
    /* Don't do continuous conversions - do them on demand */
    ADC_InitStructure.ADC_ContinuousConvMode = DISABLE;
    /* Start conversion by software, not an external trigger */
    ADC_InitStructure.ADC_ExternalTrigConv = ADC_ExternalTrigConv_None;
    /* Conversions are 12 bit - put them in the lower 12 bits of the result */
    ADC_InitStructure.ADC_DataAlign = ADC_DataAlign_Right;
    /* Say how many channels would be used by the sequencer */
    ADC_InitStructure.ADC_NbrOfChannel = 1;
    /* Now do the setup */
    ADC_Init(ADC1, &ADC_InitStructure);
    /* Enable ADC1 */
    ADC_Cmd(ADC1, ENABLE);

    /* Enable ADC1 reset calibration register */
    ADC_ResetCalibration(ADC1);
    /* Check the end of ADC1 reset calibration register */
    while(ADC_GetResetCalibrationStatus(ADC1));
    /* Start ADC1 calibration */
    ADC_StartCalibration(ADC1);
    /* Check the end of ADC1 calibration */
    while(ADC_GetCalibrationStatus(ADC1));
}
And I use this function to get the input:
u16 readADC1(u8 channel) {
    ADC_RegularChannelConfig(ADC1, channel, 1, ADC_SampleTime_1Cycles5);
    // Start the conversion
    ADC_SoftwareStartConvCmd(ADC1, ENABLE);
    // Wait until conversion completion
    while(ADC_GetFlagStatus(ADC1, ADC_FLAG_EOC) == RESET);
    // Get the conversion value
    return ADC_GetConversionValue(ADC1);
}
The problem is that in N measurements of the same weight, I get N different results.
For example, the weight is 70 kg and the output of readADC1(ADC_Channel_4) is 715, 760, 748, 711, 759, etc.
What am I doing wrong?
The balance has a load cell inside it, which generates an analog voltage. The processor on the balance is somehow not utilized (I assume this, as not many details are present in your question).
The STM32F107 controller has an on-chip ADC (analog-to-digital converter). Connect the output of the load cell (the analog signal coming from the balance) to an analog input pin of the STM32F107. Configure the ADC to sample and convert the analog signal into digital (use the example code as a reference to write the software).
PA4 is multiplexed with ADC12_IN4 (an analogue input that can itself be mapped to channel 4 on either ADC1 or ADC2).
Programming the ADC, selecting the correct peripheral clocking and mapping multiplexed pins on STM32 is somewhat complex, but I strongly suggest that you utilise the STM32F10x Standard Peripheral Library which provides an API for all STM32F10x peripherals as well as numerous examples of how to use the library, including ADC examples.
The ADC itself may be polled, interrupt driven, use DMA, and be software triggered or free-running, self-clocked or clocked from a timer peripheral. The options are numerous, only some combinations are covered by the example code, but it is a good place to start nonetheless. To understand all the options and how to use them with the Standard Peripheral Library, you will need a fairly thorough understanding of the Reference Manual.
Another resource you may find useful is STM's MicroXplorer. This allows visual configuration and allocation of multiplexed pins and generates source code you can use directly.
Furthermore you may need some hardware signal conditioning at the input to ensure that the input is within the valid and tolerable range of the ADC input.