I am working with the Microsemi SmartFusion2 development kit from Arrow. It uses the SmartFusion2 M2S010-FG484 FPGA. (https://www.arrow.com/en/products/sf2plus-dev-kit/arrow-development-tools)
I am new to the SmartFusion2. I am trying to establish an SPI connection between the SmartFusion2 and an Arduino using the microcontroller subsystem (MSS), programming it in SoftConsole. The problem is that I can't get it to work. While debugging, I attached an LED (I had to improvise since I don't have an oscilloscope) to pin 3 of the Arduino connector (J2) on the development board. This pin should carry slave select 3 (pin J18 on the FPGA), as indicated on page 16 of the user guide (https://static4.arrow.com/-/media/images/part-detail-pages/sf2-plus/new-sf2-files/sf2plus_user_guide_v1p1.pdf?la=zh-cn) and in the I/O Editor of Libero SoC.
With the following code I am trying to select and deselect the specified slave (slave 3), which should toggle the LED. But nothing happens: the slave select is active low, yet the LED stays on and never turns off.
/*
* main.c
*
* Created on: Aug 16, 2017
*/
#include "drivers/mss_gpio/mss_gpio.h"
#include "drivers/mss_spi/mss_spi.h"
#include <stdio.h>
/* Crude busy-wait delay in milliseconds, assuming a 100 MHz clock */
void Delay(uint32_t Delayms){
    uint32_t i = 0;
    uint32_t DelayValue = Delayms * 2000; // scale factor for the busy-wait loop below
    for(i = 0; i <= DelayValue; i++){
    }
}
/* Configuration for SPI1 */
void ConfigSPI1(void){
    /* Initialize and configure SPI1 */
    MSS_SPI_init(&g_mss_spi1);
    MSS_SPI_disable(&g_mss_spi1); // Disable SPI1 during configuration
    /* Configure SPI1 as master: target slave, protocol mode, clock divider, frame size */
    MSS_SPI_configure_master_mode(
        &g_mss_spi1,      // Selects SPI1 for configuration
        MSS_SPI_SLAVE_3,  // Set the target device as slave 3
        MSS_SPI_MODE2,    // Serial peripheral interface operating mode
        64u,              // Divider applied to the APB bus clock (PCLK) to generate the SPI clock
        12);              // Number of bits making up the frame, max = 32
    MSS_SPI_enable(&g_mss_spi1); // Enable SPI1
}
/* SPI1 test function */
void SPI1Test(void){
    MSS_SPI_set_slave_select(&g_mss_spi1, MSS_SPI_SLAVE_3);   // Assert the slave select for slave 3
    MSS_SPI_transfer_frame(&g_mss_spi1, 0xaaa);               // Transfer one 12-bit frame (0xaaa) to slave 3
    Delay(1000); // Delay added for testing, to keep SS low long enough to see the LED turn off
    MSS_SPI_clear_slave_select(&g_mss_spi1, MSS_SPI_SLAVE_3); // De-assert the slave select for slave 3
}
int main(){
    /* Configure modules */
    ConfigSPI1();
    /* Infinite loop */
    for(;;){
        SPI1Test();
        Delay(3000);
    }
    return 0;
}
Does anyone see a mistake in my code that might be causing the problem, or does anyone have a working example for me?
I wrote another program that toggles pin J18 (the slave select 3 pin) with the GPIO driver, and that worked: it toggled the LED. I am also fairly certain that the design in Libero SoC is correct and was imported correctly.
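For reference, that GPIO test looked roughly like this (a minimal sketch from memory; MSS_GPIO_0 is just a placeholder for whichever MSS GPIO is routed to J18 in my Libero design):
#include "drivers/mss_gpio/mss_gpio.h"

static void wait(volatile uint32_t n){
    while(n--){ } // crude busy-wait, long enough to see the LED change
}

int main(){
    MSS_GPIO_init();
    MSS_GPIO_config(MSS_GPIO_0, MSS_GPIO_OUTPUT_MODE); // placeholder: the GPIO routed to J18

    for(;;){
        MSS_GPIO_set_output(MSS_GPIO_0, 0); // drive low  -> LED on
        wait(2000000);
        MSS_GPIO_set_output(MSS_GPIO_0, 1); // drive high -> LED off
        wait(2000000);
    }
}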
Thank you for your time!
I am currently working on a low-power project using the Adafruit Feather M0. A requirement of my project is to be able to put the CPU to sleep and wake it again using an external interrupt triggered by the MPU6050 accelerometer.
I have tested the following code sample from GitHub, and it works successfully. The question I need answered is: how do I alter this sample code to work on pin 13 of the Feather rather than pin 6?
#define interruptPin 6
volatile bool SLEEP_FLAG;

void EIC_ISR(void) {
  SLEEP_FLAG ^= true; // toggle SLEEP_FLAG by XORing it against true
  //Serial.print("EIC_ISR SLEEP_FLAG = ");
  //Serial.println(SLEEP_FLAG);
}

void setup() {
  // put your setup code here, to run once:
  Serial.begin(9600);
  delay(3000); // wait for console opening
  attachInterrupt(digitalPinToInterrupt(interruptPin), EIC_ISR, CHANGE); // Attach an ISR to pin 6, triggered when the pin state CHANGEs
  SYSCTRL->XOSC32K.reg |= (SYSCTRL_XOSC32K_RUNSTDBY | SYSCTRL_XOSC32K_ONDEMAND); // set external 32k oscillator to run when idle or sleep mode is chosen
  REG_GCLK_CLKCTRL |= GCLK_CLKCTRL_ID(GCM_EIC) |  // generic clock multiplexer id for the external interrupt controller
                      GCLK_CLKCTRL_GEN_GCLK1 |    // generic clock 1, which is xosc32k
                      GCLK_CLKCTRL_CLKEN;         // enable it
  while (GCLK->STATUS.bit.SYNCBUSY);              // write protected, wait for sync
  EIC->WAKEUP.reg |= EIC_WAKEUP_WAKEUPEN4;        // Set External Interrupt Controller to use channel 4 (pin 6)
  PM->SLEEP.reg |= PM_SLEEP_IDLE_CPU;             // Enable Idle0 mode - sleep CPU clock only
  //PM->SLEEP.reg |= PM_SLEEP_IDLE_AHB;           // Idle1 - sleep CPU and AHB clocks
  //PM->SLEEP.reg |= PM_SLEEP_IDLE_APB;           // Idle2 - sleep CPU, AHB, and APB clocks
  // It is either Idle mode or Standby mode, not both.
  SCB->SCR |= SCB_SCR_SLEEPDEEP_Msk;              // Enable Standby or "deep sleep" mode
  SLEEP_FLAG = false;                             // begin awake
  // Built-in LED set to output and high
  PORT->Group[g_APinDescription[LED_BUILTIN].ulPort].DIRSET.reg = (uint32_t)(1 << g_APinDescription[LED_BUILTIN].ulPin); // set pin direction to output
  PORT->Group[g_APinDescription[LED_BUILTIN].ulPort].OUTSET.reg = (uint32_t)(1 << g_APinDescription[LED_BUILTIN].ulPin); // set pin output high
  Serial.println("Setup() Run!");
}

void loop() {
  // put your main code here, to run repeatedly:
  if (SLEEP_FLAG == true) {
    PORT->Group[g_APinDescription[LED_BUILTIN].ulPort].OUTCLR.reg = (uint32_t)(1 << g_APinDescription[LED_BUILTIN].ulPin); // set pin output low
    Serial.println("I'm going to sleep now.");
    __WFI(); // Wait For Interrupt: sleep here until an interrupt wakes the CPU
    SLEEP_FLAG = false;
    Serial.println("Ok, I'm awake");
    Serial.println();
  }
  //Serial.print("SLEEP_FLAG = ");
  //Serial.println(SLEEP_FLAG);
  PORT->Group[g_APinDescription[LED_BUILTIN].ulPort].OUTTGL.reg = (uint32_t)(1 << g_APinDescription[LED_BUILTIN].ulPin); // toggle output of built-in LED pin
  delay(1000);
}
As per the pinout diagram and Atmel datasheet, I am struggling to work out which changes to make to allow pin 13 to operate in the same way as pin 6.
Atmel Datasheet
The obvious solution is to change the following lines...
#define interruptPin 13
EIC->WAKEUP.reg |= EIC_WAKEUP_WAKEUPEN1; // Set External Interrupt Controller to use channel 1 (pin 13)
I suspected channel 1 (WAKEUPEN1) because of the ENINT^1 marking next to pin 13 on the pinout diagram. But this didn't work; the pin did not exhibit the same behaviour as the pin 6 setup.
I would be very grateful for any suggestion on how to get this code working on pin 13. Many thanks for your support.
I'm not an authority here, and your code looks correct to me.
Except: the pinout shows pin 13 is the built-in LED line, and you manipulate LED_BUILTIN in several places in your code. That's almost certainly conflicting with your attempts to use pin 13 as an interrupt line.
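One way to test that theory, as an untested sketch: keep the clock and EIC setup from the sample, switch to pin 13 with WAKEUPEN1 (assuming pin 13 really is EXTINT channel 1), and remove every write to LED_BUILTIN so nothing else drives that line; status goes to Serial instead of the LED.
#define interruptPin 13 // assumption: pin 13 maps to EXTINT[1] on the Feather M0
volatile bool SLEEP_FLAG;

void EIC_ISR(void) {
  SLEEP_FLAG ^= true; // toggle the flag on every pin change
}

void setup() {
  Serial.begin(9600);
  delay(3000);
  attachInterrupt(digitalPinToInterrupt(interruptPin), EIC_ISR, CHANGE);
  SYSCTRL->XOSC32K.reg |= (SYSCTRL_XOSC32K_RUNSTDBY | SYSCTRL_XOSC32K_ONDEMAND);
  REG_GCLK_CLKCTRL |= GCLK_CLKCTRL_ID(GCM_EIC) | GCLK_CLKCTRL_GEN_GCLK1 | GCLK_CLKCTRL_CLKEN;
  while (GCLK->STATUS.bit.SYNCBUSY);
  EIC->WAKEUP.reg |= EIC_WAKEUP_WAKEUPEN1; // channel 1 instead of channel 4
  SCB->SCR |= SCB_SCR_SLEEPDEEP_Msk;       // standby ("deep sleep") on __WFI()
  SLEEP_FLAG = false;
  // note: no LED_BUILTIN / pin 13 output configuration anywhere
}

void loop() {
  if (SLEEP_FLAG) {
    Serial.println("I'm going to sleep now.");
    __WFI(); // sleep until the external interrupt fires
    SLEEP_FLAG = false;
    Serial.println("Ok, I'm awake");
  }
  Serial.print("SLEEP_FLAG = ");
  Serial.println(SLEEP_FLAG);
  delay(1000);
}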
I'm trying to write my own driver for USART_TX on an STM32L476RG Nucleo Board.
Here are the datasheet and the reference manual.
I'm using Keil uVision 5 and I set in the Manage dialog:
CMSIS > Core
Device > Startup
Xtal=16MHz
I want to create a single-character transmitter. Following the reference manual instructions in Sec. 40, p. 1332, I wrote this code:
// APB1 connects USART2.
// The USART2 enable bit in APB1ENR1 is bit 17.
// See the alternate-function pins and labels for USART2_TX: PA2 is the pin and AF7 (AFRL register) is the function to be set.
#include "stm32l4xx.h" // Device header

#define MASK(x) ((uint32_t)(1 << (x)))

void USART2_Init(void);
void USART2_Wr(int ch);
void delayMs(int delay);

int main(void){
    USART2_Init();
    while(1){
        USART2_Wr('A');
        delayMs(100);
    }
}

void USART2_Init(void){
    RCC->APB1ENR1 |= MASK(17); // Enable USART2 on APB1
    // We know that the pin that carries USART2_TX is PA2, so...
    RCC->AHB2ENR |= MASK(0);   // Enable GPIOA
    // Now put AF7 on PA2, which is done by writing AF7 = 0111 into AFSEL2 (pin 2).
    // AFR[0] is the GPIOA_AFRL register; remember that each pin uses 4 bits to
    // select its alternate function (see p. 87 of the datasheet).
    GPIOA->AFR[0] |= 0x700;
    GPIOA->MODER &= ~MASK(4);  // Set PA2 to alternate-function mode ("10") in MODER

    // USART features -----------
    //USART2->CR1 |= MASK(15); // OVER8=1
    USART2->BRR = 0x683;       // USARTDIV = 16 MHz / 9600?
    //USART2->BRR = 0x1A1;     // This one works!!!
    USART2->CR1 |= MASK(0);    // UE
    USART2->CR1 |= MASK(3);    // TE
}

void USART2_Wr(int ch){
    // Wait until the TX data register is empty: TXE is bit 7 (0x80) of the ISR.
    while(!(USART2->ISR & 0x80)) {}
    USART2->TDR = (ch & 0xFF);
}

void delayMs(int delay){
    int i;
    for(; delay > 0; delay--){
        for(i = 0; i < 3195; i++);
    }
}
Now, the problem:
The system works, but not properly. If I use RealTerm at 9600 baud, as supposedly configured by 0x683 in the USART_BRR register, it shows the wrong characters, but if I set RealTerm to 2400 baud it works!
To derive the 0x683 value for the USART_BRR register I referred to Sec. 40.5.4, "USART baud rate generation", which says that if OVER8 = 0 then USARTDIV = BRR. In my case, USARTDIV = 16 MHz / 9600 = 1667 decimal = 0x683.
I think the problem lies in this line:
USART2->BRR = 0x683; //USARTDIV=16Mhz/9600?
because if I replace it with
USART2->BRR = 0x1A1; //USARTDIV=16Mhz/9600?
the system works at 9600 baud.
What's wrong in my code, or in my understanding of the USARTDIV computation?
Thank you in advance for your support.
Sincerely,
GM
The default clock source for the USART is PCLK1 (Figure 15). PCLK1 is SYSCLK / AHB_PRESC / APB1_PRESC. If 0x1A1 results in a baud rate of 9600, that suggests PCLK1 = 4 MHz.
4 MHz happens to be the default frequency of your processor (and PCLK1) at start-up, when it runs from the internal MSI RC oscillator. So the most likely explanation is that you have not configured the clock tree and are not running from the 16 MHz HSE as you believe.
Either configure your clock tree to use the 16 MHz source, or base your calculations on the MSI frequency. The MSI is just about precise enough over the normal temperature range to maintain a sufficiently accurate baud rate, but it is not ideal.
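To illustrate, computing BRR from the clock that is actually feeding USART2 (assuming the reset-default 4 MHz MSI-derived PCLK1 and OVER8 = 0) reproduces the value that was observed to work; this would replace the hard-coded 0x683 in USART2_Init():
uint32_t pclk1 = 4000000u;                // assumed: MSI-derived PCLK1 at reset
uint32_t baud  = 9600u;
USART2->BRR = (pclk1 + baud / 2u) / baud; // 4 MHz / 9600, rounded -> 417 = 0x1A1
// With a genuine 16 MHz USART kernel clock the same formula gives 1667 = 0x683.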
I'm working with an STM32L073RZ CPU running Mbed OS 5.11.2. Eventually I aim to get this working in a very low-power mode (STOP mode) that will be woken by either an RTC interrupt or an interrupt from a peripheral device on pin PA_0 (WAKEUP_PIN_1). At the moment I am simply attempting to set up PA_0 as an interrupt using the STM32 HAL API. Please see my code below:
#include "mbed.h"
#define LOG(...) pc.printf(__VA_ARGS__); pc.printf("\r\n");
DigitalOut led(LED1);
Serial pc(USBTX, USBRX);
void HAL_GPIO_EXTI_Callback(uint16_t GPIO_Pin) {
led = !led;
}
int main()
{
pc.baud(9600);
led = 1;
// GPIO SETUP
LOG("Enabling GPIO port A clock");
__HAL_RCC_GPIOA_CLK_ENABLE();
GPIO_InitTypeDef GPIO_InitStruct;
GPIO_InitStruct.Pin = GPIO_PIN_0;
GPIO_InitStruct.Mode = GPIO_MODE_IT_RISING;
GPIO_InitStruct.Pull = GPIO_NOPULL;
LOG("Initialising PA_0");
HAL_GPIO_Init(GPIOA, &GPIO_InitStruct);
// NVIC SETUP
LOG("Setting IRQ Priority");
HAL_NVIC_SetPriority(EXTI0_1_IRQn, 0, 1); // Priorities can be 0, 1, 2, 3 with lowest being highest prio
LOG("Enabling IRQ");
HAL_NVIC_EnableIRQ(EXTI0_1_IRQn);
LOG("Going into STOP mode");
HAL_PWR_EnterSTOPMode(PWR_LOWPOWERREGULATOR_ON, PWR_STOPENTRY_WFE);
}
As you can see, the code is broken into two sections: GPIO setup and NVIC setup. My issue is as follows:
If I perform GPIO setup before NVIC setup, the program seems to hang on HAL_NVIC_SetPriority(); however, if I perform NVIC setup before GPIO setup, the code seems to hang on HAL_NVIC_EnableIRQ().
I am completely stumped as to what is causing this. Any insight is greatly appreciated.
You don't need to do this manually. As long as you run Mbed OS in tickless mode (set the MBED_TICKLESS=1 macro in your mbed_app.json) the MCU will automatically enter STOP mode whenever all threads are idle. If you want to wake up, you can either use an InterruptIn on the pin or use a LowPowerTicker.
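A rough sketch of that approach (untested; it assumes MBED_TICKLESS=1 has been added to the macros section of mbed_app.json so the idle thread can drop the core into its deepest allowed sleep on its own):
#include "mbed.h"

DigitalOut led(LED1);
InterruptIn wake_pin(PA_0); // the wake-up pin from the question

void on_wake() {
    led = !led; // runs in interrupt context, keep it short
}

int main()
{
    wake_pin.rise(&on_wake); // a rising edge on PA_0 wakes the MCU

    while (true) {
        // With tickless enabled, blocking here lets the idle thread put the
        // core to sleep until the next wake-up source (pin edge or timer).
        ThisThread::sleep_for(60000);
    }
}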
If you're looking for the absolute lowest power modes, you can use the standby feature (without RAM retention) for which there's a library here: stm32-standby-rtc-wakeup.
I am trying to implement my own SPI communication from an FPGA to an STM32, in which the FPGA serves as master and generates the chip enable and clock. The FPGA transmits data on its rising edge and receives data on its falling edge; my FPGA code works properly.
On the STM32 side I capture this master clock with interrupts, receiving data on the rising edge and transmitting on the falling edge, but the communication stops working properly if I increase the clock speed above 250 kHz.
As I understand it, the STM32 runs at 168 MHz (I set the clock configuration for 168 MHz), and handling a 1 MHz interrupt should not be a big problem, so can anyone guide me on how to handle this high-speed clock on the STM32?
My code is written below
/*
* Project name:
EXTI_interrupt (EXTI interrupt test)
* Copyright:
(c) Mikroelektronika, 2011.
* Revision History:
20111226:
- Initial release;
* Description:
This code demonstrates how to use External Interrupt on PD10.
PD10 is external interrupt pin for click1 socket.
receive data from mosi line in each rising edge.
* Test configuration:
MCU: STM32F407VG
http://www.st.com/st-web-ui/static/active/en/resource/technical/document/datasheet/DM00037051.pdf
dev.board: EasyMX PRO for STM32
http://www.mikroe.com/easymx-pro/stm32/
Oscillator: HSI-PLL, 140.000MHz
Ext. Modules: -
SW: mikroC PRO for ARM
http://www.mikroe.com/mikroc/arm/
* NOTES:
receive 32 bit data from mosi line in each rising edge
*/
//D10 clk
//D2 ss
//C0 MOSI
//C1 FLAG
int read=0;
int flag_int=0;
int val=0;
int rec_data[32];
int index_rec=0;
int display_index=0;
int flag_dint=0;
void ExtInt() iv IVT_INT_EXTI15_10 ics ICS_AUTO {
    EXTI_PR.B10 = 1; // clear flag
    flag_int = 1;    // flag set on interrupt
}
void main() {
    TFT_Init_ILI9340();

    GPIO_Digital_Input(&GPIOD_BASE, _GPIO_PINMASK_10);
    GPIO_Digital_Output(&GPIOD_BASE, _GPIO_PINMASK_13); // Set PORTD pin as digital output
    GPIO_Digital_Output(&GPIOD_BASE, _GPIO_PINMASK_12); // Set PORTD pin as digital output
    GPIO_Digital_Output(&GPIOD_BASE, _GPIO_PINMASK_14); // Set PORTD pin as digital output
    GPIO_Digital_Output(&GPIOD_BASE, _GPIO_PINMASK_15); // Set PORTD pin as digital output
    GPIO_Digital_Input(&GPIOA_IDR, _GPIO_PINMASK_0);    // Set PA0 as digital input
    GPIO_Digital_Input(&GPIOC_IDR, _GPIO_PINMASK_0);    // Set PC0 as digital input
    GPIO_Digital_Input(&GPIOC_IDR, _GPIO_PINMASK_2);    // Set PC2 as digital input
    GPIO_Digital_Output(&GPIOC_IDR, _GPIO_PINMASK_1);   // Set PC1 as digital output

    // Interrupt registers
    SYSCFGEN_bit = 1;                    // Enable clock for alternate pin functions
    SYSCFG_EXTICR3 = 0x00000300;         // Map external interrupt on PD10
    EXTI_RTSR = 0x00000000;              // Set interrupt on rising edge (none)
    EXTI_FTSR = 0x00000400;              // Set interrupt on falling edge (PD10)
    EXTI_IMR |= 0x00000400;              // Set mask
    //NVIC_IntEnable(IVT_INT_EXTI15_10); // Enable external interrupt

    while(1)
    {
        // Interrupt is not enabled until I push the button
        if((GPIOD_ODR.B2 == 0) && (flag_dint == 0))
        {
            if (Button(&GPIOA_IDR, 0, 1, 1))
            {
                Delay_ms(100);
                GPIOC_ODR.B1 = 1;                  // Status for FPGA
                NVIC_IntEnable(IVT_INT_EXTI15_10); // Enable external interrupt
            }
        }
        if(flag_int == 1)
        {
            // Functionality on rising edge
            flag_int = 0;
            if(index_rec < 31)
            {
                GPIOD_ODR.B13 = GPIOC_IDR.B0;       // Display data on LED
                rec_data[index_rec] = GPIOC_IDR.B0; // Save data in an array
                index_rec = index_rec + 1;          // Read data
            }
            else
            {
                flag_dint = 1;
                NVIC_IntDisable(IVT_INT_EXTI15_10);
            }
        }
    } // Infinite loop
}
Without getting into your code specifics (see PeterJ_01's comment), the clock rate problem can be explained by a misunderstanding of throughput in your assumptions.
You assume that because your STM device has a 168 MHz clock it can sustain a comparable interrupt throughput, which you seem to have conservatively relaxed to 1 MHz.
However, the interrupt throughput it can support is the inverse of the time it takes the device to process each interrupt. This time includes both the time the processor takes to enter the service routine (i.e. detect the interrupt, interrupt the current code, and resolve from the vector table where to jump to) and the time taken to execute the service routine itself.
Let's be super optimistic and say that entering the routine takes 1 cycle and the routine itself takes 3 (2 for the flags you set and 1 for the jump out of the routine). That gives 4 cycles, which at 168 MHz is 23.81 ns; taking the inverse gives 42 MHz. The same figure comes from dividing the maximum frequency you could achieve (168 MHz) by the number of cycles spent processing each interrupt.
Hence our really optimistic bound is 42 MHz, but realistically it will be lower. For a more accurate estimate you should measure your implementation's timing and dig into your device's documentation for interrupt response times.
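As a worked example of that arithmetic (the numbers are purely illustrative; a real Cortex-M4 needs roughly 12 cycles just for interrupt entry, so the true ceiling is lower still):
#include <stdio.h>

int main(void)
{
    const double f_cpu = 168e6;         /* core clock in Hz                    */
    const double cycles_per_irq = 4.0;  /* optimistic: 1 entry + 3 for handler */

    double t_irq = cycles_per_irq / f_cpu; /* ~23.81 ns spent per interrupt    */
    double max_rate = 1.0 / t_irq;         /* ~42 MHz optimistic upper bound   */

    printf("time per interrupt : %.2f ns\n", t_irq * 1e9);
    printf("max interrupt rate : %.1f MHz\n", max_rate / 1e6);
    return 0;
}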
I have a Tiva C microcontroller (the TM4C123GXL) and I have been trying for a while now to use the I2C module on the board with a digital accelerometer, with no result. I have been trying to set the MDR register to a certain value to send, but it stays at 0.
Here is the code I am using, from initialization up to the part where I set the MDR register. I am single-stepping in the debugger, and I initially run the code up to the assignment I2C3_MDR_R = 0x2D;
void PortDInit(void)
{
    volatile unsigned long delay=0;
    SYSCTL_RCGCI2C_R |= 0x8;         //1-set clock of I2C of module 3
    delay = SYSCTL_RCGC2_R;          //2-delay to allow clock to stabilize
    SYSCTL_RCGC2_R |= 0x00000008;    //3-port D clock
    delay = SYSCTL_RCGC2_R;          //4-delay to allow clock to stabilize
    GPIO_PORTD_AFSEL_R |= 0x03;      //5-alternate function set for I2C mode
    GPIO_PORTD_DEN_R |= 0x03;        //6-enable digital functionality for PA6 and PA7
    GPIO_PORTD_ODR_R |= 0x02;        //7-enable open drain mode for I2CSDA register of port A
    GPIO_PORTD_PCTL_R = 0x00000033;  //8-set PCTL to I2C mode
    I2C3_MCR_R = 0x00000010;         //9-initialize the I2C master
    I2C3_MTPR_R = 0x00000007;        //10-number of system clock cycles in 1 SCL period
    I2C3_MSA_R = 0x3A;               //set slave address and read/write bit
    I2C3_MDR_R = 0x2D;               //data to be sent; BREAKPOINT HERE - single-stepping yields MDR with same value = 0
    I2C3_MCS_R = 0x00000003;         //follow transmit condition
    while(I2C3_MCS_R &= 0x40 == 1);  //wait while the bus is busy sending data
    if(I2C3_MCS_R &= 0x04 == 1)
    {
        //handle error in communication
    }
    else
    {
        //success in transmission
    }
}
What I have done to reach this code:
Carefully studied the I2C protocol and how it works.
Checked the datasheet and followed the initialization steps mentioned there step by step, which got me to this code.
I know I should use the TivaWare library, which would be easier, but working with the registers directly helps me understand how everything works; I'm still a student.
At first I didn't have the digital enable line, since the datasheet didn't mention activating it for I2C, but it seemed logical that it should be there as we are using digital values; I tried both ways and got the same output, MDR = 0.
I am using Keil 4 as my IDE, and I'm viewing the I2C module 3 registers to see whether data is placed in MDR or not.
I hope someone can help.
Thanks.
This is a long shot, but here goes:
In your comments, step 6 says
//6-enable digital functionality for PA6 and PA7
but it appears you are working on GPIO_PORTD...
maybe it's a comment typo (you meant PD6 and PD7),
but just double-check that you are looking at the right pins...
Good luck!
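For what it's worth, here is a sketch of those pin-related lines with comments matching Port D. On the TM4C123, I2C3 comes out on PD0 (I2C3SCL) and PD1 (I2C3SDA), which is what the 0x03 masks and the PCTL value 0x33 actually select, assuming that was the intent:
GPIO_PORTD_AFSEL_R |= 0x03;       // PD0, PD1 to alternate function
GPIO_PORTD_DEN_R   |= 0x03;       // digital enable on PD0 (I2C3SCL) and PD1 (I2C3SDA)
GPIO_PORTD_ODR_R   |= 0x02;       // open drain on PD1 (SDA) only
GPIO_PORTD_PCTL_R   = 0x00000033; // port-control value 3 on PD0/PD1 selects I2C3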