I'm working with MDK-Pro and the File System library.
In my application I need an SPI interface to the SD card. I've managed to set up the project properly, except that the RTE_Components.h file that Keil generates contains the line #define RTE_Drivers_MCI0, which subsequently triggers a preprocessor error ("SDIO not configured in RTE_Device.h").
Although I can manually comment out this line in RTE_Components.h, every so often Keil regenerates the file and the problem comes back. Does anyone know what exactly generates this file, and how I can stop it from adding the SDIO-related definitions to the project?
RTE_Components.h is not supposed to be modified and will always be regenerated automatically. The fact that the stack tries to connect via the MCI interface is related to the configuration made in FS_Config_MC_0.h:
// <o>Connect to hardware via Driver_MCI# <0-255>
// <i>Select driver control block for hardware interface
#define MC0_MCI_DRIVER 0
// <o>Connect to hardware via Driver_SPI# <0-255>
// <i>Select driver control block for hardware interface when in SPI mode
#define MC0_SPI_DRIVER 0
// <o>Memory Card Interface Mode <0=>Native <1=>SPI
// <i>Native uses a SD Bus with up to 8 data lines, CLK, and CMD
// <i>SPI uses 2 data lines (MOSI and MISO), SCLK and CS
// <i>When using SPI both Driver_SPI# and Driver_MCI# must be specified
// <i>since the MCI driver provides the control interface lines.
#define MC0_SPI 1
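For illustration only: the "SDIO not configured in RTE_Device.h" message comes from a compile-time check in the device's MCI driver of roughly the shape below. The RTE_SDIO name is a placeholder for whatever switch enables SDIO in your RTE_Device.h; the exact symbols depend on the device family pack.
/* Illustrative sketch, not copied from the actual driver sources. */
#ifdef RTE_Drivers_MCI0          /* MCI driver component selected in the RTE */
  #ifndef RTE_SDIO               /* placeholder: SDIO peripheral not enabled */
    #error "SDIO not configured in RTE_Device.h"
  #endif
#endif
RTE_Components.h itself only mirrors the component selection made in the Manage Run-Time Environment dialog, which is why hand edits keep being overwritten.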
I generated code for an STM32F103C8T6 with CubeMX for USB VCP. When I add a CDC_Transmit_FS call to send data, the port isn't recognized by Windows 10!
What should I do? Here is the code, which compiles without errors:
#include "stm32f1xx_hal.h"
#include "usb_device.h"
#include "usbd_cdc_if.h"
int main(void)
{
uint8_t Text[] = "Hello\r\n";
while (1)
{
CDC_Transmit_FS(Text,6); /*when commented the port is recognized*/
HAL_Delay(1000);
}
}
In my experience there are three things you need to check:
startup_stm32f405xx.s --> increase the heap size. I use a heap size of 800 and a stack size of 800 as well.
usbd_cdc_if.c --> set APP_RX_DATA_SIZE and APP_TX_DATA_SIZE to 64
usbd_cdc_if.c --> add the code below to the CDC_Control_FS() function
Code:
case CDC_SET_LINE_CODING:
  tempbuf[0] = pbuf[0];
  tempbuf[1] = pbuf[1];
  tempbuf[2] = pbuf[2];
  tempbuf[3] = pbuf[3];
  tempbuf[4] = pbuf[4];
  tempbuf[5] = pbuf[5];
  tempbuf[6] = pbuf[6];
  break;

case CDC_GET_LINE_CODING:
  pbuf[0] = tempbuf[0];
  pbuf[1] = tempbuf[1];
  pbuf[2] = tempbuf[2];
  pbuf[3] = tempbuf[3];
  pbuf[4] = tempbuf[4];
  pbuf[5] = tempbuf[5];
  pbuf[6] = tempbuf[6];
  break;
and define uint8_t tempbuf[7]; in the USER CODE PRIVATE_VARIABLES section of usbd_cdc_if.c.
Without the increased heap size, Windows does not react at all.
Without point 3, Windows sends the baud-rate settings and then reads them back, expecting to get the same values. Since you do not return any values, the virtual COM port remains in the driver-not-loaded state.
If you do all of that, the out-of-the-box Windows 10 VCP driver can be used. There is no need to install the very old ST VCP driver on your system.
PS: I read somewhere that turning on VSense (VBUS sensing) causes problems, too. I can't confirm that; I have not configured it and everything works like a charm.
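For context on why tempbuf needs exactly seven bytes: the block exchanged by CDC_SET_LINE_CODING / CDC_GET_LINE_CODING is the line-coding structure defined by the USB CDC ACM specification. A sketch of its layout (the struct name is only for illustration, not taken from the ST middleware):
#include <stdint.h>

/* USB CDC ACM line coding block: 7 bytes on the wire. */
typedef struct __attribute__((packed)) {
    uint32_t dwDTERate;   /* baud rate in bits per second, e.g. 115200 */
    uint8_t  bCharFormat; /* stop bits: 0 = 1, 1 = 1.5, 2 = 2 */
    uint8_t  bParityType; /* 0 = none, 1 = odd, 2 = even, 3 = mark, 4 = space */
    uint8_t  bDataBits;   /* data bits: 5, 6, 7, 8 or 16 */
} cdc_line_coding_t;
Echoing these seven bytes back unchanged on CDC_GET_LINE_CODING is what lets Windows finish loading its serial driver.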
Put a delay before the CDC_Transmit_FS call - it waits for the initialization to complete. Your code should look like this:
int main(void)
{
  uint8_t Text[] = "Hello\r\n";
  HAL_Delay(1000);
  while (1)
  {
    CDC_Transmit_FS(Text, 6); /* when this line is commented out, the port is recognized */
    HAL_Delay(1000);
  }
}
I had a similar issue. I couldn't connect to the port and it appeared as just "Virtual COM Port". I added a while loop to wait for USBD_OK from CDC_Transmit_FS. After that it started to work, even without the loop or a delay after the init function. I am not sure what the issue was.
while(CDC_Transmit_FS((uint8_t*)txBuf, strlen(txBuf))!=USBD_OK)
{
}
You may have to install a driver to get the device recognized as a COM port.
You can get it from the ST site.
If it is not installed, the device is listed with a question or exclamation mark in Device Manager.
Note that you cannot send anything until the device is connected to the host!
I am not sure that the CubeMX-generated CDC_Transmit_FS checks for this.
Also, instead of a delay before resending, you should check the CDC class data TxState:
0 means the transmission is over.
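A minimal sketch of that check, assuming the default CubeMX names (hUsbDeviceFS from usb_device.c and USBD_CDC_HandleTypeDef from usbd_cdc.h); treat it as an illustration, not the exact middleware API of every firmware version:
#include "usbd_cdc_if.h"

extern USBD_HandleTypeDef hUsbDeviceFS;  /* default CubeMX device handle */

/* Returns 1 when the previous CDC transmission has finished. */
static int cdc_tx_done(void)
{
    USBD_CDC_HandleTypeDef *hcdc = (USBD_CDC_HandleTypeDef *)hUsbDeviceFS.pClassData;
    return (hcdc != NULL) && (hcdc->TxState == 0);
}
CDC_Transmit_FS() performs the same check internally and returns USBD_BUSY while TxState is non-zero, which is why polling its return value (as in the previous answer) has the same effect.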
I know it's a bit late, but I stumbled upon this post and it was extremely helpful.
Here is what I needed to do:
do the line coding (I think it is only necessary on Windows systems)
increase the heap (the stack was left at the default 0x200)
Here is what wasn't necessary for me (on an STM32F405RGT6 chip):
change APP_RX_DATA_SIZE / APP_TX_DATA_SIZE (I left them at 2048)
add a delay before calling CDC_Transmit_FS()
Also, some other things to consider that have tripped me up in the past:
be sure to use a USB cable with data lines (most charging-only cables don't have them)
double-check the traces/connections if you use a custom board
I've added a MAX7320 i2c expander chip to i2c bus 0 on my ARM Linux board.
The chip works correctly from userspace with commands such as /usr/sbin/i2cset -y 0 0x5d 0x02 and /usr/sbin/i2cget -y 0 0x5d.
There is a drivers/gpio/gpio-max732x.c file in the kernel source, which is compiled into the kernel that I'm running. (I've built it from source.)
How do I tell the kernel that it should instantiate the gpio-max732x driver on "i2c bus 0, chip id 0x5d"?
Do I need to modify the device tree .dts file and put a new .dtb file in /boot/dtbs/?
What would the clause for instantiating a gpio-max732x module look like?
P.S. I've seen https://lkml.org/lkml/2015/1/13/305 but I can't figure out how to get the patch files.
Device Tree
There must be an appropriate Device Tree definition for your chip in order for the driver to be instantiated. There are two ways to do this:
Modify the .dts Device Tree file for your board (look in arch/arm/boot/dts/), then recompile it and re-flash it to your device.
This way is preferred when you have access to the kernel sources for your board and you are able to re-flash the .dtb file to your device.
Create a Device Tree overlay file, compile it and load it on your device.
This way is preferred when you don't have access to the kernel sources for your board, or you are unable to flash a new device tree blob to your device.
Your device definition in the Device Tree should look like this (according to Documentation/devicetree/bindings/gpio/gpio-max732x.txt):
&i2c0 {
    expander: max7320@5d {
        compatible = "maxim,max7320";
        reg = <0x5d>;
        gpio-controller;
        #gpio-cells = <2>;
    };
};
Kernel configuration
As your expander chip (MAX7320) has no input GPIOs, you don't need IRQ support for MAX732x. So you can disable CONFIG_GPIO_MAX732X_IRQ in your kernel configuration.
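Assuming the usual Kconfig symbol names, the relevant fragment of the kernel .config would then look roughly like this:
CONFIG_GPIO_MAX732X=y
# CONFIG_GPIO_MAX732X_IRQ is not set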
Matching device with driver
Once your Device Tree (with the definition for the MAX7320) is loaded, the MAX732x driver will be matched against that device definition and instantiated. Here is how the matching happens.
In the Device Tree file you have the compatible property:
compatible = "maxim,max7320";
In the MAX732x driver you can see this table:
static const struct of_device_id max732x_of_table[] = {
    ...
    { .compatible = "maxim,max7320" },
    ...
When the driver is loaded, and when the Device Tree blob is loaded, the kernel tries to find a match between each driver and each Device Tree definition, simply by comparing the strings above. If the strings match, the kernel instantiates the driver, passing the corresponding device parameters to it. Look at the i2c_device_match() function for details.
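For illustration, the table above is hooked into the driver core through the driver's of_match_table field; abridged, the registration in gpio-max732x.c looks roughly like this (details vary between kernel versions):
static struct i2c_driver max732x_driver = {
    .driver = {
        .name           = "max732x",
        .of_match_table = of_match_ptr(max732x_of_table),
    },
    .probe    = max732x_probe,   /* runs once a matching device node is found */
    .id_table = max732x_id,
};
The I2C core compares every device against this table, and a hit on the compatible string leads to max732x_probe() being called for your device at address 0x5d.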
Obtaining patches
The best way is to use kernel sources that already have Device Tree support for MAX732x (v4.0+). But if that's not an option, then...
You can cherry-pick the patches from the upstream kernel into your kernel:
$ git remote add upstream git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
$ git fetch --all
$ git cherry-pick 43c4bcf9425e
$ git cherry-pick 479f8a5744d8
$ git cherry-pick 09afa276d52e
$ git cherry-pick 996bd13f28e6
And if you still want to apply the patches manually (the worst option, actually), here you can find direct links to the patches. Click the (patch) link to get a raw patch.
Also check for later patches to gpio-max732x.c here.
Hardware concerns
To be sure that your chip has the 0x5d I2C address, check that the address configuration pins are tied to the following lines (as per the datasheet):
Pin Line
-----------
AD2 V+
AD0 V+
This question may be so obvious that it's stupid, but I am failing to come up with an answer for it.
I am trying to make a simple makefile project for the SAM4S Xplained board from Atmel.
I am new to ARM and feel a bit lost about how to get things working. Here is what I do to try to get the LEDs to work:
/* Enable clock for PIOC. */
PMC->PMC_WPMR = PMC_WPMR_WPKEY_PASSWD;
PMC->PMC_PCER0 = PMC_PCER0_PID13; /* PIOC clock enable. */
/* Enable output for LED. */
PIOC->PIO_WPMR = PIO_WPMR_WPKEY_PASSWD; /* Enable writing to registers. */
PIOC->PIO_PER = PIO_PER_P10 | PIO_PER_P17; /* Enable pio 10, 17. */
PIOC->PIO_OER = PIO_OER_P10 | PIO_OER_P17; /* Set pio10 and 17 as output. */
PIOC->PIO_SODR = PIO_SODR_P10; /* Set pio10. */
PIOC->PIO_CODR = PIO_CODR_P17; /* Clear pio17 . */
But absolutely nothing happens. Am I missing something?
There should be user LEDs at PIOC 10 and 17.
Board schematics:
http://www.atmel.com/webdoc/sam4s16xplained/sam4s16xplained.boardScematics.section_ggo_tyg_xf.html
The problem was not in the code but in Atmel's tools used to program the board. I had been using the SAM-BA In-system Programmer to program the board, but for some reason it failed to change the contents of the flash. Even setting a single value manually in the memory view fails.
I instead tried Segger's J-Link software and did the following steps:
Update the J-Link driver on the board using Atmel Studio 6 (this step requires Windows).
Download the J-Link software package for Linux from Segger: https://www.segger.com/jlink-software.html.
Use JLinkExe to program the board, like so:
Make sure JP25 is disconnected (it is only needed for SAM-BA).
Connect via USB using the JTAG connector.
Start JLinkExe.
In the J-Link terminal do:
JLink> device at91sam4s16c
JLink> loadbin <target.bin>, 0x400000
Sometimes I need to reset the board after programming it before it works. With the Segger tools, debugging also works now: start the GDB server with JLinkGDBServer and connect with arm-none-eabi-gdb using:
(gdb) target remote :2331
(gdb) file <target.elf>
I have written a simple UART driver for the UART4 instance on the OMAP4460 Panda board, with just open, close, read and write functions. How will it be different from omap-serial.c?
Should I call platform_driver_register() in the init function?
Since this is also a class of character device, how will it be different?
The STM32F2 microcontroller has built-in capabilities to prevent readout of the application code through a debug interface. It works fine and is accomplished pretty easily by configuring the read protection (RDP) level to '1' (any value other than 0xAA or 0xCC) or '2' (0xCC, which is irreversible). Trying to turn it off again, however, is where I run into issues.
The expected behavior when the RDP level is lowered back to 0:
The chip will perform a mass flash erase.
Followed by clearing the protection flag.
System reset
However, after a power cycle the flash has been successfully erased, but the protection flag remains at level '1' (0x55), keeping the debug interface disabled and thus preventing me from writing any new application code. It is possible to fiddle around with the debugger and force the flag to level 0 (0xAA) manually, though.
Has anyone had the same or similar issues with the STM32F2xx series who can help me out? I'm using the STM32 standard peripheral drivers for programming the flash.
Enable
// Enable read out protection
FLASH_OB_Unlock();
FLASH_OB_RDPConfig(OB_RDP_Level_1);
FLASH_OB_Launch();
FLASH_OB_Lock();
// Restart platform
NVIC_SystemReset();
Disable
// Disable read out protection
FLASH_OB_Unlock();
FLASH_OB_RDPConfig(OB_RDP_Level_0);
FLASH_OB_Launch();
FLASH_OB_Lock();
// Restart platform
NVIC_SystemReset();
This is because you restart the chip before the protection flag is cleared, while the mass flash erase is still in progress.
The only way to recover the chip is to use the system bootloader.
Force the BOOT0 pin to 1 and the BOOT1 pin to 0 at power-up to start the bootloader, then connect USB and program the chip with a DFU programmer.
You can download the DFU programmer from ST's website.
I used the library as follows (it did not work without FLASH_Unlock()):
// Flash readout protection Level 1
if (FLASH_OB_GetRDP() != SET) {
    FLASH_Unlock();    // this line is critical!
    FLASH_OB_Unlock();
    FLASH_OB_RDPConfig(OB_RDP_Level_1);
    FLASH_OB_Launch(); // option bytes programming
    FLASH_OB_Lock();
    FLASH_Lock();
}
No need for NVIC_SystemReset();.
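For completeness, a sketch of the corresponding Level 0 (unprotect) sequence with the same FLASH_Unlock() fix; this is an untested assumption based on the snippet above, and keep in mind that regressing from Level 1 to Level 0 still triggers the mass erase, so the flash will be empty afterwards:
// Flash readout protection back to Level 0 (triggers a mass erase)
if (FLASH_OB_GetRDP() == SET) {
    FLASH_Unlock();                      // same critical unlock as above
    FLASH_OB_Unlock();
    FLASH_OB_RDPConfig(OB_RDP_Level_0);
    FLASH_OB_Launch();                   // option bytes programming + mass erase
    FLASH_OB_Lock();
    FLASH_Lock();
}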
Checking the result worked best for me with the STM32 ST-LINK Utility CLI:
> "C:\Program Files (x86)\STMicroelectronics\STM32 ST-LINK Utility\ST-LINK Utility\ST-LINK_CLI.exe" -c SWD -rOB
STM32 ST-LINK CLI v3.0.0.0
STM32 ST-LINK Command Line Interface
ST-LINK SN : 51FF6D064989525019422287
ST-LINK Firmware version : V2J27S0
Connected via SWD.
SWD Frequency = 4000K.
Target voltage = 2.9 V.
Connection mode : Normal.
Device ID:0x422
Device flash Size : 256 Kbytes
Device family :STM32F302xB-xC/F303xB-xC/F358xx
Option bytes:
RDP : Level 1
IWDG_SW : 1
nRST_STOP : 1
nRST_STDBY : 1
nBoot1 : 1
VDDA : 1
Data0 : 0xFF
Data1 : 0xFF
nSRAM_Parity: 1
WRP : 0xFFFFFFFF
Not really a solution, but I hope this saves someone some time.