I am implementing an emulated EEPROM in flash memory on an STM32 microcontroller, mostly based on the Application Note by ST (AN2594 - EEPROM emulation in STM32F10x microcontrollers).
The basics outlined there and in the respective datasheet and programming manual (PM0075) are quite clear. However, I am unsure about the implications of a power-out/system reset for flash programming and page erasure operations. The AppNote considers this case too, but does not clarify what exactly happens when a programming (write) operation is interrupted:
Does the address have an arbitrary (random) value? OR
Are only part of the bits written? OR
Does it have the default erase value 0xFF?
Thanks for hints or pointers to the relevant documentation.
Arne
This is not really a software question (much less C++). It belongs on electronics.se, but there does not seem to be an option to migrate questions there… only to sites such as superuser or webmasters.se.
The short answer is that hardware is inherently unreliable. Something can always in theory go wrong that interrupts the write process or causes the wrong bit to be written.
The long answer is that Flash circuits are usually designed for maximum reliability. A sudden power loss on write will probably not cause corruption because the driver circuit may have enough capacitance or the capability to operate under a low-voltage condition long enough to finish draining the charge as necessary. A power loss on erasure might be trickier. You really need to consult the manufacturer.
For a "soft" system reset with no power interruption, it would be pretty surprising if the hardware didn't always completely erase whatever bytes it was immediately working on. Usually the bytes are erased in a predefined order, so you can use the first or last ones to indicate whether a page is full or empty.
#include "stm32f10x.h"
#define FLASH_KEY1 ((uint32_t)0x45670123)
#define FLASH_KEY2 ((uint32_t)0xCDEF89AB)
#define Page_127 0x0801FC00
uint16_t i;
int main()
{
//FLASH_Unlock
FLASH->KEYR = FLASH_KEY1;
FLASH->KEYR = FLASH_KEY2;
//FLASH_Erase Page
while((FLASH->SR&FLASH_SR_BSY));
FLASH->CR |= FLASH_CR_PER; //Page Erase Set
FLASH->AR = Page_127; //Page Address
FLASH->CR |= FLASH_CR_STRT; //Start Page Erase
while((FLASH->SR&FLASH_SR_BSY));
FLASH->CR &= ~FLASH_CR_PER; //Page Erase Clear
//FLASH_Program HalfWord
FLASH->CR |= FLASH_CR_PG;
for(i=0; i<1024; i+=2)
{
while((FLASH->SR&FLASH_SR_BSY));
*(__IO uint16_t*)(Page_127 + i) = i;
}
FLASH->CR &= ~FLASH_CR_PG;
FLASH->CR |= FLASH_CR_LOCK;
while(1);
}
If you are using the EEPROM emulation driver, you shouldn't worry too much about flash corruption issues, as the emulation driver always keeps a shadow copy in another page. Worst case, you will lose the most recent values that were being written to flash. If you look closely at the emulation driver, you will notice that it is essentially just a wrapper around stm32fxx_flash.c in the standard peripheral library.
If you look at the application note, you will see the times the emulation library takes for the flash operations. Erasing a page typically takes the longest (tens of milliseconds on an M0 core; this depends on the clock frequency).
If you are using the EEPROM emulation driver, you had better add a verification step, such as checking the data after the write has finished.
For example, if you have 10 bytes of data to save, write 11 bytes to flash, with the last byte being a checksum. Then verify the checksum after reading the data back from flash.
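A minimal sketch of that idea; the write_flash/read_flash helpers here are hypothetical placeholders for your actual flash routines:

#include <stdint.h>

/* Hypothetical helpers - replace with your actual flash driver calls. */
extern void write_flash(uint32_t addr, const uint8_t *buf, uint16_t len);
extern void read_flash(uint32_t addr, uint8_t *buf, uint16_t len);

#define DATA_LEN 10u

/* Simple additive checksum over the payload. */
static uint8_t checksum(const uint8_t *buf, uint16_t len)
{
    uint8_t sum = 0;
    while (len--)
        sum += *buf++;
    return sum;
}

/* Write DATA_LEN payload bytes plus one trailing checksum byte. */
void save_with_checksum(uint32_t addr, const uint8_t data[DATA_LEN])
{
    uint8_t buf[DATA_LEN + 1];
    for (uint16_t n = 0; n < DATA_LEN; n++)
        buf[n] = data[n];
    buf[DATA_LEN] = checksum(data, DATA_LEN);
    write_flash(addr, buf, DATA_LEN + 1);
}

/* Read the block back; returns 1 if the stored checksum matches. */
int load_with_checksum(uint32_t addr, uint8_t data[DATA_LEN])
{
    uint8_t buf[DATA_LEN + 1];
    read_flash(addr, buf, DATA_LEN + 1);
    for (uint16_t n = 0; n < DATA_LEN; n++)
        data[n] = buf[n];
    return checksum(buf, DATA_LEN) == buf[DATA_LEN];
}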
I have a Cincoze DE-1000 industrial PC that features a Fintek F81866A chipset. I have to manage the DIO pins to read the input from a physical button and to switch an LED on/off. I have experience in C++ programming, but not at the low/hardware level.
On the documentation accompanying the PC, there is the following C code:
#define AddrPort 0x4E
#define DataPort 0x4F
//<Enter the Extended Function Mode>
WriteByte(AddrPort, 0x87)
WriteByte(AddrPort, 0x87) //Must write twice to entering Extended mode
//<Select Logic Device>
WriteByte(AddrPort, 0x07)
WriteByte(DataPort, 0x06)
//Select logic device 06h
//<Input Mode Selection> //Set GP74 to GP77 input mode
WriteByte(AddrPort, 0x80) //Select configuration register 80h
WriteByte(DataPort, 0x0X)
//Set (bit 4~7) = 0 to select GP 74~77 as Input mode.
//<input Value>
WriteByte(AddrPort, 0x82) // Select configuration register 82h
ReadByte(DataPort, Value) // Read bit 4~7(0xFx)= GP74 ~77 as High.
//<Leave the Extended Function Mode>
WriteByte(AddrPort, 0xAA)
As far as I understand, the above code should read the value of the four input pins (so it should read 1 for each pin), but I am really struggling to understand how it actually works. I have understood the logic (selecting an address and reading/writing a hex value to it), but I cannot figure out what kind of C instructions WriteByte() and ReadByte() are. Also, I do not understand where Value in the line ReadByte(DataPort, Value) comes from. It should read the 4 pins all together, so it should be some kind of "byte" type and it should contain 1 in its bits 4-7, but again I cannot really grasp the meaning of that line.
I have found an answer for a similar chip, but it did not help my understanding.
Please advise me or point me to some relevant documentation.
That chip looks like a fairly typical Super I/O controller, which is basically the hub where all of the "slow" peripherals are combined into a single chipset.
Coreboot has a wiki page that talks about how to access the super I/O.
On the PC architecture, Port I/O is accomplished using special CPU instructions, namely in and out. These are privileged instructions, which can only be used from a kernel-mode driver (Ring 0), or a userspace process which has been given I/O privileges.
Luckily, this is easy in Linux. Check out the man page for outb and friends.
You use ioperm(2) or alternatively iopl(2) to tell the kernel to allow the user space application to access the I/O ports in question. Failure to do this will cause the application to receive a segmentation fault.
So we could adapt your function into a Linux environment like this:
/* Untested: Use at your own risk! */
#include <sys/io.h>
#include <stdio.h>

#define ReadByte(port) inb(port)
#define WriteByte(port, val) outb(val, port)

int main(void)
{
    if (iopl(3) < 0) {
        fprintf(stderr, "Failed to get I/O privileges (are you root?)\n");
        return 2;
    }

    /* Your code using ReadByte / WriteByte here */
    return 0;
}
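For example, the documented read sequence from your question would translate to something like this, using the macros above (note they take only the port, with ReadByte returning the value, unlike the two-argument pseudo-calls in the vendor listing):

    /* Enter the Extended Function Mode (must write 0x87 twice) */
    WriteByte(0x4E, 0x87);
    WriteByte(0x4E, 0x87);

    /* Select logic device 06h */
    WriteByte(0x4E, 0x07);
    WriteByte(0x4F, 0x06);

    /* Select configuration register 82h and read it:
       bits 4-7 reflect GP74-GP77 */
    WriteByte(0x4E, 0x82);
    unsigned char value = ReadByte(0x4F);

    /* Leave the Extended Function Mode */
    WriteByte(0x4E, 0xAA);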
Warning
You should be very careful when using this method to talk directly to the Super IO, because your operating system almost certainly has device drivers that are also talking to the chip.
The right way to accomplish this is to write a device driver that properly coordinates with other kernel code to avoid concurrent access to the device.
The Linux kernel provides GPIO access to at least some Super I/O devices; it should be straightforward to port one of these to your platform. See this pull request for the IT87xx chipset.
WriteByte() and ReadByte() are not part of the C language. By the looks of things, they are functions intended to be placeholders for some form of system call for the OS kernel port IO (not macros doing memory mapped IO directly as per a previous version of this answer).
The prototypes for the functions would be something along the lines of:
#include <stdint.h>
void WriteByte(unsigned port, uint8_t value);
void ReadByte(unsigned port, uint8_t *value);
Thus the Value variable would be a pointer to an 8 bit unsigned integer (unsigned char could also be used), something like:
uint8_t realValue;
uint8_t *Value = &realValue;
Of course it would make much more sense to have Value just be a uint8_t and to call ReadByte(DataPort, &Value). But then the example code also doesn't have any semicolons, so it was probably never anything that actually ran. Either way, this is how Value would contain the data you are looking for.
I also found some more documentation of the registers here - https://www.electronicsdatasheets.com/download/534cf560e34e2406135f469d.pdf?format=pdf
Hope this helps.
I am trying to read 256 bits in less than 32 ms over SPI; the data frame is 16 bits. My problem is that the SPI driver has a long idle time between each 16-bit frame. See this picture: as you can see, I am reading 64 bits (highlighted by the red rectangle), and there are long pauses between frames. I cannot find anything in the SPI specification about this.
I am testing on an STM32F407 board from Keil and SPI is initialized by the Keil CMSIS default driver.
Is there any way to reduce this idle time?
You could show the code snippet so the rest of us are better able to help, but anyway, I remember coming across the same problem. Here is how I reduced that extra time gap:
Use direct register access instead of library functions, both for sending data and for flag checking.
If that doesn't work either, instead of flag checking you can use a simple for loop as a delay to keep the clock signal low just long enough, but not longer; check with an oscilloscope to find the accurate iteration count.
Also make sure that no interrupt is being serviced while you send your message; there might be an ISR interrupting between the end of a transmission and the code that pulls the CLK signal high. If you have ISRs that fire asynchronously and are large, this is probably the case.
SPI1->DR = (uint16_t)Data;
while(!(SPI1->SR & SPI_SR_TXE));
PS: I've done this on a Cortex-M3, no RTOS, with the Standard Peripheral Library.
Edit: also try with the busy flag only, checking the end of transfer with while(SPI1->SR & SPI_SR_BSY);
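A minimal sketch of such a tight transmit loop with direct register access (STM32F1-style header per the PS above; the SR flags are the same on the F4, and buf/len are placeholders for your data):

#include "stm32f10x.h"

/* Send len 16-bit frames back to back using direct register access.
   Waiting for TXE (not BSY) between frames keeps the inter-frame gap minimal. */
void spi_send16(const uint16_t *buf, uint32_t len)
{
    while (len--) {
        while (!(SPI1->SR & SPI_SR_TXE));   // wait until the TX buffer is empty
        SPI1->DR = *buf++;                  // load the next frame immediately
    }
    while (SPI1->SR & SPI_SR_BSY);          // wait for the last frame to shift out
}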
I am programming a microcontroller of the PIC24H family, using the xc16 compiler.
I am relaying U1RX data to U2TX within main(), but when I try to do that in an ISR it does not work.
I am sending commands to U1RX, and the ISR is shown below. At U2RX, data bytes are coming in constantly, and I want to relay 500 of them via U1TX. The result is that U1TX relays the first 4 data bytes from U2RX, but then re-sends the 4th byte over and over again.
When I copy the for loop below into my main(), it all works properly. In the ISR, it is as if U2RX's corresponding FIFO buffer is not cleared when read, so the buffer overflows and stops accepting further incoming data on U2RX. I would really appreciate it if someone could show me how to approach the problem. The variables tmp and command are declared globally.
void __attribute__((__interrupt__, auto_psv, shadow)) _U1RXInterrupt(void)
{
    command = U1RXREG;
    if (command == 'd') {
        for (i = 0; i < 500; i++) {
            while (U2STAbits.URXDA == 0);   // wait for a byte on U2RX
            tmp = U2RXREG;
            while (U1STAbits.UTXBF == 1);   // wait until the U1TX buffer has room
            U1TXREG = tmp;
        }
    }
}
Edit: I added the first line in the ISR.
Trying to draw an answer from the various comments.
If the main() has nothing else to do, and there are no other interrupts, you might be able to "get away with" patching all 500 chars from one UART to another under interrupt, once the first interrupt has occurred, and perhaps it would be a useful exercise to get that working.
But that's not how you should use an interrupt. If you have other tasks in main(), and equal or lower priority interrupts, the relatively huge time that this interrupt takes (500 chars at 9600 baud = half a second) will make the processor what is known as "interrupt-bound"; that is, the other processes are frozen out.
As your project gains complexity, you won't want to restrict main() to this task, and there is no need for it to be involved at all after setting up the UARTs and IRQs. After that it can calculate π ad infinitum if you want.
I am a bit perplexed as to your sequence of operations. A command 'd' is received from U1 which tells you to patch 500 chars from U2 to U1.
I suggest that one way to tackle this (and there are many), seeing as you really want to use interrupts, is to wait until the command is received from U1 - in main(). You then configure, and enable, interrupts for RXD on U2.
Then the job of the ISR will be to receive data from U2 and transmit it through U1. If both UARTs have the same clock and the same baud rate, there should not be a synchronisation problem, since a UART is typically buffered internally: once it begins to transmit, the TXD register is available to hold another character, so any stagnation in the ISR should be minimal.
I can't write the actual code for you, since it would be expected to work, but here is some very rough pseudo code (I don't have a PIC handy, nor the wish to research its operational details):
ISR
    has been invoked because U2 has a char RXD
    you *might* need to check RXD status as a required sequence to clear the interrupt
    read the RXD register, which also might clear the interrupt status
    if not, specifically clear the interrupt status
    while (U1 TXD busy);
    write char to U1
    if (chars received == 500)
        disable U2 RXD interrupt
    return from interrupt
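As a hedged sketch only, that pseudo code might come out roughly like this on a PIC24H (register and flag names as in Microchip's standard UART peripheral; verify them against your device header and data sheet):

volatile uint16_t rx_count = 0;

void __attribute__((__interrupt__, auto_psv)) _U2RXInterrupt(void)
{
    IFS1bits.U2RXIF = 0;                // clear the U2RX interrupt flag
    while (U2STAbits.URXDA) {           // drain whatever sits in the RX FIFO
        uint16_t c = U2RXREG;           // reading U2RXREG frees the FIFO slot
        while (U1STAbits.UTXBF);        // wait for room in the U1TX buffer
        U1TXREG = c;
        if (++rx_count >= 500) {
            IEC1bits.U2RXIE = 0;        // done: disable the U2RX interrupt
            break;
        }
    }
}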
ISRs must be kept lean and mean, and the code made hyper-efficient, if there is any hope of keeping up with the buffer on a UART. Experiment with the baud rate just to find the point at which your code can keep up, to help discover the right heuristic and see how far away you are from achieving your goal.
Success could depend on how fast your microcontroller is, as well, and how many tasks it is running. If the microcontroller has a built-in UART, theoretically you should be able to keep the FIFO from overflowing. On the other hand, if you paired up a UART with an insufficiently powerful microcontroller, you might not be able to optimize your way out of the problem.
Besides the suggestion to offload the lower-priority work to the main thread and keep the ISR fast (that someone made in the comments), you will want to carefully look at the timing of all of the lines of code and try every trick in the book to get them to run faster. One expensive instruction can ruin your whole day, so get real creative in finding ways to save time.
EDIT: Another thing to consider - look at the assembly language your C compiler creates. A good compiler should let you inline assembly language instructions to allow you to hyper-optimize for your particular case. Generally in an ISR it would just be a small number of instructions that you have to find and implement.
EDIT 2: A PIC24-series part should be fast enough if you code it right, select a fast oscillator or crystal, and run the chip at a good clock rate. Also consider the divisor the UART might be using to achieve its rate vs. the PIC clock rate. It is conceivable (to me) that an even division that can be accomplished internally via shifting would be better than one where math is required.
Consider this code here:
// Stupid I/O delay routine necessitated by historical PC design flaws
static void
delay(void)
{
    inb(0x84);
    inb(0x84);
    inb(0x84);
    inb(0x84);
}
What is port 0x84? Why is it a design flaw? delay() is used in the serial_putc() function:
static void
serial_putc(int c)
{
    int i;

    for (i = 0;
         !(inb(COM1 + COM_LSR) & COM_LSR_TXRDY) && i < 12800;
         i++)
        delay();

    outb(COM1 + COM_TX, c);
}
The file is from lab1 of the course Operating System Engineering from OCW.
The serial port is a piece of hardware with semantics you have to accept. It usually has a shift register that performs the conversion from parallel to serial data. It can have a holding register for the next byte to send, or even a FIFO for more than one byte. That's why you have to poll the line status register (LSR).
There are some hardware revisions out there that don't behave correctly. Your code looks like a workaround for a bug in old hardware. It shouldn't be necessary to read port 0x84 here.
But the delay implementation can't be optimized out when you increase the compiler optimization level, since it accesses the I/O range. Running this code on up-to-date hardware might be problematic if the run-time performance yields too little delay. You will have to verify that the maximum time that can be waited in the loop is sufficient for the UART to shift out one byte; at 9600 baud, for example, one byte (start bit, 8 data bits, stop bit) takes roughly 1 ms. Keep in mind that this is baud-rate-dependent while your code example isn't.
Port 0x84 is used to access the "extra page register" (Overview). But reading this register should be a no-op; only the read operation itself matters, as a way to consume CPU cycles.
Upon reset of the 8051 microcontroller, all the port-pin latches are set to values of '1'. Now I am reading the book "Embedded C", and it states that the problem with the below code is that it can lull the developer into a false sense of security:
// Assume nothing written to port since reset
// – DANGEROUS!!!
Port_data = P1;
If, at a later date, someone modifies the program to include a routine for writing to all or part of the same port, this code will not generally work as required:
unsigned char Port_data;
P1 = 0x00;
. . .
// Assumes nothing written to port since reset
// – WON’T WORK BECAUSE SOMETHING WAS WRITTEN TO PORT ON RESET
Port_data = P1;
Can anyone with knowledge of embedded C explain to me why this code won't work? All it does is assign 0 to a char variable.
Potential issues:
1) The data direction register (DDR) associated with the port may not be set as expected; on power-up the DDR may be set to "input", so writing 0 to the port may unexpectedly not read back as 0.
2) The data direction register associated with the port may have been set to "output", and "reading" the data may not have a clear meaning. Depending on the architecture, phantom bits may be needed to shadow the output bits for read-back.
3) Power-up code may be entered via a reset command that is nothing more than a jump to the reset vector. In that case, any hardware-specific action associated with a "cold" power-up did not occur, as this is a "warm" power-up.
Solution:
On power-up code, explicitly set the DDR and output values (and shadow bits as needed).
This may not apply to the 8051; I am speaking of embedded processors in general.
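As an illustration on a part that does have a DDR, an AVR-style power-up routine might look like the sketch below (register names from <avr/io.h>; the principle, not the names, is what carries over):

#include <avr/io.h>

static void port_init(void)
{
    DDRB  = 0x0F;   // explicitly set direction: bits 0-3 output, bits 4-7 input
    PORTB = 0x00;   // explicitly set output values (pull-ups off on the inputs)
}

int main(void)
{
    port_init();                    // never rely on reset defaults

    uint8_t inputs = PINB & 0xF0;   // now a read has a well-defined meaning
    (void)inputs;

    for (;;) { }
}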
I was reading the same book and had the same confusion a few months ago. Later, working on projects with PIC18 and M0+ parts, I kind of figured out what it is really about.
Actually, this is not a software/programming issue but rather a hardware/electronics one. If your 805X code wants to be able to read both 1 and 0 from an outside input on a pin, the code has to write 1 to the pin in advance. If your code writes 0 to the pin in advance, the outside peripheral won't be able to pull the pin high, and the code will never read 1. Why? Electronics! Imagine it like this: if you want to enjoy the wind outside the window, you have to open the window first.
If you are really interested, google "pin value vs latch value" yourself. I think it's okay for a programmer to leave that to the hardware engineer. I believe 805X parts don't have a DDR like more advanced ones do. Switching a pin between input and output mode may be easy, but it is confusing.
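To make that rule concrete, here is a minimal 8051-style sketch (Keil-style <reg51.h> names assumed; SDCC users would include <8051.h> instead):

#include <reg51.h>

void main(void)
{
    unsigned char port_data;

    P1 = 0xFF;          // write 1s first: the quasi-bidirectional pins can
                        // then be pulled low or high by the outside world
    port_data = P1;     // this read now reflects the actual pin levels

    /* Had we written P1 = 0x00 beforehand, the read would always return
       0x00, regardless of what drives the pins externally. */
    while (1) { }
}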