serial flash emulation as EEPROM - c

I am developing an application on an existing board.
The application needs to store a small amount of data (just 10 bytes) frequently,
so I am thinking of emulating an EEPROM in external flash,
because the board has no EEPROM but does have an external SPI flash.
Can anyone help me with this emulation, or suggest any
other approach that would fulfil the requirement?

There are a number of libraries that can do what you want. Some years ago I used the library Intel provided for their parallel flash chips.
The technique is to use additional bytes to mark which value in the flash is valid and how many bytes it occupies. The first time a value is written, its valid flag is left high. When the value is re-written, the valid flag on the old data is set low, and the new value, together with its flag and byte count, is written into the flash immediately after the old one. When the value is read, you start at the first position; if the valid flag is low, you use the count to move on to where the newer value was stored, and so on until you find a value whose valid flag is still high.
When an entire sector has been used this way, you need to read the current value, erase the sector, and re-write the current value at the start.
This technique works because a flash bit can be changed from high to low, but not changed back from low to high without erasing.
This explanation is a little simplified; I am sure there is a tutorial on the web somewhere.
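The same append-a-record idea maps directly onto a small SPI NOR flash. Below is a minimal sketch assuming a fixed 10-byte payload (as in the question), one 4 KB sector reserved for the emulation, and hypothetical spi_flash_read / spi_flash_write / spi_flash_erase_sector driver functions; a real implementation would also want wear levelling across more than one sector and power-loss checks.

#include <stdint.h>
#include <string.h>

/* Hypothetical SPI-flash driver calls; replace with the ones for your part.
 * They are assumed to handle reads/writes of arbitrary length and alignment. */
extern void spi_flash_read(uint32_t addr, void *buf, uint32_t len);
extern void spi_flash_write(uint32_t addr, const void *buf, uint32_t len);
extern void spi_flash_erase_sector(uint32_t addr);

#define SECTOR_ADDR   0x000000u        /* sector reserved for the emulated EEPROM */
#define SECTOR_SIZE   4096u
#define PAYLOAD_SIZE  10u              /* the 10 bytes from the question */

#define SLOT_EMPTY    0xFFu            /* erased flash reads as all ones          */
#define SLOT_VALID    0xF0u            /* some bits cleared when data is written  */
#define SLOT_INVALID  0x00u            /* remaining bits cleared when superseded  */

typedef struct {
    uint8_t flags;
    uint8_t data[PAYLOAD_SIZE];
} slot_t;

#define SLOT_COUNT (SECTOR_SIZE / sizeof(slot_t))

/* Scan the sector: return the index of the newest valid slot (-1 if none) and,
 * through *first_empty, the first never-written slot (-1 if the sector is full). */
static int find_slots(int *first_empty)
{
    int current = -1;
    *first_empty = -1;
    for (uint32_t i = 0; i < SLOT_COUNT; i++) {
        uint8_t flags;
        spi_flash_read(SECTOR_ADDR + i * sizeof(slot_t), &flags, 1);
        if (flags == SLOT_VALID) {
            current = (int)i;
        } else if (flags == SLOT_EMPTY) {
            *first_empty = (int)i;
            break;                      /* everything after the first empty slot is empty */
        }
    }
    return current;
}

void eeprom_read(uint8_t out[PAYLOAD_SIZE])
{
    int empty, cur = find_slots(&empty);
    if (cur >= 0)
        spi_flash_read(SECTOR_ADDR + (uint32_t)cur * sizeof(slot_t) + 1, out, PAYLOAD_SIZE);
    else
        memset(out, 0xFF, PAYLOAD_SIZE);            /* nothing stored yet */
}

void eeprom_write(const uint8_t in[PAYLOAD_SIZE])
{
    int empty, cur = find_slots(&empty);

    if (empty < 0) {                                /* sector full: erase and start over */
        spi_flash_erase_sector(SECTOR_ADDR);
        cur = -1;
        empty = 0;
    }

    slot_t slot;
    slot.flags = SLOT_VALID;
    memcpy(slot.data, in, PAYLOAD_SIZE);
    spi_flash_write(SECTOR_ADDR + (uint32_t)empty * sizeof(slot_t), &slot, sizeof(slot));

    if (cur >= 0) {                                 /* retire the previous record */
        uint8_t invalid = SLOT_INVALID;
        spi_flash_write(SECTOR_ADDR + (uint32_t)cur * sizeof(slot_t), &invalid, 1);
    }
}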

Related

Send data from a microcontroller to another

I'm making a project that requires sending data from one microcontroller to another over a wireless link (I'm going to use 433 MHz RF modules or 2.4 GHz, I haven't decided yet). Specifically, I'm making a joystick which controls 4 DC motors. My question is: when I write the code, should I put the command to accelerate motor 'x' in the receiver's microcontroller (the board that controls the motors) or in the transmitter's microcontroller (the joystick's board)? For example, if I push the joystick to the left to accelerate motors 3 and 4, where would I write that code?
I'm doing this project in Arduino (ATmega328 with the Arduino bootloader).
Well this is most definitely a "primarily opinion based" question and may very well get closed...but...
This is your design; you need to design it. There isn't a wrong answer here. You can have the joystick board simply pass the joystick switch state or analog state, depending on what kind of joystick it is. Or you can have the joystick board use timing, how long the joystick is held in a position, compute what you think the device's position should be, and pass just that ("rotate m steps this way and n steps that way"). Or, depending on your overall system design, you could come up with other ways to balance the load between the microcontrollers. I assume what you really want here is "simply" to use the wireless link and the extra MCU as a way to make the joystick wireless. If the joystick were wired, you did it with one MCU, and these are switch-based (not analog resistance) joysticks, then that one MCU would simply be reading the switch state at some interval, perhaps debouncing. So when you add wireless you want the wireless solution to simply pass the same switch state: not in the same register space, and not necessarily in the same bit positions, but just the switch state. As a bonus, the joystick MCU can do the debouncing, and it can send only state changes rather than bothering the motor MCU every N units of time.
I tried modules like these recently and was very disappointed, but I am considering trying them again, or a different vendor/build of them. They should "feel" like a wire for serial data at some slow rate, 4800 baud, 2400 baud, etc. On my first try I had to do things like constantly send a pattern, say the same byte 0xAA, and every so often have a small payload show up in there, so the receiver searches for 0xAA and, when it sees something else, counts out so many bytes as the payload. It went downhill from there when I used separate computers and supplies to send the data, as if what little success I had was perhaps coming through ground noise and not actually over the wireless link. They sell tons of these and people use them, so no doubt it was something I did, or I bought garbage modules, or maybe I shouldn't have bought the ones with antennas and soldered those on, etc.
I would recommend two things. First, get both MCUs up and running the application over a wired connection, just some jumper wires. You might have to come back to this as you change your protocol or baud rate if needed. Most likely you will want to send the payload out often even if the data has not changed, since the wireless connection is not perfectly reliable, so the occasional lost packet is simply covered by the next one. Second, learning to use the wireless connection is a separate experiment: the code on both sides should be working only that one problem, how do I talk over this interface, with no motor drivers and no joysticks, just code to talk from one side to the other.
Likely a good idea, especially if you don't need to be constantly transmitting, is to design your own packet. Ideally it has one or more of these features: a sync pattern and a length, or a sync pattern and a checksum, or a sync pattern, length and checksum (as well as a payload). So maybe a 0x7E byte as the sync pattern; perhaps you are always going to send 32 bits, 4 bytes, of payload, so you don't need a length; and then a checksum of the payload, or of the whole thing, sync pattern plus payload. Your receiver then takes bytes from the UART into a circular buffer, or not even that, just a very simple state machine: if searching for sync and the byte is not 0x7E, discard it; if it is 0x7E, go to the payload state with a count of zero and receive the next four bytes (the payload), summing them into a checksum as each arrives. After the payload, the next byte is the checksum; if it matches, the whole thing is good, so pass the payload to the control system to update the switch state, or just shove it into the global variable the control system is watching as if it were a direct connection to the joystick switch state.
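For illustration, here is a minimal sketch of such a receiver state machine, using the example values above (0x7E sync byte, fixed 4-byte payload, one-byte additive checksum). uart_getc() and joystick_state are placeholder names for your UART read routine and the variable the motor code watches.

#include <stdint.h>

#define SYNC_BYTE    0x7E
#define PAYLOAD_LEN  4

/* Hypothetical glue: blocking UART read and the variable the motor code watches. */
extern uint8_t uart_getc(void);
volatile uint8_t joystick_state[PAYLOAD_LEN];

enum rx_state { RX_WAIT_SYNC, RX_PAYLOAD, RX_CHECKSUM };

void packet_rx_task(void)
{
    enum rx_state state = RX_WAIT_SYNC;
    uint8_t payload[PAYLOAD_LEN];
    uint8_t count = 0, sum = 0;

    for (;;) {
        uint8_t b = uart_getc();

        switch (state) {
        case RX_WAIT_SYNC:
            if (b == SYNC_BYTE) {          /* anything else is discarded */
                count = 0;
                sum = 0;
                state = RX_PAYLOAD;
            }
            break;

        case RX_PAYLOAD:
            payload[count++] = b;
            sum += b;                      /* running checksum of the payload */
            if (count == PAYLOAD_LEN)
                state = RX_CHECKSUM;
            break;

        case RX_CHECKSUM:
            if (b == sum) {                /* good packet: hand it to the motor code */
                for (uint8_t i = 0; i < PAYLOAD_LEN; i++)
                    joystick_state[i] = payload[i];
            }
            state = RX_WAIT_SYNC;          /* good or bad, wait for the next sync */
            break;
        }
    }
}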
You can get as complicated as you want or not, but in this kind of design I would never assume that both sides are always in sync and that every N bytes received are the N bytes sent, in the right positions. Even if you never drop a byte, this is a one-direction communication path, so if the receiver comes up while the transmitter is mid-packet, the receiver starts in the middle of the payload and is forever off in its interpretation of the data. You can do the SLIP/PPP-style escaping thing and ensure the sync pattern never appears in the data, at the cost of more work. Or not; I doubt you need to be that complicated, as I assume you are only sending one or a few bytes of actual data per update.
The reason I tried the continuous stream: way back, the first time I tried products like these, long before they were a buck or so from China on eBay, we had to ensure "transition density". You could not string more than a couple of bit cells of the same state in a row, no more than, say, two zeros or two ones in a row, so we basically had to do a biphase-L (Manchester II) encoding in software before shoving the bytes into the UART, then undo it on the other side. Hopefully the hardware does that for you and you don't have to, but if push comes to shove you might have to make only every other bit in each transmitted byte payload, with the bit next to it being the inverse; the byte 0x01 (00000001) would then be sent as (0101010101010110), i.e. 0x55 0x56. Hopefully you don't have to do that; it is not much fun.
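If you do end up having to do it in software, a rough sketch of that every-other-bit scheme (each payload bit followed by its inverse, MSB first) looks like this; with it the byte 0x01 expands to 0x55 0x56, exactly as above.

#include <stdint.h>

/* Expand one payload byte into two transmit bytes: each payload bit b becomes
 * the pair (b, ~b), MSB first, so no more than two equal bits ever appear in
 * a row on the wire. */
void encode_byte(uint8_t in, uint8_t out[2])
{
    uint16_t enc = 0;
    for (int i = 7; i >= 0; i--) {
        uint8_t bit = (in >> i) & 1u;
        enc = (uint16_t)((enc << 2) | (bit << 1) | (bit ^ 1u));
    }
    out[0] = (uint8_t)(enc >> 8);
    out[1] = (uint8_t)(enc & 0xFF);
}

/* Undo the expansion on the receive side: keep every other bit. */
uint8_t decode_pair(const uint8_t in[2])
{
    uint16_t enc = (uint16_t)((in[0] << 8) | in[1]);
    uint8_t  out = 0;
    for (int i = 7; i >= 0; i--)
        out = (uint8_t)((out << 1) | ((enc >> (2 * i + 1)) & 1u));
    return out;
}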
Your microcontroller development toolbox should include some USB-to-serial solutions; you can get them for a buck or two on eBay from China, or from Adafruit or SparkFun for 10-15 bucks. The Chinese ones sometimes have a 5 V or 3.3 V jumper or switch. One way to start with these modules is to hook one to one of these USB-to-serial breakouts, 5 V or 3.3 V as needed for that RX or TX module: for the TX module put the UART out (TX) on the data pin, for the RX module put the UART in (RX) on the data pin. Then fire up two copies of minicom or whatever dumb-terminal program you use, set both to some slow speed like 1200 baud, and type in one to see if what you type comes out the other. Hold down a character; U is a good one, since with 8N1 it produces a square wave. Does that come through clean? What if you slow down both sides? What if you speed up? From simple experiments like that you can start to develop your protocol. If you have a scope (having one is easier these days but can still be pricey, a few hundred bucks, though access to one is pretty important) you can do this even more easily: generate the signal with the microcontroller or the USB-to-serial adapter or whatever, then look at what shows up on the other side with the scope. How ugly or clean is it? Can you make it look better by changing the data, or by pounding the interface with continuous data?

c166 bootloader write to internal flash

I'm writing a bootloader for a C166 chip, to be exact the 169FH. The bootloader can currently open a TCP/IP connection so a PC can send an Intel HEX file to it. This Intel HEX file is saved in RAM. After receiving the hex file, it is read line by line to write the bytes to the correct location in flash. The flash location where the bootloader is stored is of course different from where the main program is saved.
These are the first two lines of the Intel HEX file:
:0200000400C03A
:20100000CA11EE111212341258127A129A12BC12DE12001322134413601388138813881349
The first line gives the highest 16 bits of the 32-bit flash address, in this case 0x00C0.
The second line contains the lower 16 bits of the 32-bit flash address, in this case 0x1000. Together they form the total address 0x00C01000, and the first byte written to that address should be 0xCA.
I'm trying to write the byte to that address using the following code:
uint8_t u8Byte = (uint8_t )XHugeStringToDec((const __huge char *)&Ext_RamFlashSave.Byte[u32Idx], 9+(u8ByteIdx*2), 2);
uint8_t *u8Address = (uint8_t*)((uint32_t)(u32ExtendedLinearAddress << 16) + (uint32_t)u16BaseAddress + (uint32_t)u8ByteIdx);
*u8Address = (u8Byte);
XHugeStringToDec() is a function that gets the hexadecimal value from the Intel HEX string. I know this part works correctly.
Ext_RamFlashSave.Byte is the array in which the Intel HEX file is stored.
The u32ExtendedLinearAddress variable is 0x00C0 and is set earlier. The u16BaseAddress is 0x1000 and is also set earlier in the code.
The problem is in the last line:
*u8Address = (u8Byte);
I have verified that u8Address is indeed 0x00C01000 and u8Byte is indeed 0xCA. But when I monitor the flash at that address, I do not see the byte written.
I can imagine that it is some kind of write protection, but I cannot find out how to work around it. Or do I need a different way to write to the flash address?
More info on how the Intel HEX file is built is described here:
https://en.wikipedia.org/wiki/Intel_HEX
I am not familiar with the chip you mention.
But for writing to flash, the following algorithm is generally used:
1) Unlock the flash. This usually means sending a specific data sequence. The flash I use right now takes 0xA0A0 -> a delay of 1 ms -> 0x0505. This enables writing to the flash.
2) Clear the error flags. Flash has error flags such as write error or read error. Also check the busy flag to make sure the flash is ready.
3) Erase the page. Yes, you have to erase the page before writing to it. Even if you only want to change a single bit, you have to erase the entire page.
4) Write the data (finally). But make sure the endianness is correct; sometimes your controller is little-endian and the flash is big-endian, so you have to match the flash's.
5) Lock the flash. This is usually done by sending the same sequence used for unlocking (refer to the datasheet).
You cannot write to flash directly; you have to go through the entire process. That is why flash is slow.
Some flash devices support 8-bit writes while others only support 16-bit or 32-bit writes, so you have to send that many bits at a time.
When you have to modify a small portion of a page, read the page into a buffer, modify the data in the buffer, then erase the page and write back the entire buffer, as in the sketch below.
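A rough sketch of that read-modify-write step; the function names and the 512-byte page size are placeholders, not any particular vendor's API, and the caller must not let the modified range cross a page boundary.

#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 512u   /* example page size; check your flash's datasheet */

/* Hypothetical low-level driver calls; the names are placeholders. */
extern void flash_unlock(void);
extern void flash_lock(void);
extern void flash_erase_page(uint32_t page_addr);
extern void flash_write(uint32_t addr, const uint8_t *data, uint32_t len);
extern void flash_read(uint32_t addr, uint8_t *data, uint32_t len);

/* Change `len` bytes at `addr` without losing the rest of the page. */
void flash_modify(uint32_t addr, const uint8_t *data, uint32_t len)
{
    uint8_t  buf[PAGE_SIZE];
    uint32_t page = addr & ~(PAGE_SIZE - 1u);    /* start of the containing page */
    uint32_t off  = addr - page;

    flash_read(page, buf, PAGE_SIZE);            /* 1. copy the whole page to RAM     */
    memcpy(&buf[off], data, len);                /* 2. patch the bytes in the copy    */

    flash_unlock();                              /* 3. unlock (device-specific keys)  */
    flash_erase_page(page);                      /* 4. erase, always the whole page   */
    flash_write(page, buf, PAGE_SIZE);           /* 5. write the updated page back    */
    flash_lock();
}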

embedded c and the 8051 microcontroller

Upon reset of the 8051 microcontroller, all the port-pin latches are set to ‘1’. I am reading the book "Embedded C", and it states that the problem with the code below is that it can lull the developer into a false sense of security:
// Assume nothing written to port since reset
// – DANGEROUS!!!
Port_data = P1;
If, at a later date, someone modifies the program to include a routine for writing to all or part of the same port, this code will not generally work as required:
unsigned char Port_data;
P1 = 0x00;
. . .
// Assumes nothing written to port since reset
// – WON’T WORK BECAUSE SOMETHING WAS WRITTEN TO PORT SINCE RESET
Port_data = P1;
Can anyone with knowledge of embedded C explain to me why this code won't work? All it does is assign 0 to a char variable.
Potential issues:
1) The data direction register (DDR) associated with the port may not be set as expected; on power-up the DDR may default to "input", so writing 0 to the port may unexpectedly not read back as 0.
2) The data direction register associated with the port may have been set to "output", and "reading" the data may not have a clear meaning. Depending on the architecture, phantom bits may be needed to shadow the output bits for read-back.
3) Power-up code may be entered via a reset command that is nothing more than a jump to the reset vector, so any hardware-specific action associated with a "cold" power-up did not occur, because this is a "warm" power-up.
Solution:
In the power-up code, explicitly set the DDR and output values (and shadow bits as needed).
This may not apply to the 8051 specifically; I am speaking about embedded processors in general.
I was reading the same book and had the same confusion a few months ago. Later, working on projects with PIC18 and M0+ parts, I figured out what it is really about.
Actually, this is not a software/programming issue but a hardware/electronics one. If your 805x code wants to be able to read both 1 and 0 from an external signal on a pin, the code has to write 1 to that pin first. If your code writes 0 to the pin, the outside peripheral cannot pull the pin high, so the code can never read 1. Why? Electronics! Think of it this way: if you want to enjoy the wind outside the window, you have to open the window first.
If you are really interested, search for "pin value vs latch value" yourself. I think it is okay for a programmer to leave the details to the hardware engineer. I believe 805x parts don't have a DDR like more advanced ones; switching a pin between input and output mode is easy but confusing.
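To make the rule concrete, here is a tiny example in Keil C51 syntax (the header and the P1.3 pin assignment are just example choices, adjust them for your toolchain and board):

#include <reg51.h>          /* Keil C51 header declaring P1 and friends */

sbit SWITCH_PIN = P1^3;     /* example: a switch wired to P1.3 */

unsigned char read_switch(void)
{
    SWITCH_PIN = 1;         /* release the quasi-bidirectional pin: the latch must be 1
                               so an external device can pull the pin either high or low */
    return SWITCH_PIN;      /* now the value read reflects the pin, not the latch */
}

If some earlier code had left the latch at 0 (for example after P1 = 0x00;), the read would return 0 no matter what the external circuit is doing.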

How do I obtain PCI Region size in Windows?

I needed to scan my PCI bus and obtain information for specific devices from specific vendors.
My goal is to find the PCI region size for the AMD graphics card, in order to map that card's PCI memory into user space so I can do I2C transfers and read information from various sensors.
For scanning the PCI bus I downloaded and compiled pciutils 3.1.7 for Windows x64 around a year ago. It supposedly uses DirectIO.
This is my code.
int scan_pci_bus()
{
    struct pci_access *pci;
    struct pci_dev *dev;
    int i;

    pci = pci_alloc();
    pci_init(pci);
    pci_scan_bus(pci);

    for (dev = pci->devices; dev; dev = dev->next)
    {
        pci_fill_info(dev, PCI_FILL_IDENT | PCI_FILL_CLASS | PCI_FILL_IRQ | PCI_FILL_BASES | PCI_FILL_ROM_BASE | PCI_FILL_SIZES | PCI_FILL_PHYS_SLOT);
        if (dev->vendor_id == 0x1002 && dev->device_id == 0x6899)
        {
            //Vendor is AMD, device ID is an AMD HD5850 GPU
            for (i = 0; i < 6; i++)
            {
                printf("Region Size %d %x ID %x\n", dev->size[i], dev->base_addr[i], dev->device_id);
            }
        }
    }
    pci_cleanup(pci);
    return 0;
}
As you can see in my printf line, I try to print some data. I successfully print device_id and base_addr; however size, which should contain the PCI region size for this device, is always 0. I expected at least one iteration of the loop to show a size > 0.
My code is based on a Linux application which uses the same calls, though it uses the pci.h headers that come with Linux (pciutils apparently has the same API).
Apparently Windows (Windows 7 x64 in my case) does not provide this information, or at the very least it is not exposed to pciutils.
How do you propose I obtain this information? If there are alternatives to pciutils for Windows that provide it, I would be glad to get a link to them.
EDIT: I have still found no solution. If there is a solution to my problem that also works on 32-bit Windows, it would be deeply appreciated.
How this works is pretty complicated. PCI devices use Base Address Registers to let the BIOS and Operating System decide where to locate their memory regions. Each PCI device is allowed to specify several memory or IO regions it wants, and lets the BIOS/OS decide where to put it. Complicating matters, there's only one register that is used both to specify the size AND the address. How does this work?
When the card is first powered up, its 32-bit address register will have something like 0xFFFF0000 in it. Any binary 1 means "the OS can change this"; any binary 0 means "must stay zero". So this is telling the OS that any of the top 16 bits can be set to whatever the OS wants, but the bottom 16 bits have to stay zero. Which also means that this memory region takes up 16 bits of address space, or 64K. Because of this, memory regions have to be aligned to their size: if a card wants 64K of address space, the OS can only put it at memory addresses that are a multiple of 64K. When the OS has decided where to locate this card's 64K memory space, it writes that address back into the register, overwriting the initial 0xFFFF0000 that was there.
In other words, the card tells the OS what size/alignment it needs for the memory, then the OS overwrites that same register/variable with the address for the memory. Once it's done this, you can't get the size back out of the register without resetting the address.
This means there's no portable way of asking a card how big its region is, all you can ask it is WHERE the region is.
So why does this work in Linux? Because it's asking the kernel for this information. The kernel has an API to provide this, the same way that lspci works. I am not a Windows expert, but I'm unaware of any way for an application to ask the Windows kernel for this information. There may be an API to do this somehow, or you may need to write something that runs on the kernel side to pass the information back to you. If you look in the libpci source, for Windows it calls the "generic" version of pci_fill_info(), which returns:
return flags & ~PCI_FILL_SIZES;
which basically means "I'm returning everything you asked for, except the sizes."
BUT, this may not matter anyway. If all you want to do is read/write the I2C registers, they're usually (always?) in the first 4K of the control/configuration region. You can probably just map 4K (one page) and ignore the fact that there might be more. Also be warned that you may need to take additional steps to stop the real driver for this card from reading/writing while you are; if you're bit-banging the I2C bus manually and the driver tries to at the same time, it's likely to cause a mess on the bus.
There also may be an existing way to ask the radeon driver to do I2C requests for you, which might avoid all of this.
(also note I'm simplifying and glossing over a lot of details with how the BARs work, including 64 bit addresses, I/O space, etc, read PCI documentation if you want to learn more)
Well, whamma gave a very good answer, but there is one thing he was wrong about, which is region sizes. Region sizes are fairly easy to find; here I will show two ways, the first by deciphering the size from the BAR value, the second through the Windows user interface.
Let's assume that E2000000 is the value of the Base Address Register. Converted to binary that is:
11100010000000000000000000000000
There are 32 bits here in total; you can count them if you must. If you are not familiar with how the bits in a BAR are laid out, look here -> http://wiki.osdev.org/PCI , specifically "Base Address Registers" and, more specifically, the image labelled "Memory Space BAR Layout". Now let's read the bits from the right end to the left, using that image as a guide.
Bit 0, the first bit from the right, is 0, indicating that this is a memory BAR.
Bits 1-2 are 0, indicating that it's a 32-bit memory BAR (this refers to the address width, not the size).
Bit 3 is 0, indicating that it is not prefetchable memory.
Bits 4-31 represent the address.
That page also documents the PCI-approved process:
To determine the amount of address space needed by a PCI device, you
must save the original value of the BAR, write a value of all 1's to
the register, then read it back. The amount of memory can then be
determined by masking the information bits, performing a bitwise NOT
('~' in C), and incrementing the value by 1. The original value of the
BAR should then be restored. The BAR register is naturally aligned and
as such you can only modify the bits that are set.
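In code, that procedure looks roughly like the sketch below for a 32-bit memory BAR. The config-space read/write helpers are placeholders; on Windows you would need a kernel-mode component (or a library such as the one pciutils uses) to actually perform them.

#include <stdint.h>

/* Placeholder config-space accessors; on Windows these need kernel help. */
extern uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);
extern void     pci_cfg_write32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off, uint32_t val);

/* BAR0 is at config-space offset 0x10, the following BARs at 0x14, 0x18, ...
 * Returns the size in bytes of a 32-bit memory BAR. */
uint32_t pci_bar_size(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t bar_off)
{
    uint32_t orig = pci_cfg_read32(bus, dev, fn, bar_off);   /* save original value */

    pci_cfg_write32(bus, dev, fn, bar_off, 0xFFFFFFFFu);      /* write all ones      */
    uint32_t probed = pci_cfg_read32(bus, dev, fn, bar_off);  /* read back the mask  */

    pci_cfg_write32(bus, dev, fn, bar_off, orig);             /* restore the BAR     */

    probed &= ~0xFu;              /* mask off the information bits (memory BAR)      */
    return ~probed + 1u;          /* bitwise NOT, then +1, gives the region size     */
}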
The other way is using Device Manager:
Start -> "Device Manager" -> Display adapters -> right-click your video card -> Properties -> Resources. Each resource marked "Memory Range" should be a memory BAR, and as you can see it reads [start address] to [end address]. For example, say it reads [00000000E2000000 - 00000000E2FFFFFF]; to get the size, subtract the start address from the end address and add 1:
0xE2FFFFFF - 0xE2000000 + 1 = 0x1000000 = 16777216 bytes = 16 MB.

Writing Flash on STM32

I am implementing an emulated EEPROM in flash memory on an STM32 microcontroller, mostly based on the application note by ST (AN2594 - EEPROM emulation in STM32F10x microcontrollers).
The basics outlined there and in the respective datasheet and programming manual (PM0075) are quite clear. However, I am unsure about the implications of a power loss or system reset for flash programming and page-erase operations. The app note considers this case too, but does not clarify what exactly happens when a programming (write) operation is interrupted:
Does the address end up with an arbitrary (random) value? OR
Are only some of the bits written? OR
Does it keep the default erased value 0xFF?
Thanks for hints or pointers to the relevant documentation.
Arne
This is not really a software question (much less C++). It belongs on electronics.se, but there does not seem to be an option to migrate questions there… only to sites such as superuser or webmasters.se.
The short answer is that hardware is inherently unreliable. Something can always in theory go wrong that interrupts the write process or causes the wrong bit to be written.
The long answer is that Flash circuits are usually designed for maximum reliability. A sudden power loss on write will probably not cause corruption because the driver circuit may have enough capacitance or the capability to operate under a low-voltage condition long enough to finish draining the charge as necessary. A power loss on erasure might be trickier. You really need to consult the manufacturer.
For a "soft" system reset with no power interruption, it would be pretty surprising if the hardware didn't always completely erase whatever bytes it was immediately working on. Usually the bytes are erased in a predefined order, so you can use the first or last ones to indicate whether a page is full or empty.
#include "stm32f10x.h"
#define FLASH_KEY1 ((uint32_t)0x45670123)
#define FLASH_KEY2 ((uint32_t)0xCDEF89AB)
#define Page_127 0x0801FC00
uint16_t i;
int main()
{
//FLASH_Unlock
FLASH->KEYR = FLASH_KEY1;
FLASH->KEYR = FLASH_KEY2;
//FLASH_Erase Page
while((FLASH->SR&FLASH_SR_BSY));
FLASH->CR |= FLASH_CR_PER; //Page Erase Set
FLASH->AR = Page_127; //Page Address
FLASH->CR |= FLASH_CR_STRT; //Start Page Erase
while((FLASH->SR&FLASH_SR_BSY));
FLASH->CR &= ~FLASH_CR_PER; //Page Erase Clear
//FLASH_Program HalfWord
FLASH->CR |= FLASH_CR_PG;
for(i=0; i<1024; i+=2)
{
while((FLASH->SR&FLASH_SR_BSY));
*(__IO uint16_t*)(Page_127 + i) = i;
}
FLASH->CR &= ~FLASH_CR_PG;
FLASH->CR |= FLASH_CR_LOCK;
while(1);
}
If you are using the EEPROM emulation driver, you shouldn't worry too much about flash-corruption issues, because the driver always keeps a shadow copy in another page. Worst case, you lose the most recent value that was being written to the flash. If you look closely at the emulation driver, you will notice that it is essentially nothing but a wrapper around stm32fxx_flash.c in the standard peripheral library.
If you look at the application note, you will see the times the emulation library takes for flash operations. Erasing a page typically takes the longest (tens of milliseconds on an M0 core; this depends on the clock frequency).
If you are using the EEPROM emulation driver, you had better add a function that checks the data after the write has finished.
For example, if you have 10 data bytes to save, write 11 bytes to the flash, with the last byte being a checksum, and verify the data against the checksum after reading it back from flash.
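A minimal sketch of that check, with ee_write_byte / ee_read_byte standing in for whatever byte-level access your EEPROM emulation layer provides (they are placeholder names, not part of the ST driver):

#include <stdint.h>

/* Placeholder byte-wise accessors for the EEPROM-emulation layer in use. */
extern void    ee_write_byte(uint16_t addr, uint8_t value);
extern uint8_t ee_read_byte(uint16_t addr);

#define DATA_LEN 10u          /* ten data bytes plus one checksum byte */

static uint8_t checksum(const uint8_t *p, uint16_t len)
{
    uint8_t sum = 0;
    while (len--)
        sum += *p++;
    return sum;
}

void save_record(const uint8_t data[DATA_LEN])
{
    for (uint16_t i = 0; i < DATA_LEN; i++)
        ee_write_byte(i, data[i]);
    ee_write_byte(DATA_LEN, checksum(data, DATA_LEN));   /* the 11th byte */
}

/* Returns 1 if the stored record is consistent, 0 if the checksum fails. */
int load_record(uint8_t data[DATA_LEN])
{
    for (uint16_t i = 0; i < DATA_LEN; i++)
        data[i] = ee_read_byte(i);
    return ee_read_byte(DATA_LEN) == checksum(data, DATA_LEN);
}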
