I am using M25P40 flash memory with a JN5148 MCU. In the datasheet of this flash, it is written that:
Erase capability:
Sector erase: 512Kb in 0.6 s (TYP)
Bulk erase: 4Mb in 4.5 s (TYP)
I am facing a problem overwriting data stored in one page of a sector. How can I erase one page and write new data to it? Is there any way to erase one page of a sector without erasing the other pages of the same sector?
According to the datasheet:
The memory can be programmed 1 to 256 bytes at a time using the PAGE
PROGRAM command. It is organized as 8 sectors, each containing 256
pages. Each page is 256 bytes wide.
Although I don't know if it actually works, and I cannot test it, I also found that someone has already done this with an AVR µC, which should give you an example function write(address, word) if you don't want to read the page program sequence (datasheet p. 27) and write your own.
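The organization quoted above maps neatly onto the 24-bit address the chip uses: the low byte selects the byte within a page, the next byte selects the page within a sector, and the top bits select the sector. A minimal sketch (the function and struct names are mine, not from the datasheet):

```c
#include <stdint.h>

/* Split a linear M25P40 byte address into sector / page / offset,
 * per the organization above: 8 sectors x 256 pages x 256 bytes. */
typedef struct {
    uint8_t sector;  /* 0..7   */
    uint8_t page;    /* 0..255 */
    uint8_t offset;  /* 0..255 */
} flash_addr_t;

flash_addr_t flash_decompose(uint32_t addr)
{
    flash_addr_t a;
    a.offset = (uint8_t)(addr & 0xFF);         /* byte within the page   */
    a.page   = (uint8_t)((addr >> 8) & 0xFF);  /* page within the sector */
    a.sector = (uint8_t)((addr >> 16) & 0x07); /* sector number (0..7)   */
    return a;
}
```

For example, address 0x012345 lands in sector 1, page 0x23, byte 0x45.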
Here is the sector erase procedure quoted from the M25P40 documentation:
The SECTOR ERASE command sets to 1 (FFh) all bits inside the chosen
sector. Before the SECTOR ERASE command can be accepted, a WRITE
ENABLE command must have been executed previously. After the WRITE
ENABLE command has been decoded, the device sets the write enable
latch (WEL) bit. The SECTOR ERASE command is entered by driving chip
select (S#) LOW, followed by the command code, and three address bytes
on serial data input (DQ0). Any address inside the sector is a valid
address for the SECTOR ERASE command. S# must be driven LOW for the
entire duration of the sequence. S# must be driven HIGH after the
eighth bit of the last address byte has been latched in. Otherwise the
SECTOR ERASE command is not executed. As soon as S# is driven HIGH,
the self-timed SECTOR ERASE cycle is initiated; the cycle's duration
is tSE. While the SECTOR ERASE cycle is in progress, the status
register may be read to check the value of the write in progress (WIP)
bit. The WIP bit is 1 during the self-timed SECTOR ERASE cycle, and is
0 when the cycle is completed. At some unspecified time before the
cycle is completed, the WEL bit is reset. A SECTOR ERASE command is
not executed if it applies to a sector that is hardware or software
protected.
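That sequence translates into C roughly as follows. This is a sketch: the cs_low/cs_high/spi_xfer primitives are stand-ins that only record the byte stream (replace them with your MCU's SPI driver), while the opcodes are the ones the M25P40 datasheet defines.

```c
#include <stdint.h>
#include <stddef.h>

#define CMD_WREN 0x06  /* WRITE ENABLE */
#define CMD_SE   0xD8  /* SECTOR ERASE */
#define CMD_RDSR 0x05  /* READ STATUS REGISTER */
#define SR_WIP   0x01  /* write-in-progress bit */

/* Stand-in SPI layer: records bytes instead of driving real hardware. */
uint8_t spi_log[16];
size_t  spi_len;
static void cs_low(void)  { /* drive S# low  */ }
static void cs_high(void) { /* drive S# high */ }
static uint8_t spi_xfer(uint8_t b)
{
    if (spi_len < sizeof spi_log) spi_log[spi_len++] = b;
    return 0;  /* a real status read would return the register contents */
}

void flash_sector_erase(uint32_t addr)
{
    cs_low(); spi_xfer(CMD_WREN); cs_high();  /* set the WEL latch first */

    cs_low();
    spi_xfer(CMD_SE);
    spi_xfer((uint8_t)(addr >> 16));  /* any address inside the sector */
    spi_xfer((uint8_t)(addr >> 8));
    spi_xfer((uint8_t)addr);
    cs_high();                        /* erase starts when S# goes high */

    cs_low();
    spi_xfer(CMD_RDSR);
    while (spi_xfer(0xFF) & SR_WIP)   /* poll WIP until the cycle ends */
        ;
    cs_high();
}
```

With the recording stubs, calling flash_sector_erase(0x010000) logs WREN, then SE plus the three address bytes, then a status poll.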
You cannot rewrite a single page. You must rewrite at least one whole sector.
So if you want to change, a.k.a. rewrite, even one byte in any page of the chosen sector, you can do the following:
Read ALL sector to RAM.
Erase this sector.
Change needed data in RAM.
Write back changed data to flash's sector.
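The four steps above can be sketched like this. The flash_* helpers below operate on a RAM mock of a single 64 KB sector so the flow is visible end to end; in a real port they would be your driver's read, sector-erase, and page-program routines, and on a RAM-constrained MCU you would stream the sector through a smaller buffer instead of holding it all at once.

```c
#include <stdint.h>
#include <string.h>

#define SECTOR_SIZE 65536u  /* one M25P40 sector (512 Kbit) */
#define PAGE_SIZE   256u

uint8_t mock_flash[SECTOR_SIZE];  /* RAM stand-in for one sector */

static void flash_read(uint32_t a, uint8_t *d, uint32_t n)
{
    memcpy(d, &mock_flash[a], n);
}
static void flash_sector_erase(void)
{
    memset(mock_flash, 0xFF, SECTOR_SIZE);  /* erase sets all bits to 1 */
}
static void flash_page_program(uint32_t a, const uint8_t *d)
{
    for (uint32_t i = 0; i < PAGE_SIZE; i++)
        mock_flash[a + i] &= d[i];  /* programming can only clear bits */
}

/* Rewrite `len` bytes at offset `addr` within the sector. */
void flash_update(uint32_t addr, const uint8_t *data, uint32_t len)
{
    static uint8_t buf[SECTOR_SIZE];
    flash_read(0, buf, SECTOR_SIZE);      /* 1. read the sector to RAM   */
    flash_sector_erase();                 /* 2. erase the sector         */
    memcpy(&buf[addr], data, len);        /* 3. change the data in RAM   */
    for (uint32_t p = 0; p < SECTOR_SIZE; p += PAGE_SIZE)
        flash_page_program(p, &buf[p]);   /* 4. write back, page by page */
}
```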
YOU MUST READ THIS ARTICLE: Five things you never knew about flash drives
Related
I have been asked to help out on an embedded firmware project where they are trying to mount a file system on an SPI flash chip (Cypress S25FL512S) with a 256 KB erase sector size.
My past experience with file systems is that the file system has a block size of up to 4 KB, which is mapped onto erase sectors of 512 bytes to 4 KB.
The embedded controller is a small NXP device running at 180MHz with 512KBytes of RAM so I cannot even cache an erase sector. I note that the chip family does have pin compatible devices with smaller erase sectors.
My general question is how do you mount a file system with a block/cluster size that is smaller than the flash erase sector size? I've not been able to find any articles addressing this.
You can't do this in any sensible way. Your specification needs to be modified.
Possible solutions are:
Pick a flash/eeprom circuit with smaller erase size.
Pick a flash/eeprom with more memory and multiple segments, so that you can back up the data in one segment while programming another.
Add a second flash circuit which mirrors the first one, erase one at a time and overwrite with contents of the other.
Pick a MCU with more RAM.
Back up the flash inside the MCU flash (very slow, and likely defeats the purpose of having external flash to begin with).
I am using a 16 MB Spansion flash memory. The sector size is 256 KB. I am using the flash to read/write/delete 30-byte blocks (structures). I found in the datasheet of the IC that the minimum erasable size is 256 KB. One way of deleting a particular block is to:
Read the sector containing the block to delete into a temporary array.
Erase that sector.
Delete the required block in the temporary array.
Write back the temporary array into Flash.
I want to ask: is there any better alternative to this logic?
There is no way to erase less than the minimum erasable sector size in flash.
However, there is a typical way to handle invalidating small structures on a large flash sector. Simply add a header to indicate the state of the data in that structure location.
Simple example:
0xffff Structure is erased and available for use.
0xa5a5 Structure contains data that is valid.
0x0000 Structure contains data that is not valid.
The header will be 0xffff after erasing. When writing new data to a structure, set the header to 0xa5a5. When that data is no longer needed, set the header to 0x0000.
The data won't actually be erased, but it can be detected as invalid. This allows you to wait until the sector is full and then clean up the invalid records and perhaps compact the valid ones.
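A sketch of that scheme in C, with the three header values as constants and a 30-byte payload matching the question. The names are illustrative; the structure updates happen in RAM here, but both header transitions only clear bits (0xFFFF to 0xA5A5 to 0x0000), so on real flash they are legal writes without an erase cycle.

```c
#include <stdint.h>

#define HDR_ERASED  0xFFFFu  /* slot blank, available for use    */
#define HDR_VALID   0xA5A5u  /* slot contains data that is valid */
#define HDR_DELETED 0x0000u  /* slot contains superseded data    */

typedef struct {
    uint16_t header;
    uint8_t  payload[30];
} record_t;

void record_write(record_t *r, const uint8_t *data)
{
    for (int i = 0; i < 30; i++)
        r->payload[i] = data[i];
    r->header = HDR_VALID;    /* 0xFFFF -> 0xA5A5: clears bits only */
}

void record_delete(record_t *r)
{
    r->header = HDR_DELETED;  /* 0xA5A5 -> 0x0000: clears bits only */
}

int record_is_valid(const record_t *r)
{
    return r->header == HDR_VALID;
}
```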
Firstly, check the device datasheet again. Generally Spansion devices will let you have a 64 KB erase sector size instead of 256 KB. This may or may not help you, but generally increased granularity will help you.
Secondly, you cannot avoid the "erase before write" cycle where you want to change bits from 0 to 1. However, you can always change bits from 1 to 0 on a byte-by-byte basis.
You can either rethink your current 30-byte structure to see if this is of any use to you, or move to a 32-byte structure (which is a power of 2 and so slightly more sane IMO). Then, to delete, you can simply set the first byte to 0x00 from the 0xFF that a normal erased byte will be set to. That means you'll end up with empty slots.
Like how a garbage collector works, you can then re-organise to move any pages that have deleted blocks so that you create empty pages (full of deleted blocks). Make sure you move good blocks to a blank page before deleting them from their original page! You can then erase the empty page that was full of deleted or re-organised blocks.
When you're working with flash memory, you have to think out your read/erase/write strategy to work with the flash you have available. Definitely work it out before you start coding or locking down memory structures, because generally you'll need to reserve at least one byte as a validity byte, and usually you have to take advantage of the fact that you can always change bits that are set to 1 to 0 in any byte at any time without an erase cycle.
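That 1-to-0 rule can be captured in a one-line model: programming a NOR flash byte ANDs the new value into the old one, and only an erase brings the bits back. A toy illustration, not a driver:

```c
#include <stdint.h>

/* Toy model of NOR flash byte programming: a write can only clear
 * bits, so the stored result is old AND new. Only an erase cycle
 * returns a byte to 0xFF. */
uint8_t flash_program_byte(uint8_t current, uint8_t value)
{
    return current & value;
}
```

So marking a deleted slot by writing 0x00 over 0xFF always works, but writing 0xFF over 0x00 changes nothing.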
I have ported FatFs for FreeRTOS on an STM32F103 with a 32 Mbit SPI flash. In a demo application I have successfully created a file, written to it, and read back from it. My requirement is that I have to store multiple files (images) in the SPI flash and read them back when required.
I have the following conditions/queries.
I have set the sector size to 512 bytes, and the block erase size for the SPI flash is 4 K. Since an SPI flash block needs to be erased before being written, do I need to keep track of whether a particular block has been erased, or does the file system manage this?
How can I verify whether the sector I am writing to has been erased? What I currently do is erase the complete block containing the sector I am going to write.
How can I make sure that the SPI flash block I am going to erase does not affect any sector containing useful data?
Thanks in anticipation,
Regards,
AK
The simplest solution is to define the "cluster" size as 4 K, the same as the erase block size of your flash. That means each file, even if it is only 1 byte, takes 4 K, which is 8 consecutive sectors of 512 bytes each.
As soon as you need to reserve one more cluster, when the file grows above 4096 bytes, you pick a free cluster, chain it into the FAT, and write the next byte.
For performance reasons and to increase the durability of the flash, you should avoid erasing a flash sector when it is not needed. Reading is orders of magnitude faster than erasing. So, as you select a free cluster, you can start a loop to read each of the 8 sectors. As soon as you find even a single byte not equal to 0xFF, abort the loop and call the flash erase for that block.
A further optimization is possible if the flash controller is able to perform the blank test directly. Such a test can be done in a few microseconds, whereas reading 8 sectors and looping to check each of the 4096 bytes is probably slower.
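The blank-test-before-erase idea looks like this in C. The 4 KB block lives in a RAM mock here; on real hardware the scan would be a sequential SPI read and the memset would be your driver's block-erase call.

```c
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 4096u

uint8_t mock_block[BLOCK_SIZE];  /* RAM stand-in for one erase block */

static int block_is_blank(const uint8_t *b)
{
    for (uint32_t i = 0; i < BLOCK_SIZE; i++)
        if (b[i] != 0xFF)
            return 0;  /* found a programmed byte */
    return 1;
}

/* Returns 1 if an erase was issued, 0 if it could be skipped. */
int erase_if_needed(void)
{
    if (block_is_blank(mock_block))
        return 0;  /* already blank: skip the slow erase */
    memset(mock_block, 0xFF, BLOCK_SIZE);  /* stands in for the erase */
    return 1;
}
```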
I just read two articles on this topic which provide inconsistent information, so I want to know which one is correct. Perhaps both are correct, but in what context?
The first one states that we fetch a page at a time:
The cache controller is always observing the memory positions being loaded and loading data from several memory positions after the memory position that has just been read.
To give you a real example, if the CPU loaded data stored in the address 1,000, the cache controller will load data from "n" addresses after the address 1,000. This number "n" is called a page; if a given processor is working with 4 KB pages (which is a typical value), it will load data from the 4,096 addresses after the current memory position being loaded (address 1,000 in our example). In the following figure, we illustrate this example.
The second one states that we fetch sizeof(cache line) + sizeof(prefetcher) at a time:
So we can summarize how the memory cache works as:
The CPU asks for instruction/data stored in address “a”.
Since the contents from address “a” aren’t inside the memory cache, the CPU has to fetch it
directly from RAM.
The cache controller loads a line (typically 64 bytes) starting at address “a” into the memory
cache. This is more data than the CPU requested, so if the program continues to run sequentially
(i.e. asks for address a+1) the next instruction/data the CPU will ask will be already loaded in the
memory cache.
A circuit called prefetcher loads more data located after this line, i.e. starts loading the contents
from address a+64 on into the cache. To give you a real example, Pentium 4 CPUs have a 256-byte
prefetcher, so it loads the next 256 bytes after the line already loaded into the cache.
Completely hardware implementation dependent. Some implementations load a single line from main memory at a time, and cache line sizes vary a lot between different processors. I've seen line sizes from 64 bytes all the way up to 256 bytes. Basically what the size of a "cache line" means is that when the CPU requests memory from main RAM, it does so n bytes at a time. So if n is 64 bytes, and you load a 4-byte integer at 0x1004, the MMU will actually send 64 bytes across the bus, the addresses from 0x1000 through 0x103F. This entire chunk of data will be stored in the data cache as one line.
Some MMUs can fetch multiple cache lines across the bus per request, so that a request at address 0x1000 on a machine with 64-byte cache lines actually loads four lines, from 0x1000 to 0x1100. Some systems let you do this explicitly with special cache prefetch or DMA opcodes.
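The address arithmetic in these examples is just masking: rounding an address down to a multiple of the line size gives the line that a load will pull in. A 64-byte-line sketch:

```c
#include <stdint.h>

#define LINE_SIZE 64u  /* assumed cache line size */

/* First address of the line containing `addr`. */
uintptr_t line_base(uintptr_t addr)
{
    return addr & ~(uintptr_t)(LINE_SIZE - 1);
}

/* One past the last address of that line (exclusive end). */
uintptr_t line_end(uintptr_t addr)
{
    return line_base(addr) + LINE_SIZE;
}
```

So a 4-byte load at 0x1004 brings in the whole line 0x1000..0x103F.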
The article through your first link, however, is completely wrong. It confuses the size of an OS memory page with a hardware cache line. These are totally different concepts. The first is the minimum size of virtual address space the OS will allocate at once. The latter is a detail of how the CPU talks to main RAM.
They resemble each other only in the sense that when the OS runs low on physical memory, it will page some not-recently-used virtual memory to disk; then later on, when you use that memory again, the OS loads that whole page from disk back into physical RAM. This is analogous (but not related) to the way that the CPU loads bytes from RAM, which is why the author of "Hardware Secrets" was confused.
A good place to learn all about computer memory and why caches work the way they do is Ulrich Drepper's paper, What Every Programmer Should Know About Memory.
I am considering using a FAT file system for an embedded data logging application. The logger will only create one file, to which it continually appends 40 bytes of data every minute. After a couple of years of use this would be over one million write cycles.
MY QUESTION IS: Does a FAT system change the File Allocation Table every time a file is appended? How does it keep track of where the end of the file is? Does it just put an EndOfFile marker at the end, or does it store the length in the FAT table? If it does change the FAT table every time I do a write, I would wear out the FLASH memory in just a couple of years. Is a FAT system the right thing to use for this application?
My other thought is that I could just store the raw data bytes in the memory card and put an EndOfFile marker at the end of my data every time I do a write. This is less desirable though because it means the only way of getting data out of the logger is through serial transfers and not via a PC and a card reader.
FAT updates the directory table when you modify the file (at least, it will if you close the file, I'm not sure what happens if you don't). It's not just the file size, it's also the last-modified date:
http://en.wikipedia.org/wiki/File_Allocation_Table#Directory_table
If your flash controller doesn't do transparent wear levelling, and your flash driver doesn't relocate things in an effort to level wear, then I guess you could cause wear. Consult your manual, but if you're using consumer hardware I would have thought that everything has wear-levelling somewhere.
On the plus side, if the event you're worried about only occurs every minute, you should be able to speed it up considerably in a test to see whether 2 years' worth of log entries really does trash your actual hardware. That might even be faster than trying to find the relevant manufacturer docs...
No, a flash file system driver is explicitly designed to minimize wear and spread it across the memory cells, taking advantage of the near-zero seek time. Your data rates are low; it's going to last a long time. Specifying a yearly replacement of the media is a simple way to minimize the risk.
If your only operation is appending to one file it may be simpler to forgo a filesystem and use the flash device as a data tape. You have to take into account the type of flash and its block size, though.
Large flash chips are divided into sub-pages that are a power-of-two multiple of 264 (256+8) bytes in size, pages that are a power-of-two multiple of that, and blocks which are a power-of-two multiple of that. A blank page will read as all FF's. One can write a page at a time; the smallest unit one can write is a sub-page. Once a sub-page is written, it may not be rewritten until the entire block containing it is erased. Note that on smaller flash chips, it's possible to write the bytes of a page individually, provided one only writes to blank bytes, but on many larger chips that is not possible. I think in present-generation chips, the sub-page size is 528 bytes, the page size is 2048+64 bytes, and the block size is 128K+4096 bytes.
An MMC, SD, CompactFlash, or other such card (basically anything other than SmartMedia) combines a flash chip with a processor to handle PC-style sector writes. Essentially what happens is that when a sector is written, the controller locates a blank page, writes a new version of that sector there along with up to 16 bytes of 'header' information indicating what sector it is, etc. The controller then keeps a map of where all the different pages of information are located.
A SmartMedia card exposes the flash interface directly, and relies upon the camera, card reader, or other device using it to perform such data management according to standard methods.
Note that keeping track of the whereabouts of all 4,000,000 pages on a 2 gig card would require either having 12-16 megs of RAM, or else using 12-16 meg of flash as a secondary lookup table. Using the latter approach would mean that every write to a flash page would also require a write to the lookup table. I wouldn't be at all surprised if slower flash devices use such an approach (so as to only have to track the whereabouts of about 16,000 'indirect' pages).
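The numbers above follow from simple arithmetic: a 2 GB card divided into 512-byte pages gives about 4 million pages, and a 3- to 4-byte map entry per page (my assumption, chosen to match the quoted figures) gives 12-16 MB of lookup table.

```c
#include <stdint.h>

/* Back-of-the-envelope check of the page-map RAM estimate above. */
uint64_t pages_on_card(uint64_t card_bytes, uint64_t page_bytes)
{
    return card_bytes / page_bytes;
}

uint64_t map_bytes(uint64_t pages, uint64_t entry_bytes)
{
    return pages * entry_bytes;
}
```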
In any case, the most important observation is that flash write times are not predictable, but you shouldn't normally have to worry about flash wear.
Did you check what happens to the FAT file system consistency in case of a power failure or reset of your device?
When your device experiences such a failure, you must lose at most the log entry that you were writing at that moment. Older entries must stay valid.
No, FAT is not the right thing if you need to read back the data.
You should further consider what happens if the flash memory fills up with data. How do you free space for new data? You need to define the requirements for this case.