Can anybody explain the usage of the EN4B command of Micron SPI flash chips?
I want to know the difference between 3-byte and 4-byte address mode in SPI.
I was going through the SPI drivers where I found these commands.
Thanks in advance!
From a legacy point of view, SPI commands have always used 3 bytes for the address targeted by their operation.
This was fine, as with 24 bits it is possible to address up to 16 MiB.
When flashes grew larger, it became necessary to switch from 3-byte to 4-byte addressing.
Whenever you have any doubts regarding the hardware, you can find the answers in the proper datasheet; I don't know what specific chip you are referring to, however.
I found the Micron N25Q512A NOR flash, which is 512 Mib (64 MiB), so it needs some form of 4-byte addressing; from its datasheet you can learn that:
1. There are 3-byte legacy commands and new 4-byte commands. For example, 03h and 13h for the single read.
2. You can supply a default fourth address byte with a specific register: the Extended Address Register lets you choose which region of the flash the legacy commands operate on.
3. You can enable 4-byte addressing for the legacy commands, either by writing the appropriate bit in the Nonvolatile Configuration Register or by using the ENTER / EXIT 4-BYTE ADDRESS MODE commands (opcodes B7h and E9h respectively).
This Linux patch also has some insights, basically telling us that some chips only support one of the three methods above.
Macronix seems to have first opted for method 3 only, and Spansion for method 1.
Checking some of their datasheets suggests that both now support all three methods.
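As a small illustration of method 3, here is a sketch of entering 4-byte mode and building a legacy READ command with a 4-byte address. The spi_xfer() routine is a hypothetical board-specific transfer function (assert chip select, shift out the bytes, deassert chip select), and whether a WRITE ENABLE must precede EN4B depends on the specific part, so check the datasheet:

#include <stdint.h>
#include <stddef.h>

/* Hypothetical board-specific routine; not part of any driver API. */
extern void spi_xfer(const uint8_t *tx, size_t len);

#define CMD_WREN  0x06   /* WRITE ENABLE                          */
#define CMD_EN4B  0xB7   /* ENTER 4-BYTE ADDRESS MODE             */
#define CMD_EX4B  0xE9   /* EXIT 4-BYTE ADDRESS MODE              */
#define CMD_READ  0x03   /* legacy READ (3-byte address normally) */

void enter_4byte_mode(void)
{
    uint8_t wren = CMD_WREN;
    uint8_t en4b = CMD_EN4B;

    /* Some parts require WRITE ENABLE first; check the datasheet */
    spi_xfer(&wren, 1);
    spi_xfer(&en4b, 1);
}

/* Once 4-byte mode is active, the legacy READ opcode is followed
 * by four address bytes instead of three. */
void read_4byte(uint32_t addr)
{
    uint8_t cmd[5] = {
        CMD_READ,
        (uint8_t)(addr >> 24),  /* the extra, most significant byte */
        (uint8_t)(addr >> 16),
        (uint8_t)(addr >> 8),
        (uint8_t)(addr >> 0),
    };
    spi_xfer(cmd, sizeof cmd);
    /* ... then clock in the data bytes (receive path not shown) ... */
}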
I'm using an STM32F103C8 microcontroller. According to the STM32F10xxx flash memory programming manual (p. 20) there are 2 option bytes which can be used to store user data. These bytes are called Data0 and Data1 and they are stored starting at address 0x1FFF F804.
Can these 2 bytes be used for permanently and reliably storing information (e.g. the power state of the microcontroller) after the power supply of the microcontroller has been cut off?
Yes, just like flash.
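They are programmed like flash, too. Here is a minimal register-level sketch of writing Data0, assuming the CMSIS stm32f10x.h device header; it has no error handling, and it is a sketch of the reference manual's option-byte procedure, not production code:

#include "stm32f10x.h"   /* CMSIS device header (assumed available) */

/* Caution: OPTER erases ALL option bytes, so read protection (RDP)
 * and the USER byte are cleared too; RDP is restored to level 0
 * below, and USER/Data1 would need re-programming as well. */
void option_bytes_write_data0(uint8_t value)
{
    /* Unlock the flash controller, then option-byte programming */
    FLASH->KEYR    = 0x45670123;
    FLASH->KEYR    = 0xCDEF89AB;
    FLASH->OPTKEYR = 0x45670123;
    FLASH->OPTKEYR = 0xCDEF89AB;

    while (FLASH->SR & FLASH_SR_BSY) { }

    /* Erase all option bytes (there is no finer erase granularity) */
    FLASH->CR |= FLASH_CR_OPTER;
    FLASH->CR |= FLASH_CR_STRT;
    while (FLASH->SR & FLASH_SR_BSY) { }
    FLASH->CR &= ~FLASH_CR_OPTER;

    /* Program: the hardware stores each byte with its complement */
    FLASH->CR |= FLASH_CR_OPTPG;
    OB->RDP   = 0x00A5;   /* restore the "no read protection" key */
    while (FLASH->SR & FLASH_SR_BSY) { }
    OB->Data0 = value;    /* half-word write to 0x1FFF F804       */
    while (FLASH->SR & FLASH_SR_BSY) { }
    FLASH->CR &= ~FLASH_CR_OPTPG;

    FLASH->CR |= FLASH_CR_LOCK;   /* lock the flash controller again */
}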
I'm writing a few kernel modules (as part of porting Linux to a new ARM board).
Most of what I want to expose to userspace fits nicely into the /sys/class/gpio and /sys/class/leds subsystems.
I have a couple of 12-bit ADCs which I want to expose to userspace.
Which subsystem should I use for this?
(Worst-case is exposing them as character devices which stream the values as newline-separated decimal values. The two ADCs are part of a multi-function device, accessed over an I2C bus by reading two 16-bit registers.)
I tried returning 42 from a gpio module's gpio_chip.get function pointer, but it was changed to 1 by the time it reached userspace.
I'm writing a bootloader for a C166 chip, to be exact the 169FH. The bootloader can currently open a TCP/IP connection so a PC can send an Intel HEX file to the bootloader. This Intel HEX file is saved in RAM. After receiving the hex file, it is read line by line to write the bytes to the correct location in flash. The flash location where the bootloader is stored is of course different from where the main program is saved.
These are the first two lines of the Intel HEX file:
:0200000400C03A
:20100000CA11EE111212341258127A129A12BC12DE12001322134413601388138813881349
The first line gives the highest 16 bits of the 32-bit flash address, which is in this case 0x00C0.
The second line holds the lower 16 bits of the 32-bit flash address, which is in this case 0x1000. Together they form the total address 0x00C01000; the first byte written to that address should be 0xCA.
I'm trying to write the byte to that address using the following code:
uint8_t u8Byte = (uint8_t )XHugeStringToDec((const __huge char *)&Ext_RamFlashSave.Byte[u32Idx], 9+(u8ByteIdx*2), 2);
uint8_t *u8Address = (uint8_t*)((uint32_t)(u32ExtendedLinearAddress << 16) + (uint32_t)u16BaseAddress + (uint32_t)u8ByteIdx);
*u8Address = (u8Byte);
XHugeStringToDec() is a function to get the hexadecimal value from the Intel HEX string. I know this part works correctly.
Ext_RamFlashSave.Byte is the array in which the Intel HEX file is stored.
The u32ExtendedLinearAddress variable is 0x00C0 and is set earlier. The u16BaseAddress is 0x1000 and is also set earlier in code.
The problem is in the last line:
*u8Address = (u8Byte);
I have verified that u8Address is indeed 0x00C01000 and u8Byte is indeed 0xCA. But when I monitor my flash address, I do not see the byte written.
I can imagine that it is some kind of write protection, but I cannot find out how to work around it. Or do I need a different way to write to the flash address?
More info on how the Intel HEX file is built is described here:
https://en.wikipedia.org/wiki/Intel_HEX
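As a worked example of how those two records combine (the function names here are illustrative, not from the bootloader source):

#include <stdint.h>
#include <stddef.h>

/* Combine a type-04 (extended linear address) record with a type-00
 * (data) record's offset: 0x00C0 and 0x1000 give 0x00C01000. */
uint32_t ihex_abs_addr(uint16_t ext_linear, uint16_t offset)
{
    return ((uint32_t)ext_linear << 16) | offset;
}

/* Intel HEX checksum: all record bytes, including the checksum byte
 * itself, must sum to zero modulo 256. For ":0200000400C03A":
 * 02+00+00+04+00+C0 = 0xC6, and 0x100 - 0xC6 = 0x3A. */
int ihex_checksum_ok(const uint8_t *bytes, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += bytes[i];
    return sum == 0;
}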
I am not familiar with the chip you mention, but for writing to flash the following algorithm is generally used (a minimal sketch in C follows after these notes):

1. Unlock the flash. This usually means sending a specific data sequence. The flash I use right now takes 0xA0A0, then a delay of 1 ms, then 0x0505. This enables writing to the flash.
2. Clear the error flags. Flash has some error flags, like write error or read error. Also check the busy flag to make sure the flash is ready.
3. Erase the page. Yes, you have to erase the page before writing to it. Even if you want to change a single bit, you have to erase the entire page.
4. Write the data (finally). But make sure the endianness is correct: sometimes your controller is little-endian and the flash is big-endian, so you have to match the flash's byte order.
5. Lock the flash. This is usually done by sending the same sequence used for unlocking (refer to the datasheet).

You cannot write to flash directly; you have to go through this entire process. That's why flash writes are slow.
Some flashes support 8-bit writes while others only support 16-bit or 32-bit writes, so you have to write in the unit the flash expects.
When you have to modify a small portion of a page, read the page into a buffer, modify the data in the buffer, then erase the page and write the entire buffer back.
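Here is that sequence as a sketch. Every register address, unlock value, and flag bit below is a hypothetical placeholder; take the real ones from your chip's datasheet:

#include <stdint.h>

/* Hypothetical memory-mapped flash controller registers */
#define FLASH_KEY_REG   (*(volatile uint16_t *)0x00F000u)
#define FLASH_STAT_REG  (*(volatile uint16_t *)0x00F002u)
#define FLASH_CTRL_REG  (*(volatile uint16_t *)0x00F004u)
#define STAT_BUSY       (1u << 0)
#define STAT_ERR        (1u << 1)
#define CTRL_ERASE      (1u << 2)
#define CTRL_LOCK       (1u << 3)

extern void delay_ms(unsigned ms);   /* platform-specific, not shown */

int flash_write_halfword(volatile uint16_t *dst, uint16_t data)
{
    /* 1. Unlock: magic sequence with the required delay in between */
    FLASH_KEY_REG = 0xA0A0;
    delay_ms(1);
    FLASH_KEY_REG = 0x0505;

    /* 2. Clear stale error flags and wait until the flash is ready */
    FLASH_STAT_REG = STAT_ERR;             /* write-1-to-clear assumed */
    while (FLASH_STAT_REG & STAT_BUSY) { }

    /* 3. Erase the page containing dst (erase works per page only) */
    FLASH_CTRL_REG |= CTRL_ERASE;
    *dst = 0xFFFF;                         /* dummy write starts erase */
    while (FLASH_STAT_REG & STAT_BUSY) { }
    FLASH_CTRL_REG &= ~CTRL_ERASE;

    /* 4. Program the data, in the width and byte order the flash expects */
    *dst = data;
    while (FLASH_STAT_REG & STAT_BUSY) { }

    /* 5. Lock again (often the same sequence as unlocking) */
    FLASH_CTRL_REG |= CTRL_LOCK;

    return (FLASH_STAT_REG & STAT_ERR) ? -1 : 0;
}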
I have developed an embedded solution which communicates over a Multi Drop Bus and now I would like to develop a PC based application which monitors traffic on the bus.
MDB supports true 9 data bits (plus start/stop/parity - and *no fudging* by using the parity bit as a 9th data bit) whereas standard Windows and Linux libraries offer a maximum of 8 data bits.
I have a StarTech PCI2S950 PC serial port card which supports 9 data bits, but I am not sure how to code my monitoring app and have googled a lot to no great avail.
I would prefer to code in C (or Delphi, or C++). I have a slight preference for Cygwin, but am willing to use straightforward Windows or Linux.
Just anything to read/write 9 data bits over that PC serial port card.
Can anyone help?
The document at http://www.semiconductorstore.com/pdf/newsite/oxford/ox16c950b.pdf describes the differences between various UARTs.
While your StarTech board includes the 16C950, which is RS-485 (and 9-bit) capable, the board uses it in RS-232-compatible (550) mode, similar to the 16550/8250 from IBM PC days, and supports a maximum of 8 data bits.
You need a board with the same chip (16C950) but that exposes the RS-485 compatible 950 mode that supports 9 bit data as per the spec. And any board claiming such support would have to come with custom drivers for Windows, since Microsoft's is 8 bit only.
There are several other chips that can do 9-bit RS-485 mentioned here but again finding Windows driver support will be tricky. And, of course, many boards use 16C950 but only in 8-bit and/or RS-232 only mode, and without appropriate drivers.
In answer to your related question on Superuser, sawdust suggested the Sealevel 7205e, which looks like a good choice, with Windows driver support. It is pricey but they specifically mention 9-bit, RS-485 support, and Windows drivers. It may well be your best option.
The card you selected is not suitable for this application. It has just plain RS-232 ports, it is not suitable for a multi-drop bus. You'll need to shop elsewhere for an EIA-485 style bus interface, you could only find those at industrial electronics suppliers. By far the best way is to go through the National Automatic Merchandising Association, the industry group that owns the MDB specification.
The 9-bit data format is just a trick and is used in the MDB protocol to mode-switch between address bytes and data bytes. All ports on the bus listen to address bytes, only the addressed port listens to data bytes.
The 9th bit is simply the parity bit that any UART can generate. The fundamental data size is still 8 bits. A UART auto-generates the parity bit according to the way it was initialized; you can choose between mark, space, odd and even parity.
Now this is easy to do in a microcontroller that has a UART, the kind of processor used on a bus like this. You simply re-program the UART on the fly, telling it to generate mark parity when you send the address bytes, and re-program it again to space parity when you send the data bytes. Waiting for the FIFO to empty will typically be necessary, although it depends on the actual UART chip.
That is a lot harder to do on a regular Windows or Linux machine, where there's a driver between the user-mode program and the UART. The driver generates a "transmit buffer empty" status bit, like WaitCommEvent() for EV_TXEMPTY on Windows, but this doesn't include the FIFO empty status; it only indicates that the buffer is empty. A workaround is to wait for the buffer-empty status and then sleep() long enough to ensure that the FIFO has drained. A FIFO is typically 16 bytes deep, so sleep for 16 character times. You'll need the datasheet for the UART on the card you selected to know these details for sure.
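A sketch of the mark/space parity approach using the standard Win32 API (the COM port name is just an example, and the sleep-for-FIFO-drain caveat above still applies between the two SetCommState calls):

#include <windows.h>

int main(void)
{
    /* "COM3" is just an example port name */
    HANDLE h = CreateFileA("\\\\.\\COM3", GENERIC_READ | GENERIC_WRITE,
                           0, NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    DCB dcb = { 0 };
    dcb.DCBlength = sizeof dcb;
    if (!GetCommState(h, &dcb)) return 1;

    dcb.BaudRate = CBR_9600;
    dcb.ByteSize = 8;            /* 8 data bits is the Win32 maximum  */
    dcb.Parity   = MARKPARITY;   /* parity bit acts as the "9th" bit  */
    dcb.fParity  = TRUE;         /* check parity on received bytes    */
    dcb.StopBits = ONESTOPBIT;
    if (!SetCommState(h, &dcb)) return 1;

    /* ... write the address byte(s), wait for them to drain ... */

    dcb.Parity = SPACEPARITY;    /* switch before sending data bytes  */
    if (!SetCommState(h, &dcb)) return 1;

    /* ... write the data bytes here ... */

    CloseHandle(h);
    return 0;
}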
Under Win32 serial ports are just files, so you create a handle for it with CreateFile and then use a DCB structure to set up the configuration options (the members are documented here and include number of data bits as ByteSize).
There's a good walk through here:
http://www.codeproject.com/Articles/3061/Creating-a-Serial-communication-on-Win32
The link provided shows the card supports 9 data bits and Windows 8, so I would presume all the card's features are available to the application through the standard Windows API.
Apart from setting the correct data format in a DCB and opening the port, I would have thought the standard ReadFile would work. I wonder if the data read in would actually be two 8-bit bytes representing the 9 data bits, rather than a continuous 9 bits streamed in (which you would need to decode later).
Is the 9th bit used for some purpose other than data?
I needed to scan my PCI bus and obtain information for specific devices from specific vendors.
My goal is to find the PCI region size for the AMD graphics card, in order to map the card's PCI memory to userspace so I can do I2C transfers and read information from various sensors.
For scanning the PCI bus I downloaded and compiled pciutils 3.1.7 for Windows x64 around a year ago. It supposedly uses DirectIO.
This is my code.
int scan_pci_bus()
{
    struct pci_access *pci;
    struct pci_dev *dev;
    int i;

    pci = pci_alloc();
    pci_init(pci);
    pci_scan_bus(pci);

    for (dev = pci->devices; dev; dev = dev->next)
    {
        pci_fill_info(dev, PCI_FILL_IDENT | PCI_FILL_CLASS | PCI_FILL_IRQ |
                           PCI_FILL_BASES | PCI_FILL_ROM_BASE |
                           PCI_FILL_SIZES | PCI_FILL_PHYS_SLOT);
        if (dev->vendor_id == 0x1002 && dev->device_id == 0x6899)
        {
            /* Vendor is AMD, device ID is an AMD HD5850 GPU */
            for (i = 0; i < 6; i++)
            {
                /* size and base_addr are pciaddr_t, so cast for printf */
                printf("Region Size %llu %llx ID %x\n",
                       (unsigned long long)dev->size[i],
                       (unsigned long long)dev->base_addr[i],
                       dev->device_id);
            }
        }
    }
    pci_cleanup(pci);
    return 0;
}
As you can see in my printf line, I try to print some data. I successfully print device_id and base_addr; however, size, which should contain the PCI region size for this device, is always 0. I expected at least one iteration of the loop to display a size > 0.
My code is based on a Linux application which uses the same code, though it uses the pci.h headers that come with Linux (pciutils apparently has the same APIs).
Apparently Windows (Windows 7 x64 in my case) does not provide this information, or at the very least it is not exposed to pciutils.
How do you propose I obtain this information? If there are alternatives to pciutils that work on Windows and provide this information, I'd be glad to get a link to them.
EDIT: I have still found no solution. Any solution that also works on 32-bit Windows would be deeply appreciated.
How this works is pretty complicated. PCI devices use Base Address Registers to let the BIOS and Operating System decide where to locate their memory regions. Each PCI device is allowed to specify several memory or IO regions it wants, and lets the BIOS/OS decide where to put it. Complicating matters, there's only one register that is used both to specify the size AND the address. How does this work?
When the card is first powered up, its 32-bit address register will have something like 0xFFFF0000 in it. Any binary 1 means "the OS can change this", any binary 0 means "must stay zero". So this is telling the OS that any of the top 16 bits can be set to whatever the OS wants, but the bottom 16 bits have to stay zero. Which also means that this memory region takes up 16 bits of address space, or 64K. Because of this, memory regions have to be aligned to their size: if a card wants 64K of address space, the OS can only put it on memory addresses that are a multiple of 64K. When the OS has decided where it wants to locate this card's 64K memory space, it writes that address back into this register, overwriting the initial 0xFFFF0000 that was in there.
In other words, the card tells the OS what size/alignment it needs for the memory, then the OS overwrites that same register/variable with the address for the memory. Once it's done this, you can't get the size back out of the register without resetting the address.
This means there's no portable way of asking a card how big its region is, all you can ask it is WHERE the region is.
So why does this work in Linux? Because it's asking the kernel for this information. The kernel has an API to provide this stuff, the same way that lspci works. I am not a Windows expert, but I'm unaware of any way for an application to ask the Windows kernel this information. There may be an API to do this somehow, or you may need to write something that runs on the kernel side to pass this information back to you. If you look in the libpci source, for windows it calls the "generic" version of pci_fill_info(), which returns:
return flags & ~PCI_FILL_SIZES;
which basically means "I'm returning everything you asked for, but the sizes."
BUT, this may not matter anyway. If all you're doing is wanting to read/write to the I2C registers, they're usually (always?) in the first 4K of the control/configuration region. You can probably just map 4K (one page) and ignore the fact that there might be more. Also be warned that you may need to take additional steps to stop the real driver for this card from reading/writing while you are. If you're bit-banging the I2C bus manually, and the driver tries to at the same time, it's likely to cause a mess on the bus.
There also may be an existing way to ask the radeon driver to do I2C requests for you, which might avoid all of this.
(also note I'm simplifying and glossing over a lot of details with how the BARs work, including 64 bit addresses, I/O space, etc, read PCI documentation if you want to learn more)
Well, whamma gave a very good answer, but there's one thing he was wrong about: region sizes. Region sizes are pretty easy to find. Here I will show two ways, first by deciphering the size from the value in the BAR, and second through the Windows user interface.
Let's assume that E2000000 is the value in the Base Address Register. If we convert that to binary we get:
11100010000000000000000000000000
There are 32 bits here in total; you can count them if you must. If you are not familiar with how the bits in a BAR are laid out, look here -> http://wiki.osdev.org/PCI , specifically "Base Address Registers" and more specifically the image that reads "Memory Space BAR Layout". Now let's read the bits from the right end to the left end, using that image as a guide.
Bit 0, the first bit from the right, is 0, indicating that this is a memory BAR.
Bits 1-2 are 0, indicating that it's a 32-bit memory BAR (note this is not the size).
Bit 3 is 0, indicating that it's not prefetchable memory.
Bits 4-31 represent the address.
That page also documents the PCI-approved process for determining the region size:
To determine the amount of address space needed by a PCI device, you must save the original value of the BAR, write a value of all 1's to the register, then read it back. The amount of memory can then be determined by masking the information bits, performing a bitwise NOT ('~' in C), and incrementing the value by 1. The original value of the BAR should then be restored. The BAR register is naturally aligned and as such you can only modify the bits that are set.
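A sketch of that process in C; pci_cfg_read32()/pci_cfg_write32() are hypothetical config-space accessors (port 0xCF8/0xCFC I/O or an OS service on real hardware), and doing this while the OS is running can race the real driver:

#include <stdint.h>

/* Hypothetical config-space accessors; not a real library API. */
uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);
void     pci_cfg_write32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off,
                         uint32_t val);

/* Size a 32-bit memory BAR at config offset off (0x10, 0x14, ...). */
uint32_t pci_bar_size(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off)
{
    uint32_t orig = pci_cfg_read32(bus, dev, fn, off);

    pci_cfg_write32(bus, dev, fn, off, 0xFFFFFFFFu);  /* write all 1s */
    uint32_t probed = pci_cfg_read32(bus, dev, fn, off);

    pci_cfg_write32(bus, dev, fn, off, orig);         /* restore BAR  */

    probed &= ~0xFu;        /* mask off the 4 information bits        */
    return ~probed + 1u;    /* bitwise NOT, then + 1 gives the size   */
}

For the 16 MB example above, probing would read back 0xFF000000; masking keeps 0xFF000000, NOT gives 0x00FFFFFF, and adding 1 yields 0x01000000 = 16 MB.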
The other way is using Device Manager:
Start -> "Device Manager" -> Display Adapters -> right-click your video card -> Properties -> Resources. Each resource marked "Memory Range" should be a memory BAR, and as you can see it says [start address] to [end address]. For example, say it reads [00000000E2000000 - 00000000E2FFFFFF]; to get the size you subtract the start address from the end address and add 1:
E2FFFFFF - E2000000 + 1 = 1000000 hex = 16777216 bytes = 16 MB.