I have an older embedded device (PHYTEC phyCORE-LPC3250) that runs the ancient U-Boot 1.3.3.
The Linux kernel uImage gets copied to the NAND flash at 0x200000, then booted with: nboot 80100000 0 200000;bootm
This works fine if the uImage is derived from the self-expanding zImage, but according to this mailing list post it is preferable to have U-Boot perform the decompression itself.
So I have tried creating a uImage that contains a gzipped version of the normal kernel image, but decompression fails:
Image Name: Poky (Yocto Project Reference Di
Image Type: ARM Linux Kernel Image (gzip compressed)
Data Size: 4057248 Bytes = 3.9 MB
Load Address: 80008000
Entry Point: 80008000
Verifying Checksum ... OK
Uncompressing Kernel Image ... Error: inflate() returned -3
GUNZIP: uncompress or overwrite error - must RESET board to recover
This scenario is described in the FAQ, which suggests that the problem is running out of RAM. But I have 128 MB of RAM, starting at 0x80000000, and the uncompressed kernel is only 8 MB.
(I validated that the data in the uImage is in fact gzipped.)
First U-Boot copies the uImage from NAND to RAM at the specified address 0x80100000. Then it unpacks the kernel to the load address specified in the uImage header, 0x80008000.
Since our kernel is about 8 MB uncompressed, that means the kernel's memory from 0x80008000 to approximately 0x80800000 overlaps where we copied the uImage at 0x80100000.
If the uImage is not compressed, unpacking the kernel can use memmove which handles overlapping address ranges without issue. (If the load address is the address where we copied it in RAM, the kernel gets executed in-place.)
But for compressed uImages, if we overwrite the compressed data while decompressing, decompression will obviously fail.
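The overlap can be checked with simple arithmetic using the numbers above. One workaround, if enough RAM is free, is to nboot the uImage to an address above the unpacked kernel's end; the 0x81000000 staging address below is an assumption for illustration, not from the board documentation:

```shell
# Addresses and sizes mirror the scenario described above.
LOAD=$((0x80008000))                     # load address from the uImage header
KERNEL_END=$((LOAD + 8 * 1024 * 1024))   # ~8 MB uncompressed kernel
COPY=$((0x80100000))                     # where nboot placed the uImage

if [ "$COPY" -lt "$KERNEL_END" ]; then
    printf 'overlap: uImage copy 0x%08X falls inside kernel 0x%08X..0x%08X\n' \
        "$COPY" "$LOAD" "$KERNEL_END"
fi

# Hypothetical fix: stage the uImage above the kernel's end, e.g.
#   nboot 81000000 0 200000; bootm 81000000
```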
Can I get the size of a disk sector via the Linux API/ABI? This is the quantum of disk I/O: normally it is 512 bytes, but other values occur too (usually multiples of 512 bytes).
It should also not be confused with the logical block size or the sector size of a file system.
A block device appears as a file in a UNIX file system (/dev/sda, /dev/sr, etc.). That means you can open that file and manipulate its contents just as you would the contents of the corresponding block device.
So working with a real block device is quite similar to working with a virtual hard disk (the .vhd format, for instance).
But I don't know how to get the sector size in the general case.
At the moment I have a single solution: get the maximal CHS address and the size of the hard drive, both via the BIOS. But I think that is a bad idea, because it sacrifices portability.
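On Linux specifically, a minimal sketch that avoids the BIOS entirely: the logical sector size of a block device is exposed in sysfs (the same value is available programmatically via the BLKSSZGET ioctl). The sysfs-root parameter exists only so the function can be exercised without a real device:

```shell
# Read the logical sector size of a block device from sysfs (Linux).
# $1 = device name (e.g. sda); $2 = sysfs root (defaults to /sys).
sector_size() {
    f="${2:-/sys}/class/block/$1/queue/logical_block_size"
    if [ -r "$f" ]; then
        cat "$f"            # typically 512 or 4096
    else
        return 1            # not a known block device
    fi
}

# Usage on a real system:  sector_size sda
```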
I have been asked to help out on an embedded firmware project where they are trying to mount a file system on an SPI flash chip (Cypress S25FL512S) with a 256 KB erase sector size.
My past experience with file systems is that the file system has a block size of up to 4 KB, which is mapped onto erase sectors of 512 bytes to 4 KB.
The embedded controller is a small NXP device running at 180 MHz with 512 KB of RAM, so I cannot afford to cache an entire erase sector. I note that the chip family does have pin-compatible devices with smaller erase sectors.
My general question is how do you mount a file system with a block/cluster size that is smaller than the flash erase sector size? I've not been able to find any articles addressing this.
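To make the mismatch concrete (chip and RAM numbers from the question; the 4 KB filesystem block size is an assumption): updating a single filesystem block in place forces a read-modify-write of the whole erase sector, and the staging buffer alone would consume half the MCU's RAM:

```shell
ERASE=$((256 * 1024))    # S25FL512S erase sector
BLOCK=$((4 * 1024))      # assumed filesystem block size
RAM=$((512 * 1024))      # MCU RAM

echo "$((ERASE / BLOCK)) filesystem blocks per erase sector"
echo "one erase-sector buffer = $((ERASE * 100 / RAM))% of RAM"
```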
You can't do this in any sensible way. Your specification needs to be modified.
Possible solutions are:
Pick a flash/eeprom circuit with smaller erase size.
Pick a flash/eeprom with more memory and multiple segments, so that you can back-up the data in one segment while programming another.
Add a second flash circuit which mirrors the first one, erase one at a time and overwrite with contents of the other.
Pick a MCU with more RAM.
Backup the flash inside MCU flash (very slow and likely defeats the purpose of having external flash to begin with).
I am following a document on booting embedded Linux on an ARM board (e.g. the Freescale Vybrid tower) from an SD card. The document gives these steps to build the uImage and write U-Boot to the SD card:
sudo dd if=u-boot.imx of=/dev/sdX bs=512 seek=2
mkimage -A arm64 -O linux -T kernel -C none -a 0x81000000 -e 0x81000000 -n "Linux" -d Image uImage
What I would like to know is from which datasheet/UM/RM or any document they get the number: bs=512 seek=2, -a 0x81000000 (Load address), -e 0x81000000 (Entry point)
Please also explain what Load address/entry point address mean?
What I would like to know is from which datasheet/UM/RM or any document they get the number: bs=512 seek=2, -a 0x81000000 (Load address), -e 0x81000000 (Entry point)
The bs=512 seek=2 specification should be from the NXP/Freescale reference manual for the SoC (e.g. the "Expansion Device: SD, eSD and SDXC" section of the System Boot chapter).
When configured to boot from an SDcard, the ROM boot program (of the SoC) will look for a program image (e.g. U-Boot) at byte offset 0x400 (or 2 * 512 = 1024), which is the third 512-byte sector.
The first sector is presumed to be the MBR, and the second sector is reserved for an optional Secondary Image Table (using terminology from NXP document).
Allwinner SoCs use a similar booting scheme for SDcard (i.e. the U-Boot image is at a fixed location in raw sectors not part of a partition), but the image starts at the 17th sector.
Instead of loading raw sectors, some SoCs (e.g. Atmel) boot from SDcard by loading a file from a FAT partition.
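The dd arithmetic itself is straightforward: seek counts in units of bs, so bs=512 seek=2 places the image at byte 1024 (0x400), i.e. the third sector:

```shell
bs=512
seek=2
offset=$((bs * seek))
printf 'u-boot.imx written at byte offset %d (0x%X), sector index %d\n' \
    "$offset" "$offset" "$seek"
```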
Please also explain what Load address/entry point address mean?
These values are specified to the mkimage utility so that they can be installed in the uImage header. U-Boot will then use these values when the uImage is loaded and unpacked.
The load address specifies to U-Boot the required memory address to locate the image. The image is copied to that memory address.
The entry point specifies to U-Boot the memory address to jump/branch to in order to execute the image. This value is typically the same address as the load address.
For an ARM Linux kernel the recommended load and entry-point addresses are 0x8000 from the start of physical memory, according to (Vincent Sanders') Booting ARM Linux.
See Building kernel uImage using LOADADDR for more details.
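The load and entry values end up at fixed byte offsets in the 64-byte legacy uImage header (big-endian words: load address at offset 16, entry point at offset 20). A sketch that reads them back with od, so you can verify what mkimage stored; the file name uImage is assumed:

```shell
# Print a 4-byte big-endian word from FILE ($1) at byte OFFSET ($2) as hex.
word() { od -An -tx1 -j"$2" -N4 "$1" | tr -d ' \n'; }

img=uImage                          # assumed file name
if [ -f "$img" ]; then
    echo "load  = 0x$(word "$img" 16)"
    echo "entry = 0x$(word "$img" 20)"
fi
```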
Please also explain what Load address/entry point address mean?
Load address: where the kernel image is to be placed in memory. U-Boot copies the image to that region. The address depends on the board design/architecture; in a typical design it is a RAM address. You need to check your board specification.
Entry point: where control/execution is transferred once the image has been written into RAM. (The code at this location is executed first when the bootloader invokes the kernel.)
What I would like to know is from which datasheet/UM/RM or any document they get the number: bs=512 seek=2, -a 0x81000000 (Load address), -e 0x81000000 (Entry point)
Please also explain what Load address/entry point address mean?
The bs=512 seek=2 skips the first two 512-byte sectors (1024 bytes) of the SD card. The first sector normally holds boot information (the MBR, i.e. master boot record, which contains the partition table), and if you overwrite it the card's partitions become unreadable until the table is rewritten with other tools. The sector layout is defined in the MMC/SD card standards; I think the JEDEC website has them.
The load address is where the SD-card image is copied into memory (i.e. SDRAM). The entry point is where control is handed once the image has been loaded. They are often the same when the boot code is written in assembler and laid out by a linker; however, sometimes a hard-coded vector table sits at the start of the image and the entry point is somewhere in the middle. In either case, both are physical addresses.
The target could be 'IRAM' (internal static RAM) for a small stand-alone image, but it must be SDRAM for Linux, which requires your SDRAM to be working. You may have issues here if this is a custom board rather than an off-the-shelf Vybrid Tower. Also, there are different Tower board revisions and they behave differently; check the errata on them. Finally, different U-Boot versions support different boot modes (i.e. where U-Boot is stored and executed from). The addresses are in the Vybrid TRM, in the physical memory map for the Cortex-A5 CPU.
RAM_HIGH_ADRS is a parameter defined in config.h and in the makefile. As I understand it, it defines the address at which the program's text+data+bss segments will be written in RAM.
That would mean, for example, that if the CPU has 64 MB of RAM and RAM_HIGH_ADRS equals 0x00A00000 (10 MB), the program has 54 MB to work with for storing text+data+bss+heap+stack.
The reason I'm questioning this is that I am working on a project where I expanded the data segment by a large margin, which caused the CPU to stop booting. I then increased RAM_HIGH_ADRS, which allowed the CPU to boot again. This confuses me, since the only thing written between RAM_LOW_ADRS and RAM_HIGH_ADRS, to my understanding, is the VxWorks image, so increasing RAM_HIGH_ADRS should only reduce the space available for the data segment.
If you are using the VxWorks bootrom to boot the board, then here is how it works.
The bootrom is placed at RAM_HIGH_ADRS. The bootrom then loads the VxWorks kernel image from the network (or wherever else you are fetching it from) and places it in RAM starting at RAM_LOW_ADRS.
It places the .text segment first, followed immediately by .rodata, .data, and .bss. Therefore there has to be enough space between RAM_LOW_ADRS and RAM_HIGH_ADRS to accommodate .text + .rodata + .data + .bss.
If there is not enough space, you will see exactly the symptom you describe. In that case, set RAM_HIGH_ADRS to a higher value so that .text + .rodata + .data + .bss fits between RAM_LOW_ADRS and RAM_HIGH_ADRS.
from vxworks-bsps-6.7.pdf page 6:
High RAM address. When the bootrom is used, the boot loader places the
small VxWorks kernel (the bootrom) at high RAM. The
RAM_LOW_ADRS..RAM_HIGH_ADRS is used by the bootrom kernel to store the
VxWorks kernel fetched from the network before booting. Usually set to
half main memory + 0x3000, for example 0x40203000 on a system with 4Mb
RAM.
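The constraint can be sketched numerically; all addresses and segment sizes below are hypothetical, chosen only to illustrate the fit check between RAM_LOW_ADRS and RAM_HIGH_ADRS:

```shell
RAM_LOW_ADRS=$((0x00010000))    # hypothetical
RAM_HIGH_ADRS=$((0x00A00000))   # hypothetical
IMAGE=$((12 * 1024 * 1024))     # hypothetical .text+.rodata+.data+.bss

GAP=$((RAM_HIGH_ADRS - RAM_LOW_ADRS))
if [ "$IMAGE" -gt "$GAP" ]; then
    echo "image will not fit: raise RAM_HIGH_ADRS by at least $((IMAGE - GAP)) bytes"
fi
```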
I am writing an embedded application based on an ARM9 (ARMv5) processor, and am using 64 MB of NAND. My problem is that when I copy text or binary files of 3-4 MB, the free space is reduced by only a few KB, whereas ls -l shows the file size in MB.
By repeating the same process I reached a point where the df command shows 10 MB free while du shows the total size as 239 MB.
I have only 64 MB of NAND; how am I able to add files totalling 239 MB?
JFFS2 is a compressed filesystem: it stores files compressed on the flash, which explains the apparent conflict. du reports the apparent (uncompressed) size of the files, while df reports the capacity actually used and available as seen by the filesystem.
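Using the numbers from the question, the implied average compression ratio can be estimated, assuming roughly 54 MB of the 64 MB partition is actually in use:

```shell
APPARENT=239          # MB reported by du (uncompressed file sizes)
USED=$((64 - 10))     # MB actually consumed on flash, per df
echo "~$((APPARENT / USED))x average compression (integer estimate)"
```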