I am using a custom embedded RTOS and booting with U-Boot via a FIT image. "bootm" works very nicely with this .itb container.
I would like to include filesystem upgrades in the SHA-signed .itb image file, so that U-Boot (via scripts) can upgrade files in the flash FAT filesystem. The example file update_uboot.its in the docs gives a hint as to what I need:
update@1 {
    description = "U-Boot binary";
    data = /incbin/("./u-boot.bin");
    compression = "none";
    type = "firmware";
    load = <FFFC0000>;
    hash@1 {
        algo = "sha1";
    };
};
Presumably this is a packaged binary upgrade for U-Boot, and I can generate the image file with mkimage. However, I cannot find any directions on how to use it, flash it, etc.
More generally, there are several very interesting FIT types like "filesystem" which imply there is a way to package up a complete filesystem. That is very close to what I want, but I cannot find any way to extract and flash the filesystem from U-Boot scripts.
I tried "cp $loadaddr:update@1 0x90000000 10", but cp does not copy the data out of the .itb file, and from the source it looks like do_mem_cp does not do any special FIT syntax handling.
I am using a fairly recent TI Sitara U-Boot (u-boot-2016.05+gitAUTOINC+2f757e5b2c-g2f757e5b2c). Do I need to modify the source to do what I want?
Thanks, Steve
You're looking for the imxtract command to extract parts of a FIT image in memory.
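For example (a rough sketch: it assumes the .itb has already been loaded to ${loadaddr}, that the subimage is named update@1 as in the update_uboot.its example, and that your U-Boot has FAT write support; the device, partition and file names are placeholders):
imxtract ${loadaddr} update@1 0x90000000
fatwrite mmc 0:1 0x90000000 upgrade.bin ${filesize}
imxtract should set ${fileaddr} and ${filesize} after a successful extraction, so the size passed to fatwrite does not have to be hard-coded.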
I need to have different variants of a device tree passed to my Linux kernel, dependent on a board revision that can only be determined at run time.
What's the established way of setting up the boot of the kernel, from within U-Boot, to deal with various hardware layouts that can only be determined at boot time?
The bootm command takes three parameters:
bootm ${kernel_addr} ${ramdisk_addr} ${fdt_addr}
The third one is the address of the flattened device tree blob in memory. So if you have different device trees, either load them at different memory addresses and pass the right one to bootm, or load different blobs at that memory address.
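For example, a boot script along these lines picks the blob at run time (${board_rev} is assumed to have been set earlier, e.g. by GPIO or EEPROM detection code, and the file names are placeholders):
if test ${board_rev} = 2; then load mmc 0:1 ${fdt_addr} devicetree-rev2.dtb; else load mmc 0:1 ${fdt_addr} devicetree-rev1.dtb; fi
bootm ${kernel_addr} ${ramdisk_addr} ${fdt_addr}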
Late answer, but I recently had to deal with the same problem.
Using U-Boot, you can actually write a macro for that.
The u-boot environment variable for the device tree file is "fdtfile".
From there, you can define a macro that sets this variable according to your specific needs, for example:
setenv findfdt '
if test $mycondition = value1; then setenv fdtfile devicetree1.dtb; fi;
if test $mycondition = value2; then setenv fdtfile devicetree2.dtb; fi;
..'
Then you can just create a .txt file containing this macro and use the mkimage tool to create a binary image (.img) for U-Boot to load:
mkimage -T script -d macros.txt macros.img
You can of course wrap this macro with a more sophisticated one that will be executed at each boot.
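For reference, loading and running such a script image could then look like this (device, partition and load address are placeholders):
load mmc 0:1 ${loadaddr} macros.img
source ${loadaddr}
run findfdt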
I have a program which compiles and runs scripts.
To create a standalone version of the script, I reserve a large static buffer to hold the compiled script. The compiled script is copied into a copy of the program and it can then be run from that copy.
This works fine. It has some disadvantages however:
the buffer is static and takes up space even if there's no compiled program in it.
if the script to be included exceeds the buffer's size, I need to build a new version with a larger buffer.
I'd like to add the compiled script to the end of the program, but naively doing so doesn't work as the exe loader chokes on the new file size.
Is there a way to manipulate the exe so it would be acceptable for the loaders (mind this is a cross platform program)?
I would think that this is unlikely to be possible without being platform specific. Time for a common interface with different implementations (so the code that saves/loads the script is common, but the executable manipulation is specific).
On Windows you'll hit the problem that a running executable file is locked against modification. This can be worked around by working on copies (the only completely deterministic way to rename the copy back is to perform the move at boot, but scheduling a job might be acceptable).
On Windows the easiest way to add data to an image (executable or DLL) is using resources. Define a custom resource type, add it into the image (UpdateResource function) and later retrieve it with LoadResource.
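A rough sketch of both sides of that, with minimal error handling; the resource type and name strings ("COMPILEDSCRIPT", "SCRIPT") are made up for illustration:

#include <windows.h>

/* Illustration only: resource type and name are arbitrary placeholders. */
#define SCRIPT_RES_TYPE "COMPILEDSCRIPT"
#define SCRIPT_RES_NAME "SCRIPT"

/* Embed a compiled script into a copy of the executable. */
static BOOL embed_script(const char *exe_copy, const void *data, DWORD size)
{
    HANDLE h = BeginUpdateResourceA(exe_copy, FALSE); /* FALSE = keep existing resources */
    if (!h)
        return FALSE;
    if (!UpdateResourceA(h, SCRIPT_RES_TYPE, SCRIPT_RES_NAME,
                         MAKELANGID(LANG_NEUTRAL, SUBLANG_NEUTRAL),
                         (LPVOID)data, size)) {
        EndUpdateResourceA(h, TRUE);                  /* TRUE = discard changes */
        return FALSE;
    }
    return EndUpdateResourceA(h, FALSE);              /* FALSE = write changes */
}

/* Retrieve the embedded script from the running module. */
static const void *load_script(DWORD *size)
{
    HRSRC res = FindResourceA(NULL, SCRIPT_RES_NAME, SCRIPT_RES_TYPE);
    if (!res)
        return NULL;
    HGLOBAL blk = LoadResource(NULL, res);
    if (!blk)
        return NULL;
    *size = SizeofResource(NULL, res);
    return LockResource(blk);
}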
You said "script", so I suppose you have a separate file containing the script (a text file?). You could write a simple program that reads the script file and convert it in a compilable form (e.g. a C source containing the initialization of an array of byte). There are also tools you can use to convert an arbitrary file into a linkable object (.o or .obj). In the past I have used the command "objcopy" from GNU bimutils. In particular, on linux:
objcopy -I binary -O elf32-i386 mydata mydata.o
This command creates an object and three public symbols you can use to find the start, the end and the size of your data block:
_binary_mydata_start
_binary_mydata_end
_binary_mydata_size
Something similar may also work on Windows, provided that you install a Windows version of GNU binutils (e.g. via Cygwin).
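On the C side you can then declare those symbols as externs and reference them; a sketch matching the objcopy invocation above (the symbols are addresses, so the size is best computed from start and end):

#include <stdio.h>
#include <stddef.h>

/* Symbols created by objcopy for the input file "mydata". */
extern const char _binary_mydata_start[];
extern const char _binary_mydata_end[];

int main(void)
{
    size_t size = (size_t)(_binary_mydata_end - _binary_mydata_start);
    printf("embedded data: %zu bytes, first byte 0x%02x\n",
           size, (unsigned char)_binary_mydata_start[0]);
    return 0;
}

Link mydata.o together with this program and the data travels inside the executable.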
I have a C program built using Autotools. In src/Makefile.am, I define a macro with the path to installed data files:
AM_CPPFLAGS = -DAM_INSTALLDIR='"$(pkgdatadir)"'
The problem is that I need to run make install before I can test the binary (since it needs to be able to find the data files).
I can define another macro with the path of the source tree so the data files can be located without installing:
AM_CPPFLAGS = -DAM_INSTALLDIR='"$(pkgdatadir)"' -DAM_TOPDIR='"$(abs_top_srcdir)"'
Now, I would like the following behavior:
If the binary was installed via make install, use AM_INSTALLDIR to fetch data files.
If the binary was not installed, use AM_TOPDIR to fetch data files.
Is this possible? Is there a better approach to this problem?
What I do (in http://rhdunn.github.com/cainteoir/) is:
const char *basedir = getenv("CAINTEOIR_DATADIR");
if (!basedir)
basedir = DATADIR "/" PACKAGE; // e.g. /usr/share/cainteoir-engine
and then run it (in tests/harness.py) as:
CAINTEOIR_DATADIR=`pwd`/data src/apps/metadata/metadata test_file.epub
This then allows the user to change the location of where to get the data if they wish.
Making the program able to use a run-time configuration as proposed by reece is a good solution. If for some reason you do not want it to be configurable at run time, a common solution is to build a test binary differently from the installed binary (there are other problems associated with this, in particular ensuring that the program you are testing behaves consistently with the program that is installed). An easy way to do that is something like:
bin_PROGRAMS = foo
check_PROGRAMS = test-foo
test_foo_SOURCES = $(foo_SOURCES)
AM_CPPFLAGS = -DINSTALLDIR='"$(pkgdatadir)"'
test_foo_CPPFLAGS = -DINSTALLDIR='"$(abs_top_srcdir)"'
Rather than using a binary with a different name, you might want to have a dedicated tests directory and build the program using the same name as the original.
Note that I've changed the name from AM_INSTALLDIR to INSTALLDIR. Automake reserves names beginning with "AM_" for its own use, and by using that name you are stomping on Automake's namespace.
A bit of additional information first: The data files are under active development, and I have various scripts that need to call binaries using local data files, whereas installed binaries should use stable, installed data files.
My original solution made use of an environment variable, as proposed by reece. But I didn't want to manage setting up environment variables in various places, and I didn't want any risk of the wrong data files being picked up due to a mistake.
So the solution I ended up with was to define macros for both locations at build time, and add a flag (-local) to the binaries to force local data files to be used.
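For illustration, the selection might look roughly like this; INSTALLDIR and TOPDIR stand for the two build-time macros (named to avoid the reserved AM_ prefix), and the -local flag is specific to my setup:

#include <string.h>

/* INSTALLDIR and TOPDIR are expected to come from AM_CPPFLAGS, e.g.
 *   -DINSTALLDIR='"$(pkgdatadir)"' -DTOPDIR='"$(abs_top_srcdir)"'   */
static const char *data_dir(int argc, char **argv)
{
    int i;
    for (i = 1; i < argc; i++)
        if (strcmp(argv[i], "-local") == 0)
            return TOPDIR;      /* in-tree data files, under active development */
    return INSTALLDIR;          /* stable, installed data files */
}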
I am trying to understand the different file extensions used by the PFXplus PowerFlex database. Could someone please briefly tell me what each file is for?
.k1
.k2
.k3
...
.k13
.k14
.k15
.fd
.def
.hdr
.prc
.pc3
Data files:
OK, so .dat is the data file.
.k1 -> .k15 are index files.
These are the critical data files for runtime (combined with filelist.cfg, pffiles.tab or similar to define what files are available overall).
.fd is the file definition, needed for compiling programs
.tag (which you did not mention) is needed only if you need to access field names at run time (such as using a generic report tool)
.def is the file definition in human readable form, and is not needed by any process but is produced so a programmer or user can understand the file structure.
Run time:
The .ptc files are the compiled threads interpreted by the powerflex runtime.
The .prc file is a resource file that is used at runtime in conjunction with the .ptc file - it defines how a character-based program is to look in a GUI environment in "g-mode". It was the cheap way to upgrade character-based programs when Windows first started getting popular usage.
.hdr and .pc3 escape me at the moment, but are vaguely familiar - .hdr is probably another data file used with compression or special field types for later versions of pfxplus. .pc3 may in fact be the .ptc files...
I'm trying to figure out how to detect whether a binary has been compressed with UPX. I am using a simple CRC to detect whether my app was changed in any way, and if the CRC fails only because a packer changed the size, I would like to treat that case as OK.
Right now I am starting with UPX.
So, is there any marker in the binary? Are there any specific JMP or other instructions that I should search for?
This will mainly be tested in Windows, but in the future I might add it to Linux as well.
Any help (and code) is appreciated.
ADDED:
I found that in the 10 binaries I checked, the
AddressOfEntryPoint
Import Directory RVA
Resource Directory RVA
either point to UPX or have an offset that is set by UPX. Any information on this?
Thanks
Download the UPX source code from the UPX homepage and open the src/p_w32pe.cpp file; the function you are looking for is:
int PackW32Pe::canUnpack()
This function checks if the file is compressed with win32 upx.
You might try checking the section names of the executable. UPX changes them to UPX0, UPX1, UPX2, I believe.
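A sketch of that check in C, reading the PE section headers straight from the file; note that UPX can be told to use different section names, so treat this as a heuristic only:

#include <windows.h>
#include <stdio.h>
#include <string.h>

/* Return 1 if any section name starts with "UPX", 0 if none, -1 on error. */
static int has_upx_sections(const char *path)
{
    IMAGE_DOS_HEADER dos;
    IMAGE_FILE_HEADER fh;
    DWORD signature;
    int found = 0;
    WORD i;

    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;

    if (fread(&dos, sizeof dos, 1, f) != 1 || dos.e_magic != IMAGE_DOS_SIGNATURE) {
        fclose(f);
        return -1;
    }
    fseek(f, dos.e_lfanew, SEEK_SET);
    if (fread(&signature, sizeof signature, 1, f) != 1 || signature != IMAGE_NT_SIGNATURE ||
        fread(&fh, sizeof fh, 1, f) != 1) {
        fclose(f);
        return -1;
    }

    /* Section headers follow the optional header (works for PE32 and PE32+). */
    fseek(f, fh.SizeOfOptionalHeader, SEEK_CUR);
    for (i = 0; i < fh.NumberOfSections && !found; i++) {
        IMAGE_SECTION_HEADER sec;
        if (fread(&sec, sizeof sec, 1, f) != 1)
            break;
        if (memcmp(sec.Name, "UPX", 3) == 0)   /* UPX0, UPX1, UPX2, ... */
            found = 1;
    }
    fclose(f);
    return found;
}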