I want to upgrade my systems in the field using U-Boot FIT images.
My system is custom firmware booted by U-Boot. So far the FIT format works very well: it provides a SHA-verified upload. I am using U-Boot scripts to update things on the target.
One intriguing image type defined in the U-Boot docs is "filesystem". Its actual content could be several things: maybe a tarred bundle of files, or an actual filesystem image carried as one chunk inside the FIT.
In another FIT question, Tom Rini implied that a filesystem is really just a binary blob: what goes into it is my problem, and U-Boot can then just mmc write ... or usb write ... it to create the new filesystem on some partition. Is this really the case?
How can I build a filesystem image (say FAT) on a host build computer for packaging into a FIT?
Thanks, Steve
The creation of a filesystem image will depend on the filesystem itself. In many cases, build systems such as OpenEmbedded or buildroot can help you here as they will create the images for you.
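As a rough host-side sketch (assuming dosfstools, mtools and U-Boot's mkimage are available; the file names, sizes, the "fatfs" node name and the arch value are placeholders): build the FAT image as a plain file, populate it without mounting, and reference it from the FIT source as a "filesystem" image.

# Create an empty 16 MiB FAT image and copy files into it (no root/loop mount needed):
dd if=/dev/zero of=fat.img bs=1M count=16
mkfs.vfat fat.img
mcopy -i fat.img myapp.bin config.txt ::/

# Minimal FIT source wrapping the image; node and hash naming varies with U-Boot/dtc version:
cat > update.its <<'EOF'
/dts-v1/;
/ {
    description = "Field update";
    #address-cells = <1>;
    images {
        fatfs {
            description = "FAT filesystem image";
            data = /incbin/("fat.img");
            type = "filesystem";
            arch = "arm";
            compression = "none";
            hash-1 { algo = "sha256"; };
        };
    };
};
EOF
mkimage -f update.its update.itb

On the target, a U-Boot script can then locate the fatfs subimage inside the loaded FIT and mmc write (or usb write) it to the destination partition, as described in the question.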
I am a bit confused about .bin files. In Linux we normally use ELF and .ko files when upgrading a box or copying software onto it. But when upgrading the NAND flash in a router or other big networking vendors' products, why is a .bin file always preferred? Is it something like a converged mix of all the OS-related files? Is it possible to see the contents of a .bin file, and how do I play with it? Is it something like the contents of a BootROM? How is it prepared? How do we create and test one? How does Linux support this? Are there any historical reasons behind it?
Speaking of routers, those files are usually just snapshots of a router's flash memory, probably compressed and with some headers added. Typical contents are a compressed squashfs image or simply a gzip'ed snapshot of memory.
There is no such thing as a .bin format; it's just a custom array of bytes, and every vendor interprets it in some vendor-specific way. Basically this extension means "it's not your business what's in the file, our device/software will handle it". You can try to identify (think: reverse-engineer) what's actually in those files by using the file utility or just looking at them through a hex editor and trying to guess what's going on.
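If you want to poke at such a file, a few standard tools go a long way (binwalk is a third-party firmware-analysis tool; the file name here is just an example):

file firmware.bin              # sometimes recognises known headers (uImage, squashfs, gzip, ...)
hexdump -C firmware.bin | head # eyeball magic numbers and header fields
strings firmware.bin | less    # look for version strings, file names, shell commands
binwalk firmware.bin           # scan for embedded images and filesystems, if installed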
I need to build a custom SUSE Linux NFS server that compresses certain files as they are stored on disk and decompresses them as they are read back. This needs to be transparent to the remote users of the file system, meaning that if a user saves a 10MB file named XYZZY.tif to /archiveDirectoryOnNFSServer, then when they do an ls -l on that mounted directory they will see a 10MB file called XYZZY.tif, even though the actual file stored on the NFS server's disk will be XYZZY.tif.compressed and only 2MB in size.
I'm expecting that I need to build this as a driver that sits below the NFS server software stack, but I'm having difficulty finding where to start. Are there existing NFS servers that provide this level of customization through APIs? Will I need to modify the source of an open-source NFS server, and if so, is there one that would be easiest to start with, and is it modularly structured so that this will be straightforward? I'm having difficulty locating relevant content on the internet, and any pointers will be greatly appreciated.
IMO that kind of functionality is absolutely not the NFS server's responsibility (an NFS server should, well, serve files over NFS), but the underlying filesystem's. However, there's not that much choice in Linux-land; you could start by checking out fusecompress and btrfs.
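As a rough sketch of the btrfs route (the device, mount point and export options are placeholders): format a partition with btrfs, mount it with transparent compression, and export that directory over NFS as usual. Clients still see the full logical size in ls -l, while the data is stored compressed on disk.

mkfs.btrfs /dev/sdb1                                 # hypothetical spare partition
mkdir -p /srv/archive
mount -o compress=zlib /dev/sdb1 /srv/archive        # transparent compression on this mount
echo '/srv/archive 192.168.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra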
This post is a bit old, so you may already be aware of some options here, but there are a couple of others (both server-side).
http://zfsonlinux.org/
The ZFS filesystem has built-in compression. I typically use lzjb as it is the fastest compression algorithm and does a reasonable job (MySQL databases get 2-4x compression; filesystems with non-compressed data get around 4x). You have a choice of algorithm depending on how much CPU time you wish to spend on compression.
If you want different file types compressed differently, then you may consider layering Gluster on top of a set of ZFS filesystems.
Gluster will allow you to store certain file types (by extension) on different underlying filesystems.
In this case, you specify the underlying filesystem as a ZFS volume with the particular options you need (for example, .zip and .png go on an uncompressed filesystem, while things you write once and read many times, like static HTML files, might go on a higher compression setting -- you'll pay once when it's written, but reads should be really fast since it scans fewer disk blocks and decompression is very fast).
ZFS will manage the NFS mounts if you use it as your NFS server -- you won't want this if you layer Gluster on top.
It's easy to specify other attributes dynamically per filesystem (atime/noatime, the number of copies if you want redundancy beyond your normal RAID; you can add SSDs as cache devices to get more performance).
With these solutions you still send the full uncompressed files over the wire, so it doesn't make up for network performance, but it gives you a lot of options if you're trying to speed up disk I/O or get more utilization out of your drives.
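A rough sketch of the ZFS side of this (the pool and dataset names are placeholders; requires the zfsonlinux packages):

zfs create -o compression=lzjb tank/archive     # fast, moderate compression
zfs create -o compression=gzip-9 tank/static    # write-once, read-many data
zfs set sharenfs=on tank/archive                # let ZFS manage the NFS export
zfs get compressratio tank/archive              # see how much you are actually saving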
I used Sphinx4 for some time and it really fits my needs. I load a recognizer, pass the audio data to it and use the recognized string in my application.
Right now I'm working on a C application (C++ is unfortunately not an option) where I need something similar, and I thought that I could use Sphinx3, which is written in C.
The problem is that I don't really know how it is used inside an application, and there is no "Hello World" example like the one Sphinx4 provides.
I already compiled and installed sphinxbase and sphinx3 and now I can include the sphinx header files in my application.
Now to my questions:
Is there a "simple" and well documented example application that uses sphinx3 from a C environment?
How can I load up the sphinx3 engine and call a recognizer with my binary audio data?
OR: Do I need to start an application like "sphinx3_decode" and call it from my own application? If so, is there an example application for that?
Thank you in advance!
Best regards,
Robert
It's not recommended to use Sphinx3. From the website:
Sphinx-3 is CMU’s large vocabulary speech recognition system. It’s
older C based decoder that we continue to maintain. It’s planned to
make it obsolete in the future, it’s still most accurate decoder for
large vocabulary tasks. We are using it as a baseline to check the
recognizer accuracy. This decoder is only intended for researchers who
want to evaluate bleeding edge methods in ASR like tree search method.
If you need to use a decoder you should use pocketsphinx. You can find the tutorial and the API documentation on the website:
http://cmusphinx.sourceforge.net/wiki/tutorialpocketsphinx
http://cmusphinx.sourceforge.net/api/pocketsphinx/pocketsphinx_8h.html
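If you just want to sanity-check a model and an audio file before writing any C, the pocketsphinx_continuous tool that ships with pocketsphinx can decode a file from the command line. A rough sketch; the model paths are examples and will differ on your system:

pocketsphinx_continuous \
    -infile recording.wav \
    -hmm  /usr/local/share/pocketsphinx/model/en-us/en-us \
    -lm   /usr/local/share/pocketsphinx/model/en-us/en-us.lm.bin \
    -dict /usr/local/share/pocketsphinx/model/en-us/cmudict-en-us.dict

For embedding in your own C code, the pocketsphinx.h API linked above (ps_init, ps_process_raw, ps_get_hyp, ...) follows the same pattern: configure a decoder, feed it raw audio, then read back the hypothesis string.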
I recently worked on an integrated project for the Punjabi language.
Here are some steps that we used...
First we recorded the Punjabi audio data in a soundproofed room at a 16000 Hz sample rate.
Then we took the recorded data and segmented it using the Praat software into small .wav and .raw files of 2 to 30 seconds, and saved them in a folder named "train".
Then we took a Linux system (Ubuntu), installed the required build tools such as autoconf, automake, etc., and untarred Sphinx 3 along with four packages: cmuclmtk, pocketsphinx, sphinxbase and sphinxtrain.
Then, to match the small .wav files, we made the various control files: transcription, dic, phone, filler, fileids, ccs, etc.
Then we opened the terminal and typed "sphinx_fe" to check whether Sphinx was functional or not.
Then we created a folder named "man" and changed into its path in the terminal.
Then we ran the command "sphinxtrain -t man setup". Running this command creates a folder named "etc" inside the "man" folder, containing the files "feat.params" and "config".
Changes were made in the config file according to our data.
Then we moved all the files that we created before (transcription, dic, etc.) into the "etc" folder located in the "man" folder.
Then we placed the "lang1.sh" script in the "etc" folder and the remaining four scripts in the "man" folder.
Then we opened the path of the "etc" folder in the terminal and ran the command "lang1.sh".
Then we ran a series of commands in the terminal: "mfcgen2.sh", then "verify3.sh", then "hmm4.sh", and finally "end-test.sh" to get the final result.
If you have worked with Sphinx 4 then you may already know about the files mentioned in the steps above. I hope this helps you.
I am working on an OS-independent file manager in C. I managed to copy files, links and directories, but I am not sure how to copy devices. I would appreciate any help.
To create a device file, use the mknod(2) syscall. The struct stat structure will give you the major and minor device numbers for an existing device in st_rdev.
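The command-line equivalents make the idea concrete (the numbers below are examples only; read the real ones from stat first, which with GNU stat prints them via %t/%T in hex):

stat -c 'type=%F major=0x%t minor=0x%T' /dev/sda1    # inspect the existing node
mknod /tmp/sda1-copy b 8 1                           # recreate a block device node with the same numbers
chmod 660 /tmp/sda1-copy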
Having said that, there is little value in "copying" a device because the device file itself doesn't contain anything useful. The major and minor numbers are specific to the OS on which they exist.
It's not really a useful feature, IMHO. tar(1) needs to be able to do it as part of backing up a system, and setup programs need to be able to create them for you when setting up your system, but few people need to deal directly with device files these days.
Also, modern Linux systems are going to dynamic device files, created on the fly. You plug in a device and the device files appear; you unplug it and they disappear. There is really no use in being able to copy these dynamic files.
dd is your friend (man dd)
dd if=/dev/sda1 of=/some_file_or_equally_sized_partition bs=8192
If you want to copy the device file itself (the node, not the device's contents), add -R so that cp recreates special files instead of reading from them:
cp -pR device-filename new-filename
e.g.:
cp -pR /dev/sda1 /tmp/sda1
Those are then both equivalent device files, and can be used to access the device.
If you want to do this from C, use mknod() ... see "man 2 mknod".
This might be useful (-d preserves links, -p preserves mode/ownership/timestamps, -R copies special files as special files rather than reading their contents):
cp -dpR devices /destination_directory
cp -dpR console /mnt/dev
You don't. Just filter them out of the view such that it can't be done.
Use the stat function to determine the file type.
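For example, a hedged sketch of that filtering in shell terms (in C you would check S_ISBLK()/S_ISCHR() on st_mode from stat()):

for f in /dev/*; do
    if [ -b "$f" ] || [ -c "$f" ]; then
        continue    # block or character device: leave it out of the view
    fi
    printf '%s\n' "$f"
done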
Check whether you have the udev package; if you do, chances are that devices are generated on the fly. From the package description:
udev - rule-based device node and kernel event manager
udev is a collection of tools and a daemon to manage events received from the
kernel and deal with them in user-space. Primarily this involves creating and
removing device nodes in /dev when hardware is discovered or removed from the
system.
Events are received via kernel netlink messages and processed according to
rules in /etc/udev/rules.d and /lib/udev/rules.d, altering the name of the
device node, creating additional symlinks or calling other tools and programs
including those to load kernel modules and initialise the device.
Which configuration management tool is best for FPGA designs, specifically Xilinx FPGAs programmed with VHDL and C for the embedded (MicroBlaze) software?
There isn't a "best", but configuration control solutions that work for software will be OK for FPGAs - the flow is very similar. I use Subversion at work and git at home, and wrote a little on 'why' at my blog.
In other answers, binary files keep getting mentioned. The only binary files I deal with are compilation products (equivalent to software object files and executables), so I don't keep them in the version control repository; instead, for each release/tag I create a zipfile with all the important (and irritatingly slow to reproduce) ones in it.
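For what it's worth, that zipfile-per-tag habit is easy to script; the output names below are only examples of typical slow-to-rebuild products (bitstream, bitgen report, timing report):

git tag -a v1.4 -m "Release 1.4"
zip release-v1.4.zip build/top.bit build/top.bgn build/top.twr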
I don't think it much matters what revision control tool you use -- anything that you would consider good in general will probably be OK here. I personally use Git for a sizable Verilog + software project, and I'm quite happy with it.
What will bite you in the ass -- no matter what version control you use -- is this: The Xilinx tools don't generally respect a clean division between "input" and "output" or between (human edited) "source" and (opaque) "binary." Many of the tools like to store some state information, like a last-run time or a hash value, in their "input" files meaning that you'll get lots of false changes. Coregen does this to its .xco files, and project navigator (the main GUI) does this to its .xise files. Also, both tools have a habit of inserting or removing lines for default-valued parameters, seemingly at random.
The biggest issue I've encountered is the work-flow with Coregen: In many cases, at least one of the following is true:
You have to manually edit the HDL files produced by Coregen.
The parameters that went into Coregen are stored somewhere other than the .xco file (usually in what looks like an output file).
You have to copy-and-paste the output from Coregen into your top-level design.
This means that there is no single logical source/master location for your input to the core-generating process. So even if you have the .xco file under version control, there's no expectation that the design you're running corresponds to it. If you re-generate "the same" core from its nominal inputs, you probably won't get the right outputs. And don't even think about merging.
I suggest CM tools that support version labeling and binary files. Most Software CM applications are fine with ASCII text files. They may just store a "difference" file rather than the entire file for updates.
My recommendations: PVCS, ClearCase and Subversion. DO NOT USE Microsoft SourceSafe. I don't like it because it only supports one label per revision.
I've seen Perforce and Subversion used in a couple of FPGA-intensive companies.
We use Perforce, and its great. You can have your code that lives in Linux-land checked in side-by-side with your Specs and Docs that live in Windows-land. And you get branching, labels, etc.
I've seen everything from Clearcase to RCS used, and it is really all okay for this kind of thing. The important thing is to get a good set of check-in policies established for your group, and make sure they stick to it.
And have automated nightly regressions. That way, when someone breaks the rules, they can be identified and publicly shamed.
I have personally used Perforce, Subversion, git and ClearCase for FPGA projects. Since VHDL and C are just text files, any of them works fine. However, be sure to capture the other project and constraint files and any libraries you use.
Also think about what to do with the outputs, e.g. log files and bitstreams. Both tend to be big, and the bitstreams are binaries.
Previously I used Subversion, but I switched to git two years ago. Git handles FPGA design files just as well as it handles every other text and binary file. Git is all you need for version controlling your files and artifacts.
For building the designs, I recommend just using a single ISE project called "ise" (living in a subdirectory called "ise/"). You can take a look at my (very modest) FPGA open-source project on github for the file layout. I don't bother storing the ISE files at all since they are easy to regenerate. The only things I save are the Verilog files and some ISIM waveform config files. In other projects that use coregen I save the coregen.cgp project file and all of the *.xco scripts for regenerating cores. Then I use a Makefile for actually running coregen on the *.xco files. There are a few other Xilinx-specific files you should version control too: *.ucf, *.coe, *.xcf, etc.
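For example (the coregen flags and file names below are from memory and may differ between ISE versions), each Makefile rule boils down to a batch invocation per .xco, and the version-controlled set stays small:

coregen -b fifo_32x512.xco -p coregen.cgp                       # regenerate one core from its script and project file
git add coregen.cgp *.xco *.v *.ucf *.coe *.xcf isim/*.wcfg     # the true sources; everything else is a build product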
I experimented with using Makefiles and the Xilinx command-line tools but found that ISE did a much better job of tracking dependencies and calling the tools with the right arguments. Just don't make the mistake of trying to version control your ise/ project files or you will go mad. Xilinx has something like 300 different file types which change every release. If you want to save a file, you can try the ISE project file itself with a .xise extension. Anything that is hard to recreate, like the golden bitfile that you know works and took six hours to build, you might want to copy and configuration-manage explicitly.