How to recreate a rootfs with an .ext4 file - filesystems

I have an embedded device with an emmc and a qspi-flash, both of which have an operating system on them.
From the OS running in the qspi-flash, I have a rootFS.ext4 file, the entire root filesystem for the OS on the emmc. From the qspi, I can see /dev/mmcblk1p3, which is the rootfs partition of the emmc.
I am trying to do
dd if=root.ext4 of=/dev/mmcblk1p3 bs=1M
Unfortunately, when I then boot from the eMMC, the kernel complains that it cannot mount mmcblk1p3.
What is the correct way to completely erase the original filesystem and write the new filesystem image onto the partition? Am I missing a step?
The eMMC uses a GPT partition table.

The rootfs file I was using was incorrect.
I also needed to clear the existing filesystem by running mkfs.ext4 on the partition first.
After that, the dd worked and the new rootfs was copied. I was able to boot successfully from the eMMC and see the new version of the rootfs.
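For anyone else hitting the same problem, the full sequence looks roughly like this (a minimal sketch; root.ext4 and /dev/mmcblk1p3 are the names from my setup, and the extra sync/fsck steps are just precautions, so adjust as needed):
umount /dev/mmcblk1p3          # make sure nothing has the partition mounted
mkfs.ext4 -F /dev/mmcblk1p3    # wipe the old filesystem
dd if=root.ext4 of=/dev/mmcblk1p3 bs=1M conv=fsync
sync
e2fsck -f /dev/mmcblk1p3       # sanity check before rebooting
Strictly speaking the dd overwrites the old superblock anyway, so the mkfs.ext4 step mostly just guarantees no stale metadata is left behind.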

I2c eeprom file missing in user-space - SFP module

I have a Linux kernel & SFP/I2C driver issue.
I am using a buildroot linux kernel for an embedded board.
I need to be able to read the eeprom file of the SFP i2c device.
1. Working case:
When the SFP module is inserted in my development board from the start (before the kernel loads), then once startup completes I can see and read the eeprom file at the path: /sys/class/i2c-adapter/i2c-1/1-0050/eeprom
On startup, the kernel prints the I2C device scan result:
2. Not working case:
If there is no SFP module inserted at startup and the kernel completes the boot process, then when I insert the SFP module afterwards, I observe that the path:
/sys/class/i2c-adapter/i2c-1/1-0050/ does NOT include the eeprom file.
The device tree part of the sfp-eeprom code:
My guess is that the SFP driver is responsible for detecting when the SFP module is inserted and should then trigger the creation of the eeprom file.
What am I missing?
Some binding code in the SFP driver to trigger the I2C scan, or something else?
Any suggestion?
Thanks in advance.
A possible workaround for this issue was found: use the ethtool -m interface.
From the ethtool man page:
-m --dump-module-eeprom
Retrieves and if possible decodes the EEPROM from plugin modules, e.g SFP+, QSFP
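In practice that means something like this (a sketch; eth0 is just a placeholder for whichever network interface the SFP cage sits behind):
ethtool -m eth0                  # decoded view of the module EEPROM
ethtool -m eth0 raw on hex on    # raw hex dump of the same data
Because this goes through the network driver's module-EEPROM hooks rather than the i2c sysfs file, it does not depend on the eeprom file ever appearing under /sys.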

File extracted from ZIP not recognized until re-save

Q: Why would re-saving a file be different from extracting it directly from a zip file, particularly on Windows?
Context
I have an Angular application that prepares a text file for import into a commercial machine. For user convenience, we provide the file inside a zip file so that the required folder structure can be provided to the user. They write this file to a USB drive and use that to import into the machine.
Problem
If the downloaded zip file is extracted directly onto the USB (to get the file and the required folder structure), the machine cannot recognize the embedded text file.
Troubleshooting
If I open the file in any text editor, add a space, delete the space, and re-save the file on the USB, then the machine will recognize the file. Alternatively, if I extract the zip onto the local file system, then copy the folder structure from the local file system to the USB, then the machine also will recognize it.
If I switch to Linux, then a 'write out' from nano will fix the file. If I use the touch command on the file, the problem remains.
Suspecting a whitespace/line-ending issue, I've tried several diff tools which reveal no apparent differences:
$ diff original.txt resaved.txt (Linux)
$ vbindiff original.txt resaved.txt (Linux)
> fc /b original.txt resaved.txt (Windows 7)
Other info:
Angular version: 5.2.10
Zip Utility in angular: JSZip 3.1.5
Unzip Utils: 7-Zip and Native Windows Explorer extract
JSZip code:
const zip = new JSZip();
zip.folder('FolderA/FolderB/FolderC').file('FILE.TXT', new File([contentString], 'TEMP.TXT', { type: 'text/plain' }));
zip.generateAsync({ type: 'blob' })
  .then(function (content) {
    saveAs(content, 'ZipFile.ZIP');
  });
At this point, I'm out of ideas. Hoping someone here may have some insight into this odd behavior.
TL;DR: Check the file attributes (e.g. Archive, Read-Only, Hidden, System, etc).
Our system was specifically looking for the Archive bit and modifying the file in any way set this bit.
This was an ugly one to ferret out, but chatting with our embedded systems programmer for a bit led to the answer.
Our machine was specifically searching for the archive bit (a Windows file attribute) when looking for files to import. This bit is a relic of the DOS/FAT era (carried over into NTFS) and is nearly obsolete. For all intents and purposes it is a dirty flag used to mark files that should be archived/backed up in the next backup run. There are much better ways to do this, so it has fallen out of style.
However, for whatever reason, our system searches only for files with that bit set. That's why opening/copying/moving the file fixed the problem: altering the file in any way sets the archive bit (dirty flag).
If you want to learn more about it, see here and here.
So, the moral of the story is to check these file attributes if you have a similar issue.
We are using the Harmony USB driver from Microchip, so this may be a nuance of that tool (or maybe just an artifact from one of the online examples).
You can see this using the file properties dialog in Windows Explorer or with the > attrib <file> command in a Windows command prompt.
To fix:
Windows: You can set the value from the command prompt using > attrib +a <file> or remove it using > attrib -a <file>.
If using node.js on a Windows host, you can use the winattr library from NPM to manipulate these attributes.
Linux: You can use $ getfattr and $ setfattr to set the bit (see here and here).
Note: the answers I linked say to use $ setfattr -h -v 0x00000020 -n system.ntfs_attrib_be <target-file>, but I got an "operation not supported" error when I tried the same. I ended up using the Java solution, but when I inspected the file afterward, it seemed the equivalent command would have been $ setfattr -n user.DOSATTRIB -v 0sMHgyMAA= <target-file>. Your mileage may vary, but I offer it in case it helps anyone.
Java: You can also manipulate these attributes from Java on any platform (java.nio.file exposes them as "dos:*" file attributes).
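Putting it together, the quick check/fix looks like this (FILE.TXT is a placeholder for your own file; the Linux xattr value is the one from my experiment above, so treat it as a starting point):
> attrib FILE.TXT                                      (show the current attributes)
> attrib +a FILE.TXT                                   (set the Archive bit)
$ getfattr -n user.DOSATTRIB FILE.TXT                  (read the DOS attributes on an ntfs-3g/FAT mount)
$ setfattr -n user.DOSATTRIB -v 0sMHgyMAA= FILE.TXT    (set Archive, i.e. 0x20)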

How to setup Flink Local Cluster

I am trying to use Flink locally on Linux and Windows for my bachelor thesis. I have found these steps for the local setup:
https://ci.apache.org/projects/flink/flink-docs-release-1.1/quickstart/setup_quickstart.html#start-a-local-flink-cluster
When I try this, I get only errors like this:
-bash: bin/start-local.sh: No such file or directory
When I go to the directory of the start-local.sh file, I then get
/flink-1.1.2/flink-dist/src/main/flink-bin/conf/flink-conf.yaml: No such file or directory
Same problem with Windows.
What do I have to change so that it works?
It seems that you have downloaded the sources. You need to download one of the binaries from here: https://flink.apache.org/downloads.html#binaries. Then follow the given instructions for the local setup.
Of course, if you want to build Flink from source, use this guide: https://github.com/apache/flink#building-apache-flink-from-source.
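For reference, the local setup with a binary release boils down to something like this (a sketch; the exact archive name depends on the Hadoop/Scala variant you pick):
tar xzf flink-1.1.2-bin-*.tgz
cd flink-1.1.2
bin/start-local.sh                    # starts a local JobManager; web UI on http://localhost:8081
tail log/flink-*-jobmanager-*.log     # quick check that it started
start-local.sh lives in the binary distribution's bin/ directory (in the source tree it only exists under flink-dist/src/main/flink-bin/), which is why the quickstart commands fail when run from the sources.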

Porting eCos to i386

I am trying to port eCos on an i386 PC.
I downloaded a prebuilt redboot.bin from
http://ecos.sourceware.org/ecos/boards/redbootbins/x86pc/
and put it onto a USB disk using
dd conv=sync if=redboot.bin of=/dev/sdb1
After booting the target from USB, I always get the string "IA2!" on the target monitor, and I receive nothing on the serial port at 38400 8N1.
I tried using i386-elf-gdb, but it is not able to connect to the target and starts printing "Ignoring error packet, Continuing..."
I also tried to build RedBoot using configtool for i386, but it is only able to build the library; when I try Tests, it gives ERROR: multiple definition of cyg_start().
I am very new to eCos, and I don't know what I am doing wrong!
OK, I figured out how to boot RedBoot on a target i386 PC with a Realtek RTL8139 Ethernet card.
Install GRUB on the USB stick:
mkdir /mnt/USB && mount /dev/sdx1 /mnt/USB
grub-install --force --no-floppy --boot-directory=/mnt/USB/boot /dev/sdx
Build RedBoot using ecosconfig. Note that by default only a limited range of PCI buses (8) is scanned; if your device sits on a higher bus, increase the PCI bus range in pci.h. My Realtek Ethernet card was on bus 10, dev 10, so I had to increase the range to 11 so that RedBoot finds the card on bootup.
ecosconfig new pc redboot
configtool ecos.ecc
add common ethernet support
Build Library
Copy redboot.elf onto the USB stick.
At the GRUB startup menu, drop to the command line and run:
insmod multiboot
multiboot /redboot.elf
boot
That's it. RedBoot will use BOOTP to obtain an IP address, and then I can test RedBoot commands like ip_address, reset, ping, version, etc.
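If you prefer a menu entry over typing those commands at the GRUB prompt every time, something along these lines should work (a sketch; it assumes redboot.elf goes in the root of the USB stick and the stick is still mounted at /mnt/USB as above; on some distros the config directory is grub2 instead of grub):
cp redboot.elf /mnt/USB/
cat > /mnt/USB/boot/grub/grub.cfg <<'EOF'
set default=0
set timeout=3
menuentry "RedBoot" {
    insmod multiboot
    multiboot /redboot.elf
}
EOF
umount /mnt/USB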

/dev/ttyS0 can't be opened in Qemu

I am working with QEMU 1.1.0, emulating a Versatile Express board with an ARM Cortex-A9. I have managed to launch a simple "Hello World" example following these instructions:
http://balau82.wordpress.com/2012/03/31/compile-linux-kernel-3-2-for-arm-and-emulate-with-qemu/
but now I want to create filesystem by myself.
I decided to use Buildroot, version 2012.05, and I've configured it to create the toolchain, kernel, and filesystem image for an ARM Cortex-A9 target.
The kernel is version 3.3.7, and for the filesystem I've selected a non-compressed cpio image. The initrd argument in the call to qemu-system-arm points to
/output/images/rootfs.cpio
When I launch QEMU, the kernel boots, but then I get this message:
Initializing random number generator... done.
Starting network...
can't open /dev/ttyS0: No such device or address
can't open /dev/ttyS0: No such device or address
can't open /dev/ttyS0: No such device or address
...
All I can do is terminate QEMU.
I have checked the contents of rootfs.cpio like this:
cpio -t < rootfs.cpio
and saw that there is /dev/ttyS0.
Have I missed something when configuring the filesystem? Or should I use the filesystem in
/output/target
to somehow create the device node(s) there (Buildroot does not do that), and then rebuild the filesystem image?
I'm new to Buildroot, so any hint or suggestion is more than welcome.
Extract the rootfs and run ls -al /dev/ttyS0 to check its major and minor numbers. If the major number is not the required one, it will not invoke the corresponding kernel driver, and in that case it is just a junk character device.
Also, can you post the whole log? (Copy the dmesg output, post it somewhere, and give the link here.)
And if you are sure that /dev/ttyS0 is there, then do the following steps (a concrete sketch follows these steps):
Extract (unpack using cpio) the rootfs.
Find out which init file the kernel is using as the parent process. If you are lucky it will be in the root directory, named init or initrc.
Open the init file in your favorite editor.
The first few lines of your init would look like:
::respawn:/sbin/getty -L 38400 tty1
::respawn:/sbin/getty -L 38400 tty2
::respawn:/sbin/getty -L 38400 tty3
::respawn:/sbin/getty -L 38400 tty4
Add ::respawn:/sbin/ls -al /dev and save the file. (We have added a list command to see what is inside the /dev directory.)
Reboot your system and check the console output/dmesg. See if /dev/ttyS0 is really missing.
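To make the extract-and-inspect part concrete, on the build host it would be roughly (a sketch; the rootfs.cpio path is a placeholder, point it at your buildroot output/images directory):
mkdir /tmp/rootfs && cd /tmp/rootfs
cpio -idv < /path/to/buildroot/output/images/rootfs.cpio   # unpack the initramfs
ls -al dev/ttyS0            # a normal 8250 serial node is a character device, major 4, minor 64
grep -n getty etc/inittab   # see which console a getty is spawned on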
