How to make a loadable file system over the read-only rootfs? - filesystems

I am working on embedded Linux. I am trying to protect my rootfs by making it read-only and mounting a filesystem image from the sdcard over the root.
I need both filesystems to be merged:
any writes should be redirected to the filesystem image on the sdcard,
while reads from the read-only rootfs remain possible.
I tried the following:
$ cd /media/sdcard
$ mount userfs /
$ cd /
$ echo a > a.txt
But I receive an error:
-sh: a.txt: Read-only file system
Can anyone help me implement the needed functionality?

To complete Ross's answer, this is how I added an overlayfs for /var/log:
add_overlayfs_mount() {
    mkdir -p ${IMAGE_ROOTFS}/data/overlay/log
    mkdir -p ${IMAGE_ROOTFS}/data/work/log
    echo '/dev/sda4 /data ext4 defaults 0 0' >> ${IMAGE_ROOTFS}/etc/fstab
    echo 'ofslog /var/log overlay defaults,x-systemd.requires=data,lowerdir=/var/log,upperdir=/data/overlay/log,workdir=/data/work/log 0 2' >> ${IMAGE_ROOTFS}/etc/fstab
}
ROOTFS_POSTPROCESS_COMMAND += "add_overlayfs_mount ; "
You can also use VOLATILE_BINDS in some situations:
VOLATILE_BINDS_append = " \
    /data/etc/hostname /etc/hostname \n\
"

Yes, overlayfs is exactly what you want.
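For example, assuming the sdcard is mounted at /media/sdcard and holds empty upper and work directories, a merged view can be mounted along these lines:
$ mkdir -p /media/sdcard/upper /media/sdcard/work /mnt/merged
$ mount -t overlay overlay -o lowerdir=/,upperdir=/media/sdcard/upper,workdir=/media/sdcard/work /mnt/merged
upperdir and workdir must be on the same writable filesystem; reads fall through to the read-only lowerdir, and any writes land in upperdir.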

Related

Output command result to file

A Windows cmd file has the following line:
wget "http://ftp.gnu.org/gnu/wget/wget-1.5.3.tar.gz" -P C:\temp >> dld.log
After executing it, the dld.log file is empty.
What is wrong with the output redirection?
The output of the wget run needs to end up in the dld.log file.
wget writes its progress output to stderr, mostly to keep it separate from the result data.
So the direct answer, to make the current redirect work, is to use 2>&1 to send the stderr stream to stdout, as in:
(wget "http://ftp.gnu.org/gnu/wget/wget-1.5.3.tar.gz" -P "C:\temp")>>dld.log 2>&1
However, wget has logging built in, which makes more sense here. The switch is -o / --output-file:
wget "http://ftp.gnu.org/gnu/wget/wget-1.5.3.tar.gz" -P "C:\temp" --output-file=dld.log
See the wget manual for more info.

How to check availability of each directory (directory tree) in a file path. Unix/Bash

Given a text file with paths (a new path on each line), I need to create directories from those paths. That part is easy, e.g. mkdir -p is/isapp/ip/ipapp, then chgrp group1 is/isapp/ip/ipapp. The problem is that this only changes the group of the final ipapp directory, while I need to change it for all newly created directories without touching directories that already existed before the mkdir -p command. So I have to check which directories already exist and change permissions only on the newly created ones. Below I tried to split the path from the file and extend it piece by piece until the lookup no longer finds a directory, and then run chgrp -R on the first directory that was not found. These are my code sketches; I would be grateful for any help.
#!/bin/bash
FILE=$1  # file with paths (a new path on each line)
while read LINE; do
    IFS='/' read -ra my_array <<< "$LINE"
    if ! [ -d "${my_array[0]}" ]; then
        mkdir -p "${my_array[0]}"
    # else: extend the path with the next element, "${my_array[0]}/${my_array[1]}", and test again (sketch, incomplete)
    fi
done < "$FILE"
Something like this would work (basically, for each directory level, try to cd into it, and if you can't, create the directory with the proper permissions):
#!/bin/bash
MODE=u+rw
ROOT=$(pwd)
# Do not name the loop variable PATH: overwriting PATH would stop the shell from finding mkdir.
while read -r line; do
    IFS='/' read -r -a dirs <<< "${line}"
    for dir in "${dirs[@]}"; do
        # Only newly created directories get the mode; existing ones are left untouched.
        [ -d "${dir}" ] || mkdir -m "${MODE}" "${dir}" || exit 1
        cd "${dir}" || exit 1
    done
    cd "${ROOT}"
done
Note: this reads from stdin (so you would have to pipe your file into the script); alternatively, add < "${FILE}" right after the final done to read the file directly. The quotes around "${dir}" and "${dirs[@]}" are required in case there are any whitespace characters in the filenames.
The exit 1 saves you in case the mkdir fails (say there's a file with the name of the directory you want to create).
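Usage, assuming the script is saved under the (hypothetical) name mkpaths.sh:
$ ./mkpaths.sh < paths.txt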

Using loop to convert multiple files into separate files

I used this command to convert multiple pcap log files to text using tcpdump :
$ cat /home/dalya/snort-2.9.9.0/snort_logs/snort.log.* | tcpdump -n -r - > /home/dalya/snort-2.9.9.0/snort_logs2/bigfile.txt
and it worked well.
Now I want to separate the output, with each converted file going to its own output file, using a loop like this:
for f in /home/dalya/snort-2.9.9.0/snort_logs/snort.log.* ; do
tcpdump -n -r "$f" > /home/dalya/snort-2.9.9.0/snort_logs2/"$f.txt" ;
done
But it gave me :
bash: /home/dalya/snort-2.9.9.0/snort_logs2//home/dalya/snort-2.9.9.0/snort_logs/snort.log.1485894664.txt: No such file or directory
bash: /home/dalya/snort-2.9.9.0/snort_logs2//home/dalya/snort-2.9.9.0/snort_logs/snort.log.1485894770.txt: No such file or directory
bash: /home/dalya/snort-2.9.9.0/snort_logs2//home/dalya/snort-2.9.9.0/snort_logs/snort.log.1487346947.txt: No such file or directory
I think the problem is in $f. Where did I go wrong?
If you run
for f in /home/dalya/snort-2.9.9.0/snort_logs/snort.log.* ; do
echo $f
done
You'll find that you're getting
/home/dalya/snort-2.9.9.0/snort_logs/snort.log.1485894664
/home/dalya/snort-2.9.9.0/snort_logs/snort.log.1485894770
/home/dalya/snort-2.9.9.0/snort_logs/snort.log.1487346947
You can use basename to get only the filename, something like this:
for f in /home/dalya/snort-2.9.9.0/snort_logs/snort.log.* ; do
base="$(basename $f)"
echo $base
done
Once you're satisfied that this is working, remove the echo statement and use
tcpdump -n -r "$f" > /home/dalya/snort-2.9.9.0/snort_logs2/"$base.txt"
instead.
Edit: tcpdump -n -r "$base" > ... should have been tcpdump -n -r "$f" > ...; you only want to use $base in the context of creating the new filename, not in the context of reading the existing data.
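Putting the pieces together, the corrected loop reads each original file via $f and builds the new filename via $base:
for f in /home/dalya/snort-2.9.9.0/snort_logs/snort.log.* ; do
    base="$(basename "$f")"
    tcpdump -n -r "$f" > /home/dalya/snort-2.9.9.0/snort_logs2/"$base.txt"
done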

Copy SRC directory with adding a prefix to all c-library functions

I have an embedded C static library that communicates with a hardware peripheral. It currently does not support multiple hardware peripherals, but I need to interface with a second one. I do not care about code footprint right now, so I want to duplicate that library: one copy for each hardware peripheral.
This, of course, will result in symbol collisions. A good method is to use objcopy to add a prefix to the symbols in the object files, so I can get hw1_fun1.o and hw2_fun1.o. This post illustrates it.
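For reference, a minimal sketch of that object-level approach with GNU objcopy (the hw2_ prefix and fun1.o are placeholder names):
$ objcopy --prefix-symbols=hw2_ fun1.o hw2_fun1.o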
However, I want to add the prefix to all C functions at the source level, not the object level, because I will need to modify the code a little for hw2.
Is there any script, C preprocessor, or tool that can do something like:
./scriptme --prefix=hw2 ~/src/ ~/dest/
I'll be grateful :)
I wrote a simple bash script that does the required job, more or less. I hope it helps someone one day.
#!/bin/sh
DIR_BIN=bin/ext/lwIP/
DIR_SRC=src/ext/lwIP/
DIR_DST=src/hw2_lwIP/
CMD_NM=mb-nm
# Use { ...; } instead of ( ... ): exit inside a subshell would not abort the script.
[ -d "$DIR_DST" ] || { echo "Destination directory does not exist!" >&2; exit 1; }
cp -r "$DIR_SRC"/* "$DIR_DST"/
chmod -R 755 "$DIR_DST" # cygwin issue with Windows 7
sync # file permissions. (Pure MS shit!)
# Collect global read-only data (R) and text (T) symbols from the object files.
funs=`find "$DIR_BIN" -name '*.o' | xargs "$CMD_NM" | grep " R \| T " | awk '{print $3}'`
echo "Found $(echo $funs | wc -w) functions, processing:"
for fun in $funs; do
    echo "  $fun"
    find "$DIR_DST" -type f -exec sed -i "s/$fun/hw2_$fun/g" {} \;
done
echo "Done! Now change the includes and compile your project ;-)"

Script or Application that will do md5 checking

Is there a program or a script out there that can compare the md5 checksums of files? I tried to create my own, but I'm having problems with any files that have a space in them, so I was wondering if it'd be easier to just use an application. md5deep is something I downloaded that returns the checksum.
rm md5mastervalue
for i in `ls /media/disk`; do md5deep -rb /media/disk/$i >> md5mastervalue; done
for d in 1 3 ; do cp -rf /media/disk/ /media/disk-$d & done
wait
rm md5valuet1
rm md5valuet3
for k in `ls /media/disk`
do
for f in 1 3; do md5deep -rb /media/disk-$f/$k >> md5valuet$f; done
done
for n in 1 3; do diff md5mastervalue md5valuet$n; done
echo Finished
Are you on Linux? If so, you can use md5sum or sha512sum (better security). For example, create a baseline of your files:
$ sha512sum * > baseline.txt
Then, the next time you want to check, just use the -c option, e.g.
$ sha512sum -c baseline.txt
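Because sha512sum -c re-reads the exact filenames stored in baseline.txt, names containing spaces are handled correctly. If you also need to walk subdirectories the way md5deep -r does, find can pass the files in safely; a minimal sketch, reusing the paths from the question:
$ find /media/disk -type f -exec sha512sum {} + > baseline.txt
$ sha512sum -c baseline.txt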
