I have a number of Raspberry Pis attached to various networks distributed over a large area, so this will have to be a remote process. I need to expand the file system so it fills the full 8 GB (currently 2 GB). We use Puppet to distribute updates, but I am not sure what the sequence of commands is.
I know this can be achieved locally using raspi-config, but I will need to create a script or send a command to do this over the network.
raspi-config is a shell script. The section on memory expansion is listed below. Here are the basic steps:
Verify that the partition to expand is on an SD card, not an external device, and that the card has no more than the two normal partitions.
Determine the exact partition and desired partition size. (parted)
Change the size of the partition in the partition table. (This usually requires a reboot to take effect.) (fdisk)
Expand the filesystem to the complete size of the partition (which was resized in step 3 above). This is set up as a shell script that runs after the reboot. (resize2fs)
Because there are minor differences in the size of SD cards, even different models from the same manufacturer, it would be extremely difficult to give a more specific set of commands.
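One idiom worth noting before reading the script: it discovers the partition number with shell prefix stripping on the device name that /dev/root resolves to. A standalone illustration (the sample value is typical for an SD-card root):

```shell
# ${var#pattern} removes the shortest matching prefix from $var.
ROOT_PART=mmcblk0p2                 # what readlink /dev/root typically returns
PART_NUM=${ROOT_PART#mmcblk0p}      # strip the device prefix, leaving "2"
echo "$PART_NUM"
```

If the prefix does not match (e.g. the root is not on the SD card), the expansion leaves the value unchanged, which is exactly the check the script performs.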
#!/bin/sh
# Part of raspi-config http://github.com/asb/raspi-config
# ...
if ! [ -h /dev/root ]; then
whiptail --msgbox "/dev/root does not exist or is not a symlink. Don't know how to expand" 20 60 2
return 0
fi
ROOT_PART=$(readlink /dev/root)
PART_NUM=${ROOT_PART#mmcblk0p}
if [ "$PART_NUM" = "$ROOT_PART" ]; then
whiptail --msgbox "/dev/root is not an SD card. Don't know how to expand" 20 60 2
return 0
fi
# NOTE: the NOOBS partition layout confuses parted. For now, let's only
# agree to work with a sufficiently simple partition layout
if [ "$PART_NUM" -ne 2 ]; then
whiptail --msgbox "Your partition layout is not currently supported by this tool. You are probably using NOOBS, in which case your root filesystem is already expanded anyway." 20 60 2
return 0
fi
LAST_PART_NUM=$(parted /dev/mmcblk0 -ms unit s p | tail -n 1 | cut -f 1 -d:)
if [ "$LAST_PART_NUM" != "$PART_NUM" ]; then
whiptail --msgbox "/dev/root is not the last partition. Don't know how to expand" 20 60 2
return 0
fi
# Get the starting offset of the root partition
PART_START=$(parted /dev/mmcblk0 -ms unit s p | grep "^${PART_NUM}" | cut -f 2 -d:)
[ "$PART_START" ] || return 1
# Return value will likely be error for fdisk as it fails to reload the
# partition table because the root fs is mounted
fdisk /dev/mmcblk0 <<EOF
p
d
$PART_NUM
n
p
$PART_NUM
$PART_START
p
w
EOF
ASK_TO_REBOOT=1
# now set up an init.d script
cat <<\EOF > /etc/init.d/resize2fs_once &&
#!/bin/sh
### BEGIN INIT INFO
# Provides: resize2fs_once
# Required-Start:
# Required-Stop:
# Default-Start: 2 3 4 5 S
# Default-Stop:
# Short-Description: Resize the root filesystem to fill partition
### END INIT INFO
. /lib/lsb/init-functions
case "$1" in
start)
log_daemon_msg "Starting resize2fs_once" &&
resize2fs /dev/root &&
rm /etc/init.d/resize2fs_once &&
update-rc.d resize2fs_once remove &&
log_end_msg $?
;;
*)
echo "Usage $0 start" >&2
exit 3
;;
esac
EOF
chmod +x /etc/init.d/resize2fs_once &&
update-rc.d resize2fs_once defaults &&
if [ "$INTERACTIVE" = True ]; then
whiptail --msgbox "Root partition has been resized.\nThe filesystem will be enlarged upon the next reboot" 20 60 2
fi
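Since the goal here is a Puppet-driven rollout, a hedged sketch of an idempotent wrapper you could distribute: EXPAND_CMD is a placeholder for the real command (on many images `raspi-config --expand-rootfs` performs the steps above non-interactively, but verify that your raspi-config version accepts it before deploying).

```shell
#!/bin/sh
# Idempotency sketch for a remote rollout. EXPAND_CMD is a stand-in for
# the real command (assumption: your raspi-config accepts --expand-rootfs).
# The flag file lets Puppet re-apply this safely on every run.
FLAG=${FLAG:-/tmp/rootfs_expanded}
EXPAND_CMD=${EXPAND_CMD:-"echo expanding"}
if [ ! -e "$FLAG" ]; then
    $EXPAND_CMD && touch "$FLAG"    # on a real node, follow with a reboot
else
    echo "already expanded"
fi
```

A Puppet `exec` resource with a `creates` guard pointing at the same flag file would achieve the same idempotency from the manifest side.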
I am creating a script to help update the GoLang compiler binaries on a GNU/Linux system,
but it fails.
#!/usr/bin/env bash
# -*- coding: utf-8 -*-
# set -e
# ==============================================================================
# title : Semi Automatic Update GoLang
# description : Script for Install, setup path, permission and update golang
# author : Walddys Emmanuel Dorrejo Céspedes
# usage : bash up_install.sh
# notes : Execute this Script will ask sudo password
# dependencies : wget awk sed curl tar
# ==============================================================================
## Get Golang Versions from repository
declare -a go_jversion go_sversion
readarray -t go_jversion < <(curl -s https://go.googlesource.com/go/+refs?format=JSON | grep -Eo "go[0-9]\.[^\"]+" | sort -V)
## Delete go_versions RC and Beta from the pool.
for i in "${!go_jversion[@]}"; do
if [[ "${go_jversion[i]}" =~ (rc|beta) ]]; then
unset "go_jversion[i]"
fi
done
unset go_sversion # Keep the pool from growing indefinitely when the script is re-executed
for i in "${!go_jversion[@]}"; do
set -vx
## Create an array of the stables versions (Those versions that repeat more than or equal to 2 are stables)
# if [[ "${go_jversion[i]}" == "${go_jversion[i + 1]}" ]] && [[ "${go_sversion[i - 1]}" != "${go_jversion[i + 1]}" ]]; then
# go_sversion+=("${go_jversion[i]}")
# fi
In this section I am comparing major version + minor version to exclude the patch versions from the array, but in the condition after the "&&", "${go_sversion[$i - 1]}" expands to null on each cycle of the loop, even though I assigned it a value on a previous cycle.
## Create an array of the stables versions (Those versions that repeat more than or equal to 2 are stables) (second version)
if [[ "${go_jversion[$i]}" == "${go_jversion[$i + 1]}" && "${go_sversion[$i - 1]}" != "${go_jversion[$i]}" ]]; then
go_sversion+=("${go_jversion[$i]}")
echo "${!go_sversion[$i]}"
fi
set +vx
done
echo "${go_sversion[@]}"
echo "${!go_sversion[@]}"
My issue is in the section with "${go_sversion[$i - 1]}": why is it not expanding?
I assign it a value in one cycle of the loop, but the value only shows up in the next cycle.
Arrays in bash are allowed to be sparse, meaning their indices are not required to be strictly sequential. For example:
arr=(1 2 3)
echo "${arr[@]}" # prints 1 2 3
echo "${!arr[@]}" # prints 0 1 2
unset 'arr[1]'
echo "${arr[@]}" # prints 1 3
echo "${!arr[@]}" # prints 0 2
When you unset the RC and Beta values you could be creating these types of gaps in your jversion array, but you're assigning to the sversion array sequentially. This means the indices do not align between the arrays.
If your jversion looks like my array above, you might put something into sversion[0] from jversion[0], then process jversion[2] and attempt to match it against sversion[1] which doesn't exist yet.
One simple way to de-sparsify the array is to reassign it:
go_jversion=( "${go_jversion[@]}" )
This will reassign the contents of the array to itself in sequential order without any gaps in the indices.
If this is unviable for some reason, you'll have to write code that is aware of the possible sparseness of the array. For example, instead of blindly looking at go_sversion[i-1] you could look at go_sversion[-1], which will always give you the last item in the array.
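The sparseness behavior described above can be checked directly in a shell session (bash 4.3+ assumed for the negative index):

```shell
arr=(a b c d)
unset 'arr[1]' 'arr[2]'
echo "${!arr[@]}"      # -> 0 3  (sparse: indices no longer sequential)
arr=( "${arr[@]}" )    # re-pack the surviving values sequentially
echo "${!arr[@]}"      # -> 0 1
echo "${arr[-1]}"      # -> d  (last element regardless of sparseness)
```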
The utility 'sas2ircu' can output multiple lines for every hard drive attached to the host. A sample of the output for a single drive looks like this:
Enclosure # : 5
Slot # : 20
SAS Address : 5003048-0-185f-b21c
State : Ready (RDY)
I have a bash script that executes the sas2ircu command and does the following with the output:
identifies a drive by the RDY string
reads the numerical value of the enclosure (ie, 5) into an array 'enc'
reads the numerical value of the slot (ie, 20) into another array 'slot'
The code I have serves its purpose, but I'm trying to figure out if I can combine it into a single line and run the sas2ircu command once instead of twice.
mapfile -t enc < <(/root/sas2ircu 0 display|grep -B3 RDY|awk '/Enclosure/{print $NF}')
mapfile -t slot < <(/root/sas2ircu 0 display|grep -B2 RDY|awk '/Slot/{print $NF}')
I've done a bunch of reading on awk but I'm still quite novice with it and haven't come up with anything better than what I have. Suggestions?
Should be able to eliminate the grep and combine the awk scripts into a single awk script; the general idea is to capture the enclosure and slot data and then if/when we see State/RDY we print the enclosure and slot to stdout:
awk '/Enclosure/{enclosure=$NF}/Slot/{slot=$NF}/State.*(RDY)/{print enclosure,slot}'
I don't have sas2ircu so I'll simulate some data (based on OP's sample):
$ cat raw.dat
Enclosure # : 5
Slot # : 20
SAS Address : 5003048-0-185f-b21c
State : Ready (RDY)
Enclosure # : 7
Slot # : 12
SAS Address : 5003048-0-185f-b21c
State : Ready (RDY)
Enclosure # : 9
Slot # : 23
SAS Address : 5003048-0-185f-b21c
State : Off (OFF)
Simulating the sas2ircu call:
$ cat raw.dat | awk '/Enclosure/{enclosure=$NF}/Slot/{slot=$NF}/State.*(RDY)/{print enclosure,slot}'
5 20
7 12
The harder part is going to be reading these into 2 separate arrays and I'm not aware of an easy way to do this with a single command (eg, mapfile doesn't provide a way to split an input file across 2 arrays).
One idea using a bash/while loop:
unset enc slot
while read -r e s
do
enc+=( ${e} )
slot+=( ${s} )
done < <(awk '/Enclosure/{enclosure=$NF}/Slot/{slot=$NF}/State.*(RDY)/{print enclosure,slot}' raw.dat)
This generates:
$ typeset -p enc slot
declare -a enc=([0]="5" [1]="7")
declare -a slot=([0]="20" [1]="12")
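If the while loop feels heavy, another hedged option is to capture the single awk pass once and split the columns afterwards with two mapfile calls (raw.dat again stands in for the real sas2ircu output):

```shell
# Build the sample input (mirrors the format in the question).
cat > raw.dat <<'EOF'
Enclosure # : 5
Slot # : 20
State : Ready (RDY)
Enclosure # : 9
Slot # : 23
State : Off (OFF)
EOF
# One awk pass emits "enclosure slot" per RDY drive; the command's
# output is kept in a variable so it runs only once.
out=$(awk '/Enclosure/{e=$NF} /Slot/{s=$NF} /RDY/{print e,s}' raw.dat)
mapfile -t enc  < <(printf '%s\n' "$out" | cut -d' ' -f1)
mapfile -t slot < <(printf '%s\n' "$out" | cut -d' ' -f2)
echo "${enc[0]} ${slot[0]}"   # -> 5 20
```

The trade-off versus the while loop is two extra cut processes in exchange for slightly clearer intent; either is fine at this data size.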
What linux settings could result in a C++ program run as:
jellyfish count -m 31 -t 40 -C -s 105 -o k_u_hash_0 pe.cor.fa
working fine when that command is executed in a terminal, but crashing in a bash script? In the latter case it asks for 411428571480 bytes and then exits immediately. This is odd because when run interactively, top shows it with just tens of GB of virtual and resident memory many minutes after it started running.
ulimit -a
in both environments shows:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 2067197
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
/proc/meminfo shows:
CommitLimit: 615693660 kB
Committed_AS: 48320500 kB
and even stranger, a small C test program, which just callocs ever larger memory blocks and then releases them, when run in either environment, including immediately before jellyfish, did this:
calloc_test
Testing 4294967296
Testing 8589934592
Testing 17179869184
Testing 34359738368
Testing 68719476736
Testing 137438953472
Testing 274877906944
Testing 549755813888
Testing 1099511627776
FAILED
That is, it was able to allocate a 549 Gb block, which is larger than the one jellyfish asked for, and memory allocation behaved the same in both environments.
There is no LD_LIBRARY_PATH set in either case.
Can anybody suggest what might differ in the two environments to account for the difference in the action of a subset of memory allocating programs? (A subset of one at this point.)
Thank you.
As requested, this is the script (only down to the point of failure plus 3 extra lines):
#!/bin/bash
# assemble.sh generated by masurca
CONFIG_PATH="/home/mathog/do_masurca/project.cfg"
CMD_PATH="/home/mathog/MaSuRCA/bin/masurca"
# Test that we support <() redirection
(eval "cat <(echo test) >/dev/null" 2>/dev/null) || {
echo >&2 "ERROR: The shell used is missing important features."
echo >&2 " Run the assembly script directly as './$0'"
exit 1
}
# Parse command line switches
while getopts ":rc" o; do
case "${o}" in
c)
echo "configuration file is '$CONFIG_PATH'"
exit 0
;;
r)
echo "Rerunning configuration"
exec perl "$CMD_PATH" "$CONFIG_PATH"
echo "Failed to rerun configuration"
exit 1
;;
*)
echo "Usage: $0 [-r] [-c]"
exit 1
;;
esac
done
set +e
# Set some paths and prime system to save environment variables
save () {
(echo -n "$1=\""; eval "echo -n \"\$$1\""; echo '"') >> environment.sh
}
GC=
RC=
NC=
if tty -s < /dev/fd/1 2> /dev/null; then
GC='\e[0;32m'
RC='\e[0;31m'
NC='\e[0m'
fi
log () {
d=$(date)
echo -e "${GC}[$d]${NC} $@"
}
fail () {
d=$(date)
echo -e "${RC}[$d]${NC} $@"
exit 1
}
signaled () {
fail Interrupted
}
trap signaled TERM QUIT INT
rm -f environment.sh; touch environment.sh
# To run tasks in parallel
run_bg () {
semaphore -j $NUM_THREADS --id masurca_$$ -- "$@"
}
run_wait () {
semaphore -j $NUM_THREADS --id masurca_$$ --wait
}
export PATH="/home/mathog/MaSuRCA/bin:/home/mathog/MaSuRCA/bin/../CA/Linux-amd64/bin:$PATH"
save PATH
export PERL5LIB=/home/mathog/MaSuRCA/bin/../lib/perl${PERL5LIB:+:$PERL5LIB}
save PERL5LIB
NUM_THREADS=40
save NUM_THREADS
log 'Processing pe library reads'
rm -rf meanAndStdevByPrefix.pe.txt
echo 'pe 400 20' >> meanAndStdevByPrefix.pe.txt
run_bg rename_filter_fastq 'pe' <(exec expand_fastq '/home/mathog/SPUR_datasets/pe_400_R1.fastq' | awk '{if(length($0>200)) print substr($0,1,200); else print $0;}') <(exec expand_fastq '/home/mathog/SPUR_datasets/pe_400_R2.fastq' | awk '{if(length($0>200)) print substr($0,1,200); else print $0;}' ) > 'pe.renamed.fastq'
run_wait
head -q -n 40000 pe.renamed.fastq | grep --text -v '^+' | grep --text -v '^#' > pe_data.tmp
export PE_AVG_READ_LENGTH=`awk '{if(length($1)>31){n+=length($1);m++;}}END{print int(n/m)}' pe_data.tmp`
save PE_AVG_READ_LENGTH
echo "Average PE read length $PE_AVG_READ_LENGTH"
KMER=`for f in pe.renamed.fastq;do head -n 80000 $f |tail -n 40000;done | perl -e 'while($line=<STDIN>){$line=<STDIN>;chomp($line);push(@lines,$line);$line=<STDIN>;$line=<STDIN>}$min_len=100000;$base_count=0;foreach $l(@lines){$base_count+=length($l);push(@lengths,length($l));@f=split("",$l);foreach $base(@f){if(uc($base) eq "G" || uc($base) eq "C"){$gc_count++}}} @lengths =sort {$b <=> $a} @lengths; $min_len=$lengths[int($#lengths*.75)]; $gc_ratio=$gc_count/$base_count;$kmer=0;if($gc_ratio<0.5){$kmer=int($min_len*.7);}elsif($gc_ratio>=0.5 && $gc_ratio<0.6){$kmer=int($min_len*.5);}else{$kmer=int($min_len*.33);} $kmer++ if($kmer%2==0); $kmer=31 if($kmer<31); $kmer=127 if($kmer>127); print $kmer'`
save KMER
echo "choosing kmer size of $KMER for the graph"
KMER_J=$KMER
MIN_Q_CHAR=`cat pe.renamed.fastq |head -n 50000 | awk 'BEGIN{flag=0}{if($0 ~ /^\+/){flag=1}else if(flag==1){print $0;flag=0}}' | perl -ne 'BEGIN{$q0_char="#";}{chomp;@f=split "";foreach $v(@f){if(ord($v)<ord($q0_char)){$q0_char=$v;}}}END{$ans=ord($q0_char);if($ans<64){print "33\n"}else{print "64\n"}}'`
save MIN_Q_CHAR
echo MIN_Q_CHAR: $MIN_Q_CHAR
JF_SIZE=`ls -l *.fastq | awk '{n+=$5}END{s=int(n/50); if(s>80000000000)printf "%.0f",s;else print "80000000000";}'`
save JF_SIZE
perl -e '{if(int('$JF_SIZE')>80000000000){print "WARNING: JF_SIZE set too low, increasing JF_SIZE to at least '$JF_SIZE', this automatic increase may be not enough!\n"}}'
log Creating mer database for Quorum.
quorum_create_database -t 40 -s $JF_SIZE -b 7 -m 24 -q $((MIN_Q_CHAR + 5)) -o quorum_mer_db.jf.tmp pe.renamed.fastq && mv quorum_mer_db.jf.tmp quorum_mer_db.jf
if [ 0 != 0 ]; then
fail Increase JF_SIZE in config file, the recommendation is to set this to genome_size*coverage/2
fi
log Error correct PE.
quorum_error_correct_reads -q $((MIN_Q_CHAR + 40)) --contaminant=/home/mathog/MaSuRCA/bin/../share/adapter.jf -m 1 -s 1 -g 1 -a 3 -t 40 -w 10 -e 3 -M quorum_mer_db.jf pe.renamed.fastq --no-discard -o pe.cor --verbose 1>quorum.err 2>&1 || {
mv pe.cor.fa pe.cor.fa.failed && fail Error correction of PE reads failed. Check pe.cor.log.
}
log Estimating genome size.
jellyfish count -m 31 -t 40 -C -s $JF_SIZE -o k_u_hash_0 pe.cor.fa
export ESTIMATED_GENOME_SIZE=`jellyfish histo -t 40 -h 1 k_u_hash_0 | tail -n 1 |awk '{print $2}'`
save ESTIMATED_GENOME_SIZE
echo "Estimated genome size: $ESTIMATED_GENOME_SIZE"
Here's (some) useless memory and CPU cycle waste in the script:
cat pe.renamed.fastq | head -n 50000 | ...
You should instead avoid the useless use of cat with
head -n 50000 pe.renamed.fastq | ...
Here is the complete code. In BER_SB, the values of K and SB passed to the rand-src command and the value of sigma passed to the transmit command are calculated in main. The values written to the BER array by BER_SB are used further in main.
BER_SB()
{
s=$1
mkdir "$1"
cp ex-ldpc36-5000a.pchk ex-ldpc36-5000a.gen "$1"
cd "$1"
rand-src ex-ldpc36-5000a.src $s "$K"x"$SB"
encode ex-ldpc36-5000a.pchk ex-ldpc36-5000a.gen ex-ldpc36-5000a.src ex-ldpc36-5000a.enc
transmit ex-ldpc36-5000a.enc ex-ldpc36-5000a.rec 1 awgn $sigma
decode ex-ldpc36-5000a.pchk ex-ldpc36-5000a.rec ex-ldpc36-5000a.dec awgn $sigma prprp 250
BER="$(verify ex-ldpc36-5000a.pchk ex-ldpc36-5000a.dec ex-ldpc36-5000a.gen ex-ldpc36-5000a.src)"
echo $BER
}
export BER
export -f BER_SB
K=5000 # No of Message Bits
N=10000 # No of encoded bits
R=$(echo "scale=3; $K/$N" | bc) # Code Rate
# Creation of Parity Check and Generator files
make-ldpc ex-ldpc36-5000a.pchk $K $N 2 evenboth 3 no4cycle
make-gen ex-ldpc36-5000a.pchk ex-ldpc36-5000a.gen dense
# Creation of file to write BER values
echo -n > /media/BER/BER_LDPC36_5000_E.txt
S=1; # Variable to control no of blocks of source messages
for Eb_No in 0.5 1.0
do
B=$(echo "10^($S+1)" | bc)
# No of Blocks are increased for higher Eb/No values
S=$(($S+1))
# As we have four cores in our PC so we will divide number of source blocks into four subblocks to process these in parallel
SB=$(echo "$B/4" | bc)
# Calculation of Noise Variance from Eb/No values
tmp=$(echo "scale=3; e(($Eb_No/10)*l(10))" | bc -l)
sigma=$(echo "scale=3; sqrt(1/(2*$R*$tmp))" | bc)
# Call the function to process each subblock
parallel BER_SB ::: 1 2 3 4
BER_T= Here I want to process values of BER variables returned by BER_SB function
done
It is not very clear what you want done. From what you write it seems you want the same 3 lines run 4 times in parallel. That is easily done:
runone() {
mkdir "$1"
cd "$1"
rand-src ex-ldpc36-5000a.src 0 5000 1000
encode ex-ldpc36-5000a.pchk ex-ldpc36-5000a.gen ex-ldpc36-5000a.src ex-ldpc36-5000a.enc
transmit ex-ldpc36-5000a.enc ex-ldpc36-5000a.rec 1 awgn .80
}
export -f runone
parallel runone ::: 1 2 3 4
But that does not use the '1 2 3 4' for anything. If you want the '1 2 3 4' used for anything you will need to describe better what you really want.
Edit:
It is unclear whether you have:
Read the examples: LESS=+/EXAMPLE: man parallel
Walked through the tutorial: man parallel_tutorial
Watched the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
and whether I can assume that the material covered in those are known to you.
In your code you use BER[1]..BER[4], but they are not initialized. You also use BER[x] in the function. Maybe you forgot that a sub-shell cannot pass values in an array back to its parent?
If I were you I would move all the computation in the function and call the function with all needed parameters instead of passing them as environment variables. Something like:
parallel BER_SB ::: 1 2 3 4 ::: 0.5 1.0 ::: $S > computed.out
post process computed.out >>/media/BER/BER_LDPC36_5000_E.txt
To keep the arguments in computed.out you can use --tag. That may make it easier to postprocess.
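For the post-processing step, a minimal sketch (the BER values below are invented for illustration): with --tag each line of computed.out carries its argument first, so a single awk pass can aggregate the second column:

```shell
# Simulate what "parallel --tag BER_SB ::: 1 2 3 4 > computed.out"
# might produce: "<arg><TAB><BER value>" per job (values made up).
printf '1\t0.0012\n2\t0.0010\n3\t0.0016\n4\t0.0014\n' > computed.out
# Average the BER column across all subblocks.
awk '{sum += $2} END {printf "%.4f\n", sum/NR}' computed.out   # -> 0.0013
```

The same pattern works for a max, min, or per-Eb/No grouping by keying the awk arrays on the tag column.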
I'm writing a script using multiple dialog screens in bash. One of the functions of the script should be killing specific processes. I load all my PID values in one array called pid:
pid = ('1234' '1233' '1232' '1231' '1230')
Then I present the user with a dialog checklist that contains a list of the processes.
After they select some of them dialog returns the checklist entry number, for example 0,2,4.
My initial plan was to store the selected entries in a second array and then use them to pull specific PIDs from the first array (in this case that would be 1234, 1232, and 1230) so I can kill those specific processes, but so far nothing I've tried has worked.
Does anyone have a better solution? I want the easiest way of killing processes based on selections made by the user at the dialog checklist.
Here is the function in question:
stop_tunnel() {
local tunnels
local pid
declare -a tunnels
declare -a pid
#this is executed on a remote system in the real script
ps aux | grep -w ssh > $_temp
awk -F "ssh" '{print "ssh" $2}' $_temp > $_temp1
awk '{print $2}' $_temp > $_temp2
IFS='
'
tunnels=( $( < $_temp1 ) )
pid=( $( < $_temp2 ) )
dialog --checklist "Select tunnel to stop:" 10 72 0 \
0 "${tunnels[0]}" off \
1 "${tunnels[1]}" off \
2 "${tunnels[2]}" off \
3 "${tunnels[3]}" off \
4 "${tunnels[4]}" off \
2>$_temp
nr=$( < $_temp )
dialog --title " Tunnel stop " --msgbox "\nYou stopped these tunnels: ${nr[@]}" 6 44
}
The nr array holds the users selection. And I wanted to use that to pull specific members out of the pid array.
Select might be what you need:
select p in ${pid[@]} ; do echo "kill" $p && break; done
Note that blanks around the assignment won't work:
# wrong:
pid = ('1234' '1233' '1232' '1231' '1230')
# right:
pid=('1234' '1233' '1232' '1231' '1230')
To allow to kill multiple processes in sequence:
select p in ${pid[@]} ; do
if [[ -n $p ]]
then echo "kill" $p
else break
fi
done
1) 1234
2) 1233
3) 1232
4) 1231
5) 1230
#? 3
kill 1232
#? 4
kill 1231
#? 6
The echo is of course just for testing.
A hint for the user, that an invalid index will terminate the process killing seems appropriate. A second approach could be an explicit termination case:
pid=('1234' '1233' '1232' '1231' '1230' 'terminate')
which you would handle with the break.
If you just want to iterate over the selections, being made:
sel=(0 2 4)
for n in ${sel[@]} ; do echo kill ${pid[$n]}; done
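Putting that together for the dialog case, a minimal end-to-end sketch (nr stands in for the selection string dialog wrote to $_temp):

```shell
# The PID array, as in the question.
pid=('1234' '1233' '1232' '1231' '1230')
# dialog's checklist output: space-separated entry numbers.
nr="0 2 4"
# Each entry number indexes straight into the pid array.
for n in $nr; do
    echo "would kill ${pid[$n]}"   # replace echo with kill once verified
done
```

Because dialog already returns the checklist indices, no second array is needed; the selection string itself drives the lookup.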
Update regarding your comment:
I don't have dialog installed, but I guess zenity is similar. There you capture the output of a list selection:
selection=($(zenity --list --text "kill something" --multiple --column "kill" --separator " " --checklist --column ps TRUE foo FALSE bar FALSE baz TRUE fozboa))