Is there a way to validate a .env file?

I'm getting errors when trying to parse a .env file I have, but I have no way of figuring out where it's erring out. Is there an easy way to lint/validate the file, online or otherwise?
Many thanks!

You can try https://github.com/dotenv-linter/dotenv-linter.
It's a lightning-fast linter for .env files. Written in Rust.
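For example, a typical way to install and run it (the exact commands may differ depending on your platform and version; check the project's README):
cargo install dotenv-linter    # or use one of the prebuilt binaries listed in the README
dotenv-linter .env             # reports offending line numbers and which rule failed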

It depends on the syntax you are using. Looking at the Docker and NPM documentation, different tools seem to differ in what they are able to parse.
I use a simple grep to validate that each line matches a <key>=<value> pattern, where key and value are non-empty. You can adapt the pattern to your context, for example to enforce upper-case keys.
#!/bin/bash
for envfile in $(find . -maxdepth 1 -type f -name '.env.*'); do
    while IFS= read -r line; do
        # exclude comments and blank lines
        if [[ "${line:0:1}" == "#" || -z "${line}" ]]; then
            continue
        fi
        match_line=$(echo "${line}" | grep -E "^[A-Za-z0-9_].+=.+$")
        if [[ "${match_line}" == "" ]]; then
            echo "Error in file: ${envfile}: line: ${line}"
        fi
    done < "${envfile}"
done
Alternatively, look at your language's dotenv/loadenv library to see if you can catch specific parsing exceptions, if available, to narrow down the line that causes the error.

Related

Parsing a settings file in a bash script where some special settings are arrays

I am still quite new to bash scripting and I am somehow stuck.
I am looking for a clean and easy way to parse a settings file, where some special (and known) settings are arrays.
The settings file looks like this:
foo=(1 2 3 4)
bar="foobar"
The best solution I came up with so far is:
#!/bin/bash
while IFS== read -r k v; do
    if [ "$k" = "foo" ]
    then
        IFS=' ' read -r -a $k <<< "$v"
    else
        declare "$k"="$(echo $v | tr -d '""')"
    fi
done < settings.txt
But I am obviously mixing up array types. As far as I understood and tried out, the bar="foobar" part actually declares an array, which can be accessed by echo ${bar[0]} as well as echo $bar. So I thought this would be an indexed array, but the error log clearly states something different:
cannot convert associative to indexed array
I would be glad if somebody could explain to me a little how to find a proper solution.
Is it safe for you to just source the file?
. settings.txt
That will insert all the lines of the file as if they were lines of your current script. Obviously, there are security concerns if the file isn't as secure as the script file itself.
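For example, with the settings file shown in the question (a minimal sketch, assuming the file contains nothing but valid bash assignments):
. ./settings.txt
echo "${foo[2]}"   # prints 3
echo "$bar"        # prints foobar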

How can I store the "find" command results as an array in Bash

I am trying to save the results from find as an array.
Here is my code:
#!/bin/bash
echo "input : "
read input
echo "searching file with this pattern '${input}' under present directory"
array=`find . -name ${input}`
len=${#array[*]}
echo "found : ${len}"
i=0
while [ $i -lt $len ]
do
    echo ${array[$i]}
    let i++
done
There are 2 .txt files under the current directory.
So I expect '2' as the result of ${len}. However, it prints 1.
The reason is that it takes the whole result of find as one element.
How can I fix this?
P.S.
I found several solutions on Stack Overflow for similar problems. However, they are a little bit different, so I can't apply them in my case. I need to store the results in a variable before the loop. Thanks again.
Update 2020 for Linux Users:
If you have an up-to-date version of bash (4.4-alpha or better), as you probably do if you are on Linux, then you should be using Benjamin W.'s answer.
If you are on macOS, which (last I checked) still shipped bash 3.2, or are otherwise using an older bash, then continue on to the next section.
Answer for bash 4.3 or earlier
Here is one solution for getting the output of find into a bash array:
array=()
while IFS= read -r -d $'\0'; do
    array+=("$REPLY")
done < <(find . -name "${input}" -print0)
This is tricky because, in general, file names can have spaces, new lines, and other script-hostile characters. The only way to use find and have the file names safely separated from each other is to use -print0 which prints the file names separated with a null character. This would not be much of an inconvenience if bash's readarray/mapfile functions supported null-separated strings but they don't. Bash's read does and that leads us to the loop above.
[This answer was originally written in 2014. If you have a recent version of bash, please see the update below.]
How it works
The first line creates an empty array: array=()
Every time that the read statement is executed, a null-separated file name is read from standard input. The -r option tells read to leave backslash characters alone. The -d $'\0' tells read that the input will be null-separated. Since we omit the name to read, the shell puts the input into the default name: REPLY.
The array+=("$REPLY") statement appends the new file name to the array array.
The final line combines redirection and command substitution to provide the output of find to the standard input of the while loop.
Why use process substitution?
If we didn't use process substitution, the loop could be written as:
array=()
find . -name "${input}" -print0 >tmpfile
while IFS= read -r -d $'\0'; do
    array+=("$REPLY")
done <tmpfile
rm -f tmpfile
In the above the output of find is stored in a temporary file and that file is used as standard input to the while loop. The idea of process substitution is to make such temporary files unnecessary. So, instead of having the while loop get its stdin from tmpfile, we can have it get its stdin from <(find . -name ${input} -print0).
Process substitution is widely useful. In many places where a command wants to read from a file, you can specify process substitution, <(...), instead of a file name. There is an analogous form, >(...), that can be used in place of a file name where the command wants to write to the file.
Like arrays, process substitution is a feature of bash and other advanced shells. It is not part of the POSIX standard.
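For instance, a classic example unrelated to this question is comparing the sorted output of two commands without creating any temporary files (hypothetical file names):
diff <(sort file_a.txt) <(sort file_b.txt)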
Alternative: lastpipe
If desired, lastpipe can be used instead of process substitution (hat tip: Caesar):
set +m
shopt -s lastpipe
array=()
find . -name "${input}" -print0 | while IFS= read -r -d $'\0'; do array+=("$REPLY"); done; declare -p array
shopt -s lastpipe tells bash to run the last command in the pipeline in the current shell (not the background). This way, the array remains in existence after the pipeline completes. Because lastpipe only takes effect if job control is turned off, we run set +m. (In a script, as opposed to the command line, job control is off by default.)
Additional notes
The following command creates a shell variable, not a shell array:
array=`find . -name "${input}"`
If you wanted to create an array, you would need to put parens around the output of find. So, naively, one could:
array=(`find . -name "${input}"`) # don't do this
The problem is that the shell performs word splitting on the results of find so that the elements of the array are not guaranteed to be what you want.
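A quick demonstration of that, in an otherwise empty directory (hypothetical file name): a single file whose name contains a space becomes two array elements.
touch "my file.txt"
array=(`find . -name "*.txt"`)
echo "${#array[@]}"    # prints 2, although only one file matched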
Update 2019
Starting with version 4.4-alpha, bash now supports a -d option so that the above loop is no longer necessary. Instead, one can use:
mapfile -d $'\0' array < <(find . -name "${input}" -print0)
For more information on this, please see (and upvote) Benjamin W.'s answer.
Bash 4.4 introduced a -d option to readarray/mapfile, so this can now be solved with
readarray -d '' array < <(find . -name "$input" -print0)
for a method that works with arbitrary filenames including blanks, newlines, and globbing characters. This requires that your find supports -print0, as for example GNU find does.
From the manual (omitting other options):
mapfile [-d delim] [array]
-d
The first character of delim is used to terminate each input line, rather than newline. If delim is the empty string, mapfile will terminate a line when it reads a NUL character.
And readarray is just a synonym of mapfile.
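A brief usage sketch applying this to the loop from the question (variable names kept the same):
readarray -d '' array < <(find . -name "$input" -print0)
echo "found : ${#array[@]}"
for f in "${array[@]}"; do
    echo "$f"
done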
The following appears to work for both Bash and Z Shell on macOS.
#! /bin/sh
IFS=$'\n'
paths=($(find . -name "foo"))
unset IFS
printf "%s\n" "${paths[@]}"
If you are using bash 4 or later, you can replace your use of find with
shopt -s globstar nullglob
array=( **/*"$input"* )
The ** pattern enabled by globstar matches 0 or more directories, allowing the pattern to match to an arbitrary depth in the current directory. Without the nullglob option, the pattern (after parameter expansion) is treated literally, so with no matches you would have an array with a single string rather than an empty array.
Add the dotglob option to the first line as well if you want to traverse hidden directories (like .ssh) and match hidden files (like .bashrc) as well.
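Putting that together with the counting and printing part of the original script, a sketch might look like this:
shopt -s globstar nullglob
array=( **/*"$input"* )
echo "found : ${#array[@]}"
for f in "${array[@]}"; do
    echo "$f"
done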
You can try something like:
array=(`find . -type f | sort -r | head -2`)
To print the array values, you can then use something like echo "${array[*]}".
None of these solutions suited me because I didn't feel like learning readarray and mapfile. Here is what I came up with.
#!/bin/bash
echo "input : "
read input
echo "searching file with this pattern '${input}' under present directory"
# The only change is here. Append to array for each non-empty line.
array=()
while read -r line; do
    [[ ! -z "$line" ]] && array+=("$line")
done <<< "$(find . -name "${input}" -print)"
len=${#array[@]}
echo "found : ${len}"
i=0
while [ $i -lt $len ]
do
    echo "${array[$i]}"
    let i++
done
You could do it like this:
#!/bin/bash
echo "input : "
read input
echo "searching file with this pattern '${input}' under present directory"
array=(`find . -name '*'${input}'*'`)
for i in "${array[@]}"
do :
    echo $i
done
In bash, $(<any_shell_cmd>) runs a command and captures its output. Reading that output with IFS set to a newline converts it into an array, one element per line (this still breaks if a filename itself contains a newline).
IFS=$'\n' read -r -d '' -a txt_files <<< "$(find /path/to/dir -name "*.txt")"
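You can then check the result, for example:
echo "found : ${#txt_files[@]}"
printf '%s\n' "${txt_files[@]}"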

Bash scripting testing for existing file with arbitrary extension in a while or for loop

I've been trying to figure this one out for a while. I've read through multiple threads, and feel like I'm close, but the script just isn't coming together.
Scenario:
I have a media server and thousands of movie files. Each movie file has various accessory files such as the cover artwork, database info, fanart, and trailer. While everything in the directory has its cover artwork and database info, some files may or may not have their respective fanart or trailer. For these files I'm trying to get this script working, which will create an empty "dummy" file in place of the file that should be there. Then, when I actually have the time, I can go back and search out just the dummy files and work to fill in the gaps where I can.
Here is what I have so far.
#!/bin/bash
find . -type f -print0 | while read -d $'\0' movie ;
do
    echo $movie
    moviename=${movie%\.*} #remove the extension from the string
    moviename1=`echo $moviename | sed 's/\ /\\ /'` #add escaped spaces to the string
    echo $moviename1 #echo the string (for debugging)
    if [ ! -f $moviename-fanart* ]; #because the fanart could be .jpg, or .png, etc
    then
        echo "Creating $moviename-fanart.dummy"
        touch "$moviename-fanart.dummy"
    fi
    if [ ! -f $moviename-trailer* ]; #because trailers could be .mp4, .mov, .mkv, .avi, etc
    then
        echo "Creating $moviename-trailer.dummy"
        touch "$moviename-trailer.dummy"
    fi
done
This should be pretty simple, but I think I'm not getting the proper formatting for the input string going into the test operators.
Any help would be greatly appreciated.
Thanks
Line-by-line analysis:
find . -type f -print0 | while read -d $'\0' movie; do
OK, but with bash 4 you can just use shopt -s globstar to operate recursively on a directory.
moviename=${movie%\.*} #remove the extension from the string
You don't need the backslash.
moviename1=`echo $moviename | sed 's/\ /\\ /'` #add escaped spaces to the string
This line is suspect because if you quote the name, escaped spaces become doubly-escaped. You're confusing the value of the string with the representation you see of it.
if [ ! -f $moviename-fanart* ]; then #because the fanart could be .jpg, or .png, etc
Quote the string or use bash's [[ test keyword. It's a little dangerous to expand a glob inside the test expression because if it matches multiple results you'll get an error. That said, if you're sure there can be only one match, you can quote everything up to the glob: "$moviename-fanart"*.
touch "$moviename-fanart.dummy"
Here, you quote it. So essentially you're dealing with a different name now.
fi
if [ ! -f $moviename-trailer* ]; then #because trailers could be .mp4, .mov, .mkv, .avi, etc
echo "Creating $moviename-trailer.dummy"
touch "$moviename-trailer.dummy"
fi
Same thing.
done
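Putting those points together, one possible corrected sketch (not the asker's original code; it assumes at most one fanart/trailer file per movie and uses bash's compgen -G to test whether a glob matches anything):
#!/bin/bash
find . -type f -print0 | while IFS= read -r -d $'\0' movie; do
    moviename=${movie%.*}    # remove the extension from the string
    if ! compgen -G "$moviename-fanart*" > /dev/null; then
        echo "Creating $moviename-fanart.dummy"
        touch "$moviename-fanart.dummy"
    fi
    if ! compgen -G "$moviename-trailer*" > /dev/null; then
        echo "Creating $moviename-trailer.dummy"
        touch "$moviename-trailer.dummy"
    fi
done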

Append some text to the end of multiple files in Linux

How can I append the following code to the end of numerous PHP files in a directory and its subdirectories:
</div>
<div id="preloader" style="display:none;position: absolute;top: 90px;margin-left: 265px;">
<img src="ajax-loader.gif"/>
</div>
I have tried with:
echo "my text" >> *.php
But the terminal displays the error:
bash : *.php: ambiguous redirect
I usually use tee because I think it looks a little cleaner and it generally fits on one line.
echo "my text" | tee -a *.php
You don't specify the shell; you could try the foreach command. Under tcsh (and I'm sure a very similar version is available for bash) you can say something like this interactively:
foreach i (*.php)
foreach> echo "my text" >> $i
foreach> end
$i will take on the name of each file each time through the loop.
As always, when doing operations on a large number of files, it's probably a good idea to test them in a small directory with sample files to make sure it works as expected.
Oops... bash is in the error message (I'll tag your question with it). The equivalent bash loop would be:
for i in *.php
do
    echo "my text" >> $i
done
If you want to cover multiple directories below the one where you are, you can specify
*/*.php
rather than *.php
BashFAQ/056 does a decent job of explaining why what you tried doesn't work. Have a look.
Since you're using bash (according to your error), the for command is your friend.
for filename in *.php; do
    echo "text" >> "$filename"
done
If you'd like to pull "text" from a file, you could instead do this:
for filename in *.php; do
    cat /path/to/sourcefile >> "$filename"
done
Now ... you might have files in subdirectories. If so, you could use the find command to find and process them:
find . -name "*.php" -type f -exec sh -c "cat /path/to/sourcefile >> {}" \;
The find command identifies what files using conditions like -name and -type, then the -exec command runs basically the same thing I showed you in the previous "for" loop. The final \; indicates to find that this is the end of arguments to the -exec option.
You can man find for lots more details about this.
The find command is portable and is generally recommended for this kind of activity especially if you want your solution to be portable to other systems. But since you're currently using bash, you may also be able to handle subdirectories using bash's globstar option:
shopt -s globstar
for filename in **/*.php; do
    cat /path/to/sourcefile >> "$filename"
done
You can man bash and search for "globstar" for more details about this. This option requires bash version 4 or higher.
NOTE: You may have other problems with what you're doing. PHP scripts don't need to end with a ?>, so you might be adding HTML that the script will try to interpret as PHP code.
You can use sed combined with find. Assume your project tree is
/MyProject/
/MyProject/Page1/file.php
/MyProject/Page2/file.php
etc.
Save the code you want to append in /MyProject/ and call it append.txt.
From /MyProject/ run:
find . -name "*.php" -print | xargs sed -i '$r append.txt'
Explanation:
find does what it says: it looks for all .php files, including those in subdirectories.
xargs will pass each .php file that has just been found to sed (i.e. run sed on it).
sed will do the appending. '$r append.txt' means go to the end of the file ($) and write (paste) whatever is in append.txt there. Don't forget -i, otherwise it will just print out the appended file and not save it.
Source: http://www.grymoire.com/unix/Sed.html#uh-37
You can do this (it works even if there are spaces in your file paths):
#!/bin/bash
# Create a tempory file named /tmp/end_of_my_php.txt
cat << EOF > /tmp/end_of_my_php.txt
</div>
<div id="preloader" style="display:none;position: absolute;top: 90px;margin-left: 265px;">
<img src="ajax-loader.gif"/>
</div>
EOF
find . -type f -name "*.php" | while read the_file
do
    echo "Processing $the_file"
    #cp "$the_file" "${the_file}.bak" # Uncomment if you want to save a backup of your file
    cat /tmp/end_of_my_php.txt >> "$the_file"
done
echo
echo done
PS: You must run the script from the directory you want to browse
Inspired by Dantastic's answer:
echo "my text" | tee -a file1.txt | tee -a file2.txt

Calculate checksum of audio files without considering the header

I want to programmatically create a SHA1 checksum of audio files (MP3, Ogg Vorbis, Flac).
The requirement is that the checksum should be stable even if the header (e.g. ID3) changes.
Note: The audio files don't have CRCs
This is what I have tried so far:
1) Reading + Hashing all MPEG frames using Perl and MPEG::Audio::Frame
my $sha1 = Digest::SHA1->new;
while (my $frame = MPEG::Audio::Frame->read(\*FH)) {
    $sha1->add($frame->content());
}
2) Decoding + Hashing all MPEG frames using Python and libmad (pymad)
mf = mad.MadFile(path)
sha1 = hashlib.sha1()
while 1:
    buf = mf.read()
    if (buf is None):
        break
    sha1.update(buf)
3) Using mp3cat
> mp3cat - - < file.mp3 | sha1sum
However, none of those methods provided a stable checksum. Namely, in some cases the checksum changed after retagging the file with Picard.
Are there any libraries that already provide what I want?
I don't care about the programming language…
Update:
I debugged the case a bit further.
The libmad checksum inconsistency seems to happen in cases where libmad gets some decoding errors, like "Huffman data overrun (0x0238)".
As this really happens on many of the mp3 files, I'm not sure if it really indicates a broken file…
If you are looking for stable hashes of the actual music, you might want to look at libOFA. Your current methods will give you different results because the formats can have embedded tags. Also, if you want two different files containing the same song to return the same hash, you need to account for things like bitrate and sample frequency.
libOFA on the other hand can give you a stable hash that can be used between formats and different encodings. Might be what you want?
I needed tools to quickly check if my MP3/OGG library is still valid.
For MP3 I found mp3md5.py (http://snipplr.com/view/4025/mp3-checksum-in-id3-tag/), which does the job, but I found no simple tool for Ogg Vorbis, so I coded a little bash script to do it for me.
Both tools should tolerate modifications of the comment/ID3Tag.
#!/bin/bash
# This bash script appends an MD5SUM to the vorbiscomment and/or verifies it if it exists
# Later modification of the vorbis comment does not alter the MD5SUM
# Julian M.K.
FILE="$1"
if [[ ! -f "$FILE" || ! -r "$FILE" || ! -w "$FILE" ]] ; then
    echo "File $FILE does not exist or is not readable or writable"
    exit 1
fi
OLDCRC=`vorbiscomment "$FILE" | grep ^CRC= | cut -d "=" -f 2`
NEWCRC=`ogginfo "$FILE" | grep "Total data length:" | cut -d ":" -f 2 | md5sum | cut -d " " -f 1`
if [[ "$OLDCRC" == "" ]] ; then
    echo "ADDED $FILE $NEWCRC"
    vorbiscomment -a -t "CRC=$NEWCRC" "$FILE"
    # rewrite CRC to get proper data length, I don't know why this is necessary
    NEWCRC=`ogginfo "$FILE" | grep "Total data length:" | cut -d ":" -f 2 | md5sum | cut -d " " -f 1`
    vorbiscomment -w -t "CRC=$NEWCRC" "$FILE"
elif [[ "$OLDCRC" == "$NEWCRC" ]] ; then
    echo "VERIFIED $FILE"
else
    echo "FAILURE $FILE -- $OLDCRC - $NEWCRC"
fi
There is an easy stable way to do it. Just make a copy of the file and remove all the tags from it (e.g. using mutagen.id3) and take the hashsum of the resulting file.
The only disadvantage of this method is its performance.
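A rough shell sketch of that idea (hypothetical file names; id3v2 is just one tool that can strip ID3 tags, mutagen or eyeD3 would work as well):
cp song.mp3 /tmp/song-stripped.mp3
id3v2 --delete-all /tmp/song-stripped.mp3   # strip ID3v1/ID3v2 tags from the copy
sha1sum /tmp/song-stripped.mp3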
Bene, if I were you (and I am in the process of working on something very similar to what you want to do), I would hash the MP3 data block. (Extract it to raw data first and write it out to disk, so you know what you are dealing with.) Then modify the ID3 tag. Hash your data again. Now, if it changes, compare your two sets of raw data and find out WHERE it changed. Chances are you might be overstepping a boundary somewhere. If I recall, MP3 files start with something like FF F8. Well, at least the frames do.
I'm interested in your findings, as I'm still writing all my code to deal with the fingerprints, etc., and haven't gotten to the actual hashing yet.
Update many years later:
See my answer here to a very similar question. It turns out that ffmpeg actually supports doing checksums of the individual streams. To get the md5 hash of only the audio stream:
ffmpeg -i "$filename" -map 0:a -codec copy -f md5 "$filename.md5"
There is also support for other hash formats with the generic -f hash format, or for doing it per frame with -f framemd5.
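For example, to confirm that retagging did not change the audio stream, hash both versions to stdout and compare (hypothetical file names):
ffmpeg -i original.mp3 -map 0:a -codec copy -f md5 -
ffmpeg -i retagged.mp3 -map 0:a -codec copy -f md5 -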
I'm trying to do the same thing. I used MD5 instead of SHA1. I started to export audio checksums using mp3tag (www.mp3tag.de/en/); then made a Perl script similar to yours to do the same thing. Then I removed all tags from my test file, and the audio checksum remained the same.
This is the script:
use MPEG::Audio::Frame;
use Digest::MD5 qw(md5_hex);
use strict;
my $file = 'E:\Music\MP3\Russensoul\01 - 5nizza , Soldat (Russensoul - Russensoul).mp3';
my $mp3tag_audio_md5 = lc '2EDFBD62995A46A45CEEC08C1F303486';
my $md5 = Digest::MD5->new;
open(FILE, $file) or die "Cannot open $file : $!\n";
binmode FILE;
while (my $frame = MPEG::Audio::Frame->read(\*FILE)) {
    $md5->add($frame->asbin);
}
print '$md5->hexdigest : ', $md5->hexdigest, "\n",
      'mp3tag_audio_md5 : ', $mp3tag_audio_md5, "\n",
;
Is it possible that whatever you use to modify your tags sometimes also modifies mp3 headers?
