I need a script that will go into a directory and execute a command on each file. I tried some commands in a batch file, but I can't figure it out :)
john-wick-parse serialize file_route/filename_with_no_extension
Bash shell for loops
for f in file1 file2 file3 file5
do
echo "We are now processing... $f"
# do something on $f
done
And if you want, you can declare a variable containing the path to the files, one on each line, something like this:
FILES="file1
/path/to/file2
/etc/resolv.conf"
for f in $FILES
do
echo "We are now processing... $f"
# do something on $f
done
There are all sorts of options for that, like selecting all files with a certain extension.
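For example, a glob selects files by extension directly, and it expands safely even when names contain spaces (the sample file names below are just for the demo):

```shell
# Demo setup: a scratch directory with a few sample files
tmp=$(mktemp -d)
touch "$tmp/a.txt" "$tmp/b.txt" "$tmp/notes.log"

cd "$tmp"
# The glob *.txt picks up only files with that extension
for f in *.txt; do
    echo "We are now processing... $f"
done
# -> We are now processing... a.txt
# -> We are now processing... b.txt
```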
So if you were to use the first option:
Our files
-- my_folder
-- file1
-- file2
-- file3
-- etc
Our code
cd /path/to/my_folder
for f in file1 file2 file3 file5
do
echo "We are now processing... $f"
# do something on $f
done
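Applied to the original question, the same pattern works with ${f%.*}, which strips the last extension from each name. A sketch, with echo standing in for john-wick-parse and made-up .ucas file names:

```shell
# Sketch: run a command on every regular file, passing the name
# without its extension; echo stands in for john-wick-parse here,
# and the .ucas names are invented for the demo.
tmp=$(mktemp -d)
touch "$tmp/pak1.ucas" "$tmp/pak2.ucas"

cd "$tmp"
for f in *; do
    [ -f "$f" ] || continue                     # skip any subdirectories
    echo john-wick-parse serialize "${f%.*}"    # ${f%.*} drops ".ucas"
done
# -> john-wick-parse serialize pak1
# -> john-wick-parse serialize pak2
```

Drop the echo once you've confirmed the commands look right.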
#!/bin/bash
n=0
for f in *; do
    [[ -f "$f" && ! -s "$f" ]] && { echo "$f"; ((n++)); }
done
echo "Number of empty files: $n"
Currently it checks the current directory for empty files; I would like it to search for empty files in any directory. Any ideas?
Recursively searches for empty files in current directory and below:
find . -empty -type f
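find can also act on each match directly; a sketch using -exec, with echo standing in as a harmless placeholder for whatever you want to run per empty file:

```shell
# Demo tree: two empty files and one non-empty file
tmp=$(mktemp -d)
mkdir "$tmp/sub"
: > "$tmp/empty1"
: > "$tmp/sub/empty2"
echo data > "$tmp/full"

# -exec runs the given command once per empty file found;
# {} is replaced by the matched path
find "$tmp" -type f -empty -exec echo "empty: {}" \;
```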
Recursively lists empty files in the specified directory and below, and reports the total:
findempty
#!/bin/bash
echo "Number of empty files: $(find "$1" -empty -type f | tee /dev/tty | wc -l)"
Example Usage
findempty /tmp
Example Output
/tmp/source/fb/b
/tmp/source/fb/a
/tmp/source/fb/c
/tmp/source/fa/b
/tmp/source/fa/a
/tmp/source/fa/c
/tmp/source/fc/b
/tmp/source/fc/a
/tmp/source/fc/c
/tmp/dest/source/fb/b
/tmp/dest/source/fa/b
/tmp/dest/source/fc/b
Number of empty files: 12
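One caveat: counting lines with wc -l miscounts file names that contain newlines. A sketch of a newline-safe variant, counting NUL terminators instead of lines:

```shell
#!/bin/bash
# Newline-safe count: -print0 ends each name with a NUL byte,
# tr -dc '\0' deletes everything except the NULs, and wc -c
# counts the remaining bytes (one per file).
dir=${1:-.}    # directory argument, defaulting to the current one
echo "Number of empty files: $(find "$dir" -type f -empty -print0 | tr -dc '\0' | wc -c)"
```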
I want to merge the content of multiple text files to one text file.
I've tried cat explained in this answer. But it is very slow.
The copy command is much faster, but you have to put the filenames in a plus-sign-separated string like:
cmd /c copy file1.txt + file2.txt + file3.txt + file1.txt all.txt
It is ok for a few files, but not for thousands of files.
So my idea was to create a variable that contains the file input for copy like:
%list = 'file1.txt + file2.txt + file3.txt + file1.txt'
and then:
cmd /c copy %list all.txt
But this doesn't work.
(I can create the string with the filenames also within Powershell with a loop.)
Now I want to make a loop that merges the first file with the second file and the resulting file with the third file and so on.
cmd /c copy file1.txt + file2.txt merge1.txt
then:
cmd /c copy merge1.txt + file3.txt merge2.txt
...
How can I do this within a loop in Powershell?
# Forces the creation of your content file
New-Item -ItemType File ".\all.txt" -Force
# Build your file list here
$fileList = @('file1.txt', 'file2.txt', 'file3.txt')
# Assumes that all files are in the directory where you run the script
# You might have to adapt it to provide full path (e.g. $_.FullName)
$fileList | %{ Get-Content $_ -ReadCount 1000 } | Add-Content .\all.txt
Please feel free to rename the question to something more appropriate.
How would I mimic the below zsh using bash instead?
mkdir folder1
mkdir folder2
mkdir folder3
# zsh
folders=(folder*) | print $folders
#folder1 folder2 folder3
# bash
folders=(folder*/) | echo $folders
#folder1
As you can see, this only outputs the first element.
Any pointers would be appreciated, thanks.
Try changing it to:
folders=(folder*); echo "${folders[@]}"
${folders[@]} expands to all the elements in the array; plain $folders expands to only the first element, which is why your version printed just folder1.
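With the trailing-slash glob from your bash attempt, each element keeps its /; the ${var%suffix} expansion can strip it per element. A sketch:

```shell
# Demo setup: three directories matching the glob
tmp=$(mktemp -d)
mkdir "$tmp/folder1" "$tmp/folder2" "$tmp/folder3"

cd "$tmp"
folders=(folder*/)        # the trailing / restricts the glob to directories
echo "${folders[@]%/}"    # %/ strips the trailing slash from every element
# -> folder1 folder2 folder3
```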
If, let's say, you have multiple .txt files in some directory and you want to collect and display them, you can try something like this:
declare -a folder_arr
i=0
for dir in *.txt; do
folder_arr[i]=$dir
i=$((i+1))
done
for j in $(seq 0 $((i-1)))
do
echo ${folder_arr[$j]}
done
I executed the above script and got the expected result.
/temps$ ./dirrr.sh
z.txt
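The counter isn't strictly needed, for what it's worth: a glob can fill the array in one assignment, and "${arr[@]}" iterates it directly. A sketch with made-up file names:

```shell
# Demo setup: two .txt files and one that should be skipped
tmp=$(mktemp -d)
touch "$tmp/x.txt" "$tmp/y.txt" "$tmp/notes.log"

cd "$tmp"
files=(*.txt)                # one assignment replaces the counter loop
for f in "${files[@]}"; do   # quoting keeps names with spaces intact
    echo "$f"
done
# -> x.txt
# -> y.txt
```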
basically I want to run the following command on every text-file automatically:
awk -f myScript.awk file1.txt > new\file1.txt
awk -f myScript.awk file2.txt > new\file2.txt
...
Then move the processed files to the folder \old.
move *.txt \old
should work for that part.
How do I create the correct for-loop, so that the output of the awk program has the same name as the input, just in the new folder?
OK, try this:
for %%i in (*.txt) do awk -f myScript.awk "%%~fi" > "new\%%~nxi"
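For comparison, if you're ever on a POSIX shell instead of cmd, the same name-preserving loop might look like this (the new/ and old/ folder names mirror the question; the awk script here just upper-cases its input as a stand-in for myScript.awk):

```shell
# Demo setup: a scratch dir with a trivial awk script and one input
tmp=$(mktemp -d)
cd "$tmp"
mkdir new old
printf '{ print toupper($0) }\n' > myScript.awk
printf 'hello\n' > file1.txt

# Output keeps the input's name but lands in new/; the processed
# inputs are then moved to old/ in one go.
for f in *.txt; do
    awk -f myScript.awk "$f" > "new/$f"
done
mv *.txt old/
```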
On Windows I could do something like...
copy /b file1+file2 file3
copy /b file1+file4 file5
In one batch file, to generate files made up of several other files.
I'm trying to create some HTML help files, with a header, footer, unique content, and files containing common content.
How can I do this ?
cat file1 file2 > file3
cat file1 file4 > file5
file3 and file5 are the output files; just list the files you want to concatenate in the order they should appear in the output file.
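Applied to your help-file case, it might look like this (header.html, footer.html and the body file are illustrative names):

```shell
# Demo setup: header, unique body, and common footer fragments
tmp=$(mktemp -d)
cd "$tmp"
printf '<header>\n'        > header.html
printf '<p>page one</p>\n' > body1.html
printf '<footer>\n'        > footer.html

# Arguments to cat appear in the output in command-line order
cat header.html body1.html footer.html > page1.html
cat page1.html
# -> <header>
# -> <p>page one</p>
# -> <footer>
```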