I'm trying to write a bash script for a homework question where I need to access some files in a source folder, remove all comments from them, and send the uncommented files (or copies) to a destination folder. Here's my current attempt:
#!/bin/bash
destination="$1"
source="$2"
mkdir "$destination"
files=(${$("$source"/*)})
for file in "${files[@]}"
do
grep -E -v "^[[:space:]]*[//]" "$file">> "/$destination/$file"
done
The problem seems to be that I'm not creating the array elements correctly. I want the array to contain the names of the files in the source folder. Can anyone direct me to the correct way of doing that (preferably without solving the whole exercise, as it is homework after all)?
Change this
files=(${$("$source"/*)})
to
files=("$source"/*) # grab name of all files under $source dir and store it in array
You actually don't need the array at all; for a large number of matching files it is also more efficient to iterate over the pattern directly.
for file in "$source"/*; do
I'm trying to create a simple backup script before a system upgrade.
I want to have an array of file paths (BACKUP_DIRS) whose elements then get concatenated into another variable (SOURCE_DIRS) that will be used for the backup with tar.
I am having difficulty joining the array into a single space-separated variable.
#!/bin/bash
BACKUP_DIRS=(
~/.ssh/
~/workspace/
~/Downloads/
)
# Concat paths
SOURCE_DIRS=''
for DIR in "${BACKUP_DIRS[@]}"
do
$SOURCE_DIRS = $SOURCE_DIRS' '$DIR
done
# Backup
tar -czf backup.tar.gz $SOURCE_DIRS
Why create one string SOURCE_DIRS when you could just use
tar -czf backup.tar.gz "${BACKUP_DIRS[@]}"
The array version expands to /my/first entry, /the/second entry, ..., whereas the single-string version would be interpreted as /my/first, entry, /the/second, entry, .... Therefore your old approach probably wouldn't work as expected for paths with spaces in them.
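If you do need the paths joined into a single string for some other reason, a minimal sketch (not part of the answer above) is the [*] expansion, which joins the array elements with a space:
BACKUP_DIRS=(~/.ssh/ ~/workspace/ ~/Downloads/)
SOURCE_DIRS="${BACKUP_DIRS[*]}"   # joined with the first character of IFS (a space by default)
echo "$SOURCE_DIRS"
Note that the joined string has the same weakness: paths containing spaces can no longer be told apart, which is exactly why passing the array straight to tar is preferable.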
I find that the first example of Shake usage demonstrates a pattern that seems error prone:
contents <- readFileLines $ out -<.> "txt"
need contents
cmd "tar -cf" [out] contents
Why do we need need contents when readFileLines reads them and cmd references them? Is this so we can avoid requiring ApplicativeDo?
I think part of the confusion may be the types/semantics of contents. The file out -<.> "txt" contains a list of filenames, so contents is a list of filenames. When we need contents we are requiring the files themselves be created and depended upon, using the filenames to specify which files. When we pass contents on to cmd we are passing the filenames which tar will use to query the files.
So the key point is that readFileLines doesn't read the files in question, it only reads the filenames out of another file. We have to use need to make sure that using the files is fine, and then we actually use the files in cmd. Another way of looking at the three lines is:
1. Which files do we want to operate on?
2. Make sure those files are ready.
3. Use those files.
Does that make sense? There's no relationship with ApplicativeDo - its presence wouldn't help us at all.
I need to check the files of a versioned system. To do that, I need to write a batch program to compare the contents of several folders containing the repositories.
So, my question is: how can I "read" the names of all the subfolders inside a folder, so that I can use these names later to find subfolders with the same names in different repositories?
I suppose I could use DIR to print a list of these names on the screen, but I don't know how to write it to a text file and then read it back. Moreover, I would need to be able to edit this kind of list anyway.
Any suggestions or new ideas to solve this problem?
I gratefully thank whoever answers.
It seems that you can get the subfolders by running a batch file from Perl as follows:
system("start C:\\Temp\\mybatchfile.bat");
Or you might try to pass the command suggested by @Stephan straight to system and handle what it returns.
I have a folder with a few files in it; I like to keep my folder clean of any stray files that can end up in it. Such stray files may include automatically generated backup files or log files, but could be as simple as someone accidentally saving to the wrong folder (my folder).
Rather than have to pick through all this all the time, I would like to know if I can create a batch file that only keeps a number of specified files (by name and location) but deletes anything not on the "list".
[edit] Sorry, when I first saw the question I read bash instead of batch. I'm not deleting this not-so-useful answer since, as was pointed out in the comments, it could be done with Cygwin.
You can list the files, exclude the ones you want to keep with grep, and then submit the rest to rm.
If all the files are in one directory:
ls | grep -v -f ~/.list_of_files_to_exclude | xargs rm
or in a directory tree
find . | grep -v -f ~/.list_of_files_to_exclude | xargs rm
where ~/.list_of_files_to_exclude is a file with the list of patterns to exclude (one per line)
Before testing it, make a backup copy and substitute echo for rm to see if the output is really what you want.
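A dry run along those lines might look like this (same assumed exclude-file path as above):
# nothing is deleted; xargs just echoes the candidate file names
ls | grep -v -f ~/.list_of_files_to_exclude | xargs echo
# once that output looks right, change echo back to rm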
White lists for file survival are an incredibly dangerous concept. I would strongly suggest rethinking that.
If you must do it, might I suggest that you actually implement it thus:
Move ALL files to a backup area (one created per run, such as a directory named after the current date and time).
Use your white list to copy back the files that you wanted to keep, such as with copy c:\backups\2011_04_07_11_52_04\*.cpp c:\original_dir.
That way, you keep all the non-white-listed files in case you screw up (and you will at some point, trust me), and you don't have to worry about negative logic in your batch file (remove all files that aren't any of these types), instead using the simpler option (copy back every file that matches one of those types).
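In shell terms, a bash sketch of that idea (usable under Cygwin as in the other answer; the backup location and the *.cpp pattern are only examples) could be:
backup_dir="$HOME/backups/$(date +%Y_%m_%d_%H_%M_%S)"
mkdir -p "$backup_dir"
mv ./* "$backup_dir"/        # 1. move ALL files out of the folder being cleaned
cp "$backup_dir"/*.cpp .     # 2. copy back only the white-listed kinds of files
# anything not copied back is still sitting in $backup_dir if something went wrong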
I have been cat'ing files in the Terminal until now, but that is time consuming when done a lot. What I want is something like this:
I have a folder with hundreds of files, and I want to effectively cat a few files together.
For example, is there a way to select (in the Finder) four split files:
file.txt.001, file.txt.002, file.txt.003, file.txt.004
.. and then right click on them in the Finder, and just click Merge?
I know that isn't possible out of the box of course, but with an Automator action, droplet, or shell script, is something like that possible to do? Or maybe assign that cat action a keyboard shortcut, so that when it is hit, the files selected in the Finder are automatically merged together into a new file AND placed in the same folder, WITH a name based on the original split files?
In this example, file.001 through file.004 would magically appear in the same folder as a file named fileMerged.txt?
I have like a million of these kinds of split files, so an efficient workflow for this would be a life saver. I'm working on an interactive book, and the publisher gave me this task.
cat * > output.file
works as a sh script. It redirects the concatenated contents of the files into output.file.
* expands to all files in the directory.
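For the split files named in the question, a more targeted variant (the output name is only an example) would be:
# the shell sorts the glob, so the pieces are concatenated in numeric order
cat file.txt.0* > fileMerged.txt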
Judging from your description of the file names, you can automate that very easily with bash, e.g.
PREFIXES=`ls -1 | grep -o "^.*\." | uniq`
for PREFIX in $PREFIXES; do cat ${PREFIX}* > ${PREFIX}.all; done
This will merge all files in one directory that share the same prefix.
ls -1 lists all files in the directory (if it spans multiple directories you can use find instead). grep -o "^.*\." will match everything up to the last dot in the file name (you could also use sed -e 's/.[0-9]*$/./' to remove the trailing digits). uniq will filter out all duplicates. Then you have something like speech1.txt. sound1.txt. in the PREFIXES variable. The next line loops through those prefixes and merges each group of files using the * wildcard.
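As a hedged walk-through with the file names from the question (assuming only those four pieces are in the directory):
# Directory contents:      file.txt.001 file.txt.002 file.txt.003 file.txt.004
# ls -1 | grep -o "^.*\."  -> "file.txt." printed once per file
# ... | uniq               -> "file.txt."
# The loop then runs:      cat file.txt.* > file.txt..all
# (note the doubled dot: the prefix kept by grep already ends in a dot)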