I have a folder containing a lot of files and multiple layers of sub-directories. I would like to zip the folder including its entire contents, but exclude all files that are bigger than a certain size, let's say 1000 MB.
Does anyone have an idea of how to accomplish this task?
Thanks!
On Linux, Mac OS X, or Cygwin, use find to list the files and pipe the file names to zip:
find Folder -type f -not -size +1000M | zip foo --names-stdin
This recursively lists all files in Folder that are not larger than 1000 MB and archives them with zip into a file named foo.zip.
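If any of the file names contain spaces or other characters that confuse the pipe, a variant that hands the names straight to zip through find's -exec should do the same job (just a sketch, reusing the same Folder and foo names):
find Folder -type f -not -size +1000M -exec zip foo {} +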
Related
I wanted to ask for some help extracting a few things.
I have more or less 2 TB of compressed files, and I want to extract only the .txt files that say "Função: finalizar vendas" ("Function: finalize sales") inside.
There are several .rar, .zip and .7z files that contain several folders named after employees, and inside those folders their respective functions, which are written in some of the text files.
The closest I managed to get was in 7-Zip, with a command to extract only the .txt files from the archives, but that leaves many unnecessary text files and I only want the ones that contain the sentence above.
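A rough sketch of one way to narrow this down from a Unix-like shell, assuming the 7z command-line tool is available and the archives sit in the current folder (the extracted folder name is just a placeholder): pull only the .txt files out of each archive, then let grep list the ones that actually contain the phrase.
mkdir -p extracted
for a in *.rar *.zip *.7z; do 7z x "$a" -o"extracted/${a%.*}" -r "*.txt"; done
grep -rl "Função: finalizar vendas" extracted/
grep -rl prints only the paths of the files containing the sentence, so everything else under extracted/ can be ignored or deleted.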
I own a MacBook Air. I'm trying to unzip all of these folders at once instead of double-clicking each zip file in each directory. Is it possible? If so, how? For example, Folder 1 contains Cow.zip and Pig.zip, Folder 2 contains Dragon.zip, Dog.zip, and Cat.zip, and
Folder 3 contains Hen.zip and Flowers.zip. Folder 1, Folder 2, and Folder 3 are all in a folder called Animals.
Try this:
open ./*/*.zip
This will go through all the folders in the current directory and open every .zip file that sits one level down (the equivalent of "double clicking"/unzipping each one).
If you need to unzip lots of files, use this command instead to avoid numerous pop-ups (replace path/to/folder with the path that contains the folders with the zips/more folders):
find path/to/folder -name "*.zip" | while read -r filename; do unzip -o -d "$(dirname "$filename")" "$filename"; done
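If some of those paths contain characters that trip up the read loop, find can also run unzip itself from each zip's own directory; this is a sketch of the same idea using -execdir, with the same path/to/folder placeholder:
find path/to/folder -name "*.zip" -execdir unzip -o "{}" \;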
So I have a very big folder full of more folders which hold files that all have their regular extension, but with ,v appended (like .xml,v).
Is there a quick way or program to go through all of the folders and, whenever it finds a ,v, remove the ,v from the name?
Thanks
EDIT: I am running Windows 7 (64-bit). Also please remember that I am an idiot :P
Use find to list the files ending in ,v and pipe the output to a shell loop that renames them.
${f%%,v} expands to the file name without the ,v suffix.
find . -name '*,v' | while read -r f; do mv "$f" "${f%%,v}"; done
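To preview the renames before committing to them, the same loop can be run with echo in front of mv as a harmless dry run:
find . -name '*,v' | while read -r f; do echo mv "$f" "${f%%,v}"; done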
It's not clear where you have the files (on your computer or on a server)
or what the platform is (Windows or Linux) ...
There are multiple ways to solve it depending on the scenario; for example, a tiny batch file can do it in a flash if the folder is on your local computer running Windows.
Please can someone help with the below?
I have two folders:
C:\FolderA
C:\FolderB
Folder A contains a bunch of files, like an archive.
Folder B contains the same bunch of files with the same names; however, some data within the files may be different.
I want to write a .bat file which uses the diff command to compare each file in Folder A to the file in Folder B with the corresponding name (e.g. update0001 against update0001) and outputs the differences to "C:\Folder C", with each file's differences in a separate text output (e.g. one file called "Error update0001" and another "Error Update0005").
This is a simple way to check the files in the two folders; fc /b does a byte-by-byte comparison and writes all of the results to a single output file rather than one file per pair:
fc /b "c:\folder a\*.*" "c:\folder b\*.*" >"c:\folder c\results.txt"
I searched this site, but all the examples I found were from people who wanted to merge all the files into a single file.
I'm on Win 7 x64.
I have 2 folders with 250 text files in each, and the filenames (of the text files) across both folders are the same.
Example:
Folder A:
file001.txt
file002.txt
file003.txt
Folder B:
file001.txt
file002.txt
file003.txt
The contents of all these files (and across folders) are different. The filenames themselves vary greatly too (I just named them like the above for example purposes).
Now I want to merge the files from Folder A into the files from Folder B.
I want to do this:
Merge FolderA\file001.txt to FolderB\file001.txt
Merge FolderA\file002.txt to FolderB\file002.txt
etc.
So if file001.txt (Folder A) had 500 lines and file001.txt (Folder B) had 300 lines, after the merge file001.txt (Folder B) should have 800 lines.
Right now I'd have to open a file in Folder A, copy everything, go to Folder B, open the matching file, paste, and save. For 250 files that's just too much.
Does anybody know of a way to batch merge text files from different folders as explained above?
I'd just love to select all 250 files in Folder A, copy them, paste them into Folder B and have them all merged into their counterparts... but I guess a solution like that doesn't exist. If you know of a program or batch command that does this, I'm all ears.
From a .bat file, this loop appends each text file in the first folder to the file with the same name in the second (here Folder1 and Folder2 stand for your Folder A and Folder B; use a single % instead of %% if you run the line directly at the command prompt):
for %%a in (Folder1\*.txt) do type "%%~a" >> "Folder2\%%~nxa"