How to extract a single file from a tar which is itself inside another tar?

How can we extract a file (let's call it c.txt) from a tar (B.tar) which is itself inside another tar (A.tar), without fully extracting either of the parent tar files?
The structure is:
A.tar -> B.tar -> c.txt
Here A.tar is 300GB
B.tar is 200GB
c.txt is only 2GB
I found a similar question:
How to extract a tar file inside a tar file
The output there may look the same, but in that approach the second file (B.tar) is extracted in its entirety and c.txt is then extracted from it, which takes a lot of time. What I want is to extract only c.txt.
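For what it's worth, GNU tar can stream the inner archive to stdout with -O instead of writing it to disk, and a second tar can read that stream and pull out a single member. A sketch, assuming GNU tar and that B.tar and c.txt are the exact member paths shown by tar -tf:
tar -xOf A.tar B.tar | tar -xf - c.txt
This still reads through the archives sequentially, so it takes time proportional to the data scanned, but it never writes the 200GB B.tar to disk.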

Related

How do I get cmd to 'combine' multiple files from a Windows backup?

I am trying to extract a .pst file from a Windows backup. To do this I need to copy each 'partial' file from the backup zips and then combine them into one file. I have a command from this post that will copy them out and combine them, but the problem is that cmd is not doing it in numerical order, so the file is not complete. I am using this script to put the files in order:
Echo y | for /F "tokens=*" %A in (filenamesinorder.txt) do copy /b %A "c:\pstcombiner\combined.pst"
But all this does is copy each individual file, overwriting the previous one. I get that that's what the command does, but I need it to combine all the files into one. What am I doing wrong?
From the Microsoft documentation for the copy command:
To append files, specify a single file for destination, but multiple files for source (use wildcard characters or file1+file2+file3 format).
You'll need to construct the full source text in your for loop and do the copy afterwards; a sketch of that approach is below.
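A minimal batch sketch, assuming filenamesinorder.txt holds one bare file name per line (quote each part if the names contain spaces):
@echo off
setlocal EnableDelayedExpansion
set "SRC="
rem Build a file1+file2+file3 source string in list order
for /F "usebackq tokens=*" %%A in ("filenamesinorder.txt") do (
    if defined SRC (set "SRC=!SRC!+%%A") else (set "SRC=%%A")
)
rem A single binary copy then appends every part into the target
copy /b !SRC! "c:\pstcombiner\combined.pst"
Very long lists can exceed the command-line length limit, though; the type approach below sidesteps that.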
Instead of concatenating with copy, you could merge the files using the type command:
create an empty target file
copy nul target.ext > nul
then loop the type command to merge the files to the end of the target file
type fileN.ext >> target.ext
where fileN.ext is file 1, 2, 3, ..., n
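Put together for the question's setup, the whole merge could look like this (a sketch, reusing combined.pst and filenamesinorder.txt from the question; %%A becomes %A at an interactive prompt):
copy nul combined.pst > nul
for /F "usebackq tokens=*" %%A in ("filenamesinorder.txt") do type "%%A" >> combined.pst
Because each part is appended separately, this also avoids the command-line length limit that a single copy /b can hit.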

How to compare multiple files (by content, not date and time) if they are exactly the same

I need to compare two folders to check whether their files are exactly the same and the files in one folder are not corrupt. I tried Total Commander, but it works only with one file. I tried Beyond Compare and it didn't give me any results :/ Any ideas?
Try this command in CMD
FC /B pathname1 pathname2
For more parameters, check http://ss64.com/nt/fc.html
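FC compares only one pair of files at a time; to sweep two folders file by file, a loop can call it once per file. A sketch for use in a batch file, with C:\folder1 and C:\folder2 as hypothetical space-free paths (files present only in folder2 are not checked):
for %%F in (C:\folder1\*) do fc /b "%%F" "C:\folder2\%%~nxF" > nul && echo SAME: %%~nxF || echo DIFF: %%~nxF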
You can get the MD5 checksums of all the files in both directories and compare them using a graphical text editor. The checksum looks purely at the contents of the files, not their timestamps.
Download FCIV from Microsoft for free.
Then run the following in CMD
FCIV -wp folder1 > f1.txt
FCIV -wp folder2 > f2.txt
notepad f1.txt
notepad f2.txt
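Rather than comparing the listings by eye, fc can also diff the two checksum files directly (if the directories enumerate in a different order, sort f1.txt and f2.txt first):
fc f1.txt f2.txt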

IBM i PASE tar - Excluding files or directories

I want to exclude some directories from an archive using the PASE tar command on an IBM i, but the [-X Exclude File] option doesn't seem to work for me.
I tried using an exclude file that just contained a file name (/home/JSMITH/data/sub2/file2.txt) and then one that just contained a pattern (*.txt), and neither archive operation omitted anything.
Given the following directory structure:
/home/JSMITH/data
/home/JSMITH/data/sub1
/home/JSMITH/data/sub1/file1.txt
/home/JSMITH/data/sub2
/home/JSMITH/data/sub2/file2.txt
/home/JSMITH/data/sub3
/home/JSMITH/data/sub3/file3.txt
and the following command:
/qopensys/usr/bin/tar -cvf /home/JSMITH/test.tar -X /home/JSMITH/excludes.txt /home/JSMITH/data
The entire /home/JSMITH/data structure gets included in the resulting archive.
I have tried using the /home/JSMITH/excludes.txt file with either of these contents:
/home/JSMITH/data/sub2/file2.txt
or
*.txt
How does one exclude files/directories/patterns from the IBM i PASE tar command?
You need the full path in the exclude file.
I created mine via ls /home/JSMITH/data/*.txt > /home/JSMITH/excludes.txt
If you're doing it by hand, make certain you haven't got any trailing whitespace.
Also, I used Notepad++ when I created mine by hand. I found that the green-screen EDTF editor created an EBCDIC file with CRLF line endings, and that didn't exclude anything for me.
IBM i 7.1
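One caveat with the ls approach: given the directory structure in the question, the glob /home/JSMITH/data/*.txt matches only files directly under data, not those in sub1, sub2 and sub3. find can build full paths at any depth; a sketch using the question's paths:
find /home/JSMITH/data -name '*.txt' > /home/JSMITH/excludes.txt
/qopensys/usr/bin/tar -cvf /home/JSMITH/test.tar -X /home/JSMITH/excludes.txt /home/JSMITH/data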

Recursively converting .jpeg images

I am trying to convert a large number of .jpg files to the medical image format .dcm. There are many folders (with no subfolders) within a directory called C:\dicom. Each of these contains a patient-specific .jpg called "REF.jpg" that needs to be converted to a file called "request.dcm" using a small utility called img2dcm located in C:\.
Each folder also contains a patient-specific file called "IMG.dcm" used as a template for the conversion. Patient-specific metadata is inserted from the template into the newly created request.dcm file.
For an individual folder called "foldername" containing the "REF.jpg" file and the template file "IMG.dcm", the following command line will create a usable "request.dcm" file in the same folder:
img2dcm foldername\REF.jpg foldername\request.dcm -stf foldername\IMG.dcm -k "SeriesDescription"=REQUEST -k "Modality"=OT -k "SeriesNumber"=200 -k "ImageNumber"=1
What I need to do is create a batch file to loop this command through every folder in the directory, all differently named but all containing the required files. It is crucial the newly created file be placed within its parent folder.
Any help would be greatly appreciated for what is a fairly daunting project for someone without an IT or computing background.
@echo off
c:
cd \dicom
rem Convert the REF.jpg in every folder under C:\dicom in place
for /d %%i in (*) do (
c:\img2dcm c:\dicom\%%i\REF.jpg c:\dicom\%%i\request.dcm -stf c:\dicom\%%i\IMG.dcm -k "SeriesDescription"=REQUEST -k "Modality"=OT -k "SeriesNumber"=200 -k "ImageNumber"=1
)
should accomplish this. (The call uses the full c:\img2dcm path because the script changes into C:\dicom, and C:\ is not necessarily on the search path.)
I'd suggest you copy a portion of your data to a test subdirectory (say C:\DUMMY) and run that routine, having changed dicom throughout to dummy.

Unix combine a bunch of text files without taking extra disk space?

I have a bunch of text files I need to temporarily concatenate so that I can pass a single file (representing all of them) to some post-processing script.
Currently I am doing:
zcat *.rpt.gz > tempbigfile.txt
However, this tempbigfile.txt is 3.3GB, while the original size of the folder with all the *.rpt.gz files is only 646MB! So I'm temporarily quadrupling the disk space used. Of course, once I have called myscript.pl with tempbigfile.txt, I can rm the tempbigfile.txt.
Is there a solution to not create such a huge file and still get all those files together in one file object?
You're decompressing the files with zcat, so you can compress the text once more with gzip:
zcat *.rpt.gz | gzip > tempbigfile.txt
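Note that the merged file is then gzip-compressed, so myscript.pl has to be able to read gzipped input. If the script simply reads its argument from start to finish, process substitution (bash/ksh) avoids writing any temporary file at all; a sketch, assuming myscript.pl accepts a file name:
myscript.pl <(zcat *.rpt.gz)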
