I have formatted an encrypted disk containing an LVM with a btrfs filesystem.
All superblocks appear to be destroyed; the btrfs-progs tools can't find the root tree anymore and scalpel, binwalk, foremost & co return only scrap.
The filesystem was on an SSD and mounted with -o compress=lzo.
How screwed am I? Any chances to recover some files?
Is there a plausible way to rebuild the superblock manually? Checking the raw image with xxd gives me not a single readable word.
I managed to decrypt the LV and dd it to an image. What can I do?
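For context, the standard btrfs-progs recovery attempts against the decrypted image look roughly like this (a sketch only; disk.img and the restore directory are placeholders, and the image may need to be attached with losetup if a given btrfs-progs version insists on a block device). They rely on one of the backup superblock copies having survived, which does not seem to be the case here:
# btrfs keeps superblock copies at 64 KiB, 64 MiB and 256 GiB;
# super-recover tries to rebuild the primary from a surviving copy
btrfs rescue super-recover -v disk.img
# look for any remaining tree roots ...
btrfs-find-root disk.img
# ... and pull files off the unmounted image without modifying it
mkdir -p /mnt/restore
btrfs restore -v disk.img /mnt/restore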
Related
When a file is saved to a drive, its contents are written and then indexed. I want to get those indexes and access the raw contents of the files.
Any idea how to do this, especially for ext4 & btrfs?
UPDATE: I want to get the addresses of a file's extents. That address information must be stored somewhere on the disk, and I want to retrieve it in order to map the physical location of the file contents. Any methods to achieve that?
UPDATE: Hello, all! Thanks for your replies. What I want is a function/command that returns a list of extent addresses. debugfs seems to be the command with the most relevant functionality.
It depends on the filesystem you are using. If you are running Linux you can use debugfs to inspect the file within the filesystem.
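For ext2/3/4, a quick sketch of what that looks like (the device and the path inside the filesystem are placeholders):
# show the inode, including its extent/block list
debugfs -R "stat /path/inside/the/fs" /dev/sdaX
# or print just the block numbers
debugfs -R "blocks /path/inside/the/fs" /dev/sdaX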
I have to say that all filesystems are mounted through the VFS, a virtual filesystem layer that provides a simplified interface with the standard operations (open, close, read...). What does that mean? Neither a filesystem nor its contents (files, directories) are opened directly from disk: when you open something, it is moved into main memory (your RAM), you do your operations there, and when you close it, the data goes back to the disk drive.
Now, the question is: can I get an absolute address within a filesystem? Yes: if you open the whole filesystem device, e.g. open("/dev/sdaX", O_RDONLY);, you can seek to addresses relative to the start of that filesystem using lseek in C, for example.
And then... can I get the same address relative to the whole drive? No, because what you opened as a file descriptor is the partition, not the whole drive. Remember /dev/sdaX in UNIX? Partitions and devices can be opened like files because they have a virtual interface running on them.
Your last question: can I read the really raw contents? All files are read as they appear on disk; the only thing that changes is the descriptor used by the OS and some metadata about how the file is indexed, all of which acts as a "file header".
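Once you know a block address, reading the raw bytes back from the partition is straightforward, for example (block size and block number are placeholders):
# read one 4096-byte block at block number 123456 from the partition
dd if=/dev/sdaX bs=4096 skip=123456 count=1 2>/dev/null | xxd | head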
I hope all your questions are answered.
The current solution/workaround is to call these external programs with popen:
filefrag -e /path/to/file
hdparm --fibmap /path/to/filename
Then one simply parses the string output of these programs. It is not a real solution (i.e. one that returns the results at the C/C++ level), but I'll accept it for now.
Sources:
https://unix.stackexchange.com/questions/106802/what-command-do-i-use-to-see-the-start-and-end-block-of-a-file-in-the-file-syste
https://serverfault.com/questions/29886/how-do-i-list-a-files-data-blocks-on-linux
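In case it helps, the parsing step can also stay in the shell. This sketch assumes the output layout of a recent e2fsprogs filefrag, where the physical range is the third colon-separated field of each extent row:
filefrag -e /path/to/file | awk -F: '/^[[:space:]]*[0-9]+:/ { gsub(/[[:space:]]/, "", $3); print $3 }'
That prints one physical begin..end block range per extent; adjust the field index if your filefrag version formats its table differently.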
We are talking about an HDD with a single NTFS partition of about 650 gigabytes.
We've done the following:
deleted the partition scheme, i.e. the first 512 kilobytes of the drive;
overwrote the first 50 gigabytes with \xff during a write test;
restored the partition scheme, i.e. loaded the MBR backup.
The question: How can we restore NTFS in that case?
What we tried to do:
testdisk with deep search, which did not find any NTFS.
Additional info:
NTFS Boot Sector | Master File Table | File System Data | Master File Table Copy
To prevent the MFT from becoming fragmented, NTFS reserves 12.5 percent of the volume by default for the exclusive use of the MFT.
The 50 GB we overwrote falls well inside that MFT zone (12.5% of 650 GB ≈ 81 GB) at the start of the volume, therefore we wiped data that is vital for NTFS recovery.
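One small thing that may still be worth checking: NTFS keeps a backup of the boot sector in the last sector of the volume, which a wipe at the beginning would not have touched (device name is a placeholder, and this recovers only the boot sector, not the MFT):
SECTORS=`blockdev --getsz /dev/sdX1`            # partition size in 512-byte sectors
dd if=/dev/sdX1 bs=512 skip=$((SECTORS - 1)) count=1 of=backup-boot.bin
xxd backup-boot.bin | head -1                    # an intact copy shows the "NTFS" signature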
Recently I downloaded a big (140 GB) tar file, and it comes with an MD5 checksum to verify the download.
I used md5sum filename to generate the checksum and compare it with the original one, but it seems I will have to wait a long time.
Is there a faster way to generate an MD5 checksum for a big file on Fedora?
If you're not using an SSD, your hard drive may only be able to read at about 30 MB/s.
So for a 140,000 MB file, you are already looking at something like an hour and a half just to read the file.
Now add the other processes running on your computer, and I guess your "long time" could be something like 2 hours.
Unless you switch to faster storage (SSD, USB), there's not much you can do.
Now, if md5sum takes 10 hours, I guess it's possible you can find something better.
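It won't speed up the hashing itself, but if pv is installed, piping the file through it shows live throughput and an ETA, so you can at least see whether the disk or md5sum is the bottleneck:
pv bigfile.tar | md5sum                  # shows progress, speed and ETA while hashing
dd if=bigfile.tar of=/dev/null bs=1M     # raw sequential read speed of the drive, for comparison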
I am looking for the most optimized way to transfer large log files from a local path to an NFS path.
The log files keep changing dynamically over time.
What I am currently using is a Java utility which reads the file from the local path and transfers it to the NFS path. But this seems to take a lot of time.
We can't use plain copy commands, as the log files keep being appended with new entries, so that will not work.
What I am looking for is: is there any way, other than using a Java utility, to transfer the contents of the log file from the local path to the NFS path?
Thanks in advance!
If your network speed is higher than the rate at which the log grows, you can just cp src dst.
If the log grows too fast and you can't push that much data, but you only want a snapshot of its current state, I see three options:
Read the whole file into memory, as you do now, and then copy it to the destination. With large log files this may result in a very large memory footprint. Requires a special utility or tmpfs.
Make a local copy of the file, then move this copy to the destination. Quite obvious. Requires enough free space and puts extra pressure on the storage device. If the temporary file is in tmpfs, this is essentially the same as the first method, but doesn't require special tools (it still needs memory and a large enough tmpfs).
Take the current file size and copy only that amount of data, ignoring anything appended during the copy.
E.g.:
dd if=src.file of=/remote/dst.file bs=1 count=`stat -c '%s' src.file`
stat reports the current file size, and dd is then instructed to copy only that many bytes.
Because of the low bs, for better performance you may want to combine it with another dd:
dd status=none if=src.file bs=1 count=`stat -c '%s' src.file` | dd bs=1M of=/remote/dst.file
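A rough alternative that avoids the bs=1 byte-by-byte copy entirely, assuming GNU coreutils: head -c stops after exactly the requested number of bytes but reads in large chunks:
head -c `stat -c '%s' src.file` src.file > /remote/dst.file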
I am trying to automatically determine the size of the ext2 ramdisk filesystem image that a directory will need. What I am currently doing is:
BLOCK_COUNT=`du $RAMDISK_FS_DIR | tail -1 | awk '{print $1}'`
dd if=/dev/zero of=ramdisk.img bs=1024 count=$BLOCK_COUNT
mke2fs -F ramdisk.img -L "ramdisk" -b 1024 -m 0
tune2fs ramdisk.img -i 0
But when I mount ramdisk.img and cp -r $RAMDISK_FS_DIR into it, I get messages like these:
cp: cannot create directory `ramdisk/var/www': No space left on device
I determined that in this specific case increasing BLOCK_COUNT by 48 blocks is exactly what I need for the operation to succeed. I need a way to find this number for arbitrary directory sizes.
Note that my host filesystem is ext4.
I don't think there's a good way to do this in general. Superblocks, inode tables, block bitmaps, and various other filesystem structures will vary in size depending on exactly how big the filesystem is and what's in it. Even the space occupied by the files themselves (as computed by du) may not be the same on the source filesystem as on the destination filesystem.
I wonder why you are trying to make a filesystem with a size exactly equal to its contents. Since this filesystem will always be full after it is built, you can't add anything to it (unless you delete stuff first), which makes me think it might be intended to be read-only. But if it's read-only, why not use a filesystem type like cramfs or squashfs that is specifically meant for this kind of application?
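If it really has to be ext2, one rough workaround is to build the image with a generous margin and then let resize2fs shrink it to its minimum size afterwards (the 10% plus fixed overhead below is just a guess, not exact science):
BLOCK_COUNT=`du -s $RAMDISK_FS_DIR | awk '{print int($1 * 1.1) + 1024}'`
dd if=/dev/zero of=ramdisk.img bs=1024 count=$BLOCK_COUNT
mke2fs -F ramdisk.img -L "ramdisk" -b 1024 -m 0
# ...mount the image, cp -r the directory into it, unmount...
e2fsck -f ramdisk.img          # resize2fs requires a clean filesystem check first
resize2fs -M ramdisk.img       # shrink the filesystem to the smallest possible size
The image file itself stays at its original size; it can then be truncated down to the new filesystem size that resize2fs reports.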