My SSD on my MacBook Pro was full (too much music and video). I moved a bunch of stuff back and forth between my external HD and laptop to make room to download new music. A few days later, the SSD is full again AND I realise I accidentally deleted some work stuff that I really need.
Am I screwed, because my understanding is that since the hard drive is full, no recovery of what was deleted before would be possible? :S
Since the disk is full, from a practical point of view your deleted data is gone. All blocks are in use storing current data, and there are no unused blocks left that could still hold some of your old files.
From a theoretical point of view, SSDs have some spare blocks that they need for wear levelling and that are not visible to the OS. With lots of luck some of your data might still reside in one of those and could be extracted by directly reading the flash chips. But the effort for doing this would be extremely high and the probability that you will find your data is really low.
If the hard drive that contained the deleted work is full then the work has been overwritten (if it was even still there after being deleted).
Related
I want to securely delete the contents of my SSD. I had a look at sdelete but I realized that file names are not deleted or overwritten.
Is there any free tool with which I can achieve the above?
Thank you
I'm not sure whether you want to just delete the file permanently, or securely delete it from the drive so that it cannot be recovered anymore.
So, these are the two ways:
Delete permanently: in Windows Explorer, select the file and press Shift + Del on the keyboard. This way the file will not be moved to your Recycle Bin;
Secure delete: when you delete a file from an HDD, the sectors on the disk are only marked as unused, not really erased. So you need software that overwrites those sectors and prevents other users from recovering your deleted files with recovery tools. One very good program is Eraser, which has a very thorough method for completely erasing a file from the disk, the "Gutmann standard": it overwrites the deleted file 35 times. Yes, there are recovery tools that keep trying to read the same sectors on the disk several times.
But since in your case the disk is an SSD, the only way to securely erase the file, really destroying all the data, is to reformat the drive.
An alternative that prevents this situation in the first place is to enable full-drive encryption. This option is already available on Windows 10.
Note: of course, the file that you want to delete can't be in use.
Erasing an SSD is not that easy, because SSDs are more like mini-computers with their own OS, showing you only some of the data saved in their flash chips. Also, wear-leveling algorithms and overprovisioning make secure deletion at the user level next to impossible.
As far as I know there is only one solution to securely delete data on an SSD (without destroying the SSD):
Perform the Secure Erase command using SSD software, usually provided by the SSD manufacturer itself.
It deletes and recreates the internal encryption key, which makes all the data stored on the SSD unreadable.
Note that the secure erase command is not supported by every SSD.
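On Linux, a rough sketch of issuing Secure Erase with hdparm looks like the following; /dev/sdX is a placeholder, the drive must not be in the "frozen" state, and NVMe drives use the manufacturer's tool or nvme-cli instead:

    # Check that the drive supports the security feature set and is not frozen
    hdparm -I /dev/sdX | grep -i frozen

    # Set a temporary security password, then issue the erase
    # (this wipes the WHOLE drive, not a single file)
    hdparm --user-master u --security-set-pass p /dev/sdX
    hdparm --user-master u --security-erase p /dev/sdX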
The question is simple (I think): I want to destroy my bcache setup, which is a 4 TB HDD with a 16 GB SSD as cache. I am wondering if I can safely remove bcache and revert the two devices back to normal drives without losing any data. I do have another 4 TB hard drive for backup just in case it does not work. I am pretty new to bcache and I am trying to move the platform to Unraid.
I ended up using wipefs to clear the signatures, then used testdisk to rewrite a partition table to the drive.
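For anyone attempting the same, a rough sketch of the kind of commands involved might look like this; it assumes a single bcache device registered as bcache0, the device names are placeholders, and you should double-check the sysfs paths on your own system before running anything:

    # Detach the backing device from the cache set (dirty data is flushed first)
    echo 1 > /sys/block/bcache0/bcache/detach
    cat /sys/block/bcache0/bcache/state      # wait until it reports "no cache"

    # Stop the bcache device so the kernel releases the underlying disks
    echo 1 > /sys/block/bcache0/bcache/stop

    # Clear the bcache signatures (here /dev/sdY is the 16 GB cache SSD
    # and /dev/sdX the 4 TB backing disk -- placeholder names)
    wipefs -a /dev/sdY
    wipefs -a /dev/sdX

    # Recreate or recover the partition table on the data disk interactively
    testdisk /dev/sdX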
I'm looking for a proper way to archive and back up my data. This data consists of photos, videos, documents and more.
There are two main things I'm afraid might cause data loss or corruption: hard drive failure and bit rot.
I'm looking for a strategy that can ensure my data's safety.
I came up with the following: one hard drive which I will regularly use to store and access data, a second hard drive which will serve as an onsite backup of the first one, and a third hard drive which will serve as an offsite backup. I am, however, not sure if this is sufficient.
I would prefer to use regular drives, and not network attached storage, however if it's better suited I will adapt.
One of the things I read about that might help with bit rot is ZFS. ZFS does not prevent bit rot but can detect data corruption by using checksums. This would allow me to recover a corrupted file from a different drive and copy it to the corrupted one.
I need at least 2 TB of storage but I'm considering 4 TB to cover potential future needs.
What would be the best way to safely store my data and prevent data loss and corruption?
For your local system plus local backup, I think a RAID configuration / ZFS makes sense because you're just trying to handle single-disk failures / bit rot, and having a synchronous copy of the data at all times means you won't lose the data written since your last backup was taken. With two disks ZFS can do a mirror and handles bit rot well, and if you have more disks you may consider using RAIDZ configurations since they use less storage overall to provide single-disk failure recovery. I would recommend using ZFS here over general RAID solutions because it has a better user interface.
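As a rough sketch (pool, dataset and device names are placeholders), setting up such a mirror and letting it check itself looks like this:

    # Create a two-disk mirrored pool and a dataset for the archive
    zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
    zfs create tank/data

    # A scrub reads every block, verifies its checksum and repairs it from the
    # other mirror half if it doesn't match -- run it periodically (e.g. via cron)
    zpool scrub tank
    zpool status -v tank    # shows scrub progress and any files with errors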
For your offsite backup, ZFS could make sense too. If you go that route, periodically use zfs send to copy a snapshot on the source system to the destination system. Debatably, you should use mirroring or RAIDZ on the backup system to protect against bit rot there too.
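A minimal sketch of that replication, assuming a dataset called tank/data and a reachable machine named backuphost (both placeholders):

    # Initial full copy
    zfs snapshot tank/data@2024-06-01
    zfs send tank/data@2024-06-01 | ssh backuphost zfs receive backup/data

    # Later runs only send the delta between the previous and the new snapshot
    zfs snapshot tank/data@2024-07-01
    zfs send -i tank/data@2024-06-01 tank/data@2024-07-01 | ssh backuphost zfs receive backup/data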
That said — there are a lot of products that will do the offsite backup for you automatically, and if you have an offsite backup, the only advantage of having an on-site backup is faster recovery if you lose your primary. Since we’re just talking about personal documents and photos, taking a little while to re-download them doesn’t seem super high stakes. If you use Dropbox / Google Drive / etc. instead, this will all be automatic and have a nice UI and support people to yell at if anything goes wrong. Also, storage at those companies will have much higher failure tolerances because they use huge numbers of disks (allowing stuff like RAIDZ with tens of parity disks and replicated across multiple geographic locations) and they have security experts to make sure that all your data is not stolen by hackers.
The only downsides are cost, and not being as intimately involved in building the system, if that part is fun for you like it is for me :).
I need to store a very large amount of data on a hard disk. I can format it in basically any kind of format. That data is fundamental, therefore I made a copy of it. However, if some file gets corrupted, I need to know immediately so that I can make a new copy from the only remaining good copy.
However, while it is easy to check whether the hard disk as a whole is safe and sound, the only way I can check that a file is not corrupted is to read it and hash it. For very large amounts of data, however, this is nearly infeasible! I can't afford 10 hours of reading and hashing to check the integrity of all the files. Moreover, continuously reading all the data would keep my hard disk spinning and could therefore damage it. It seemed reasonable to me, though, that some form of check could be implemented automatically by the file system itself.
I know that systems such as RAID exist to ensure file integrity, but those involve more hard disks, right?
So my question is: given that I know that my hard disk is alive, how can I know if some data on it somewhere got corrupted? Is there any way to make that data recoverable?
Advanced file systems like ZFS (originally a Solaris file system, but available on Linux) provide file integrity by storing checksums of the data blocks.
RAID can provide more reliability through the redundancy one chooses for critical data.
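If switching file systems is not an option, a minimal sketch of the manual checksum-manifest approach the question describes might look like this (paths are just examples):

    # Build a manifest of SHA-256 hashes for every file in the archive
    cd /mnt/archive
    find . -type f -exec sha256sum {} + > ~/archive-manifest.sha256

    # Later, re-check the archive (or the backup copy) and list only mismatches
    sha256sum --check --quiet ~/archive-manifest.sha256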
Recently, I read an article entitled "SATA vs. SCSI reliability". It mostly discusses the very high rate of bit flipping in consumer SATA drives and concludes "A 56% chance that you can't read all the data from a particular disk now". Even Raid-5 can't save us, as it must be constantly scanned for problems, and if a disk does die you are pretty much guaranteed to have some flipped bits on your rebuilt file system.
Considerations:
I've heard great things about Sun's ZFS with Raid-Z but the Linux and BSD implementations are still experimental. I'm not sure it's ready for prime time yet.
I've also read quite a bit about the Par2 file format. It seems like storing some extra % parity along with each file would allow you to recover from most problems. However, I am not aware of a file system that does this internally and it seems like it could be hard to manage the separate files.
Backups (Edit):
I understand that backups are paramount. However, without some kind of check in place you could easily be sending bad data to people without even knowing it. Also figuring out which backup has a good copy of that data could be difficult.
For instance, you have a Raid-5 array running for a year and you find a corrupted file. Now you have to go back checking your backups until you find a good copy. Ideally you would go to the first backup that included the file but that may be difficult to figure out, especially if the file has been edited many times. Even worse, consider if that file was appended to or edited after the corruption occurred. That alone is reason enough for block-level parity such as Par2.
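For reference, here is a minimal sketch of the par2 workflow mentioned above, using par2cmdline; the file name and the 10% redundancy level are just examples:

    # Create parity data equal to roughly 10% of the file's size
    par2 create -r10 important.mkv.par2 important.mkv

    # Detect silent corruption, and repair it from the parity blocks if found
    par2 verify important.mkv.par2
    par2 repair important.mkv.par2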
That article significantly exaggerates the problem by misunderstanding the source. It assumes that data loss events are independent, i.e. that if I take a thousand disks and get five hundred errors, that's likely to be one each on five hundred of the disks. But actually, as anyone who has had disk trouble knows, it's probably five hundred errors on one disk (still a tiny fraction of the disk's total capacity), and the other nine hundred and ninety-nine were fine. Thus, in practice it's not that there's a 56% chance that you can't read all of your disk; rather, it's probably more like 1% or less, but most of the people in that 1% will find they've lost dozens or hundreds of sectors even though the disk as a whole hasn't failed.
Sure enough, practical experiments reflect this understanding, not the one offered in the article.
Basically this is an example of "Chinese whispers". The article linked here refers to another article, which in turn refers indirectly to a published paper. The paper says that of course these events are not independent but that vital fact disappears on the transition to easily digested blog format.
ZFS is a start. Many storage vendors also offer drives formatted with 520-byte sectors, where the extra bytes carry data-protection information. However, this only protects your data once it enters the storage fabric. If it was corrupted at the host level, then you are hosed anyway.
On the horizon are some promising standards-based solutions to this very problem: end-to-end data protection.
Consider T10 DIF (Data Integrity Field). This is an emerging standard (it was drafted 5 years ago) and a new technology, but it has the lofty goal of solving the problem of data corruption.
A 56% chance that I can't read something? I doubt it. I run a mix of RAID 5 and other goodies along with good backup practices, and with RAID 5 and a hot spare I haven't ever had data loss, so I'm not sure what all the fuss is about. If you're storing parity information yourself ... well, you're essentially building a RAID system in software: a disk failure in RAID 5 is recovered with a parity calculation that reconstructs the lost disk's data, so that capability is already there.
Run RAID, back up your data, and you'll be fine :)
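For what it's worth, a rough sketch of that kind of setup on Linux with mdadm; the device names are placeholders:

    # Three-disk software RAID 5 with one hot spare
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd --spare-devices=1 /dev/sde
    mdadm --detail /dev/md0      # check array state and the spare

    # Kick off a consistency check so flipped bits are found early (cron-able)
    echo check > /sys/block/md0/md/sync_action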