Remove a bcache without erasing data - ubuntu-18.04

The question is simple (I think): I want to destroy my bcache setup, which is a 4 TB HDD with a 16 GB SSD as cache. I am wondering if I can safely remove the bcache and revert the two devices back to normal drives without losing any data. I do have another 4 TB hard drive for backup just in case it does not work. I am pretty new to bcache and I am trying to move the platform to Unraid.

I ended up using wipefs to clear the signatures, then used testdisk to rewrite a partition table to the drive.
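For anyone trying the same thing, here is a minimal sketch of the commands I mean, assuming the bcache device shows up as /dev/bcache0, the 4 TB backing HDD is /dev/sdb and the 16 GB cache SSD is /dev/sdc. The device names are placeholders and this is only a sketch of the route I took, not a guaranteed-safe procedure, so keep the backup drive handy.

# Stop the bcache device so the kernel releases the backing and cache drives
echo 1 | sudo tee /sys/block/bcache0/bcache/stop
# Wipe the bcache signatures; the cache SSD holds no user data of its own
sudo wipefs -a /dev/sdc
sudo wipefs -a /dev/sdb
# Let testdisk scan the backing drive and rewrite the partition table
sudo testdisk /dev/sdb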

Related

Secure delete files on Windows 10

I want to securely delete the contents of my SSD hard disk. I had a look at sdelete but I realized that file names are not deleted or overwritten.
Is there any free tool with which I can achieve the above?
Thank you
I'm not sure whether you want to delete the files permanently or securely delete them from the drive so they cannot be recovered anymore.
So, these are the two ways:
Delete permanently: in Windows Explorer, you can select the file and press Shift + Del on the keyboard. This way the file will not be moved to your Recycle Bin.
Secure delete: when you delete a file from an HDD, the disk sectors are only marked as unused, not really erased. So you need software that overwrites those sectors with "nothing" to prevent other users from recovering your deleted files with recovery tools. One very good program is Eraser, which has a very thorough method for completely erasing a file from the disk, called the Gutmann standard: it overwrites the deleted file 35 times. (Yes, there is software that keeps trying to read the same sectors on the disk several times.)
But since in your case the disk is an SSD, the only way to securely erase the file and really destroy all the data is to reformat it.
An alternative that prevents this situation in the first place is enabling full-drive encryption. This option is already available on Windows 10.
Note: of course, the file that you want to delete can't be in use.
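If you prefer a built-in command-line tool instead of Eraser (this is just a suggestion on my part, not one of the tools above), Windows also ships cipher.exe, which can overwrite the free space of a volume so that already-deleted files can no longer be recovered:

:: Overwrite all free space on the C: volume in three passes (zeros, ones, random data)
:: Existing files are not touched, only the space left behind by deleted files
cipher /w:C:\

Keep in mind that, as the next answer explains, overwriting free space is not a reliable way to erase data on an SSD.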
Erasing an SSD is not that easy, because SSDs are more like mini-computers with their own OS, showing you only some of the data saved in their flash chips. Also, wear-leveling algorithms and overprovisioning make secure deletion at the user level next to impossible.
As far as I know there is only one solution to securely delete data on an SSD (without destroying the SSD):
Perform the Secure Erase command using SSD software, usually provided by the SSD manufacturer itself.
It deletes and recreates the internal encryption key, which makes all the data stored on the SSD unreadable.
Note that the secure erase command is not supported by every SSD.
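If the manufacturer does not offer such a tool, the same ATA Secure Erase command can also be issued from a Linux live system with hdparm. Treat the following only as a sketch: /dev/sdX stands for your SSD, and the drive must support the ATA security feature set and must not be "frozen".

# Check whether the drive supports the ATA security feature set
sudo hdparm -I /dev/sdX
# Set a temporary user password, then issue the secure-erase command
sudo hdparm --user-master u --security-set-pass p /dev/sdX
sudo hdparm --user-master u --security-erase p /dev/sdX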

Approach to properly archiving and backing up data, preventing data loss and corruption

I'm looking for a proper way to archive and back up my data. This data consists of photos, videos, documents and more.
There are two main things I'm afraid might cause data loss or corruption: hard drive failure and bit rot.
I'm looking for a strategy that can ensure my data's safety.
I came up with the following. One hard drive which I will regularly use to store and display data. A second hard drive which will serve as an onsite backup of the first one. And a third hard drive which will serve as an offsite backup. I am however not sure if this is sufficient.
I would prefer to use regular drives, and not network attached storage, however if it's better suited I will adapt.
One of the things I read about that might help with bit rot is ZFS. ZFS does not prevent bit rot but can detect data corruption by using checksums. This would allow me to recover a corrupted file from a different drive and copy it to the corrupted one.
I need at least 2 TB of storage but I'm considering 4 TB to cover potential future needs.
What would be the best way to safely store my data and prevent data loss and corruption?
For your local system plus local backup, I think a RAID configuration / ZFS makes sense because you’re just trying to handle single-disk failures / bit rot, and having a synchronous copy of the data at all times means you won’t lose the data written since your last backup was taken. With two disks ZFS can do a mirror and handles bit rot well, and if you have more disks you may consider using RAIDZ configurations since they use less storage overall to provide single-disk failure recovery. I would recommend using ZFS here over general RAID solutions because it has a better user interface.
For your offsite backup, ZFS could make sense too. If you go that route, periodically use zfs send to copy a snapshot on the source system to the destination system. Debatably, you should use mirroring or RAIDZ on the backup system to protect against bit rot there too.
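As a rough sketch of what that could look like (the pool, dataset, snapshot and host names are placeholders, not a tested recipe):

# Create a two-disk mirrored pool for the primary copy plus on-site redundancy
sudo zpool create tank mirror /dev/sdb /dev/sdc
sudo zfs create tank/data
# Scrub periodically so checksum errors (bit rot) are detected and repaired
sudo zpool scrub tank
# Snapshot the data and replicate it to the offsite machine over SSH
sudo zfs snapshot tank/data@2024-06-01
sudo zfs send tank/data@2024-06-01 | ssh root@offsite-host zfs receive backup/data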
That said — there are a lot of products that will do the offsite backup for you automatically, and if you have an offsite backup, the only advantage of having an on-site backup is faster recovery if you lose your primary. Since we’re just talking about personal documents and photos, taking a little while to re-download them doesn’t seem super high stakes. If you use Dropbox / Google Drive / etc. instead, this will all be automatic and have a nice UI and support people to yell at if anything goes wrong. Also, storage at those companies will have much higher failure tolerances because they use huge numbers of disks (allowing stuff like RAIDZ with tens of parity disks and replicated across multiple geographic locations) and they have security experts to make sure that all your data is not stolen by hackers.
The only downsides are cost, and not being as intimately involved in building the system, if that part is fun for you like it is for me :).

How do I prevent my data from being corrupted on a network drive?

I've been cracking my head this week over a problem I have with my applications at work.
I made a few apps that run on multiple computers and read/write data from a network drive.
The data is usually just a few kilobytes in size and changes every few seconds, so I thought text files were the easiest and fastest way to do it.
The problem is that the data (or text files) often gets corrupted.
While some computers show the correct data, others will show older data (usually from a few minutes before) or don't show anything at all. When checking the contents of the text files on multiple computers they often show different data even though it is the same file on the network drive.
Could it be that the file gets corrupted because multiple instances are writing and reading data at a fast pace to the same file?
The problem is easily fixed by deleting and re-creating the files or moving them to another folder, but it is a real pain to have to do this every now and then.
Setting up an SQL server is not the solution for now because I'm still waiting for permission.
Maybe I should try SQLite or any other type of database for now?
Or maybe there is an easier fix to get rid of this problem; has anyone had this problem before?

Recovering deleted files from full, SSD hard drive. Possible?

My SSD on my MacBook Pro was full. (Too much music and video.) I moved a bunch of stuff back and forth between my external HD and laptop to make room to download new music. A few days later, the SSD is full again AND I realise I accidentally deleted some work stuff that I really need.
Am I screwed, because my understanding is that since the hard drive is full, no recovery of what was deleted before would be possible? :S
Since the disk is full, from a practical point of view your deleted data is gone. All blocks are in use storing data, and there are no unused blocks that could still hold some of the old files.
From a theoretical point of view, SSDs have some spare blocks that they need for wear levelling and that are not visible to the OS. With lots of luck some of your data might still reside in one of those and could be extracted by directly reading the flash chips. But the effort for doing this would be extremely high and the probability that you will find your data is really low.
If the hard drive that contained the deleted work is full then the work has been overwritten (if it was even still there after being deleted).

Memory leak using SQL FileStream

I have an application that uses SQL FILESTREAM to store images. I insert a LOT of images (several million images per day).
After a while, the machine stops responding and seems to be out of memory... Looking at the memory usage of the PC, we don't see any process taking a lot of memory (neither SQL nor our application). We tried to kill our process and it didn't restore our machine... We then killed the SQL services and it did not restore the system either. As a last resort, we even killed all processes (except the system ones) and the memory still remained high (we are looking at the Task Manager's Performance tab). Only a reboot does the job at that point. We have tried on Win7, WinXP and Win2K3 Server, always with the same results.
Unfortunately, this isn't a one-shot deal, it happens every time.
Has anybody seen that kind of behaviour before? Are we doing something wrong using the SQL FILESTREAMS?
You say you insert a lot of images per day. What else do you do with the images? Do you update them? Are there many reads?
Is your file system optimized for FILESTREAMs?
How do you read out the images?
If you do a lot of updates, remember that SQL Server will not modify the filestream object but create a new one and mark the old for deletion by the garbage collector. At some time the GC will trigger and start cleaning up the old mess. The problem with FILESTREAM is that it doesn't log a lot to the transaction log and thus the GC can be seriously delayed. If this is the problem it might be solved by forcing GC more often to maintain responsiveness. This can be done using the CHECKPOINT statement.
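For example, a scheduled task could force that checkpoint with sqlcmd (the server instance and database name below are placeholders):

:: Force a checkpoint so the FILESTREAM garbage collector can reclaim files
:: that have been marked for deletion
sqlcmd -S .\SQLEXPRESS -d ImageDb -Q "CHECKPOINT"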
UPDATE: You shouldn't use FILESTREAM for small files (less than 1 MB). Millions of small files will cause problems for the file system and the Master File Table. Use varbinary instead. See also Designing and Implementing FILESTREAM Storage.
UPDATE 2: If you still insist on using the FILESTREAM for storage (you shouldn't for large amounts of small files), you must at least configure the file system accordingly.
Optimize the file system for large amounts of small files (use these as tips and make sure you understand what they do before you apply them):
Change the Master File Table reservation to maximum in the registry (fsutil.exe behavior set mftzone 4)
Disable 8.3 file names (fsutil.exe behavior set disable8dot3 1)
Disable last access updates (fsutil.exe behavior set disablelastaccess 1)
Reboot and create a new partition
Format the storage volumes using a block size that will fit most of the files (2k or 4k depending on your image files)
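As an example of that last step, the dedicated FILESTREAM volume could be formatted from an elevated command prompt like this (the drive letter and the 4 KB cluster size are placeholders; pick the allocation unit that matches your typical image size):

:: Quick-format the volume with NTFS and a 4 KB allocation unit size
format E: /FS:NTFS /A:4096 /Q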
