Secure delete files on Windows 10

I want to securely delete the contents of my SSD. I had a look at SDelete, but I realized that file names are not deleted or overwritten.
Is there any free tool with which I can achieve this?
Thank you

I'm not sure whether you want to delete the file permanently, or securely erase it from the drive so it cannot be recovered anymore.
So, these are the two options:
Delete permanently: in Windows Explorer, select the file and press Shift+Del on the keyboard. This way the file will not be moved to the Recycle Bin;
Secure delete: when you delete a file from an HDD, the disk sectors are only marked as unused, not actually erased. So you need software that overwrites those sectors and prevents other users from recovering your deleted files with recovery tools (see the sketch after this answer). One very good program is Eraser, which offers a very thorough method to completely wipe a file from the disk, the "Gutmann standard": it overwrites the deleted file 35 times. Yes, there are tools that keep re-reading the same disk sectors many times to recover data.
But in your case the disk is an SSD, and there the only way to securely erase the file, really destroying all the data, is to wipe and reformat the whole drive.
An alternative to that drastic solution, which prevents the situation in the first place, is to enable full-drive encryption (BitLocker); this option is already available in Windows 10.
Note: of course, the file that you want to delete can't be in use.
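
A rough Python sketch of the overwrite-then-delete idea described above (not how Eraser itself is implemented; the path in the example is just a placeholder):

    import os

    def overwrite_and_delete(path, passes=3):
        # HDD-era technique: overwrite the file contents in place, then delete.
        # Caveats: on an SSD (or a copy-on-write file system) the controller may
        # write the new data to different physical blocks, so the old data can
        # survive, and the file NAME still remains in the file-system metadata.
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                remaining = size
                while remaining > 0:
                    chunk = min(remaining, 1024 * 1024)
                    f.write(os.urandom(chunk))   # random bytes over the old data
                    remaining -= chunk
                f.flush()
                os.fsync(f.fileno())             # force this pass down to the device
        os.remove(path)

    # Example with a hypothetical path:
    # overwrite_and_delete(r"C:\temp\secret.docx")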

Erasing an SSD is not that easy, because SSDs are more like mini-computers with their own OS that shows you only some of the data saved in their flash chips. Also, wear-leveling algorithms and overprovisioning make secure deletion at the user level next to impossible.
As far as I know there is only one solution to securely delete data on an SSD (without destroying the SSD):
Perform the Secure Erase command using SSD software, usually provided by the SSD manufacturer itself.
It deletes and recreates the internal encryption key, which makes all the data stored on the SSD unreadable.
Note that the secure erase command is not supported by every SSD.

Related

Remove a bcache without erasing data

The question is simple (I think): I want to destroy my bcache setup, which is a 4 TB HDD with a 16 GB SSD as cache. I am wondering if I can safely remove the bcache and revert the two devices back to normal drives without losing any data. I do have another 4 TB hard drive for backup just in case it does not work. I am pretty new to bcache and I am trying to move the platform to Unraid.
I ended up using wipefs to clear the signatures, then used testdisk to rewrite a partition table to the drive.

How do I prevent my data from being corrupted on a network drive?

I've been cracking my head this week over a problem I have with my applications at work.
I made a few apps that run on multiple computers and read/write data from a network drive.
The data is usually just a few kilobytes in size and changes every few seconds, so I thought text files were the easiest and fastest way to do it.
The problem is that the data (or the text files) often gets corrupted.
While some computers show the correct data, others will show older data (usually from a few minutes before) or don't show anything at all. When checking the contents of the text files on multiple computers they often show different data even though it is the same file on the network drive.
Could it be that the file gets corrupted because multiple instances are writing and reading data at a fast pace to the same file?
The problem is easily fixed by deleting/re-creating the files or moving them to another folder, but it is a real pain to do this every now and then.
Setting up an SQL server is not the solution for now because I'm still waiting for permission.
Maybe I should try SQLite or any other type of database for now?
Or maybe there is an easier fix to get rid of this problem? Has anyone had this problem before?
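
One common mitigation, sketched below on the assumption that the applications can be modified, is to never rewrite the shared file in place: write to a temporary file and rename it over the target, so readers see either the complete old version or the complete new one. SMB client caching can still show stale data for a while, but half-written files go away. The UNC path in the example is hypothetical.

    import os
    import tempfile

    def write_atomically(path, data: bytes):
        # Write 'data' to 'path' via a temp file + rename, so readers never
        # see a partially written file.
        directory = os.path.dirname(path) or "."
        fd, tmp_path = tempfile.mkstemp(dir=directory, prefix=".tmp-")
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())       # make sure the bytes hit the share
            os.replace(tmp_path, path)     # rename over the old file
        except BaseException:
            os.remove(tmp_path)            # clean up the temp file on failure
            raise

    # Example with a hypothetical UNC path:
    # write_atomically(r"\\server\share\status.txt", b"last update: 12:00")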

Does a file system assure any form of file integrity?

I need to store a very large amount of data on a hard disk. I can format it with basically any file system. That data is essential, so I made a copy of it. However, if some file gets corrupted, I need to know immediately so that I can make a new copy of the only remaining good file.
However, while it is easy to check whether the hard disk as a whole is healthy, the only way I can check that a file is not corrupted is to read it and hash it. For very large amounts of data, however, this is nearly infeasible! I can't afford 10 hours of reading and hashing to check the integrity of all the files. Moreover, continuously reading all the data would keep my hard disk spinning and could therefore damage it. It seemed reasonable to me, however, that some form of check could be implemented automatically by the file system itself.
I know that systems such as RAID exist to ensure file integrity, but those involve multiple hard disks, right?
So my question is: given that I know that my hard disk is alive, how can I know if some data on it somewhere got corrupted? Is there any way to make that data recoverable?
Advanced file systems like ZFS (originally a Solaris file system, but also available on Linux) provide file integrity by storing checksums of the data blocks.
RAID can provide more reliability through redundancy, which one should choose for critical data.
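
If a checksumming file system such as ZFS is not an option on the existing disk, the question's own idea (read and hash) can at least be automated with a stored manifest of per-file hashes, so that silent corruption can be detected later and the good copy restored. This only detects corruption, it does not repair it. A minimal sketch; the manifest file name is arbitrary:

    import hashlib
    import json
    import os

    MANIFEST = "checksums.json"   # arbitrary name for the manifest file

    def sha256_of(path, chunk=1024 * 1024):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    def build_manifest(root):
        # Record size, mtime and SHA-256 for every file under 'root'.
        manifest = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                p = os.path.join(dirpath, name)
                st = os.stat(p)
                manifest[p] = {"size": st.st_size,
                               "mtime": st.st_mtime,
                               "sha256": sha256_of(p)}
        with open(MANIFEST, "w") as f:
            json.dump(manifest, f, indent=2)

    def verify(root):
        # Re-hash the recorded files and report any that changed or vanished.
        with open(MANIFEST) as f:
            manifest = json.load(f)
        for p, rec in manifest.items():
            if not os.path.exists(p):
                print("MISSING:", p)
            elif sha256_of(p) != rec["sha256"]:
                print("CORRUPTED (or modified):", p)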

Does LevelDB support hot backups (or equivalent)?

Currently we are evaluating several key/value data stores, to replace an older ISAM currently in use by our main application (for twenty-something years!) ...
The problem is that our current ISAM doesn't support crash recovery.
So LevelDB seemed OK to us (we are also checking BerkeleyDB, etc.).
But we ran into the question of hot backups, and, given the fact that LevelDB is a library and not a server, it is odd to ask for a 'hot backup', as it would intuitively imply an external backup process.
Perhaps someone would like to propose options (or known solutions) ?
For example:
- Hot backup through an inner thread of the main application?
- Hot backup by merely copying the LevelDB data directory ?
Thanks in advance
You can do a snapshot iteration through a LevelDB, which is probably the best way to make a hot copy (don't forget to close the iterator).
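As an illustration of that snapshot approach, here is a minimal sketch assuming the plyvel Python binding; the paths are placeholders. The snapshot gives a consistent point-in-time view while the application keeps writing:

    import plyvel

    def hot_copy(src_path, dst_path):
        # Copy a live LevelDB into a fresh one via a consistent snapshot.
        src = plyvel.DB(src_path)
        dst = plyvel.DB(dst_path, create_if_missing=True, error_if_exists=True)
        snapshot = src.snapshot()              # point-in-time view of the data
        try:
            for key, value in snapshot.iterator():
                dst.put(key, value)
        finally:
            snapshot.close()                   # don't forget to release it
            dst.close()
            src.close()

    # hot_copy("/var/data/mydb", "/var/backups/mydb-copy")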
To backup a LevelDB via the filesystem I have previously used a script that creates hard links to all the .sst files (which are immutable once written), and normal copies of the log (and MANIFEST, CURRENT etc) files, into a backup directory on the same partition. This is fast since the log files are small compared to the .sst files.
The DB must be closed (by the application) while the backup runs, but the time taken will obviously be much less than the time taken to copy the entire DB to a different partition, or upload to S3 etc. This can be done once the DB is re-opened by the application.
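A sketch of that hard-link script in Python, assuming the backup directory lives on the same partition (hard links require that) and the application has closed the DB, as described above:

    import glob
    import os
    import shutil

    def link_backup(db_dir, backup_dir):
        # Back up a (closed) LevelDB by hard-linking the immutable table files
        # and copying the small mutable ones (log, MANIFEST, CURRENT, ...).
        os.makedirs(backup_dir, exist_ok=True)
        for path in glob.glob(os.path.join(db_dir, "*")):
            name = os.path.basename(path)
            target = os.path.join(backup_dir, name)
            if name.endswith((".sst", ".ldb")):   # .ldb in newer LevelDB builds
                os.link(path, target)             # instant, no data copied
            else:
                shutil.copy2(path, target)        # log, MANIFEST, CURRENT, etc.
        # Re-open the DB in the application now; copying backup_dir to another
        # partition or uploading it to S3 can happen afterwards.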
LMDB is an embedded key-value store, but unlike LevelDB it supports multi-process concurrency, so you can use an external backup process. The mdb_copy utility will make an atomic hot backup of a database; your app doesn't need to stop or do anything special while the backup runs. http://symas.com/mdb/
I am coming a bit late to this question, but there are forks of LevelDB that offer good live backup capability, such as HyperLevelDB and RocksDB. Both of these are available as npm modules, i.e. level-hyper and level-rocksdb. For more discussion, see How to backup RocksDB? and HyperDex Question.

Memory leak using SQL FileStream

I have an application that uses SQL FILESTREAM to store images. I insert a LOT of images (several million images per day).
After a while, the machine stops responding and seems to be out of memory... Looking at the memory usage of the PC, we don't see any process taking a lot of memory (neither SQL Server nor our application). We tried to kill our process and it didn't restore the machine... We then killed the SQL services and that did not restore the system either. As a last resort, we even killed all processes (except the system ones) and the memory still remained high (we are looking at the Task Manager's Performance tab). Only a reboot does the job at that point. We have tried on Win7, WinXP and Win2K3 Server, always with the same results.
Unfortunately, this isn't a one-shot deal, it happens every time.
Has anybody seen that kind of behaviour before? Are we doing something wrong using the SQL FILESTREAMS?
You say you insert a lot of images per day. What else do you do with the images? Do you update them? Do you do many reads?
Is your file system optimized for FILESTREAMs?
How do you read out the images?
If you do a lot of updates, remember that SQL Server will not modify the filestream object but create a new one and mark the old for deletion by the garbage collector. At some time the GC will trigger and start cleaning up the old mess. The problem with FILESTREAM is that it doesn't log a lot to the transaction log and thus the GC can be seriously delayed. If this is the problem it might be solved by forcing GC more often to maintain responsiveness. This can be done using the CHECKPOINT statement.
UPDATE: You shouldn't use FILESTREAM for small files (less than 1 MB). Millions of small files will cause problems for the file system and the Master File Table. Use varbinary instead. See also Designing and Implementing FILESTREAM Storage.
UPDATE 2: If you still insist on using FILESTREAM for storage (you shouldn't, for large amounts of small files), you must at least configure the file system accordingly.
Optimize the file system for large amounts of small files (use these as tips and make sure you understand what they do before you apply them):
Change the Master File Table reservation to maximum in the registry (fsutil.exe behavior set mftzone 4)
Disable 8.3 file names (fsutil.exe behavior set disable8dot3 1)
Disable last access updates (fsutil.exe behavior set disablelastaccess 1)
Reboot and create a new partition
Format the storage volumes using a block size that will fit most of the files (2 KB or 4 KB depending on your image files).
