Deduplication solution for multiple offline database backups

I have multiple copies of the same database, each several terabytes in size. I was looking for a solution where I could upload the very first backup and then, instead of uploading the entire backup again when only a few megabytes have changed, upload only the blocks that have changed. I know this process is called deduplication, so I was wondering whether there is software that does this, ideally built into a NAS-management solution like openmediavault.

There is one way to solve this problem. I was suffering from it too, until I found out how to use a batch (".bat") file.
There are mainly two commands:
XCOPY
ROBOCOPY
For your need, (2) ROBOCOPY is the helpful one. Robocopy will back up your specific file or folder, and if only a few megabytes of data have changed it will copy just the changed files rather than the whole backup (note that robocopy compares at the file level, not the block level).
HOW TO DO IT
Open Notepad and type:
robocopy "path of the folder you want to copy" "folder you want to paste it into" /MIR
Then save the Notepad file with a ".bat" extension.
For more help check out
https://www.geeksforgeeks.org/what-is-robocopy-in-windows/
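As a concrete sketch of that command (the paths below are made-up examples and the extra switches are optional; in the .bat file only the robocopy line itself is needed, the # comments are for a PowerShell prompt):

# Made-up example paths - replace with your own backup folder and NAS share.
# /MIR mirrors the source (including deletions), /Z resumes interrupted copies of
# large files, /R and /W limit retries, /LOG keeps a record of what was copied.
robocopy "D:\Backups\MyDatabase" "\\nas\backups\MyDatabase" /MIR /Z /R:2 /W:5 /LOG:"D:\Backups\robocopy-mirror.log"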

Related

How do I create a searchable backup of all filenames of a remote drive which is not always connected?

My problem is the following:
I usually back up files (e.g. pictures) to external hard disc drives and then store them away in safe places, and in the meantime also on a NAS. But I don't want to have them connected and online all the time, for power and security reasons.
If I'm now looking for an old file (e.g. a special jpg from the holiday in April 2004) I would have to connect a few discs and search them for the needed file.
To overcome this problem I usually create a recursive dir-dump into a textfile for the whole disc after backup.
This way I can search the filename in the text-file.
But there is still a problem if I don't know exactly which file name I am looking for. I may know the year and month, and maybe the camera I was using then, but there can be hundreds of files from that month.
Therefore I would like to create a "dummy" backup filesystem with all the filenames from the hard disc but without the actual data behind them. This way I could click through the folders, see the folder and file names, and easily find the file in question.
The question is: how do I create such a filesystem copy with the complete folder structure but only the filenames and not the data?
I'm working on Linux (openSUSE), but I guess this is not a Linux-specific question.
In the meantime I found the solution I was looking for:
Virtual Volume View:
http://vvvapp.sourceforge.net/
Works with Linux, MacOS and Windows!
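If you would rather build such a dummy tree yourself than install a catalog tool, here is a minimal sketch in PowerShell (which also runs on Linux as pwsh); the two paths are placeholders. It recreates the folder structure of the backup disc and creates a zero-byte file for every file name, so the tree can be browsed offline:

# Placeholders - point these at the mounted backup disc and at the catalog location.
$source = "/run/media/me/backup-2004"
$dummy  = "$HOME/catalogs/backup-2004"

New-Item -ItemType Directory -Path $dummy -Force | Out-Null

# Recreate every directory of the backup disc.
Get-ChildItem -Path $source -Recurse -Directory | ForEach-Object {
    $rel = $_.FullName.Substring($source.Length).TrimStart('/', '\')
    New-Item -ItemType Directory -Path (Join-Path $dummy $rel) -Force | Out-Null
}

# Create an empty placeholder file for every real file.
Get-ChildItem -Path $source -Recurse -File | ForEach-Object {
    $rel = $_.FullName.Substring($source.Length).TrimStart('/', '\')
    New-Item -ItemType File -Path (Join-Path $dummy $rel) -Force | Out-Null
}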

How to batch move/delete files on FTP using scripts

I currently use Globalscape's CuteFTP as my FTP client and am in the process of cleaning up old, unused files. I use a script to upload new files to the FTP server, but that is based on a wildcard: it uploads anything I have in a specific folder.
Now I want to do the opposite and delete files but only specific files. I have a list of over 1,000 file names that I need to remove (or ideally move to a designated folder) but I am not sure how to write the script to do this. Could someone help me create a batch relocate script or at least point me in the right direction?
You'll have better luck looking for some FTP client that allows scriptable actions. A quick search pointed out http://winscp.net/eng/docs/scripting which might be helpful.
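One way to script this (a sketch only: it assumes the WinSCP .NET assembly is installed, and the host name, credentials, remote paths and list file below are all made up) is a short PowerShell loop that reads the list of file names and moves each one into a designated archive folder on the server:

# Assumes WinSCPnet.dll from the WinSCP .NET assembly; all names below are examples.
Add-Type -Path "C:\Program Files (x86)\WinSCP\WinSCPnet.dll"

$options = New-Object WinSCP.SessionOptions -Property @{
    Protocol = [WinSCP.Protocol]::Ftp
    HostName = "ftp.example.com"
    UserName = "user"
    Password = "password"
}

$session = New-Object WinSCP.Session
try {
    $session.Open($options)
    # One remote file name per line in files-to-move.txt.
    foreach ($name in Get-Content "C:\cleanup\files-to-move.txt") {
        # MoveFile renames/moves on the server; use $session.RemoveFiles() to delete instead.
        $session.MoveFile("/public_html/$name", "/archive/$name")
    }
}
finally {
    $session.Dispose()
}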

How to delete my temp folder after PowerShell is forced to stop

I have a security question regarding a script that I have. My script creates two temporary CSV files, and after I run the script those files are deleted. However, when the script crashes or the user stops it, those files remain in the folder. How can I make sure that those files get deleted if this happens?
I was thinking about using the Windows temp folder ("$TempDir = [System.IO.Path]::GetTempPath()"), but this will not make any difference, since the temp folder is only renewed at boot time.
Any thoughts/suggestions?
One solution to your problem may be to write the temporary files to a location that only a limited number of people have access to.
Then, if the existence of these files could affect future executions of the script, a good practice is to check for left-overs from previous runs at the start of the script and clean them up before doing anything else.
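A minimal sketch of that pattern (the sub-folder name and CSV names are placeholders): the script keeps its temporary files in its own sub-folder, removes any left-overs from a previous run before starting, and also deletes the folder in a finally block, which PowerShell runs on normal completion and on Ctrl+C (a hard process kill still leaves the files behind, which the clean-up at the top then handles on the next run):

# Placeholder location: a dedicated sub-folder of the user's temp directory.
$workDir = Join-Path ([System.IO.Path]::GetTempPath()) "MyScriptTemp"

# Clean up left-overs from a previous run that crashed or was stopped.
if (Test-Path $workDir) {
    Remove-Item -Path $workDir -Recurse -Force
}
New-Item -ItemType Directory -Path $workDir | Out-Null

try {
    $csv1 = Join-Path $workDir "step1.csv"
    $csv2 = Join-Path $workDir "step2.csv"

    # ... the actual work that writes and reads the two CSV files goes here ...
}
finally {
    Remove-Item -Path $workDir -Recurse -Force -ErrorAction SilentlyContinue
}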

Does copying FoxPro .DBF files affect a running FoxPro application?

I want to copy many big FoxPro .dbf files (more than 80 GB in total) from the production environment to a test machine. Those .dbf files are the business data of an old FoxPro application. The application is running and I cannot stop it.
Can I copy those files? Will it affect the application?
It depends how valid you need the test machine data to be - do you need it exact to test for a problem, or do you just need a copy to play with?
If you need an exact copy then you need to stop the FoxPro application. There is no way round this, because the only way you can ensure that all tables have been written to and closed is by stopping the application.
If you just need a copy to mess around with then I often do this at the prompt using XCOPY with the /Z parameter.
Make sure that the FoxPro application is not being actively used if possible, then if your live data is in c:\mylivedata and you want to copy to c:\mytestdata, at cmd prompt:
xcopy /z /s c:\mylivedata\*.* c:\mytestdata
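If xcopy trips over files that the running application has locked, a robocopy variant of the same copy can retry and log what it skipped. This is only a sketch: the paths are the ones from the answer, and the extension list is an assumption that the tables consist of the usual FoxPro data, memo and index files:

# /S copies subfolders, /Z uses restartable mode, /R and /W limit retries on locked
# files, /LOG records anything that could not be copied. The extension list is an
# assumption (.dbf data, .fpt memo, .cdx index).
robocopy c:\mylivedata c:\mytestdata *.dbf *.fpt *.cdx /S /Z /R:2 /W:5 /LOG:dbf-copy.log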

File Management for Large Quantity of Files

Before I begin, I would like to express my appreciation for all of the insight I've gained on stackoverflow and everyone who contributes. I have a general question about managing large numbers of files. I'm trying to determine my options, if any. Here it goes.
Currently, I have a large number of files and I'm on Windows 7. What I've been doing is categorizing the files by copying them into folders based on what needs to be processed together. So, I have one set that contains the files by date (for long term storage) and another that contains the copies by category (for processing and calculations). Of course this doubles my data each time. Now I'm having to create more than one set of categories; 3 copies to be exact. This is quadrupling my data.
For the processing side of things, the data ends up in Excel. Originally, all the data was brought into Excel, and all organization and filtering was performed there. This was time consuming and not easily maintainable over the long term. Later the workload was shifted to the file system itself, which lightened the work in Excel.
The long and short of it is that this is an extremely inefficient use of disk space. What would be a better way of handling this?
Things that have come to mind:
Overlapping Folders
Is there a way to create a folder that only holds references to files, rather than copies of them? This way I could have two folders reference the same file.
To my understanding, a folder is essentially a file listing references to the files inside of it, but on Windows a file can only be contained in one folder.
Microsoft SQL Server
Not sure what could be done here.
Symbolic Links
I'm not an administrator, so I cannot execute the mklink command.
Also, I'm uncertain about any performance issues with this.
A Junction
Apparently not allowed for individual files, only for folders, in Windows.
Search folders (*.search-ms)
Maybe I'm missing something, but to my knowledge there is no way to specify individual files to be listed.
Hashing the files
Creating hashes for all the files would allow each file to be stored once. But then I have no idea how I would manage the hashes.
XML
Maybe I could use XML files to attach metadata to the files and somehow search using them.
Database File System
I recently came across this concept in my search. Not sure how it would apply to Windows.
I have found a partial solution. First, I discovered that the laptop I'm using is actually logged in as Administrator. As an alternative to options 3 and 4, I have decided to use hard-links, which are part of the NTFS file system. However, due to the large number of files, this is unmanageable using the following command from an elevated command prompt:
mklink /h <source\file> <target\file>
Luckily, Hermann Schinagl has created the Link Shell Extension application for Windows Explorer, along with a very insightful write-up of how junctions, symbolic links, and hard links work. The only reason this is currently a partial solution is a separate problem with Windows Explorer, which I intend to post as a separate question. Thank you, Hermann.
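Since creating the links one at a time does not scale, a small loop can create them in bulk. A minimal sketch (the folder names are placeholders; New-Item supports -ItemType HardLink in PowerShell 5 and later), creating a hard link in a category folder for every file in a date-based archive folder:

# Placeholder folders - the long-term archive copy and one category view of it.
$archive  = "D:\Data\ByDate\2014-03"
$category = "D:\Data\ByCategory\Experiment-A"

New-Item -ItemType Directory -Path $category -Force | Out-Null

# One hard link per file: both names point at the same data on disk, so the
# category view costs no extra space (works only within a single NTFS volume).
Get-ChildItem -Path $archive -File | ForEach-Object {
    New-Item -ItemType HardLink -Path (Join-Path $category $_.Name) -Target $_.FullName | Out-Null
}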
