I want to copy many big FoxPro DBF files (more than 80 GB in total) from a production environment to a test machine. Those DBF files are the business data of an old FoxPro application. This application is running and I cannot stop it.
Can I copy those files? Will it affect the application?
It depends on how valid you need the test machine data to be: do you need an exact copy to test for a problem, or do you just need a copy to play with?
If you need an exact copy then you need to stop the FoxPro application. There is no way round this, because the only way you can ensure that all tables have been written to and closed is by stopping the application.
If you just need a copy to mess around with, then I often do this at the command prompt using XCOPY with the /Z (restartable mode) parameter.
Make sure that the FoxPro application is not being actively used if possible. Then, if your live data is in c:\mylivedata and you want to copy it to c:\mytestdata, run this at the cmd prompt:
xcopy /z /s c:\mylivedata\*.* c:\mytestdata
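If some of the tables are open and locked by the running application, a variant like this may get further (a sketch; /c continues past errors instead of aborting, /h includes hidden and system files):
xcopy /z /s /c /h c:\mylivedata\*.* c:\mytestdata
Any file that XCOPY skips because it is locked will simply be missing from the test copy, so check the output for errors.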
I work on my WPF project daily in Visual Studio 2022 and back up my project folder in the evening. I can't help but notice that the number of files in there just keeps growing (by hundreds every day), even when I'm not adding anything new to the project. It's causing my backup time, and the time it takes to transfer my files between workstations, to get longer and longer.
What could be generating these extra files and is there a way to minimize it?
I've had a look around the net for answers, but to no avail.
Thanks for all and any advice you can give me.
The best way to solve your problem is to use a Git repository (for example on GitHub).
Go to the Git menu in Visual Studio, select Create Git Repository, push your solution to the created repository, and commit the changes whenever you need to.
In addition to just backing up, this will allow you to easily and conveniently compare different versions.
If for some reason this is inconvenient for you, then pay attention to the .gitignore file in the repository. Files and folders matching the patterns in that file do not need to be backed up.
Here is a link to the ignore file from my Repository: https://github.com/EldHasp/CyberForumMyCycleRepos/blob/master/.gitignore
In the most minimal variant, all folders whose names begin with a dot, along with the bin, obj, and packages folders, do not need to be saved.
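A minimal sketch of such a .gitignore, using the standard Visual Studio entries (adjust to your own solution):
.vs/
bin/
obj/
packages/
*.user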
Try the following:
Try compiling in release mode instead of debug mode.
Try clearing the obj and bin folders
Turn off the automatic generation of the XML documentation file (see the snippet below).
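For the XML documentation point: in an SDK-style project this is controlled by a single MSBuild property. A sketch to adapt to your .csproj (assuming an SDK-style project):
<PropertyGroup>
  <GenerateDocumentationFile>false</GenerateDocumentationFile>
</PropertyGroup>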
Update:
My guess is that you are running your program in debug mode and your program is causing some files to grow.
There are the following methods to clean up the obj folder:
Right-click the project and select Clean, then rebuild.
Add the following command to the pre-build event so that the obj folder is cleaned up before each build; note that this operation deletes the previous obj folder entirely.
rmdir /s /q "$(ProjectDir)obj"
I have multiple copies of the same database with a size of several terabytes. I was looking for a solution where I could upload the very first backup and then, instead of uploading the same entire backup with only a few megabytes of changes, upload only the blocks that have changed. I know this process is called deduplication, so I was wondering if there is software that does that, possibly built into a NAS-management solution like openmediavault.
There is one way to approach this problem. I was also suffering from it, but I found that a batch file can handle the copying. There are mainly two commands:
XCOPY
ROBOCOPY
For your need here, robocopy will be helpful. Robocopy will back up your specific file or folder, and even if you changed only a few megabytes of data, it will copy only what is new: files that are unchanged since the last run are skipped. (Note that robocopy works at the file level, not the block level, so a changed file is re-copied in full; true block-level deduplication needs dedicated software.)
HOW TO DO IT
Open Notepad and type:
robocopy "<source folder>" "<destination folder>" /MIR
Then save the file with a ".bat" extension. Note that /MIR mirrors the source, which includes deleting files in the destination that no longer exist in the source.
For more help, check out:
https://www.geeksforgeeks.org/what-is-robocopy-in-windows/
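A slightly fuller sketch of such a batch file (the paths and log location are hypothetical; /R and /W limit retries on locked files, /LOG writes a report you can review):
robocopy "D:\Backups\Database" "\\nas\share\Database" /MIR /R:3 /W:5 /LOG:"C:\Logs\backup.log"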
So, I have a script that uses BULK INSERT to pull text from files and insert their contents into a table. I am loading from text files because the text may be large, and this way I do not need to worry about escaping. I have the script working locally with a hard-coded path, e.g. ('C:\Users\me\Files\File.txt'). But I need to run this script in a post-deployment script, and the text files that I am reading from are in the same database project. I cannot use a hard-coded path, as the directory may differ depending on the environment the project is published to. Is there a way to get a relative path, or to get the solution/project's directory after deployment?
So: because BULK INSERT needs an absolute path, scripts have no concept of relative paths, and this will be deployed to multiple environments where I do not know the absolute path, I decided to use PowerShell together with BULK INSERT. On the database project's pre-build, I call my PowerShell script. That script can figure out its current directory and builds a SQL file that is called from the post-deployment script. In this SQL file, I BULK INSERT using the current directory.
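A minimal sketch of the same idea using a plain cmd pre-build event instead of PowerShell (the table name, folder layout, and generated file name are hypothetical; $(ProjectDir) is the MSBuild macro for the project's absolute path):
echo BULK INSERT dbo.FileContents FROM '$(ProjectDir)Files\File.txt'; > "$(ProjectDir)Scripts\BulkInsertGenerated.sql"
The post-deployment script can then pull the generated file in with the SQLCMD :r include directive.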
Why not use BCP: http://msdn.microsoft.com/en-us/library/ms162802.aspx ? It can handle relative paths. And if you are able to call PowerShell, I don't see why you wouldn't be able to call BCP.EXE. It is essentially the same API as BULK INSERT.
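For example (the database, table, and data file are hypothetical; -T uses a trusted connection, -c uses character format, and the data file path is resolved relative to the current directory):
bcp MyDb.dbo.FileContents in Files\File.txt -S . -T -c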
Have you considered using a standard location on the file system? When I need to write DOS/CMD scripts that are portable (including install stuff for later consumption via T-SQL, such as CREATE ASSEMBLY FROM), I do something like:
IF NOT EXIST C:\TEMP\MyInstallFolder (
MKDIR C:\TEMP\MyInstallFolder
)
REM put stuff into C:\TEMP\MyInstallFolder now that it is certain to be there
REM CALL some process that looks in C:\TEMP\MyInstallFolder
MKDIR will create all missing parent folders. So a folder like TEMP, which used to be standard on Windows PCs but is typically gone now that Windows uses per-user temp folders, is created first, and then MyInstallFolder is created inside it, with no errors. The IF NOT EXIST check also makes sure that re-running the script will not error after the first run.
Before I begin, I would like to express my appreciation for all of the insight I've gained on stackoverflow and everyone who contributes. I have a general question about managing large numbers of files. I'm trying to determine my options, if any. Here it goes.
Currently, I have a large number of files and I'm on Windows 7. What I've been doing is categorizing the files by copying them into folders based on what needs to be processed together. So, I have one set that contains the files by date (for long term storage) and another that contains the copies by category (for processing and calculations). Of course this doubles my data each time. Now I'm having to create more than one set of categories; 3 copies to be exact. This is quadrupling my data.
For the processing side of things, the data ends up in Excel. Originally, all the data was brought into Excel, and all organization and filtering were performed there. This was time-consuming and not easily maintainable over the long term. Later, the workload was shifted to the file system itself, which lightened the work in Excel.
The long and short of it is that this is an extremely inefficient use of disk space. What would be a better way of handling this?
Things that have come to mind:
Overlapping Folders
Is there a way to create a folder that holds only references to files, rather than copies of them? That way I could have two folders reference the same file.
To my understanding, a directory is essentially a file listing references to the files inside it, but on Windows a file can normally be contained in only one folder.
Microsoft SQL Server
Not sure what could be done here.
Symbolic Links
I'm not an administrator, so I cannot execute the mklink command.
Also, I'm uncertain about any performance issues with this.
A Junction
Apparently not allowed for individual files, only for folders, in Windows.
Search folders (*.search-ms)
Maybe I'm missing something, but to my knowledge there is no way to specify individual files to be listed.
Hashing the files
Creating hashes for all the files would allow each file to be stored once. But then I have no idea how I would manage the hashes.
XML
Maybe I could use XML files to attach metadata to the files and somehow search using them.
Database File System
I recently came across this concept in my search. Not sure how it would apply to Windows.
I have found a partial solution. First, I discovered that the laptop I'm using is actually logged in as Administrator. As an alternative to options 3 and 4, I have decided to use hard links, which are part of the NTFS file system. However, due to the large number of files, creating them one at a time with the following command from an elevated command prompt is unmanageable:
mklink /h <link\to\create> <existing\file>
Luckily, Hermann Schinagl has created the Link Shell Extension application for Windows Explorer and a very insightful reading of how Junctions, Symbolic Links, and Hard Links work. The only reason that this is currently a partial solution, is due to a separate problem with Windows Explorer, which I intend to post as a separate question. Thank you Hermann.
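For anyone who wants to script it instead, a small batch loop can create the links in bulk. A sketch with hypothetical paths (hard links only work within a single NTFS volume):
REM Hard-link every file from the long-term store into a category folder,
REM so the data itself is stored on disk only once.
for %%F in ("C:\Data\ByDate\*.*") do mklink /H "C:\Data\Categories\Reports\%%~nxF" "%%F"
Use a single % instead of %% if you run the line directly at the prompt rather than from a .bat file.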
I am trying to make my job a little bit more time efficient by using a batch file to change multiple settings within control panel all at once. I can do this manually but if you are setting up 20+ computers it can get a little time consuming. I do not work much with batch files so I do not have any idea where I should start or if it is even possible.
A better way is actually to interact with the Windows registry from the .bat file.
Exactly which configurations do you want to change?
This is very easy, and you can easily find info online about how to do it; MS should have some tutorials on MSDN too. Also, you can do almost all of those things through PowerShell scripts instead of bat (which I personally think is the best approach).
Anyway, to change Action Center settings you need to edit the registry key
HKLM\Software\Microsoft\Windows\CurrentVersion\Action Center
Firewall settings at
HKLM\SOFTWARE\Policies\Microsoft\WindowsFirewall\
Installed software and its uninstall commands at
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall
Be careful: registry changes can harm the system.
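Given that risk, export a key before touching it, and make the changes with reg.exe so the steps are repeatable across your 20+ machines. A sketch (the backup file name is arbitrary; EnableFirewall is the documented firewall policy value, but verify it on your Windows version):
reg export "HKLM\SOFTWARE\Policies\Microsoft\WindowsFirewall" "%USERPROFILE%\firewall-backup.reg" /y
reg add "HKLM\SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile" /v EnableFirewall /t REG_DWORD /d 1 /f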