use svnsync for backup - batch-file

I want to use svnsync to create a backup copy of my SVN repository.
The svnsync command replicates all versioned files and their
attendant svn properties. It does not copy the hooks or conf
directories from the source repository, so I will need to copy those
manually.
I have written this batch script to create the backup repository:
setlocal enableextensions enabledelayedexpansion
:: create the local backup repository
svnadmin create C:\Repositories\Test
:: initialize the local mirror from the remote repository
:: (note: the mirror must allow revision property changes via a pre-revprop-change hook)
svnsync initialize https://local.com/svn/Test https://remote.com/svn/Test
:: get remote repository info and store it in info.txt
svn info https://remote.com/svn/Test>info.txt
:: extract the remote repository UUID from info.txt
set UUID=
for /f "delims=" %%a in (info.txt) do (
    set line=%%a
    if "x!line:~0,17!"=="xRepository UUID: " (
        set UUID=!line:~17!
    )
)
:: set the local repository UUID to the remote UUID
svnadmin setuuid C:\Repositories\Test %UUID%
:: sync the local mirror with the remote repository
svnsync sync https://local.com/svn/Test https://remote.com/svn/Test
endlocal
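For reference, the UUID-extraction step in the script can also be sketched outside of batch; here is a minimal Python equivalent (the sample `svn info` output and the UUID value are purely illustrative):

```python
# Sketch: extract the repository UUID from `svn info` output,
# mirroring the batch for-loop above.
def extract_uuid(info_text):
    prefix = "Repository UUID: "
    for line in info_text.splitlines():
        if line.startswith(prefix):
            return line[len(prefix):].strip()
    return None

# Illustrative sample of `svn info` output
sample = """Path: Test
URL: https://remote.com/svn/Test
Repository UUID: 6b1b0a11-6f35-4569-9b1b-6c8d0a11f35e
Revision: 42
"""
print(extract_uuid(sample))  # 6b1b0a11-6f35-4569-9b1b-6c8d0a11f35e
```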
I have written this batch script to synchronize the backup repository with the master repository (on an hourly schedule):
svnsync sync https://local.com/svn/Test https://remote.com/svn/Test
Say I set up a mirror SVN repository via svnsync, and my production SVN
server crashes. How can I restore the production repository from the
mirror? Is it possible?
Can someone suggest the best practice of backing up the production server using svnsync?

There are several options that are better than using svnsync:
When you need a live backup or DR server for your Subversion repositories, consider deploying a replica (slave server) using the Multisite Repository Replication (VDFS) feature. For detailed getting-started guidance, see the articles KB136: Getting started with Multisite Repository Replication and KB93: Performing disaster recovery for distributed VDFS repositories.
When you want to back up your Subversion repositories, use the built-in Backup and Restore feature. It helps you make daily backups of repositories of any size and has no impact on performance or user operations. What is more, the Backup and Restore feature in VisualSVN Server is very easy to set up and maintain. Please read the article KB106: Getting Started with Backup and Restore for setup instructions.
Don't forget to create background verification jobs, too. See the article KB115: Getting started with repository verification jobs.

Related

Automate Azure VM server SQL job backup copy to another server?

I have an Azure VM. On it, I have a job set up that backs up automatically to local Azure storage. I need to store a copy of that backup on another server. How do I do that? Is there any way to do it automatically?
I am not sure whether you can do it directly from one server to another, but you can do it via Blob storage. Use AzCopy (https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy) to upload and download files from blobs.
You can also use Azure File Service to copy the backups for archival purposes. Use the following commands to mount a network drive to archive the backup:
Create a storage account in Windows Azure PowerShell
New-AzureStorageAccount -StorageAccountName "tpch1tbbackup" -Location "West US"
Create a storage context
$storageAccountName = "tpch1tbbackup"
$storageKey = Get-AzureStorageKey $storageAccountName | %{$_.Primary}
$context = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageKey
Create a share
New-AzureStorageShare -Name backup -Context $context
Attach the share to your Azure VM
net use x: \\tpch1tbbackup.file.core.windows.net\backup /u:$storageAccountName $storageKey
Xcopy the backup to x: to offload it onto the mounted drive
xcopy D:\backup\*.* X:\tpchbackup\*.*
According to your question, you can achieve this in many ways. As @Alberto Morillo and @Vivek said, you can use PowerShell and AzCopy to do that. But if you want to back up and copy automatically, you can use a runbook.
You can also attach schedules to a runbook; with this, you can back up your resource automatically. Runbooks can run PowerShell cmdlets and provide many features to automate your job.
See more details about runbooks in Azure Automation in this document.
To automate your backup process from one server to another using the Azure storage service, you have to make three batch files.
The first batch file takes a backup of your database and stores it locally. Here are the commands to do that:
set FileName=DBName_%date:~-4,4%%date:~-10,2%%date:~-7,2%.bacpac
echo %FileName%
"C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\sqlpackage.exe" /a:Export /ssn:yourServerIP /sdn:yourDatabaseName /tf:"D:\%FileName%" /su:username /sp:"password"
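Note that the `%date%` substring offsets above depend on the Windows regional date format. For comparison, here is a locale-independent sketch of the same `DBName_YYYYMMDD.bacpac` naming scheme in Python (the database name is a placeholder):

```python
# Sketch: build a date-stamped backup file name (DBName_YYYYMMDD.bacpac),
# equivalent to the %date% slicing in the batch file but locale-independent.
from datetime import date

def backup_filename(db_name, today=None):
    today = today or date.today()
    return "{}_{:%Y%m%d}.bacpac".format(db_name, today)

print(backup_filename("DBName", date(2023, 5, 1)))  # DBName_20230501.bacpac
```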
The second batch file uploads your locally saved backup file to Azure storage:
"C:\AzCopy\AzCopy.exe" /Source:D: /Dest:https://youstoragedestination.link/blobname/ /DestKey:yourAzureStoragekey /Pattern:"*.bacpac" /Y
del "D:\*.bacpac"
The third batch file calls the two batch files above,
for example:
call "yourpath\backupFile.bat"
call "yourpath\backupFilepushing2azure.bat"
You can schedule this third batch file to automate the process.
Now you have pushed your backup file to Azure storage, which I think is enough.
If you really want to copy that backup file to another server, then make another batch file that downloads it from the blob to the server using AzCopy.

Publishing Local DB Changes to Remote DB SQL Server 2012

I have SQL Server 2012 on my local machine and I am hosting the production version of my application/database with Arvixe.
To initially set up the database on the remote server with Arvixe, I just uploaded a .bak file of my local DB. Not a big deal since it was just getting things set up, but this as you know also pushes all of my test data to the database on my production server.
My question is this .. How should I go about pushing database changes (new tables, columns, keys, etc..) from my local development environment to the production environment on Arvixe's server? A simple backup won't work now - I can't overwrite my production data and replace it with dev data.
Is there a program that I can use for this? Is there something in SQL Server 2012 that I'm just missing? All I can find is the suggestion to upload a backup version of my local DB.
Thanks for your help.
The way to push database changes from Development to Production has little to nothing to do with where the Production instance is located.
All changes should be migrated to other environments via rollout scripts:
You should be creating scripts for all changes as you make those changes.
The scripts should be placed in folder for a particular release.
The scripts should be numbered to retain the same chronological order in which those changes happened (this should eliminate -- mostly -- any dependency issues).
The script numbering should be consistent with 2 or 3 digits for proper sorting (i.e. 01, 02, ... OR 001, 002, ...).
The scripts should all be re-runnable (i.e. they should first check to see if the intended change has already happened and if so, skip to the next script).
When migrating to a higher environment, just run all of the scripts.
If any script fails, the entire process should terminate, since all changes need to occur in the same order in which they happened in Development. If using SQLCMD.EXE to run your scripts, use the -b (i.e. "On error batch abort") command-line switch, as it will terminate the process upon any error.
Here is a simple CMD script (you could name it DeploySqlScripts.cmd) that handles one folder at a time, to a single Server/Instance, and assumes that you have a USE [DatabaseName]; line at the top of each script:
@ECHO OFF
SETLOCAL ENABLEDELAYEDEXPANSION
IF "%1"=="" (
    ECHO Usage: DeploySqlScripts "C:\path\to\SQL\scripts" Server[\Instance]
    GOTO :EOF
)
FOR /F "delims=" %%B IN ('DIR /B /O:N /A-D "%1\*.sql"') DO (
    SQLCMD -b -E -S "%2" -i "%%~fB"
    IF !ERRORLEVEL! NEQ 0 (
        ECHO.
        ECHO Error in release script...
        ECHO.
        EXIT /B !ERRORLEVEL!
    )
    ECHO.
)
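The same run-in-order, abort-on-first-error logic can also be sketched in Python, which may be easier to extend with logging. This is a sketch under the same assumptions as the CMD script (scripts named for chronological sorting); the `runner` command prefix, e.g. a sqlcmd invocation, is left as a parameter:

```python
# Sketch: run release scripts in name order, stopping at the first failure,
# mirroring the CMD loop above.
import subprocess
from pathlib import Path

def run_release_scripts(folder, runner):
    # runner is the command prefix, e.g. ["sqlcmd", "-b", "-E", "-S", server, "-i"]
    for script in sorted(Path(folder).glob("*.sql")):
        result = subprocess.run(runner + [str(script)])
        if result.returncode != 0:
            print("Error in release script:", script.name)
            return result.returncode
    return 0
```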
Also:
If you are migrating from Dev to Prod, then you are missing at least one environment, if not 2 or even 3. You should not push changes directly from Development to Production since you might be developing things that are not ready to release. All changes that you feel are ready for release should first go to a QA environment that provides a better testing ground since it doesn't have any other changes that might invalidate certain tests.
You really, really should have your database object CREATE scripts in some sort of source code control system (a.k.a. version control system): Subversion (SVN), Git, TFS, etc. There are several options, each with their pros and cons (as well as true believers and haters), so do some research on a few of them, pick one that suits your needs, and just use it. And yes, the release scripts should also be part of that repository.
There is also a tool from Redgate, SQL Source Control, which is not free, and which I have not used, but it aims to help in this area.
For simple / small projects (single database) Microsoft has a free tool that can script out differences between a source (i.e. Development) and target (i.e. QA or Production). It is called SqlPackage.exe and is part of SQL Server Data Tools (SSDT).
You can set up triggers on the database objects you're interested in pushing and set up the production server as a linked server.
As an aside, pushing production data to the development environment is fine, but going the other way involves a lot of risk.

Schedule offline db2 database backup on Windows

I only have access to the command line processor and I would like to set up a backup policy to do an offline backup once a day of a db2 database.
Can anyone point me in the right direction?
To do a single offline backup, I know the command is
BACKUP DATABASE <database> TO <"drive/location"> <params>
However, I cannot figure out how to schedule this.
If you are using LUW, you can:
In Windows, create a task in the Task Scheduler with the backup command. - http://windows.microsoft.com/en-gb/windows/schedule-task#1TC=windows-7
In Linux, put the command in the crontab of a user with the privileges to execute the backup. Remember to load the db2 instance profile.
You can configure automatic backups in any OS - http://pic.dhe.ibm.com/infocenter/db2luw/v10r5/topic/com.ibm.db2.luw.sql.rtn.doc/doc/r0052291.html
If you are using Windows OS, then you create a new task and on the action tab:
Action: Start a program
Program/script: "D:\Program Files\IBM\SQLLIB\BIN\db2cmd.exe" (the path where db2cmd.exe is located)
Add arguments (optional): /c /w /i db2 backup database DBNAME to X:\Backup COMPRESS WITHOUT PROMPTING
Start in (optional): blank
The command above backs up database DBNAME to the X:\Backup folder, compressing the database backup.

DB2: How to backup a DB2 database?

DB2 v10.1 database on WINDOWS 7.
Can somebody explain how to create a backup of a DB2 database? I could not find detailed instructions.
Thanks in advance for any help in this matter
Have you tried looking at the documentation? Perhaps the "Data Recovery Reference"?
http://pic.dhe.ibm.com/infocenter/db2luw/v10r1/topic/com.ibm.db2.luw.admin.ha.doc/doc/c0006150.html
In a db2cmd window, type DB2 HELP BACKUP for more complete command syntax. The simplest form of the command is
DB2 BACKUP DATABASE <database name>
Optim Studio in 9.7 and 10.1, and Control Center in 9.7, have GUIs to assist with these tasks as well.
For a local backup you can use a simple command line command also provided in the other answers:
db2 backup database <name>
If you want a more automated solution that is more "enterprise", then you should look into IBM Tivoli Storage Manager (TSM), for example. DB2 supports making backups to network TSM storage on the fly, with incremental backups, without disrupting the local database; i.e., you can run queries while the backup is running.
For TSM you need log archiving enabled on the database, which you can do with:
db2 update db cfg using LOGARCHMETH1 TSM
After you have enabled log archiving you can create a backup script and schedule it:
set DB2INSTANCE=DB2
"C:\IBM\ProductName\db2\BIN\db2cmd.exe" /c DB2.EXE backup db WPSDB user <DOMAINUSERNAME> using <DOMAINUSERPASSWORD> online use tsm include logs
Here's a link to a full tutorial: http://www.codeyouneed.com/db2-backup-using-tsm/
For a detailed step-by-step guide to configuring DB2 backup, you can refer to:
DB2 v9.7 on AIX(x64) backup configuration for TSM v7.1
Every step, from planning to preparation and execution, is explained with diagrams.
Basic steps are:
Download the appropriate TSM API (32/64-bit, based on db2level) from Passport Advantage
Extract TSMCLI_AIX.tar
Log in as root and enter "smitty install"
Select the required components:
tivoli.tsm.client.ba.64bit,
tivoli.tsm.client.api.64bit, etc.
If you are not using the TSM client GUI, there is no need to install
tivoli.tsm.client.jbb.64bit or
tivoli.tsm.filepath
Now apply the steps mentioned in the link as an example to configure File-level and DB2-level backup for your environment.

How can I version-control my SQL Server database files with Git?

OK, I'm warming up to Git and Dropbox for version control. I'm creating DNN sites and I'm in the process of using Git/Dropbox.
I would also like to use Git on the SQL Server backing database.
Is there some sort of best practice that could be employed here?
I'm currently getting an access denied error when I attempt to create a repository in the SQL Server DATA directory.
You probably don't have permissions to create a .git folder in there. I would use the SQL Server tools to create the backup files elsewhere. I would then back those up. You should have no problem putting those in a Git repo.
Hope this helps.
I set my database file location to a custom directory, then stopped the database service and added read/full rights for all users in the system to the file. After that, git had no problem adding the file to version control.
