I'm working on a clustered database and I don't know where to set up my RMAN script in order to schedule RMAN backups. Should I set up my scripts on both nodes, or only on one?
My aim is to back up the full database and keep an RMAN script running on one of the nodes in case of a failure.
You only need to run the script against one node. You should keep copies of the script on all nodes, or on shared storage available to all nodes. Alternatively, you can run RMAN from a network client and not store the script on the database nodes at all.
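As a starting point, here's a minimal sketch of such a script (the paths are placeholders, and the housekeeping step assumes you've configured a retention policy; adjust for your environment):

# backup_full.rman -- full database backup plus archived logs
run {
  backup database plus archivelog;
  delete noprompt obsolete;
}

You could then schedule it from cron (or your scheduler of choice) on one node, keeping the script on shared storage so any node can take over:

rman target / cmdfile=/shared/scripts/backup_full.rman log=/shared/logs/backup_full.log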
I want to migrate my SQL Server databases to AWS EC2 instances or AWS RDS using the AWS DMS service.
How can I do that, and what would the architecture look like? How do I secure the databases in AWS?
What is the difference between the two approaches? Can anyone provide an architecture for each?
We're in the middle of moving our on-prem SQL Servers to AWS EC2 right now. For us, that means adding replicas to existing Availability Groups; when the time comes, we'll fail over the AG and, tada!, we'll be in AWS. But even if we didn't have AGs, I'd still do this with backup/restore over DMS. From the little I've looked at DMS, you need either CDC or a rowversion column on every table you want DMS to work with, which seemed like a lot to me.
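For reference, the AG cutover itself is a single statement run on the target replica once it's caught up (the AG name here is hypothetical):

-- Run on the AWS replica while it is in a SYNCHRONIZED state.
ALTER AVAILABILITY GROUP [MyAG] FAILOVER;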
In case you're not acquainted with moving a database via backup/restore, here are the bones of it (a T-SQL sketch follows the steps):
Copy a full backup to the target system.
Restore that backup, being sure to specify WITH NORECOVERY so the database can accept more backups.
Copy and restore a differential backup, again specifying WITH NORECOVERY.
Copy and restore transaction log backups. Keep doing this on a periodic basis until you're ready to do your cutover.
When you're finally ready to turn off the source database, take a log backup of the source specifying WITH NORECOVERY. This creates what is called a tail-of-the-log backup and stops anything from being able to write to the source database. Copy and restore that last log backup, and you're migrated.
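Here's what that sequence looks like in T-SQL; the database name and file paths are placeholders:

-- On the target: restore the full, then the differential, leaving the
-- database in the RESTORING state so it can accept further backups.
RESTORE DATABASE [SalesDb]
  FROM DISK = N'D:\Migration\SalesDb_full.bak'
  WITH NORECOVERY;
RESTORE DATABASE [SalesDb]
  FROM DISK = N'D:\Migration\SalesDb_diff.bak'
  WITH NORECOVERY;

-- Repeat for each log backup until cutover.
RESTORE LOG [SalesDb]
  FROM DISK = N'D:\Migration\SalesDb_log_01.trn'
  WITH NORECOVERY;

-- On the source, at cutover: the tail-of-the-log backup, which also
-- puts the source into the RESTORING state so nothing can write to it.
BACKUP LOG [SalesDb]
  TO DISK = N'D:\Migration\SalesDb_tail.trn'
  WITH NORECOVERY;

-- On the target: restore the tail and bring the database online.
RESTORE LOG [SalesDb]
  FROM DISK = N'D:\Migration\SalesDb_tail.trn'
  WITH RECOVERY;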
I want to make a copy of a database to run intensive analysis on while the original is in use. From what I've read, the database will not be locked during a backup, but I can't find any information on whether the database will be locked during a copy using the Copy Database Wizard.
What is the most effective way (fastest and using the least disk space) to make a duplicate of a production database on the same server?
There are at least two ways:
transaction log shipping: the secondary database is read-only, which suits your OLAP queries
replication: the secondary database can be modified
You may also test a database snapshot, but this method isolates only the data state, not the workload.
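If you want to try the snapshot route, it's a one-statement operation; a sketch with hypothetical names (the logical file name must match the source database's, which you can look up in sys.master_files, and note that on 2008 R2 snapshots are an Enterprise Edition feature):

CREATE DATABASE SalesDb_Snapshot
  ON ( NAME = SalesDb_Data,
       FILENAME = 'D:\Snapshots\SalesDb_Data.ss' )
  AS SNAPSHOT OF SalesDb;

A snapshot file is sparse, so it only consumes disk space for pages that change in the source after the snapshot was created.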
I have to create a copy of a production instance in a newly created development instance of SQL Server 2008 R2. I wanted to know if there is any difference between using the Copy Database Wizard and the backup/restore method to create a copy of production. If yes, which is the better approach?
Copying compressed backups is a fine approach for its simplicity. If you are copying a whole instance, make sure to also copy the msdb and master databases, as that brings the Agent jobs, logins, and linked-server connections with it. Restore the master database into the instance first. Once you are done migrating the databases over (use compressed backups to keep IO fast), you will also have to copy over the Service Master Key if you are going to a different domain. You can also drastically increase IO speed by enabling instant file initialization (IFI). Compressing the backup before transfer is critical, as it speeds up both the transfer of the backup and the restore, at a minimal cost in server CPU.
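For example (names and paths are placeholders, and the key password is obviously a stand-in):

-- Take a compressed backup for transfer.
BACKUP DATABASE [SalesDb]
  TO DISK = N'D:\Transfer\SalesDb.bak'
  WITH COMPRESSION, STATS = 10;

-- Export the Service Master Key for the cross-domain move.
BACKUP SERVICE MASTER KEY
  TO FILE = 'D:\Transfer\service_master.key'
  ENCRYPTION BY PASSWORD = 'UseAStrongPasswordHere';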
The Copy Database Wizard cannot be used for system-level databases, but it allows for more flexibility and scheduling. To each his own on this note, though for very large databases I would advise compressing them into a .bak file.
We're using Solr 3.6 replication with two servers (a master and a slave), and we're currently looking for a way to do clean backups.
As the wiki says, we can use an HTTP command to create a snapshot of the master, like this: http://myMasterHost/solr/replication?command=backup
But we still have some questions:
What is the benefit of the backup command over a classic shell script that copies the index files?
The command only backs up the indexes; is it possible to also copy the spellchecker folder? Is that needed?
Can we create the snapshot while the application is running, i.e., while there are potential index updates?
When we have to restore the servers from the backup, what do we have to do on the slave?
Just copy the snapshot into its index folder, and remove the replication.properties file (or not)?
Ask for a fetchindex through the HTTP command http://mySlave/solr/replication?command=fetchindex ?
Just empty the slave's index folder, in order to force a full replication from the master?
You can use the backup command provided by the ReplicationHandler. It's an asynchronous operation, and it takes time if your index is big; this way you don't need to shut down Solr. Afterwards, you'll find within the index directory a new directory named backup.yyyymmddHHMMSS with the backup date. You can also configure how many old backups you want to keep.
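For example, from a shell (assuming the master host from the question; the ReplicationHandler also exposes a details command, which you can use to poll the state of the asynchronous backup):

curl "http://myMasterHost/solr/replication?command=backup"
curl "http://myMasterHost/solr/replication?command=details"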
After that, of course, it's better to move the backup to a safe location, probably on a different server.
I don't think it's possible to back up the spellchecker, though I'm not completely sure.
The command is of course meant to be run while the application is running. The only caveat is that the backup will probably not include documents committed after the backup itself started.
You can also have a look at the Lucene CheckIndex tool: once you've backed up the index, you can check whether the index is OK.
I personally wouldn't use the backups to restore the index on the slaves if you already have a good index on the master. The copy of the index happens automatically through the standard replication process (it's really a copy of the index segments); you don't need to copy anything manually unless the backup contains better data than the master does.
Hypothetical question:
If a maintenance plan is scheduled to run a full backup of several databases while they're online, and during this time other jobs are scheduled to run (stored procedures, SSIS packages, etc.), what happens to those jobs during the backup?
I'm guessing it's one of the following:
The jobs are paused until the backup is completed, then run in the same order they were scheduled.
Or
SQL Server works out which tables will be affected by each scheduled job and backs them up after the job completes?!
Or
SQL Server creates a "snapshot" of all the tables before the backup starts; any changes to them (including changes made by the jobs run during the backup) are added to the transaction log, which should be backed up separately.
...are any of my ideas correct?!
Idea #3 is the closest to what happens. The key is that when the backup operation completes, the backup file is in a state that allows the database to be restored to a consistent state: a full backup copies the data pages and also includes enough of the transaction log to cover the activity that occurred while the backup was running.
From the documentation:
Performing a backup operation has minimal effect on transactions that are running; therefore, backup operations can be run during regular operations. During a backup operation, SQL Server copies the data directly from the database files to the backup devices. The data is not changed, and transactions that are running during the backup are never delayed. Therefore, you can perform a SQL Server backup with minimal effect on production workloads.
...
SQL Server uses an online backup process to allow for a database backup while the database is still being used. During a backup, most operations are possible; for example, INSERT, UPDATE, or DELETE statements are allowed during a backup operation.
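To see idea #3 in practice, here's a minimal sketch (the database name and paths are hypothetical, and the log backup assumes the database uses the FULL recovery model):

-- Full backup, taken while the workload keeps running.
BACKUP DATABASE [SalesDb]
  TO DISK = N'D:\Backups\SalesDb_full.bak'
  WITH INIT, STATS = 10;

-- Separate log backup, capturing the changes made during and after the full backup.
BACKUP LOG [SalesDb]
  TO DISK = N'D:\Backups\SalesDb_log.trn';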