In SQL Server 2008, I have my parent table in one database, and the child table in another database, with FK relationship maintained by triggers. I cannot change it, cannot move both tables into one DB and have a regular FK constraint. When I restored both databases from full backups, I had orphans in my child table, because the full backups were not taken at the same time. I also have transaction logs.
In case of disaster recovery, can I restore both databases to precisely the same moment, so that the two databases are consistent?
Restoring at the same moment in time is possible as long as the databases are in full recovery mode and regular log backups are taken. See How to: Restore to a Point in Time (Transact-SQL).
However, point-in-time recovery will not ensure cross-database transactional consistency on its own; you also need to have used transactions for every operation that logically spanned the database boundary. The triggers have probably ensured this for deletes and updates, because they run in the context of the parent operation and thus implicitly wrap the cross-database operation in a transaction. For inserts, though, your application usually has to wrap the insert into the parent and the insert into the child in a single transaction.
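As a sketch, such a cross-database insert could look like the following (the database, table, and column names here are hypothetical; the point is the single transaction spanning both databases):

```sql
-- Both inserts commit or roll back together, so the two databases'
-- transaction logs stay logically consistent across the FK boundary.
BEGIN TRANSACTION;
    INSERT INTO ParentDB.dbo.Parent (ParentId, Name)
    VALUES (42, N'New parent');

    INSERT INTO ChildDB.dbo.Child (ChildId, ParentId)
    VALUES (1001, 42);
COMMIT TRANSACTION;
```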
Consistency of recovery operations is the biggest hurdle with application split between different databases.
I cannot see the full solution for your problem, but you can use full backups together with transaction-log backups.
First, restore the full backups on both databases WITH NORECOVERY, and then restore the transaction-log backups WITH STOPAT='xxxxxxxx' on both databases. This way you get both databases restored to the same point in time.
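A sketch of that sequence, with hypothetical database and backup file names, and the STOPAT time filled in with an example timestamp:

```sql
-- Restore the full backups without recovering either database...
RESTORE DATABASE ParentDB FROM DISK = N'ParentDB_full.bak' WITH NORECOVERY;
RESTORE DATABASE ChildDB  FROM DISK = N'ChildDB_full.bak'  WITH NORECOVERY;

-- ...then roll both logs forward to the same moment.
RESTORE LOG ParentDB FROM DISK = N'ParentDB_log.trn'
    WITH STOPAT = '2013-05-01T12:00:00', RECOVERY;
RESTORE LOG ChildDB FROM DISK = N'ChildDB_log.trn'
    WITH STOPAT = '2013-05-01T12:00:00', RECOVERY;
```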
The best way to do this is to fix it at the point you're doing the backup. Most multi-database apps do this:
Prior to backup, execute a command to write a marked transaction in the transaction log of each database involved. (BEGIN TRANSACTION WITH MARK) Then do the backups.
That way, you can later do a RESTORE WITH STOPAT MARK to get them all to the same point in time. It's not perfect but much closer than other methods.
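A rough sketch of both halves, with hypothetical names (the marker table is just any logged write, needed so the mark is recorded in each log):

```sql
-- Before the backups: record the same named mark in each database's log.
BEGIN TRANSACTION BackupSync WITH MARK 'Pre-backup sync point';
    UPDATE ParentDB.dbo.SyncMarker SET LastMark = GETDATE();
COMMIT TRANSACTION;

-- Later, restore each database to that mark (repeat for ChildDB):
RESTORE DATABASE ParentDB FROM DISK = N'ParentDB_full.bak' WITH NORECOVERY;
RESTORE LOG ParentDB FROM DISK = N'ParentDB_log.trn'
    WITH STOPATMARK = 'BackupSync', RECOVERY;
```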
Related
Is there a benefit from doing a periodic full table refresh when you regularly insert/update/delete incrementally?
To clarify, this question is in regards to ETL processes.
If you are 100% certain that your incremental updates are capturing all CRUD operations, there is no reason to flush and fill. If your incrementals have room for error beyond the tolerance of the business rules governing the process, then you should consider periodic flush and fills.
It all depends on your source system, your target system, your ETL process, and your tolerance for error.
I'm not sure what you mean by 'data refresh', so I will take some liberties in assuming that you mean rebuilding indexes. Good maintenance involves rebuilding indexes periodically over time in order to eliminate fragmentation of any indexes on tables that are the result of INSERT/UPDATE/DELETE.
For more information, read: https://dba.stackexchange.com/questions/4283/when-should-i-rebuild-indexes
If you mean to say a full backup, then that is to truncate the transaction log and create a more recent database backup that you can fully restore from without having to restore the last full backup plus all incremental partial database backups and the transaction log backup.
For more information, read this: https://learn.microsoft.com/en-us/sql/relational-databases/backup-restore/full-database-backups-sql-server
and this: https://technet.microsoft.com/en-us/library/2009.07.sqlbackup.aspx
We have a large database in MS SQL in which one of the tables is partitioned by a date column. The Primary key index is also partitioned using the same partition function. The database is kept in Simple Recovery model, since data is added to it in batches every 3 months.
DBCC checkfilegroup found consistency errors, so we needed to bring back just one filegroup from a complete backup.
Restore did not allow me to run a restore of a filegroup in Simple Mode, so I changed to full recovery mode, then ran the following, with no errors.
RESTORE DATABASE aricases FILEGROUP = '2003'
FROM DISK = N'backupfile-name.bak'
WITH RECOVERY
I expected the "with recovery" clause to bring this back to working order, but the process ended with a note saying
The roll forward start point is now at log sequence number (LSN) 511972000001350200037. Additional roll forward past LSN 549061000001370900001 is required to complete the restore sequence.
When I query the database table that includes this filegroup, I get a message saying that the primary key cannot be accessed because one of the partitions for the table cannot be accessed, because it is offline, restoring, or defunct.
Why didn't the "with recovery" clause leave this filegroup fully restored? Now what?
The entire database is very large (1.5 TB). I can't back up the log file, because I'd first need to create a backup in the full recovery model. The filegroup itself is only 300 GB.
I can do the restore again-- but would like to know the correct way of performing this.
Is there a way of staying in the full recovery model and performing a piecemeal filegroup restore from a complete database backup?
I found the answer. Bottom line is that Simple Recovery Model is very limited. You must restore ALL read/write filegroups together from the same backup. Individual read/only filegroups CAN be restored separately, as long as they became read/only (no more changes) BEFORE the last backup of the read/write filegroups.
Bottom line-- only Full or Bulk-Logged models let you restore single read/write filegroups.
The Bulk-Logged model is what a data warehouse with batch loading should be using, not the Simple model. My error in design.
See Microsoft's documentation:
http://msdn.microsoft.com/en-us/library/ms191253.aspx
Then look at piecemeal restores for the Simple model, which are very limited:
http://msdn.microsoft.com/en-us/library/ms190984%28v=sql.100%29.aspx
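Under the full (or bulk-logged) model, a piecemeal restore of a single filegroup looks roughly like this (backup file names are hypothetical; the key difference from the attempt above is that the filegroup restore uses NORECOVERY and is then rolled forward through the log backups, ending with the tail of the log):

```sql
-- Restore just the damaged filegroup, leaving it unrecovered...
RESTORE DATABASE aricases FILEGROUP = '2003'
    FROM DISK = N'aricases_full.bak' WITH NORECOVERY;

-- ...then roll it forward with the subsequent log backups,
-- finishing with the tail-log backup to bring it current.
RESTORE LOG aricases FROM DISK = N'aricases_log1.trn' WITH NORECOVERY;
RESTORE LOG aricases FROM DISK = N'aricases_tail.trn' WITH RECOVERY;
```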
What's the difference between restore and recovery? What I understand is:

restore: refresh a database using the backup files of that database

recovery: after a database failure, such as a server reboot, the database reapplies the committed transactions in the transaction log

Is my understanding right?
Thanks,
Restore: bring a file back from backup media, e.g. tape or another disk.
Recovery: re-applying committed transactions and unwinding in-progress transactions that were only partly completed in the DB.
Generally one will restore and then recover. Some RDBMSes will hide the two different steps to some extent, but still undertake the two different actions.
They serve the same purpose: bringing back records that were accidentally dropped, repairing corrupted database files, and restoring entire tables as well as queries, forms, macros, and stored procedures.
I understand that the transaction logs keep a record of historical transactions in order to facilitate a restore if needed. However do I need to keep creating transaction log backups for inactive databases that are hanging around on the server? No DDL statements are run against them and they are just used for reference.
I am just a bit worried that I might run out of log space if I get this wrong.
Have you considered changing these databases to the SIMPLE recovery model? Doing so would negate the need to back up the transaction log, as the log space would automatically be re-used.
I would still advise that regular FULL database backups be taken.
Also, if these databases are indeed true read-only databases, then why not consider setting them to be so? This action would have the advantage of immediately highlighting any queries/users that are "still" issuing DML operations when you believe there to be none.
Other options for identifying queries that are performing more than just READ operations include running a Profiler Trace of activity on your database server and also an aggressive option would be to revoke all data modification rights from the relevant database Users.
Transaction logs are truncated when they're backed up. So, if these databases are actually inactive, you shouldn't need to back up their transaction logs, since the logs would be essentially empty.
Also, common practice for "inactive" databases would be to make them READ ONLY with a SIMPLE recovery model.
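Both changes can be made with ALTER DATABASE; the database name below is hypothetical:

```sql
-- Switch to the simple recovery model so the log self-truncates...
ALTER DATABASE ReferenceDB SET RECOVERY SIMPLE;

-- ...and make the database read only, so any stray DML fails loudly.
ALTER DATABASE ReferenceDB SET READ_ONLY;
```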
We have a database that has configuration data in. When the applications are run against it, they basically do lots of calculations and then write some data to file. Unlike normal installations, we do not really need the transaction log. What I mean here is that we could take a backup of the database and use it without applying transaction logs to it to get it up to date.
Since the transaction logs are not that important to us, what should be the best backup strategy? Currently the transaction log is enormous (10 GB, whereas the database is about 50 MB; this accumulated over a few months).
Could we just do an initial backup of the database and then each few days or less backup the transaction log, overwriting the current one? Or could we just delete the transaction log all together and have a new one started?
JD.
Ensure the database is running in the Simple Recovery Model.
Doing so negates the need for you to perform transaction log backups.
This recovery model automatically ensures that the inactive portions of the transaction log can become immediately available for reuse.
No longer concerned with the Transaction Log management, you can focus your attention on your backup strategy.
A weekly FULL Database Backup, perhaps with daily Differential Backups may suit your requirements.
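The weekly/daily cycle above could be scripted roughly like this (the database name and disk paths are hypothetical):

```sql
-- Weekly full backup (e.g. every Sunday); WITH INIT overwrites the file.
BACKUP DATABASE ConfigDB TO DISK = N'ConfigDB_full.bak' WITH INIT;

-- Daily differential backup: only the changes since the last full backup.
BACKUP DATABASE ConfigDB TO DISK = N'ConfigDB_diff.bak'
    WITH DIFFERENTIAL, INIT;
```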
Recovery Model Overview
As I understand it, you do not write much data to your database. For this reason, the best backup strategy for you will be:
1. Change recovery model to simple and shrink the transaction log, using DBCC SHRINKFILE.
2. Make one full backup of your database.
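The two steps above might look like this (the database name, logical log file name, and backup path are hypothetical; check sys.database_files for the real logical name):

```sql
-- 1. Switch to simple recovery and shrink the log file.
ALTER DATABASE ConfigDB SET RECOVERY SIMPLE;
USE ConfigDB;
DBCC SHRINKFILE (ConfigDB_log, 100);  -- target size in MB

-- 2. Take one full backup.
BACKUP DATABASE ConfigDB TO DISK = N'ConfigDB_full.bak' WITH INIT;
```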