Cannot Restore DB to RDS Instance - sql-server

I am getting the below error when trying to restore a DB from a .bak file in S3:
100 percent processed.
[2018-11-12 19:32:21.677] Processed 32980832 pages for database 'RDSBackup', file 'SourceDB_NEW' on file 1.
[2018-11-12 19:32:21.680] Processed 4547 pages for database 'RDSBackup', file 'SourceDB_NEW_log' on file 1.
[2018-11-12 19:32:22.013] RDSBackup_FULL_COPY_ONLY_20181112_010009.bak: S3 processing has been aborted.
The error started after the source DB changed from SQL Server 2012 Enterprise to SQL Server 2016 Standard.

I escalated this issue to AWS. As per the latest update, they have made some changes to RDS, and I could restore the DB even with a large (300 GB) log file.
The link with details on the newest update is below:
https://aws.amazon.com/blogs/database/rds-sql-server-has-two-new-exciting-backup-and-restore-enhancements/
Alternate solution: at first, I was asked to try the restore with a smaller log file. I shrank the logs and the restore worked. Although AWS engineers asked me to keep an eye on the log file size, they have since made changes to RDS to accommodate S3 .bak files with relatively large log files as well.
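For anyone who needs that shrink step, here is a minimal sketch of shrinking the source log before taking the COPY_ONLY backup. The database name, logical log file name, and backup path are placeholders modeled on the log output above, not confirmed values from my environment:

-- Back up the log first so the inactive portion can be truncated
-- (assumes FULL recovery; the path is a placeholder)
BACKUP LOG SourceDB TO DISK = N'D:\Backups\SourceDB_log.trn';

-- Shrink the log file to a 1024 MB target (the size argument is in MB)
USE SourceDB;
DBCC SHRINKFILE (N'SourceDB_NEW_log', 1024);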

Related

Shrinking log file for TFS databases

We have a TFS 2017 environment whose size has been growing every week for a long time now.
In this environment I have multiple collections, and the transaction log file is very big (over 155 GB).
My question is: is it safe to shrink the log file for a given TFS collection (without losing data or getting errors in the administration console)?
Thanks
Yes. You can.
The log file is there so you can restore to a point in time between full backups. If you have backups enabled on your server and regularly take a full backup, then you can truncate the logs after each full backup.
If you want to just "delete the logs" then temporarily turn off TFS by running this on every application tier server you have: TFSServiceControl quiesce, then truncate the logs, then turn TFS back on using TFSServiceControl unquiesce.
You may also want to verify your backup strategy. If all is well, you shouldn't see an ever-growing log file.
See:
https://learn.microsoft.com/en-us/azure/devops/server/command-line/tfsservicecontrol-cmd?view=azure-devops-2019
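As a concrete sketch of the truncate-and-shrink step (the collection database and logical log file names are placeholder assumptions; run TFSServiceControl quiesce on every application tier first if you go the "delete the logs" route, and unquiesce afterwards):

-- Switching to SIMPLE recovery truncates the log; then shrink and switch back
-- (Tfs_DefaultCollection and its _log file name are placeholders)
ALTER DATABASE Tfs_DefaultCollection SET RECOVERY SIMPLE;
USE Tfs_DefaultCollection;
DBCC SHRINKFILE (N'Tfs_DefaultCollection_log', 1024);  -- target size in MB
ALTER DATABASE Tfs_DefaultCollection SET RECOVERY FULL;
-- Take a fresh full backup afterwards so the log backup chain restarts cleanly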

AWS SQL Server 2016 Restoring 2 Databases Error Message

I am testing whether a SQL Server based program can also work on an AWS cloud server running SQL Server 2016. In order to test it, I need to restore 2 databases.
The first one eventually restored fine once I figured it out: restoring the database from my S3 "bucket" .bak file.
So then I tried to restore the 2nd database, using the same restore stored procedure, and got this message:
[2017-12-28 02:44:22.320] The file 'D:\rdsdbdata\DATA\smsystemdata.mdf' cannot be overwritten. It is being used by database 'amwsys'.
[2017-12-28 02:44:22.320] File 'sm_system_data' cannot be restored to 'D:\rdsdbdata\DATA\smsystemdata.mdf'. Use WITH MOVE to identify a valid location for the file.
I can't find where to use the WITH MOVE because it won't let me restore it interactively through the Management Studio restore menu; instead I have to give it a stored procedure command:
exec msdb.dbo.rds_restore_database
@restore_db_name='sample99',
@s3_arn_to_restore_from='arn:aws:s3:::lighthouse-chicago/sample999.bak';
And each time it tells me it can't restore it because it's going to overwrite the first database's files.
Much thanks
bill
I think you are hitting an RDS restriction.
I had a similar problem. Restoring the same backup more than once to one DB instance is impossible on RDS.
Here are the RDS restrictions you may be encountering:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.html
You can't restore a backup file to the same DB instance that was used
to create the backup file. Instead, restore the backup file to a new
DB instance. Renaming the database is not a workaround for this
limitation.
You can't restore the same backup file to a DB instance multiple
times. That is, you can't restore a backup file to a DB instance that
already contains the database that you are restoring. Renaming the
database is not a workaround for this limitation.
If this is your situation, you can't use the .bak file. To work around it, create the database schema on the DB instance with scripts and import the table data.
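As a side note, if you want to see exactly why a restore task was aborted, the same RDS status procedure used for backup tasks reports it; a minimal sketch, using the database name from your command as a placeholder:

-- Shows the lifecycle and any error message of recent backup/restore tasks
exec msdb.dbo.rds_task_status @db_name='sample99';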

SQL Server Log File Is Huge

Currently the db logs on my production SQL Server 2008 R2 server are growing out of control:
DATA file: D:\Data...\MyDB.mdf = 278859 MB on disk
LOG file: L:\Logs...\MyDB_1.ldf = 394542 MB on disk
The server mentioned above has daily backups scheduled @ 1am and translog backups every 15 min.
The database is replicated, in the full recovery model, to a subscriber. Replication is pushed from the node above (the publisher). The same db log file on the subscriber is under ~100 GB on disk.
What I did to try and fix:
Run a full backup of the db (takes 1h:47m)
Run the translog backup job which runs every 15 min. (takes 1m:20s)
Run another full backup of the db
Nothing above has worked, so I then attempted to shrink the log file with DBCC SHRINKFILE, which doesn't work either; the size never changes.
Can anyone please tell me what is wrong or what I need to do as a SQL Server DBA to resolve the above issue?
Possible things that may stop you from shrinking the translog file:
A long-running transaction is occurring on your database
Your replication distribution agent runs quite frequently
Looking at the size of your translog file, it was most likely caused by the 2nd possibility.
Your replication distribution agent runs quite frequently
The SQL Server log reader agent marks parts of the translog file as still in use and prevents them from being truncated, which is what SQL Server normally does after the translog file is backed up. If this happens frequently and for long enough, it can prevent your translog file from being shrunk by the scheduled translog backups.
Look at this MSDN explanation of transactional replication and how to modify the log reader agent.
There is also a thread on the MSDN forum describing a similar problem; it includes a DBCC query (DBCC OPENTRAN) that helps you identify a running transaction that may be blocking the translog file.
A long-running transaction is occurring on your database
You can check whether any long-running transaction is happening by using DBCC OPENTRAN, see what process is running, and then decide what to do with it. As soon as the long-running transaction is finished, you should be able to shrink the log file (a sketch of these checks follows below).
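A minimal sketch of those checks (MyDB is a placeholder database name):

-- Show the oldest active transaction in the database, if any
DBCC OPENTRAN ('MyDB');

-- Show why the log can't be truncated (e.g. REPLICATION, ACTIVE_TRANSACTION)
SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'MyDB';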
After running sp_who2, I noticed a long-running transaction on the log that was growing uncontrollably. I used KILL on that SPID and now I'm proceeding to shrink the log file.
You could make a blank database with the same tables and migrate your old database's data into it with a migration script.
For example:
-- Run this script in the new blank database
INSERT INTO customers (cust_id, Name, Address)
SELECT cust_id, Name, Address
FROM olddb.dbo.customers;
You can manually shrink your log file:
1. Right-click your database > Tasks > Shrink > Files > File type = Log
2. Then click OK.
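The T-SQL equivalent of those clicks is a sketch like this (the database and logical file names are placeholders; SELECT name FROM sys.database_files lists the real logical names):

USE MyDB;
-- Shrink the log file to a 1024 MB target (the size argument is in MB)
DBCC SHRINKFILE (N'MyDB_log', 1024);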

Get local copy of SQL Server hosted on Amazon RDS

I have a small (few hundred MB) SQL Server database running on RDS. I've spent several hours trying to get a copy of it onto my local SQL Server 2014 instance. All of the following fail. Any ideas what might work?
Tasks -> Back Up fails because it doesn't give my admin account permission to back up to a local drive.
Copy Database fails during create package with While trying to find a folder on SQL an OLE DB error was encountered with error code 0x80040E4D
From SSMS, while connected to the RDS server, running BACKUP DATABASE. This fails with message BACKUP DATABASE permission denied in database 'MyDB'. Even after running EXEC sp_addrolemember 'db_backupoperator' for the connected user.
Generate Scripts produces a 700 MB .sql file. Running that with sqlcmd -i fails at some point after producing plausible .mdf and .ldf files that can't be mounted on the local server (probably because the sqlcmd run failed to complete and unlock them).
AWS has finally provided a reasonably easy means of doing this: It requires an S3 bucket.
After creating a bucket called rds-bak I ran the following stored procedure in the RDS instance:
exec msdb.dbo.rds_backup_database
@source_db_name='MyDatabase',
@s3_arn_to_backup_to='arn:aws:s3:::rds-bak/MyDatabase.bak',
@overwrite_S3_backup_file=1;
The following stored procedure returns the status of the backup request:
exec msdb.dbo.rds_task_status @db_name='MyDatabase'
Once it finished I downloaded the .bak file from S3 and imported it into a local SQL Server instance using the SSMS Restore Database... wizard!
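If you prefer T-SQL over the wizard for the local restore, a minimal sketch looks like this (the local paths and logical file names are placeholder assumptions; RESTORE FILELISTONLY shows the real logical names inside the .bak):

-- Inspect the logical file names inside the downloaded backup
RESTORE FILELISTONLY FROM DISK = N'C:\Temp\MyDatabase.bak';

-- Restore, relocating the files to local paths
RESTORE DATABASE MyDatabase
FROM DISK = N'C:\Temp\MyDatabase.bak'
WITH MOVE N'MyDatabase' TO N'C:\Data\MyDatabase.mdf',
     MOVE N'MyDatabase_log' TO N'C:\Data\MyDatabase_log.ldf';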
The SSIS Import Export Wizard can generate a package to duplicate a whole set of tables. (It's not the sort of Copy Database function that relies on files - it makes a package with data flow components for each table.)
It's somewhat brittle but can be made to work :-)
SSMS's Generate Scripts feature can often fail with any large data set, as the script for all the data is just too large/verbose. This method never scripts out the data.
Check this out: https://github.com/andrebonna/RDSDump
It is a C#.NET console application that searches for the latest origin database snapshot, restores it on a temporary RDS instance, generates a BACPAC file, uploads it to S3, and deletes the temporary RDS instance.
You can transform your RDS snapshot into a BACPAC file that can be downloaded and imported onto your local SQL Server 2014 instance using the technique answered here (Azure SQL Database Bacpac Local Restore).
Redgate's SQL Compare and SQL Data Compare are invaluable for these types of things. They are not cheap (but worth every penny imo). But if this is a one-time thing, you could use the 14 day trial and see how it behaves for you.
http://www.red-gate.com/products/

How to create a "copy" of SQL Server transaction log file

I want a copy of a SQL Server transaction log file for "raw" analysis. What is the safest way to get a copy of that file without shutting down the database and disturbing the existing log/backups/backup schedules and just about everything?
FYI, it's a SQL Server 2000 database server, and while I can see the log file (it's about 4 GB in size), I cannot copy it as is; I get an "access denied" error when copying from Explorer or the command line.
Can you not analyse the log file backups instead of the log file itself? If you must have a copy of the log file itself, restoring a backup of the database and all transaction logs will give you a replica of the transaction log file in a different database.
You cannot copy a transaction log file while the database is using it. However, you can take a backup of your database, restore it in another location with another name, and then detach it.
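A minimal sketch of that restore-and-detach approach (the names and paths are placeholders; sp_helpfile run in the restored copy shows its physical file paths):

-- Restore a copy under a new name, relocating its files
RESTORE DATABASE MyDB_Copy
FROM DISK = N'E:\Backups\MyDB.bak'
WITH MOVE N'MyDB_Data' TO N'E:\Copy\MyDB_Copy.mdf',
     MOVE N'MyDB_Log' TO N'E:\Copy\MyDB_Copy_log.ldf';

-- Detach the copy; its .ldf file can then be copied freely
EXEC sp_detach_db @dbname = N'MyDB_Copy';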
