Is it possible to protect a database from being deleted in Azure?

I have one production database and a test database. Now and then I delete the test database, and make a fresh duplicate from the production db.
I am pretty scared that in a brainless moment I will accidentally delete the production database.
I know that you can restore a deleted database, but even that downtime could be quite catastrophic at certain moments.
Is it possible to give it an extra lock or some other way to prevent me from deleting this database by accident?

Don't use an error-prone point-and-click interface; instead, write a script to delete the test database. You can use either:
Azure Powershell
Azure Cross-Platform CLI
You'll be absolutely safe because you won't have a script to delete the production database!
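Whichever tool you choose, the point is that the script only ever names the test database. As a minimal T-SQL sketch (the database names here are placeholders), run against the logical server's master database:
-- Refresh the test copy: drop it if it exists, then copy production again.
-- Only [MyApp_Test] ever appears in a DROP statement, so production is never touched.
IF DB_ID(N'MyApp_Test') IS NOT NULL
    DROP DATABASE [MyApp_Test];
-- Azure SQL Database can create a transactionally consistent copy of another database:
CREATE DATABASE [MyApp_Test] AS COPY OF [MyApp_Prod];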

Dirk - you don't say whether this is SQL Server on a VM or an Azure SQL Database.
If you are using Azure SQL Database on one of the new tiers (Basic, Standard or Premium), you can recover deleted databases back to the point at which they were deleted. How long deleted databases are retained depends on the tier (7 days for Basic, 14 days for Standard, 35 days for Premium). Full details are on the Azure documentation website.

Related

Modify table row while export/import table to another database SQL Server

I have an issue with my production database, so I want to reproduce the issue on my development database. The DBMS I use is SQL Server 2016 (SP1).
To reproduce the error I'm copying all the data to the development database using Export in SQL Server Management Studio.
The production database is live and users are still using it, so there will be inserts, updates, or even deletes happening while I'm exporting the data.
What will happen to rows that are modified (inserted, updated, or even deleted) while I'm exporting the data? Will they be exported to my development database? And why - how does SQL Server handle something like this?
What is a good way to move a production database to a development database?
And the extreme one: what will happen if table columns are modified while the export is in progress?
EDIT:
I need to mention that the DBMS version on production is higher than on development, so I can't use backup/restore to move the database.
What is a good way to move a production database to a development database?
You should back up your database on the production server and restore it on the dev server.
This will not block user activity on production.
What will happen to rows that are modified (inserted, updated, or even deleted) while I'm exporting the data?
If an insert or update arrives after the reading process has already started on a table, the change will be blocked. Vice versa, if any DML has already started on the same rows, the reading process will wait until the modification is committed or rolled back.
And the extreme one: what will happen if table columns are modified while the export is in progress?
While you are reading, a Sch-S (schema stability) lock is held on the table, so no column modification can be done until this lock is released.
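For reference, the backup/restore route suggested above looks roughly like this in T-SQL (paths and logical file names are placeholders, and per the asker's edit it only works when the development instance's version is not lower than production's):
-- On production: a copy-only backup leaves the regular backup chain untouched.
BACKUP DATABASE [ProdDb]
    TO DISK = N'D:\Backups\ProdDb_copy.bak'
    WITH COPY_ONLY, COMPRESSION, INIT;
-- On development: restore under a new name, relocating the files.
RESTORE DATABASE [DevDb]
    FROM DISK = N'D:\Backups\ProdDb_copy.bak'
    WITH MOVE N'ProdDb' TO N'D:\Data\DevDb.mdf',
         MOVE N'ProdDb_log' TO N'D:\Data\DevDb_log.ldf',
         REPLACE;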

Creating Users on secondary server in log shipping

I have a production server, say ServerA, and I have set up log shipping to ServerB, which is left in read-only mode. The purpose of this log shipping is to lower the load on the production server from some expensive queries (painful reports).
Now I have to create some logins using our domain accounts. I cannot do this because the secondary database is in standby mode.
I thought that if I created these logins on the primary server they would be copied over to the secondary server when the logs are restored there, but this isn't the case.
I have done a lot of research online looking for a way around this and found the following resources. I tried every method suggested in these articles but none of them seems to work.
1) Log Shipping in SQL Server 2008 R2 for set BI on replicated database
2) How to transfer logins and passwords between instances of SQL Server
3) Orphaned Users with Database Mirroring and Log Shipping
Has someone experienced the same issue? What did you do? Is there any way around this? Any suggestions or pointers, please.
Ali,
Of course I am crafty ...
Check out these articles.
http://technet.microsoft.com/en-us/magazine/2006.05.sqlqa.aspx
http://blogs.msdn.com/b/reedme/archive/2009/04/24/log-shipping-database-snapshots-bummer-dude.aspx
Database mirroring is a better solution since you can create a snapshot and report off that.
However, both mirroring and log shipping keep the database in a read-only state. Therefore, you cannot fix the orphaned users there.
The best way is to make sure your logins on both servers match (same name and SID), so orphans will not occur.
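As a rough sketch of that "matching logins" approach (server, account, and password values below are placeholders): domain logins inherit their SID from Active Directory, so creating the same login on the secondary is enough, while SQL logins need the SID copied explicitly from the primary - the sp_help_revlogin script from resource 2) above automates this and preserves password hashes.
-- On the secondary: domain logins - the SID comes from AD, so it matches automatically.
CREATE LOGIN [CONTOSO\ReportUsers] FROM WINDOWS;
-- For SQL logins: read the SID on the primary ...
SELECT name, sid FROM sys.server_principals WHERE name = N'report_app';
-- ... then recreate the login on the secondary with the same SID, so the database user
-- that arrives via the log restores maps to it without becoming orphaned.
CREATE LOGIN [report_app]
    WITH PASSWORD = N'S@mePa55wordAsPrimary!',            -- placeholder
         SID = 0x241C11948AEEB749B0D22646DB1A19F2;        -- value copied from the primary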
In your case, you might have to remove log shipping, create the logins on the DR server, drop the database, reseed the DR server with a backup, and restart shipping.
In this area, I am not speaking from experience since I always used clustering with a SAN.
Please test this out in a lower environment to work out any gotchas.
My upcoming project will be using Always On (with 1 primary, 1 secondary) = mirroring if synchronous or log shipping if asynchronous. But Always On allows for read-only secondaries, which is nice.
Please write back on how you make out. I am curious.
Take care my friend.
J

The fastest backup/restore strategy for Azure SQL databases?

What is the fastest way to backup/restore Azure SQL database?
The background: we have a database of ~40 GB, and restoring it from a .bacpac file (~4 GB of compressed data) the native way, via the Azure SQL Database Import/Export Service, takes up to 6-8 hours. Creating the .bacpac is also very slow and takes ~2 hours.
UPD:
Creating a database copy (which, by the way, is transactionally consistent) using CREATE DATABASE [DBBackup] AS COPY OF [DB] takes only 15 minutes for the 40 GB database, and the restore is a simple database rename.
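In case it is useful, a minimal sketch of that copy/rename flow (database names are placeholders), run in the master database of the logical server:
-- Take a transactionally consistent copy to act as the "backup":
CREATE DATABASE [DBBackup] AS COPY OF [DB];
-- "Restore" later by swapping names (do this when connections to the old copy can be dropped):
ALTER DATABASE [DB] MODIFY NAME = [DB_old];
ALTER DATABASE [DBBackup] MODIFY NAME = [DB];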
UPD. Dec 2014. Let me share our experience with the fastest DB migration scheme we ended up with.
First of all, the data-tier application (.bacpac) approach turned out not to be viable for us once the DB grew slightly bigger, and it also will not work for you if you have at least one non-clustered index with a total size > 2 GB, unless you disable the non-clustered indexes before the export - this is due to the Azure SQL transaction log limit.
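If you need to go that route, a sketch of disabling the offending indexes before the export (the table name is a placeholder):
-- Generate DISABLE statements for every non-clustered index on a large table:
SELECT 'ALTER INDEX ' + QUOTENAME(i.name) + ' ON '
       + QUOTENAME(OBJECT_SCHEMA_NAME(i.object_id)) + '.' + QUOTENAME(OBJECT_NAME(i.object_id))
       + ' DISABLE;'
FROM sys.indexes AS i
WHERE i.type_desc = 'NONCLUSTERED'
  AND i.object_id = OBJECT_ID(N'dbo.BigTable');
-- Run the generated statements before creating the .bacpac; after importing on the target,
-- bring the indexes back with ALTER INDEX ... ON ... REBUILD.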
We stuck with the Azure Migration Wizard, which for data transfer just runs BCP for each table (the BCP parameters are configurable), and it's ~20% faster than the .bacpac approach.
Here are some pitfalls we encountered with the Migration Wizard:
We ran into encoding trouble with non-Unicode strings. Make sure the BCP export and import run with the same code page - it's the -C ... switch; you can find the parameters BCP is called with in the .config file of the MW application.
Take into account that MW (at least the version current at the time of this writing) runs BCP with parameters that leave constraints in a non-trusted state, so do not forget to check all non-trusted constraints after the BCP import.
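A sketch of that post-import check; re-validating with WITH CHECK can be slow on large tables, so plan for it:
-- Find foreign keys and check constraints left untrusted by the bulk load:
SELECT name, OBJECT_NAME(parent_object_id) AS table_name
FROM sys.foreign_keys WHERE is_not_trusted = 1
UNION ALL
SELECT name, OBJECT_NAME(parent_object_id)
FROM sys.check_constraints WHERE is_not_trusted = 1;
-- Re-validate all constraints on a table (repeat per affected table; the name is a placeholder):
ALTER TABLE [dbo].[Orders] WITH CHECK CHECK CONSTRAINT ALL;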
If your database is 40 GB, it's long past time to consider having a redundant database server that's ready to go as soon as the main one becomes faulty.
You should have a second server running alongside the main DB server that has no actual routines except to sync with the main server on an hourly/daily basis (depending on how often your data changes, and how long it takes to run this process). You can also consider creating backups from this database server, instead of the main one.
If your main DB server goes down - for whatever reason - you can change the host address in your application to the backup database, and spend the 8 hours debugging your other server, instead of twiddling your thumbs waiting for the Azure Portal to do its thing while your clients complain.
Your database shouldn't be taking 6-8 hours to restore from backup though. If you are including upload/download time in that estimate, then you should consider storing your data in the Azure datacenter, as well as locally.
For more info see this article on Business Continuity on MSDN:
http://msdn.microsoft.com/en-us/library/windowsazure/hh852669.aspx
You'll want to specifically look at the Database Copies section, but the article is worth reading in full if your DB is so large.
Azure now supports Point-in-time restore, Geo-restore and Geo-DR features. You can use a combination of these for quick backup/restore. PiTR and Geo-restore come at no additional cost, while you have to pay for a Geo-replica.
There are multiple ways to do backup, restore and copy jobs on Azure.
Point in time restore.
The Azure service takes full backups, multiple differential backups, and t-log backups every 5 minutes.
Geo Restore
Same as point-in-time restore; the only difference is that it picks up a redundant copy from blob storage in a different region.
Geo-Replication
Similar to SQL Availability Groups: up to 4 asynchronous replicas with read capabilities. Select a region to become a hot standby.
More on Microsoft Site here. Blog here.
Azure SQL Database already has these local replicas that Liam is referring to. You can find more details on these three local replicas here http://social.technet.microsoft.com/wiki/contents/articles/1695.inside-windows-azure-sql-database.aspx#High_Availability_with_SQL_Azure
Also, SQL Database recently introduced new service tiers that include new point-in-time-restore. Full details at http://msdn.microsoft.com/en-us/library/azure/hh852669.aspx
The key is also to use the right data management strategy, one that helps meet your objective. The wrong architecture and an approach of putting everything in the cloud can prove disastrous... here's more reading on it - http://archdipesh.blogspot.com/2014/03/windows-azure-data-strategies-and.html

Best Solution to have a Live copy of a Database when replication is not an Option

Recently I had to implement transactional replication to get a live copy of a database on another server for reporting purposes. While configuring replication I realized that a lot of tables didn't have a primary key, so I could not publish all the tables I wanted to.
The second option was to implement merge replication, but that would have added a GUID column to all the tables. It is a database for a vendor application, and the vendor has warned us not to "touch" the database structure, because any change to it can cause their application to break. So merge replication is not an option either.
I have been doing some research on the other options available to me in this scenario; the only thing I could find is log shipping. I know it will leave my database in read-only mode, but since (to my knowledge) this is the only option I am left with, and it will be used strictly for reporting purposes, I think I can live with that.
Can anyone suggest a better solution for this? Or is Log Shipping the only option left for me?
It is SQL Server 2008 R2 64-bit DataCenter Edition.
Your other options are:
Database mirroring, and using a database snapshot on the mirror for read-only operations (see the sketch after this list). It can be a pain to manage snapshots.
Upgrading to SQL Server 2012 and making use of readable secondaries in Availability Groups. This can be a pain in the wallet.
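A rough sketch of the mirroring option: on the mirror server you create a database snapshot and point the reports at it, dropping and recreating the snapshot whenever the data needs refreshing. The database, logical file, and path names below are placeholders; NAME must match the logical data file name of the mirrored database.
-- On the mirror server:
CREATE DATABASE [VendorDb_Report]
    ON (NAME = N'VendorDb_Data', FILENAME = N'D:\Snapshots\VendorDb_Report.ss')
    AS SNAPSHOT OF [VendorDb];
-- Reports query [VendorDb_Report]; to refresh, drop the snapshot and create a new one.
DROP DATABASE [VendorDb_Report];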
You mention log shipping, but based on your follow-up comments I don't think it's clear that every time you restore a log to the log-shipped copy, you need to kick out all of the users that may be running reports. This is because you need exclusive access to the database in order to restore the log. This is another case of "you get what you pay for" - you can log ship to Express instances if you want to (and if your database supports it), but it's not exactly a watertight solution.

SQL Server High Availability on premise - cloud

I would like to know the best way to make a copy of an on-premises SQL Server 2008 (not R2) database in SQL Azure and keep the copies synchronized.
Think of the SQL Azure copy as a kind of failover structure...
Notes:
The database runs fine in SQL Azure
I have already figured out how to get the rest of the app running on Azure
Please consider suggestions of the type "Upgrade to SQL Server 2012 because of X" if the gains (reliability, efficiency, time to replicate, etc.) are worth it
I'm looking for instant replication (as fast as possible)
Yes, it will have to sync back eventually. If the on-premises deployment crashes and the cloud copy gets activated and changed, syncing back will be necessary, but I think it does not need to be automatic... if it is, even better!
The database consists of 900+ tables (legacy system)
http://www.windowsazure.com/en-us/manage/services/sql-databases/getting-started-w-sql-data-sync/
http://msdn.microsoft.com/en-us/library/hh456371.aspx
I think the best bet is to use SQL Data Sync; it should give you bidirectional sync, and we currently use it to sync data around the world across datacenters plus one local on-premises database. It will only give you a 5-minute sync interval, but that will probably do; otherwise the next best option is to use SQL Server VMs and do it the old-fashioned way. We have found SQL Azure Data Sync reasonably reliable and have been running it for a good six months, syncing across 4 databases in four data centres in Azure.
Some problems with it, though:
It uses triggers.
It will obviously add load and connections to your current SQL database.
The new control panel in Azure is a nightmare for it, so I would use the old panel for the moment.
It was still in preview last time I looked, so it might not be 100% suitable for you.
I would imagine there are better third-party solutions out there, but off the shelf and in Azure, SQL Data Sync is well worth a look for the situation you are describing.
