We have a SQL Server 2014 database running in a virtual machine within Azure.
The virtual machine has these specs: D13, 8 cores, 56 GB RAM, which is one of the top options available from Azure.
While backups are running, the database causes timeout errors in the web application (roughly 100 concurrent users).
I would expect top-notch hardware from Azure to handle backups and serve user requests without struggling.
Is this acceptable? What alternatives could I implement to avoid it?
Try implementing backups using the split (striped) backup mechanism. SQL Server will write to multiple backup files in parallel, so the time taken to complete the backup process is lower.
Reference: http://www.sqlideas.com/2011/08/split-database-full-backup-to-mupltiple.html
For huge databases this is the best approach, in my opinion.
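For example, a striped full backup to multiple files might look like the sketch below (the database name and file paths are placeholders; ideally each file sits on a different drive or storage target):

-- Striped (split) full backup: SQL Server writes to all of the files in parallel
BACKUP DATABASE [MyBigDb]
TO  DISK = N'E:\Backups\MyBigDb_1.bak',
    DISK = N'F:\Backups\MyBigDb_2.bak',
    DISK = N'G:\Backups\MyBigDb_3.bak',
    DISK = N'H:\Backups\MyBigDb_4.bak'
WITH COMPRESSION, CHECKSUM, STATS = 5;

Keep in mind that all of the stripes are needed together at restore time, and that striping only helps if the underlying storage can keep up with the extra parallel I/O.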
I have an Azure VM - Windows (Windows Server 2008 R2 Datacenter). It has Microsoft SQL Server 2008 R2 running on it (version v10.50.6549).
The Azure VM has backups running according to a policy - and I can see in the backups blade for the VM that they are running nightly.
If I have an issue with the SQL Server, and need to roll back to a prior version of the database, will the File Recovery option from the VM backup be adequate?
Or should I also be running SQL Server backups via a maintenance plan on the server on the VM?
If I have an issue with the SQL Server, and need to roll back to a prior version of the database, will the File Recovery option from the VM backup be adequate?
Maybe. VM backups don't always give you consistent SQL backups. They usually work, but not always. If you have everything set up just right and get consistent VM backups, it might be OK, but you are running a fairly old OS on that VM, so I'd be nervous. Very nervous. If the data is really important to you, then you should back up the data, not just the VM. Sometimes you want to restore just the data to another VM to investigate, not the entire server. I also hope you have more than just "last night's VM backup" at any given time. Sometimes bad things happen on Friday and you don't notice until Monday.
Or should I also be running SQL backups via a maintenance plan on the server on the VM?
Yes, you should be running SQL backups if your data is important. If your data is really important (you don't want to lose half a day of it), you should be doing full backups periodically (e.g. nightly), transaction log backups many times per hour, and keeping a few weeks' worth of backups in rotation. If your data is super-important (you don't want to lose more than a few seconds), you should be mirroring it over to another database server in near real time (asynchronously). If it is critical (you don't want to lose any data), then you want to mirror to another server in real time (synchronously).
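As a minimal sketch of that kind of schedule (the database name and paths are placeholders; in practice you would wrap these statements in SQL Server Agent jobs or a maintenance plan):

-- Nightly full backup
BACKUP DATABASE [AppDb]
TO DISK = N'E:\Backups\AppDb_full.bak'
WITH INIT, CHECKSUM;

-- Transaction log backup, scheduled every few minutes
-- (requires the database to be in the FULL recovery model)
BACKUP LOG [AppDb]
TO DISK = N'E:\Backups\AppDb_20240101_1200.trn'
WITH CHECKSUM;

The log backup file name would normally include a generated timestamp so each run produces a new file you can keep in rotation.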
Of course, if you are already running in Azure and don't have a DBA, managing a database is a lot easier, safer, more available, and generally cheaper if you use Azure SQL rather than trying to manage your own SQL Server instance in a VM. Backups are handled for you too, with point-in-time recovery down to the millisecond for up to 45 days, and they handle the mirroring for you as well. If you want to mirror to another region across the country, you do have to pay extra for that, though.
We have some Win7 machines storing experimental data. The experimental software is only compatible with Windows 7, and as such our IT department does not want to connect to these isolated machines anymore. Is there a way to completely "mirror" this Win7 PC's SQL Server, or completely copy it and keep it relatively up to date, on a Win 10 PC? And by server I mean multiple databases, which may grow in number. Thank you.
Is there a way to completely "mirror" this Win7 PC's SQL Server or completely copy it, and keep it relatively up to date
What you seem to want is LOG SHIPPING. The basic steps are:
Back Up your DB
Restore it on however many other servers you want to keep relatively up to date
Configure log shipping, either through the wizard or manually. This will ship the transaction log backups to your other servers
Restore the transaction logs
Of course, this is all automated once you set it up. How "relatively up to date" the copies stay depends on how often you do the transaction log backups, how long they take to travel, and how long they take to restore. If you only want it updated daily, weekly, or on some other schedule, you could just copy your DB backups to each of the other servers via PowerShell or whatever method you want, and restore the full backup instead of all of the transaction logs. This is a lot simpler, and the only downtime would be during the restore on your secondary servers (however long the restore takes). A T-SQL sketch of the moving parts is below.
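A rough sketch of those steps, assuming a database called AppDb and placeholder paths (the log shipping wizard automates all of this, including the copy job):

-- Steps 1-2: initialise a secondary from a full backup, leaving it able to accept log restores
BACKUP DATABASE [AppDb] TO DISK = N'\\secondary\backups\AppDb_full.bak';

-- On each secondary server:
RESTORE DATABASE [AppDb]
FROM DISK = N'D:\Incoming\AppDb_full.bak'
WITH NORECOVERY;   -- stays in RESTORING state so log backups can be applied

-- Steps 3-4, repeated on a schedule: back up the log, copy the file across, restore it
BACKUP LOG [AppDb] TO DISK = N'\\secondary\backups\AppDb_001.trn';
RESTORE LOG [AppDb] FROM DISK = N'D:\Incoming\AppDb_001.trn' WITH NORECOVERY;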
There are other options, which include Database Mirroring and Availability Groups. Each has its own pros and cons, and they come at a price.
What is the fastest way to back up/restore an Azure SQL database?
The background: we have a database of ~40 GB, and restoring it from the .bacpac file (~4 GB of compressed data) the native way, via the Azure SQL Database Import/Export Service, takes up to 6-8 hours. Creating the .bacpac is also very slow and takes ~2 hours.
UPD. Creating a copy of the database (which is, by the way, transactionally consistent) using CREATE DATABASE [DBBackup] AS COPY OF [DB] takes only 15 minutes with the 40 GB database, and the restore is a simple database rename.
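A rough sketch of that approach (the rename uses the standard ALTER DATABASE ... MODIFY NAME form, run against the logical server's master database; [DB_old] is just a placeholder name):

-- Transactionally consistent copy of the database
CREATE DATABASE [DBBackup] AS COPY OF [DB];

-- Once the copy has finished seeding, "restore" by swapping the names
ALTER DATABASE [DB] MODIFY NAME = [DB_old];
ALTER DATABASE [DBBackup] MODIFY NAME = [DB];

You can watch the progress of the copy in sys.dm_database_copies on the server's master database before doing the rename.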
UPD, Dec 2014. Let me share our experience with the fastest DB migration scheme we ended up with.
First of all, the data-tier application (.bacpac) approach turned out not to be viable for us once the DB got slightly bigger. It also will not work for you if you have at least one non-clustered index with a total size > 2 GB, unless you disable the non-clustered indexes before the export - this is due to the Azure SQL transaction log limit.
We stuck with the Azure Migration Wizard, which for data transfer just runs BCP for each table (the BCP parameters are configurable); it is roughly 20% faster than the .bacpac approach.
Here are some pitfalls we encountered with the Migration Wizard:
We ran into encoding trouble with non-Unicode strings. Make sure that the BCP import and export run with the same collation. It's the -C ... configuration switch; you can find the parameters BCP is called with in the .config file of the MW application.
Take into account that MW (at least the version current at the time of this writing) runs BCP with parameters that leave constraints in an untrusted state, so do not forget to check all untrusted constraints after the BCP import.
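To find what was left untrusted, and fix it, something along these lines can be run after the import; the table and constraint names in the last statement are hypothetical:

-- Constraints left untrusted after the BCP import
SELECT name, OBJECT_NAME(parent_object_id) AS table_name
FROM sys.foreign_keys
WHERE is_not_trusted = 1
UNION ALL
SELECT name, OBJECT_NAME(parent_object_id)
FROM sys.check_constraints
WHERE is_not_trusted = 1;

-- Re-validate a constraint so the optimizer can trust it again
ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT FK_Orders_Customers;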
If your database is 40 GB, it's long past time to consider having a redundant database server that's ready to go as soon as the main one becomes faulty.
You should have a second server running alongside the main DB server with no actual duties except syncing with the main server on an hourly/daily basis (depending on how often your data changes and how long this process takes to run). You can also consider creating backups from this second server instead of the main one.
If your main DB server goes down, for whatever reason, you can change the host address in your application to the backup database and spend the 8 hours debugging your other server, instead of twiddling your thumbs waiting for the Azure Portal to do its thing while your clients complain.
Your database shouldn't be taking 6-8 hours to restore from backup, though. If you are including upload/download time in that estimate, then you should consider storing your data in the Azure datacenter as well as locally.
For more info see this article on Business Continuity on MSDN:
http://msdn.microsoft.com/en-us/library/windowsazure/hh852669.aspx
You'll want to specifically look at the Database Copies section, but the article is worth reading in full if your DB is so large.
Azure now supports Point in time restore, Geo restore, and GeoDR features. You can use a combination of these for quick backup/restore. PiTR and Geo restore come at no additional cost, while you have to pay for the Geo replica.
There are multiple ways to do backup, restore and copy jobs on Azure.
Point in time restore.
The Azure service takes full backups, multiple differential backups, and t-log backups every 5 minutes.
Geo Restore
Same as point in time restore; the only difference is that it restores from a redundant copy kept in blob storage in a different region.
Geo-Replication
Similar to SQL Availability Groups: up to 4 asynchronous replicas with read capabilities. Select a region to become a hot standby; a T-SQL sketch is included below.
More on Microsoft Site here. Blog here.
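If it helps, geo-replication can also be set up in T-SQL rather than through the portal. A hedged sketch, with placeholder database and partner server names, run in master on the primary logical server:

-- Add a readable secondary on another Azure SQL server
ALTER DATABASE [AppDb]
ADD SECONDARY ON SERVER [partner-server]
WITH (ALLOW_CONNECTIONS = ALL);

-- Planned failover: run in master on the secondary server
ALTER DATABASE [AppDb] FAILOVER;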
Azure SQL Database already has these local replicas that Liam is referring to. You can find more details on these three local replicas here http://social.technet.microsoft.com/wiki/contents/articles/1695.inside-windows-azure-sql-database.aspx#High_Availability_with_SQL_Azure
Also, SQL Database recently introduced new service tiers that include new point-in-time-restore. Full details at http://msdn.microsoft.com/en-us/library/azure/hh852669.aspx
The key is also to use the right data management strategy, one that helps meet your objective. The wrong architecture, and an approach of putting everything in the cloud, can prove disastrous... here's more to read on it - http://archdipesh.blogspot.com/2014/03/windows-azure-data-strategies-and.html
What are my options for achieving a cold backup server for a SQL Server Express instance running a single database?
I have an SQL Server 2008 Express instance in production that currently represents a single point of failure for my application. I have a second physical box sitting at the installation that is currently doing nothing. I want to somehow replicate my database in near real time (a little bit of data loss is acceptable) to the second box. The database is very small and resources are utilized very lightly.
In the case that the production server dies, I would manually reconfigure my application to point to the backup server instead.
Although Express doesn't support log shipping, I am thinking that I could manually script a poor man's version of it, where I use batch files to take the logs and copy them across the network and apply them to the second server at 5 minute intervals.
Does anyone have any advice on whether this is technically achievable, or if there is a better way to do what I am trying to do?
Note that I want to avoid having to pay for the full version of SQL Server and configure mirroring, as I think it is overkill for this application. I understand that other DB platforms may present suitable options (e.g. a MySQL Cluster), but for the purposes of this discussion, let's assume we have to stick to SQL Server.
I would also advise script-based log shipping. After all, this is how log shipping started. All you need is a time-based agent to schedule the scripts (i.e. Task Scheduler) and a smart(er) file copy (robocopy).
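A hedged sketch of the T-SQL each scheduled script would run (the database name and paths are placeholders; the copy step in between would be robocopy or similar, and the secondary is first initialised from a full backup restored WITH NORECOVERY):

-- On the primary, every 5 minutes (the script would generate a timestamped file name)
BACKUP LOG [AppDb] TO DISK = N'D:\LogShip\AppDb_0935.trn';

-- On the secondary, after the file has been copied across
RESTORE LOG [AppDb] FROM DISK = N'E:\LogShip\AppDb_0935.trn' WITH NORECOVERY;

-- At failover time, bring the secondary online
RESTORE DATABASE [AppDb] WITH RECOVERY;

Each script can be run from Task Scheduler via sqlcmd, and keeping uniquely named log files rather than overwriting a single file makes it easier to catch up if a copy is missed.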
I have the following scenario:
Our system is running a SQL Server Express 2005 database locally (on each user's desktop, if you will). The system is storing a lot of production data from a machine. There are high demands on the safety of the data, and doing a backup each night, or even each hour, is not enough. We need a backup strategy that will ensure almost instantaneous/continuous backup of the database.
Is there anyone out there who has successfully implemented a system similar to this, and/or has some ideas on how to accomplish it? The only thing I can think of right now is to have mirrored drives (RAID) to hold the data, but that would be complicated and expensive.
I would appreciate any and all thoughts on this, since it is a real issue for me and my company. Thanks in advance!
Update:
I was not clear enough in my description of the scenario. The system is storing data in a vehicle that has no connection to anything. A centralized database is therefore not possible. Neither can we use a Standard/Enterprise edition of SQL Server, since it would be too expensive (each vehicle would need a license). Thanks for your input!
Switch your database into the "Full" recovery model. Do a full backup every night and a differential (delta) backup after each major user action. The differential backups can be written to flash memory or a different hard drive, and all the data can be synchronized with a server when the vehicle is online.
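As a minimal sketch of that, assuming a placeholder database name and paths (SQL Server 2005 Express supports differential backups, though not backup compression):

-- Keep the log between full backups
ALTER DATABASE [VehicleDb] SET RECOVERY FULL;

-- Nightly full backup
BACKUP DATABASE [VehicleDb]
TO DISK = N'E:\Backups\VehicleDb_full.bak'
WITH INIT, CHECKSUM;

-- "Delta" after a major user action: a differential backup
-- (contains only the pages changed since the last full backup)
BACKUP DATABASE [VehicleDb]
TO DISK = N'E:\Backups\VehicleDb_diff.bak'
WITH DIFFERENTIAL, CHECKSUM;

With the FULL recovery model you will also want occasional transaction log backups, otherwise the log file will keep growing.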
Another simple way is to trace all user changes and important data in a text file stored on a separate drive. If the SQL database crashes, the user or another operator can repeat the steps to restore the data.
One way I've seen this done is by using DoubleTake.
I will assume that a central database on a server is not feasible because your systems are running standalone and are not connected to anything. So this is what I would do:
Set up RAID on the computer. This insures you against simple disk failure.
Any SQL Server database can be recovered to the point of the last committed transaction if you have a full database backup and a set of transaction log backups available. Basically you simply restore the last full backup, then apply the transaction logs going forward. See these links.
http://www.enterpriseitplanet.com/storage/features/article.php/11318_3776361_3
https://web.archive.org/web/1/http://blogs.techrepublic%2ecom%2ecom/datacenter/?p=132
So what you need to do is set up a periodic full backup of the database, plus more regular transaction log backups (and ensure that your transaction log can never run out of space).
In the event of failure you restore the last full backup, then apply the transaction logs going forward.
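A hedged sketch of that restore sequence (the database name and paths are placeholders):

-- Restore the last full backup without recovering the database
RESTORE DATABASE [VehicleDb]
FROM DISK = N'D:\Backups\VehicleDb_full.bak'
WITH NORECOVERY;

-- Apply each transaction log backup in order
RESTORE LOG [VehicleDb] FROM DISK = N'D:\Backups\VehicleDb_log_01.trn' WITH NORECOVERY;
RESTORE LOG [VehicleDb] FROM DISK = N'D:\Backups\VehicleDb_log_02.trn' WITH NORECOVERY;
-- ...continue through the remaining log backups...

-- Finally bring the database online
RESTORE DATABASE [VehicleDb] WITH RECOVERY;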
Myself, if these are critical systems, I would be inclined to add an additional drive to the system and make sure that the backups are copied over to it. This is because, as good as RAID is, it does sometimes have issues: RAID controllers fail, disks get wiped accidentally in parallel, disk failures go unnoticed so you're just running on one disk, etc. If you ensure backups are copied to a separate disk, then you can always recover to the last transaction log backup. You should also keep tape backups of course, but they are generally a last resort in the event of trouble.
If for some reason you cannot set up RAID, then you should still install a second disk, but place the database file on one drive and the transaction log on the other, and copy backups to both disks. In the event of a failure of the C drive, or some other software issue crashing the database, you can still recover to the last committed transaction. A failure of the D drive limits you to the last transaction log backup. (Oracle used to allow you to mirror the transaction log from the database, which again would completely cover you, but I don't think this facility exists in SQL Server.)
If you are looking for a scheduler for SQL Server Express (which doesn't come with one) then I've been using SQLScheduler quite happily without problems, and it's free.
The most obvious answer would be to ditch SQL Server Express running locally and use a single source for your data (such as a standard SQL Server install in a central location). Unless, that is, your system requires individual backups of every single person's own instance of SQL Server Express.
If your requirements are so stringent as to call for instantaneous backups on every operation, you should definitely think about a different method of storage than local instances of SQL Server Express.
Wouldn't it be easier to just use one centralized SQL Server and back that up every hour or so? If you truly need instantaneous backup, your company (which seems to be avoiding spending money by installing the free Express edition on each machine) will need to spring for two servers and two SQL Server Enterprise licenses to implement mirroring.
RAID isn't that expensive, but it is also not the best option. If you really want highly available data, you should upgrade to SQL Server Standard on a remote server that each user connects to, and use transactional replication to a SQL Server (Express) instance on another machine. RAID doesn't always protect you from data loss. If the data is that important to you, then the cost should not be that much of an issue.
Update in response to the question update.
If you can't use remote servers, then there are a couple of options:
You write a trigger which initiates a backup script on each insert or update and stores it on a separate hard drive.
You use RAID. But beware that if the RAID controller fails, you still have a problem.
RAID is not expensive. Use RAID to protect against hard drive failure. You also need monitoring though. No point in having this if you let both drives fail.
Also, implement hourly incremental backups, then daily incremental backups and finally weekly full backups.
You need all of these strategies working together because they protect against different things. RAID does not protect against human or coding errors destroying data. Hourly and weekly backups don't protect against hard drive failure.