I have a SQL database set up on Azure. I was trying to understand whether it is automatically backed up or whether that is something I need to set up myself. Could I get some feedback on this, please?
Azure SQL Databases are automatically backed up for you; you don't need to explicitly set anything up. As for how frequently the backups happen, from this link:
For local database backups, full database backups happen weekly, differential database backups happen hourly, and transaction log backups happen every five minutes. The first full backup is scheduled immediately after a database is created. It usually completes within 30 minutes, but it can take longer when the database is of a significant size.
Furthermore, the duration for which the backups are kept (so that you can go back in time and restore them) depends on the service tier of your database. From the same link:
Each SQL Database backup has a retention period that is based on the service tier of the database. The retention period for a database in the:
Basic service tier is seven days.
Standard service tier is 35 days.
Premium service tier is 35 days.
I am new to SQL Azure and am trying to set up automatic backups, but I could not find an option for this. I am using the free 30-day trial with the Standard pricing tier. Please help!
These questions can be answered by looking at the documentation. SQL Azure has point-in-time restore. For databases in the Basic tier there's point-in-time restore for 7 days; for Standard it's 14 days, and for Premium 35 days.
All Basic, Standard, and Premium databases are protected by automatic backups. Full backups are taken every week, differential backups every day, and log backups every 5 minutes. The first full backup is scheduled immediately after a database is created.
For long-term backup retention, have a look at Store Azure SQL Database backups for up to 10 years.
Edit:
To learn how to configure long-term retention backup, have a look at Manage Azure SQL Database long-term backup retention
Can anybody help me? How do I take continuous backups from the Azure portal? (I don't have a virtual machine in Azure.) It should be like a scheduler running in the background that takes a backup of the SQL server at a monthly interval.
SQL Database automatically creates database backups.
If you want backups covering a monthly interval, you could upgrade your database's service tier to Standard or Premium; at those tiers, SQL Database keeps every backup, existing and new, until it is 35 days old.
For more details, you could refer to this article.
I started using Managed Backup on my SQL Server. It has been working well for over a year. It seems to back up the DBs once a week and take incrementals every 2 hours.
A month ago, we changed our VM backup solution to Azure Recovery Services and started running it every night. When Azure Recovery Services runs in the evening, it looks like, from the Windows and SQL logs, it takes a backup of each database before it does a volume shadow copy. The backups are entered into the logs as TYPE=VIRTUAL_DEVICE: followed by a big GUID, and a new database LSN is created. When this VM backup occurs, my weekly Managed Backups are invalidated.
When I look in the msdb.dbo.smart_backup_files table, where SQL Managed Backup stores the records it uses to keep track of its backups, I can see there are two fields that seem to be important: backup_type (when this equals 1 it is a full backup, and when it is 2 it is a log backup) and backup_database_lsn, which identifies the full backup that the log can be applied to.
When SQL Managed Backup runs its full backup once a week, a new LSN is created, and every log file created afterwards has a backup_database_lsn value that points back to the LSN of that week's full SQL Managed Backup.
Now, when Azure Recovery Services runs nightly, a new full-database LSN is created by the TYPE=VIRTUAL_DEVICE backup seen in the logs. When I look in the Managed Backup table (msdb.dbo.smart_backup_files), I can see that all the subsequent log files that used to point to the Managed Backup's full-backup LSN now point to the new LSN of the Recovery Services VIRTUAL_DEVICE backup.
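(For anyone who wants to see the chain I'm describing, a query along these lines against the standard msdb backup history tables shows each backup, its type, the device it was written to, and the full-backup LSN it chains to; the database name is a placeholder, and device_type = 7 marks a VIRTUAL_DEVICE backup like the ones Recovery Services takes.)

    -- List recent backups with the full-backup LSN each one chains to.
    -- device_type = 7 identifies VIRTUAL_DEVICE backups (e.g. VSS-based VM backups).
    SELECT bs.database_name,
           bs.type,                  -- D = full, I = differential, L = log
           bs.backup_start_date,
           bs.first_lsn,
           bs.database_backup_lsn,   -- the full backup this one chains to
           bmf.device_type,
           bmf.physical_device_name
    FROM msdb.dbo.backupset AS bs
    JOIN msdb.dbo.backupmediafamily AS bmf
        ON bmf.media_set_id = bs.media_set_id
    WHERE bs.database_name = N'YourDatabase'   -- placeholder
    ORDER BY bs.backup_start_date DESC;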
If I need to do a restore from the Managed Backups, I can only get the full backup and one day's worth of logs. After that, all of the log files point back to the Recovery Services VIRTUAL_DEVICE backup, which doesn't really exist as a file I can restore from.
I have looked for the VIRTUAL_DEVICE backup. When I open a database in Enterprise Manager and click Restore, it pulls up the most recent full backup (in this case the Recovery Services full backup) and its log files. If I click on the full backup entry, it believes the file is in the SQL Server backup folder, with the GUID as the file name. That file does not exist, or it may exist inside the nightly VM backup, which I can't view in Azure Recovery Services. Either way, my weekly Managed Backup is invalidated for the rest of the week.
Does anyone know how to make these two work together? I would like to have a full VM backup in case something bad gets installed on the SQL Server and we need to do a full restore, and I'd like to have a weekly full backup with incremental log files in case we need to restore one database.
It sounds as if what you are looking for is differential backups. Those contain everything added to the database since the last full backup.
I.e. you take a full backup on Sunday evening, with a differential backup every day after. On Monday evening, your differential backup would contain everything added since the full backup. On Tuesday, it would contain everything it contained on Monday, as well as everything changed since then.
If you did the same using transaction log backups, your Monday evening backup would actually be identical to the differential backup described above. However, the Tuesday transaction log backup would only contain the changes made since the Monday transaction log backup.
When it comes to restoring, this would mean that in order to restore to a point in time, you would have to restore the latest Full Backup (Sunday), followed by every transaction log backup since, in sequence (Monday, Tuesday, etc.).
Using Differential backups, though, you would restore the latest Full Backup (Sunday), followed by the latest Differential Backup (Tuesday, if you're restoring on Wednesday).
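To make the two restore paths concrete, here is a minimal sketch with hypothetical file names and a database called MyDb. WITH NORECOVERY leaves the database in a restoring state so further backups can be applied; the final WITH RECOVERY brings it online:

    -- Differential strategy: latest full, then only the latest differential.
    RESTORE DATABASE MyDb FROM DISK = N'D:\Backup\MyDb_Full_Sun.bak' WITH NORECOVERY;
    RESTORE DATABASE MyDb FROM DISK = N'D:\Backup\MyDb_Diff_Tue.bak' WITH RECOVERY;

    -- Log strategy: latest full, then every log backup since, in sequence.
    RESTORE DATABASE MyDb FROM DISK = N'D:\Backup\MyDb_Full_Sun.bak' WITH NORECOVERY;
    RESTORE LOG MyDb FROM DISK = N'D:\Backup\MyDb_Log_Mon.trn' WITH NORECOVERY;
    RESTORE LOG MyDb FROM DISK = N'D:\Backup\MyDb_Log_Tue.trn' WITH RECOVERY;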
I am facing an issue with SQL Server transactional replication and am not able to find the root cause. First, let me tell you that I am not a DBA, so I may be ignorant of a few DBA concepts.
I am a .NET developer, and I have been given responsibility for setting up the replication.
I have a database at head office and am replicating a few tables to another server at a retail store.
The first time, I configured the replication with selected articles.
The replication was continuous, and it was running fine, but one Sunday night it failed with the error "process could not execute 'sp_replcmds'".
After spending some time on Google, I couldn't find any solution, so I rebuilt the replication, but this time the replication was scheduled (every 15 minutes), and I also configured it as pull instead of push. It started, but again the next Sunday night it crashed.
So I analyzed it and realized that on Sunday nights a reindexing job I had configured ran on the database, and since the recovery model was FULL, it generated a very large transaction log that the replication agent was not able to parse.
The third time, I again rebuilt the replication, and this time I scheduled it every 15 minutes, but only from 8:00 AM to 11:30 PM, because after 11:30 no store does any transactions. Also, for the reindexing job, I added 2 more steps: before reindexing, I change the recovery model to SIMPLE, and afterwards I change it back to FULL, irrespective of the result of the reindexing step (roughly as in the sketch below).
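The job steps look roughly like this in T-SQL (the database and table names here are placeholders; the real job rebuilds the indexes of every table):

    -- Step 1: switch to SIMPLE so reindexing doesn't inflate the transaction log.
    ALTER DATABASE HeadOfficeDb SET RECOVERY SIMPLE;

    -- Step 2: rebuild indexes (one table shown; the job covers every table).
    ALTER INDEX ALL ON dbo.SomeLargeTable REBUILD;

    -- Step 3: switch back to FULL (this step runs even if step 2 fails).
    ALTER DATABASE HeadOfficeDb SET RECOVERY FULL;

    -- NB: after a switch out of FULL, the log backup chain is broken until the
    -- next full or differential backup is taken.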
This setup worked properly for around 2 months.
Then, after 2 months, it failed again one Sunday night, with the same error ("process could not execute 'sp_replcmds'"). By that point I had scheduled the backup jobs: I was taking a full backup every day and a log backup every 15 minutes, with no differential backup.
After discovering that I had not configured a differential backup, I configured one as well (every 6 hours). But even after configuring the differential backup, the replication failed on Sunday night.
Can anybody please help me with the recommended setup for my scenario?
My setup is:
SQL Server: SQL Server 2008 R2 Enterprise on Windows Server 2008 R2
Distributor and Publisher are on the same machine.
Subscriber is on the retail store server.
sp_replcmds is run by the Log Reader Agent against the published database to get, well, replicated commands. According to the documentation, one needs to be at least db_owner to run that command. Make sure whatever account runs the Log Reader Agent has at least db_owner in the published database.
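On SQL Server 2008 R2 that can be done with sp_addrolemember, run in the published database. The database and account names below are placeholders for your published database and whatever account the Log Reader Agent runs under (the login must already be a user in that database):

    USE HeadOfficeDb;   -- the published database (placeholder name)
    GO
    -- Make the Log Reader Agent's account a member of db_owner
    -- so it can execute sp_replcmds.
    EXEC sp_addrolemember @rolename = N'db_owner',
                          @membername = N'DOMAIN\ReplLogReader';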
I manage a web application for a client with the following specs:
ASP.net 3.5 running on a Virtual Windows 2003 Web Server
SQL Server Standard hosting the database
Database current size of 6 GB, with a 1 GB/month growth rate
One single table is responsible for 98% of the size and holds the most critical data for the client
No log is kept for this big table; only SELECTs are run against it
50 GB of FTP space available for backups
Considering this scenario, what would be the best strategy for a SQL Backup and what tool would be best suited for this task (commercial applications included, client can pay for the license fee)?
Here is the strategy we use for CodePlex.com:
All SQL servers run with a peer server using SQL mirroring
Weekly full backup (stored on separate drive from databases)
Daily differential backup (stored on separate drive from databases)
Transaction log backup every 5 minutes (stored on separate drive from databases)
Daily tape backup
Tape backups taken offsite weekly
Also very important: test your backups! Studies have shown that over 30% of untested backup procedures are flawed. Here is our backup testing strategy (a sketch of the daily test restore follows the list):
Every 30 minutes verify the full backup file exists (using scheduled task)
Every 30 minutes verify the differential backup file exists (using scheduled task)
Every 30 minutes verify the transaction log backup file exists (using scheduled task)
Every 30 minutes verify the database mirroring is configured (using scheduled task)
Every day, do a test restore of the full+differential backup and report the table row counts (using scheduled task)
Once a month do a test restore of the most recent tape backup and verify the data
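The daily test restore could be scripted along these lines. This is only a sketch with hypothetical paths and logical file names: it restores the latest full plus differential to a scratch database, then reports row counts per table:

    -- Restore the latest full backup to a scratch database, leaving it restoring.
    RESTORE DATABASE MyDb_Verify
    FROM DISK = N'E:\Backup\MyDb_Full.bak'
    WITH NORECOVERY,
         MOVE N'MyDb'     TO N'E:\Verify\MyDb_Verify.mdf',   -- logical names are placeholders
         MOVE N'MyDb_log' TO N'E:\Verify\MyDb_Verify.ldf';

    -- Apply the latest differential and bring the copy online.
    RESTORE DATABASE MyDb_Verify
    FROM DISK = N'E:\Backup\MyDb_Diff.bak'
    WITH RECOVERY;

    -- Report table row counts from the restored copy.
    SELECT t.name, SUM(p.rows) AS row_count
    FROM MyDb_Verify.sys.tables AS t
    JOIN MyDb_Verify.sys.partitions AS p
        ON p.object_id = t.object_id AND p.index_id IN (0, 1)
    GROUP BY t.name
    ORDER BY t.name;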
It depends on how critical the data is. Here is, however, how I'd do it:
1. Run a full backup every day.
2. Run a differential backup every 4 hours.
3. Run a transactional log backup every 15 minutes
4. Keep a copy at the site and move a copy off the site as well as soon as the backup is done.
The database is not too big, and this is easily doable; the backup statements themselves would look something like the sketch below.
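As a sketch, the three jobs boil down to one BACKUP statement each; the paths are hypothetical, and in practice you would timestamp the file names and schedule each statement as a SQL Server Agent job:

    -- 1. Daily full backup.
    BACKUP DATABASE MyDb TO DISK = N'E:\Backup\MyDb_Full.bak' WITH INIT;

    -- 2. Differential backup every 4 hours.
    BACKUP DATABASE MyDb TO DISK = N'E:\Backup\MyDb_Diff.bak' WITH DIFFERENTIAL, INIT;

    -- 3. Transaction log backup every 15 minutes.
    BACKUP LOG MyDb TO DISK = N'E:\Backup\MyDb_Log.trn' WITH INIT;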
Use a third party tool like Redgate SQL Backup and it will automatically compress and encrypt the database backup for you. I have used it extensively and am a big fan.
Additionally, if you have another site available and the data is very critical, you might want to think about setting up log shipping as well.
This is a VPC? Can you install apps?
http://www.jungledisk.com/
That's what we use: create a SQL job that pushes out a backup every day, then use that service to push a copy up to Amazon's S3 service. If not, maybe you could have a local app that pulls the backup to a machine and then pushes it up with the S3 web service, or still using Jungle Disk.
This is important! If your app goes down, it hurts! Also make sure you back up your deployed app and the resources stored there, i.e. content uploaded to your app's storage directory.
I was going to type in my own answer to your question, but I realized there are lots of far better resources out there, like this article on SQLServerCentral.com. You can also find lots of "Best Practices on Backup" write-ups like this one.
You might also want to take into consideration how much data you can afford to lose and how long it will take you to restore the database. Your client may decide that they never want to lose more than 15 minutes of data, or they may decide that losing up to a day's worth of data is okay with them.