Daily Backups for a single table in Microsoft SQL Server - sql-server

I have a table in a database that I would like to backup daily, and keep the backups of the last two weeks. It's important that only this single table will be backed up.
I couldn't find a way to create a maintenance plan or a job that backs up a single table, so I thought of creating a stored procedure job that runs the logic mentioned above: copying rows from my table to a database on a different server, and deleting old rows from that destination database.
Unfortunately, I'm not sure if that's even possible.
Any ideas on how I can accomplish what I'm trying to do would be greatly appreciated.

You back up an entire database.
A table consists of entries in system tables (sys.objects), the permissions assigned to it (sys.database_permissions), its indexes (sys.indexes), plus the 8 KB data pages allocated to it. And what about foreign key consistency, for example?
Upshot: There is no "table" to back up as such.
If you insist, then bcp the contents out and backup that file. YMMV for restore.
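For illustration, a native-format export of a single table with bcp might look like the following (server, database, table and file names are placeholders, and -T assumes Windows authentication). The resulting file can then be backed up like any other file and reloaded later with bcp in or BULK INSERT.

    bcp MyDatabase.dbo.MyTable out D:\Backups\MyTable_20240101.bcp -n -S MyServer -T
    rem to reload it later:
    rem bcp MyDatabase.dbo.MyTable in D:\Backups\MyTable_20240101.bcp -n -S MyServer -T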

You can create a DTS/SSIS package to do this.

I've never done this, but I think you can create another file group in your database, and then move the table to this filegroup. Then you can schedule backups just for this file group. I'm not saying this will work, but it's worth your time investigating it.
To get you started...
http://decipherinfosys.wordpress.com/2007/08/14/moving-tables-to-a-different-filegroup-in-sql-2005/
http://msdn.microsoft.com/en-us/library/ms179401.aspx
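A rough sketch of the idea, assuming a table dbo.MyTable with a clustered index that can be rebuilt (the filegroup, file, index and path names here are all illustrative):

    -- add a filegroup and a data file for it
    ALTER DATABASE MyDatabase ADD FILEGROUP TableBackupFG;
    ALTER DATABASE MyDatabase
        ADD FILE (NAME = 'TableBackupFG_Data', FILENAME = 'D:\Data\TableBackupFG.ndf')
        TO FILEGROUP TableBackupFG;

    -- move the table by rebuilding its clustered index on the new filegroup
    CREATE UNIQUE CLUSTERED INDEX PK_MyTable
        ON dbo.MyTable (Id)
        WITH (DROP_EXISTING = ON)
        ON TableBackupFG;

Once the table lives on its own filegroup, you can schedule a filegroup backup of just that filegroup.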

Related

Physical tables in TempDB getting deleted automatically

In our solution we create some physical tables in "tempdb" for an activity. But recently we have been facing an issue where these physical tables are getting deleted automatically. We would like to know the possible reasons/scenarios behind this issue.
edit:
Yes, I get that creating physical tables in 'tempdb' is not advisable, but here I am only looking for possible reasons why they are getting deleted.
Wow - that is a really interesting thing to do. I am curious why you implemented it like that.
I take it that originally this strategy worked for you but now it doesn't? SQL Server will grow tempdb to an optimal size and then delete data from it but not shrink it. The tempdb may be mostly empty at any given point in time.
Maybe your tempdb is now running at capacity and something has to give. Possibly some change in the load (the type of queries being run, etc.) means that your tables are being wiped. Try giving it more space, or adding a second tempdb data file on another disk, as sketched below.
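For instance, a sketch of checking tempdb's current file sizes and adding a second data file on another disk (file name, path and size are assumptions):

    -- current tempdb file sizes (size is reported in 8 KB pages)
    SELECT name, size * 8 / 1024 AS size_mb
    FROM tempdb.sys.database_files;

    -- add another tempdb data file on a different disk
    ALTER DATABASE tempdb
        ADD FILE (NAME = 'tempdev2', FILENAME = 'E:\TempDB\tempdev2.ndf', SIZE = 4GB);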
From the docs:
tempdb is re-created every time SQL Server is started so that the system always starts with a clean copy of the database. Temporary tables and stored procedures are dropped automatically on disconnect, and no connections are active when the system is shut down. Therefore, there is never anything in tempdb to be saved from one session of SQL Server to another. Backup and restore operations are not allowed on tempdb.
This means that not only physical tables but also other objects like triggers, permissions, views, etc. will be gone after a service restart. This is why you shouldn't use tempdb for user objects.
You can create a schema in your own database and keep a SQL Agent job that deletes all of its tables every once in a while, so you can mimic a "temporary" physical table space as a workaround.
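A minimal sketch of that cleanup step, assuming a schema named scratch with no foreign keys between its tables (the schema name and the use of dynamic SQL are illustrative):

    -- drop every table in the scratch schema; run this from a SQL Agent job
    DECLARE @sql nvarchar(max) = N'';
    SELECT @sql += N'DROP TABLE ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N';'
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    WHERE s.name = N'scratch';
    EXEC sys.sp_executesql @sql;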
There are two types of temporary tables in MS SQL - local and global.
The deletion policy is the following:
local temporary tables (prefixed with #): these tables are deleted after the user disconnects from the instance of SQL Server
global temporary tables (prefixed with ##): these are deleted when all users referencing the table disconnect from the instance of SQL Server
The tempDB database tables are cleared out on startup as well.
There are other kinds of tables stored in tempdb as well. One of them is table variables (prefixed with @), and another is what you might call persisted temporary tables (created directly in tempdb without any prefix).
Persisted temporary tables are deleted only when the SQL service is restarted.
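For illustration, the different flavors look like this (table and column names are placeholders):

    CREATE TABLE #LocalTemp (Id int);            -- dropped when the creating session disconnects
    CREATE TABLE ##GlobalTemp (Id int);          -- dropped when the last referencing session disconnects
    DECLARE @TableVar TABLE (Id int);            -- table variable, scoped to the current batch/procedure
    CREATE TABLE tempdb.dbo.Persisted (Id int);  -- "persisted" table in tempdb, gone after a service restart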

SQL Server: Archiving old data

I have a database that is getting pretty big, but the client is only interested in the last two years' data. However, they would like to keep the older data "just in case".
Now we would like to archive the data to a different server over a WAN.
My plan is to create a stored proc to:
Copy all data from lookup tables, tables containing master data and foreign key tables over to the archive server.
Copy data from transactional tables over to the archive DB.
Delete transactional data from master db that's older than 2 years.
Although the approach will theoretically meet our needs, the two main problems are:
Performance: I'm copying the data over via SQL linked servers. Some of the big tables are really slow, as the process needs to compare which records exist and update them, and the records that don't exist need to be created. It looks like it will run for 3-4 hours.
We need to copy the tables in the correct sequence to prevent foreign key violations. Tables that have a relationship to themselves (e.g. a Customers table with a ParentCustomer field) need to be transferred without the ParentCustomer value, which then needs to be updated afterwards to prevent FK violations. This makes it difficult to auto-generate my insert and update statements (I would like to auto-generate my statements as far as possible).
I just feel there might be a better way of archiving data that I don't yet know about. SSIS might be an option, but I'm not sure whether it will address my existing challenges. I don't know much about SSIS, so I might need to find some material to study if that's the way to go.
I believe you need a batch process that will run as a scheduled task; perhaps every night. There are two options, which you have already discussed:
1) SQL Agent Job, which executes a Stored Procedure. The stored procedure will use Linked Servers.
2) SQL Agent Job, which will execute an SSIS package.
I believe you could benefit from a combination of both approaches, which would avoid linked servers. Here are the steps:
1) An SQL Agent Job executes an SSIS package, which transfers the data to be archived from the live database to the copy database. This should be done in a specific sequence to avoid foreign key violations.
2) Once the SSIS package has executed the transfer, then it executes a stored procedure on the live database deleting the information that is over two years old. The stored procedure will not require any linked servers.
You will have to use transactions to make sure duplicate data is not archived. For example, if the SSIS package fails then the transaction should be rolled back and the Stored Procedure should not be executed.
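A minimal sketch of the deletion step in 2), assuming a transactional table dbo.Transactions with a TransactionDate column (names and batch size are placeholders; deleting in batches keeps each transaction small):

    DECLARE @cutoff datetime = DATEADD(YEAR, -2, GETDATE());
    WHILE 1 = 1
    BEGIN
        -- delete already-archived rows older than two years, 5000 at a time
        DELETE TOP (5000) FROM dbo.Transactions
        WHERE TransactionDate < @cutoff;

        IF @@ROWCOUNT = 0 BREAK;
    END;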
You can use table partitions to create separate partitions for relevant date ranges.
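For example, partitioning the transactional table by year (the boundary dates, names and single-filegroup mapping are illustrative) would let you switch out or drop a whole year at a time:

    CREATE PARTITION FUNCTION pf_ByYear (datetime)
        AS RANGE RIGHT FOR VALUES ('2012-01-01', '2013-01-01', '2014-01-01');

    CREATE PARTITION SCHEME ps_ByYear
        AS PARTITION pf_ByYear ALL TO ([PRIMARY]);

    CREATE TABLE dbo.TransactionsPartitioned
    (
        TransactionId   int      NOT NULL,
        TransactionDate datetime NOT NULL
    ) ON ps_ByYear (TransactionDate);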

How can we take backup of one single schema with Data in SQL server, data is in billions

I have a database which has around 100 schemas. Out of these I want to take a backup of a single schema, which has around millions/billions of records per table. Is there a method to do so?
I want to do it once, as the data is consuming a lot of space, and a backup is necessary so that we can restore the data on demand for the customer.
I am using SQL Server 2008 R2.
I am afraid taking a backup of a single schema is not possible. However, you can transfer all tables belonging to one schema to a specific filegroup. Then you can choose "Files and filegroups" as the backup component, provided the recovery model is not Simple.
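A rough sketch of that filegroup backup, assuming the schema's tables have already been moved to a filegroup named ArchiveFG (database, filegroup and path names are placeholders):

    -- a read/write filegroup cannot be backed up separately under the SIMPLE recovery model
    ALTER DATABASE MyDatabase SET RECOVERY FULL;

    BACKUP DATABASE MyDatabase
        FILEGROUP = 'ArchiveFG'
        TO DISK = 'D:\Backups\MyDatabase_ArchiveFG.bak';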

SQL Server - Tempdb vs. Database Log usage

This may be a very basic question, but how can you determine beforehand whether a large operation will end up using database log or tempdb space?
For instance, one large insert / update operation I did used the database log to a point where we needed to employ SSIS & bulk operations just so the space wouldn't run out, because all the changes in the script had to be deployed at one time.
So now I'm working with a massive delete operation that would fill the log ten times over. I created a script that checks the space used by the database log file and deletes the rows in smaller batches, with the idea that once the log file gets large enough, the script aborts and then continues from that point the next day (allowing normal usage to continue until the next backup, without the risk of the log running out of space).
Now, instead of filling the log, the latter query started filling up tempdb. Tempdb data file, not log file, to be specific. So I'm thinking there's a huge hole where my understanding of these two should be. :)
Thanks for any advice!
Edit:
To clarify, the question here is: why does the first example use the database log, while the latter uses the tempdb data file, to store the changes? And in general, by which logic are DML operations stored in either tempdb or the log? Normally the log should store all DB changes, while tempdb is only used to store processed data during an operation when explicitly requested (i.e., temp objects) or when the server runs out of RAM, right?
There is actually quite a bit that goes on behind the scenes when deleting records from a table. This MSDN blog link may help shed some light on why tempdb is filling up when you try to delete. Either way, the delete will fill up the transaction log as well; it just sounds like tempdb is filling up before it gets to the step of logging the transaction(s).
I'm not entirely sure what your requirements are, but the following links could be somewhat enlightening on your transaction logging issues. These are all set for SQL Server 2008 R2, but you can switch to whatever version you are running.
Recovery Model Overview
Considerations for Switching from the Simple Recovery Model
Considerations for Switching from the Full or Bulk-Logged Recovery Model
You also have the option of truncating the table, but that depends on a few things. If you don't need the operation to be logged and you're deleting all the records from the table, you can truncate. If you are doing some sort of conditional delete but you're deleting more than you're keeping, you could always insert all of the records you want to keep into another "staging" table and then truncate the original. Then you can re-insert the records from the staging table back into the original. However, that really only works when you have no foreign key relationships on that table.
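A minimal sketch of the staging-table variant, assuming a table dbo.BigTable with a CreatedDate column (all names are placeholders; this ignores identity columns, indexes, constraints and permissions):

    -- copy only the rows you want to keep
    SELECT *
    INTO dbo.BigTable_Staging
    FROM dbo.BigTable
    WHERE CreatedDate >= DATEADD(YEAR, -2, GETDATE());

    TRUNCATE TABLE dbo.BigTable;         -- minimally logged

    INSERT INTO dbo.BigTable
    SELECT * FROM dbo.BigTable_Staging;  -- put the kept rows back

    DROP TABLE dbo.BigTable_Staging;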

what is the difference between backup database and script it saving schema and data?

When I want to restore a database, I think the best option is to create a backup of the database. However, I can also create a script that saves the schema and data, including primary keys, foreign keys, triggers, indexes...
In the case of the script, is the result the same as a restore's? I ask because the script is about 1 MB in size and the backup about 4 MB.
I ask this because I would like to change the collation of all columns in all my tables. I tried some scripts but they did not work, so I am thinking about creating a script so that when the tables are created, they use the database collation, which I set in the script when I create the database.
Is it a good option to use a script for that, or could I lose some kinds of constraints or other design elements?
Thanks.
You can see Full Database Backup on MSDN. The primary difference is that the transaction log is backed up as well. This provides you with many options, such as differential backups, which will eventually mean less drive space is needed to store your data.
In addition, using backup schemes provides you with easier ways to organize where, when and how (that is, the strategies by which) you back up.
There are ways to implement a full DB backup by script (look here), where you will lose nothing.
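For reference, a scripted full backup can be as simple as the following (database name, path and backup name are placeholders):

    BACKUP DATABASE MyDatabase
        TO DISK = 'D:\Backups\MyDatabase_Full.bak'
        WITH INIT, NAME = 'MyDatabase full backup';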

Resources