Physical tables in TempDB getting deleted automatically - sql-server

In our solution we create some physical tables in tempdb for an activity. Recently, however, we have been facing an issue where these physical tables get deleted automatically. We would like to know the possible reasons/scenarios behind this issue.
edit:
Yes, I get that creating physical tables in tempdb is not advisable, but here I am only looking for possible reasons why they are getting deleted.

Wow - that is a really interesting thing to do. I am curious why you implemented it like that.
I take it that this strategy originally worked for you but now it doesn't? SQL Server will grow tempdb to an optimal size and then delete data from it, but not shrink it. Tempdb may be mostly empty at any given point in time.
Maybe your tempdb is now running at capacity and something has to give. Possibly some change in the load - the type of queries being run, etc. - means that your tables are being wiped. Try giving it more space, or adding a second tempdb data file on another disk.
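A minimal sketch of adding a second data file (the logical name, path, and sizes below are examples, not values from your environment):

    ALTER DATABASE tempdb
    ADD FILE (
        NAME = tempdev2,                        -- example logical name
        FILENAME = 'E:\SQLData\tempdev2.ndf',   -- example path on another disk
        SIZE = 1024MB,
        FILEGROWTH = 256MB
    );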

From the docs:
tempdb is re-created every time SQL Server is started so that the system always starts with a clean copy of the database. Temporary tables and stored procedures are dropped automatically on disconnect, and no connections are active when the system is shut down. Therefore, there is never anything in tempdb to be saved from one session of SQL Server to another. Backup and restore operations are not allowed on tempdb.
This means that not only physical tables but also other objects like triggers, permissions, views, etc. will be gone after a service restart. This is why you shouldn't use tempdb for user objects.
You can create a schema in your own database and keep a SQL Agent job that drops all of its tables every once in a while, so you can mimic a "temporary" physical table space as a workaround.
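A minimal sketch of that workaround, assuming a dedicated schema named Scratch (the schema and procedure names are hypothetical; the procedure is what the Agent job would call):

    CREATE SCHEMA Scratch;
    GO
    CREATE PROCEDURE dbo.CleanupScratchTables
    AS
    BEGIN
        -- Build one DROP TABLE statement per table in the Scratch schema.
        DECLARE @sql nvarchar(max) = N'';
        SELECT @sql += N'DROP TABLE Scratch.' + QUOTENAME(t.name) + N';'
        FROM sys.tables AS t
        WHERE SCHEMA_NAME(t.schema_id) = N'Scratch';
        EXEC sys.sp_executesql @sql;
    END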

There are two types of temporary tables in MS SQL - local and global.
The deletion policy is the following:
local temporary tables (prefixed with #): these tables are deleted after the user disconnects from the instance of SQL Server
global temporary tables (prefixed with ##): these are deleted when all users referencing the table disconnect from the instance of SQL Server
The tempDB database tables are cleared out on startup as well.
There are other types of tables stored in tempdb. One of them is table variables (declared with an @ prefix) and the other is persisted temporary tables (created without using any prefix).
Persisted temporary tables are deleted only when the SQL service is restarted.
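For illustration, here is how each type is created (the names are arbitrary):

    -- Local temp table: dropped when the creating session disconnects.
    CREATE TABLE #LocalTemp (id int);

    -- Global temp table: dropped when the last referencing session disconnects.
    CREATE TABLE ##GlobalTemp (id int);

    -- Table variable: scoped to the current batch.
    DECLARE @TableVar TABLE (id int);

    -- Persisted table created directly in tempdb: survives disconnects,
    -- but is gone after a service restart.
    CREATE TABLE tempdb.dbo.PersistedTemp (id int);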

Related

What is the purpose of tempdb in SQL Server?

I need some clarification about tempdb in SQL Server, specifically on the following things:
What is its purpose?
Can we create our own tempdb, and how can we make our own database refer to it?
FROM MSDN
The tempdb system database is a global resource that is available to all users connected to the instance of SQL Server and is used to hold the following:
Temporary user objects that are explicitly created, such as: global or local temporary tables, temporary stored procedures, table variables, or cursors.
Internal objects that are created by the SQL Server Database Engine, for example, work tables to store intermediate results for spools or sorting.
Row versions that are generated by data modification transactions in a database that uses read-committed using row versioning isolation or snapshot isolation transactions.
Row versions that are generated by data modification transactions for features, such as: online index operations, Multiple Active Result Sets (MARS), and AFTER triggers.
Operations within tempdb are minimally logged.
This enables transactions to be rolled back. tempdb is re-created every time SQL Server is started so that the system always starts with a clean copy of the database.
Temporary tables and stored procedures are dropped automatically on disconnect, and no connections are active when the system is shut down. Therefore, there is never anything in tempdb to be saved from one session of SQL Server to another. Backup and restore operations are not allowed on tempdb.
Tempdb is a system database, and we can't create system databases. Tempdb is a global resource for all databases, which means temp tables, table variables, the version store for user databases, and so on all use tempdb. This is a pretty basic explanation of tempdb. Refer to the link below for how it is used for other purposes, such as Database Mail:
https://msdn.microsoft.com/en-us/library/ms190768.aspx
1: It is what it says: temporary storage. For example, when you ask for DISTINCT results, SQL Server must remember which rows it has already sent you. Same with a temporary table.
2: Makes no sense. Tempdb is not per-database but a server-level thing - ONE tempdb regardless of how many databases there are. You can change where it is and how it is laid out (file count, sizes), but it is never tied to one database (except obviously if you only have one database on a SQL Server instance). Having your own tempdb is NOT how SQL Server works. And while we are at it - there is no need to ever back up tempdb: when SQL Server starts, tempdb is reinitialized as empty.
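For instance, you can inspect and then move or resize the tempdb files (the path below is an example; tempdev is the default logical name of the primary data file; a file move only takes effect after a service restart):

    -- Inspect the current tempdb files.
    SELECT name, physical_name, size * 8 / 1024 AS size_mb
    FROM tempdb.sys.database_files;

    -- Move a tempdb file (applies at the next restart).
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf');

    -- Resize it (one property change per MODIFY FILE).
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 2048MB);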
And, by the way, this would be obvious if you bothered with such things as being borderline competent - which for me includes reading the documentation of every major technology I work with at least once. You should consider adopting that habit, because it is the only way to know what you are doing.

Copying Large Amounts of Data to Replicated Database

I have a local SQL Server database that I copy large amounts of data from and into a remote SQL Server database. Local version is 2008 and remote version is 2012.
The remote DB has transactional replication set up to one local DB and another remote DB. This all works perfectly.
I have created an SSIS package that empties the destination tables (the remote DB) and then uses a Data Flow object to add the data from the source. For flexibility, I have each table in its own Sequence Container (this allows me to run one or many tables at a time). The data flow settings are set to Keep Identity.
Currently, prior to running the SSIS package, I drop the replication settings and then run the package. Once the package completes, I then re-create the replication settings and reinitialise the subscribers.
I do it this way (deleting the replication and then re-creating) for fear of overloading the server with replication commands. Although most tables are between 10s and 1000s of rows, a couple of them are in excess of 35 million.
Is there a recommended way of emptying and re-loading the data of a large replicated database?
I don't want to replicate my local DB to the remote DB as that would not always be appropriate, and doing a backup and restore of the local DB would also not work due to the nature of the more complex permissions, etc. on the remote DB.
It's not the end of the world to drop and re-create the replication settings each time as I have it all scripted. I'm just sure that there must be a recommended way of managing this...
Don't do it - empty/reload is bad. Try updating the table via MERGE instead: that way you avoid the wholesale delete and re-insert, each of which is a replicated operation. Load the new data into temp tables on the other server (not replicated), then merge them into the replicated tables. If a lot of the data is unchanged, this will seriously reduce the replication load.
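A hedged sketch of that merge, with hypothetical staging and target table/column names:

    -- The staging table is loaded outside replication; only the rows the
    -- MERGE actually changes generate replication commands.
    MERGE dbo.TargetTable AS t
    USING staging.TargetTable AS s
        ON t.Id = s.Id
    WHEN MATCHED AND (t.Col1 <> s.Col1 OR t.Col2 <> s.Col2) THEN
        UPDATE SET t.Col1 = s.Col1, t.Col2 = s.Col2
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (Id, Col1, Col2) VALUES (s.Id, s.Col1, s.Col2)
    WHEN NOT MATCHED BY SOURCE THEN
        DELETE;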

SQL Server - Tempdb vs. Database Log usage

This may be a very basic question, but how can you determine beforehand whether a large operation will end up using database log or tempdb space?
For instance, one large insert/update operation I did used the database log to the point where we needed to employ SSIS and bulk operations just so the space wouldn't run out, because all the changes in the script had to be deployed at one time.
So now I'm working with a massive delete operation that would fill the log ten times over. So I created a script that checks the space used by the database log file and deletes the rows in smaller batches, with the idea that once the log file grows large enough, the script aborts and then continues from that point the next day (allowing normal usage to continue until the next backup, without risk of the log running out of space).
Now, instead of filling the log, the latter query started filling up tempdb. Tempdb data file, not log file, to be specific. So I'm thinking there's a huge hole where my understanding of these two should be. :)
Thanks for any advice!
Edit:
To clarify, the question here is: why does the first example use the database log, while the latter uses the tempdb data file, to store the changes? And in general, by which logic are DML operations stored in either tempdb or the log? Normally the log should store all DB changes, while tempdb is only used to store processed data during an operation when explicitly requested (i.e., temp objects) or when the server runs out of RAM, right?
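For reference, a minimal sketch of the kind of batched delete described above (the table name, filter, batch size, and threshold are hypothetical; sys.dm_db_log_space_usage requires SQL Server 2012 or later):

    DECLARE @log_used_pct float;
    WHILE 1 = 1
    BEGIN
        DELETE TOP (10000) FROM dbo.BigTable
        WHERE CreatedDate < '20140101';
        IF @@ROWCOUNT = 0 BREAK;            -- nothing left to delete

        SELECT @log_used_pct = used_log_space_in_percent
        FROM sys.dm_db_log_space_usage;     -- measures the current database's log

        IF @log_used_pct > 70 BREAK;        -- abort and resume tomorrow
    END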
There is actually quite a bit that goes on behind the scenes when deleting records from a table. This MSDN Blog link may help shed some light on why tempdb is filling up when you try to delete. Either way, the delete will fill up the transaction log as well; it just sounds like tempdb is filling up before it gets to the step of logging the transaction(s).
I'm not entirely sure what your requirements are, but the following links could be somewhat enlightening on your transaction logging issues. These are all set for SQL Server 2008 R2, but you can switch to whatever version you are running.
Recovery Model Overview
Considerations for Switching from the Simple Recovery Model
Considerations for Switching from the Full or Bulk-Logged Recovery Model
You also have the option of truncating the table, but that depends on a few things. If you don't need the operation to be logged and you're deleting all the records from the table, you can truncate. If you are doing some sort of conditional delete, but you're deleting more than you're keeping, you could insert all of the records you want to keep into another "staging" table, truncate the original, and then re-insert the records from the staging table. However, that really only works when you have no foreign key relationships on that table.
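A rough sketch of that keep-and-truncate approach (hypothetical names; assumes no foreign keys, and if the table has an identity column you'd also need SET IDENTITY_INSERT for step 3):

    SELECT * INTO dbo.BigTable_Keep        -- 1. stage the rows to keep
    FROM dbo.BigTable
    WHERE CreatedDate >= '20140101';

    TRUNCATE TABLE dbo.BigTable;           -- 2. minimally logged wipe

    INSERT INTO dbo.BigTable               -- 3. put the kept rows back
    SELECT * FROM dbo.BigTable_Keep;

    DROP TABLE dbo.BigTable_Keep;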

Daily Backups for a single table in Microsoft SQL Server

I have a table in a database that I would like to backup daily, and keep the backups of the last two weeks. It's important that only this single table will be backed up.
I couldn't find a way of creating a maintenance plan or a job that will backup a single table, so I thought of creating a stored procedure job that will run the logic I mentioned above by copying rows from my table to a database on a different server, and deleting old rows from that destination database.
Unfortunately, I'm not sure if that's even possible.
Any ideas how can I accomplish what I'm trying to do would be greatly appreciated.
You back up an entire database.
A table consists of entries in system tables (sys.objects), with permissions assigned (sys.database_permissions), indexes (sys.indexes), plus the allocated 8 KB data pages. And what about foreign key consistency, for example?
Upshot: There is no "table" to back up as such.
If you insist, then bcp the contents out and backup that file. YMMV for restore.
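For example, a native-format export and later re-import might look like this (the server, database, table, and path are placeholders):

    bcp MyDatabase.dbo.MyTable out C:\Backups\MyTable.dat -n -T -S MYSERVER
    bcp MyDatabase.dbo.MyTable in C:\Backups\MyTable.dat -n -T -S MYSERVER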
You can create a DTS/SSIS package to do this.
I've never done this, but I think you can create another filegroup in your database and then move the table to that filegroup. Then you can schedule backups just for that filegroup. I'm not saying this will work, but it's worth your time investigating.
To get you started...
http://decipherinfosys.wordpress.com/2007/08/14/moving-tables-to-a-different-filegroup-in-sql-2005/
http://msdn.microsoft.com/en-us/library/ms179401.aspx
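Roughly, the moving parts might look like this (the database, table, index, and path names are hypothetical; rebuilding the clustered index on the new filegroup is what moves the table):

    ALTER DATABASE MyDb ADD FILEGROUP TableBackupFG;
    ALTER DATABASE MyDb ADD FILE (
        NAME = TableBackupFile,
        FILENAME = 'E:\SQLData\TableBackup.ndf'
    ) TO FILEGROUP TableBackupFG;

    -- Move the table by rebuilding its clustered index on the new filegroup.
    CREATE UNIQUE CLUSTERED INDEX PK_MyTable
    ON dbo.MyTable (Id)
    WITH (DROP_EXISTING = ON)
    ON TableBackupFG;

    -- Back up just that filegroup.
    BACKUP DATABASE MyDb
    FILEGROUP = 'TableBackupFG'
    TO DISK = 'E:\Backups\MyTable_FG.bak';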

TempDB Log File Growth Using Global Temp Tables

This is really a two prong question.
One: I'm experiencing a phenomenon where SQL Server consumes a lot of tempdb log file space when using a global temp table, whereas using a local temp table consumes data file space instead.
Is this normal? I can't find anywhere on the web where it talks about consuming log file space in such a way when using global temp tables vs. local temp tables.
Two: if this is expected behavior, is there any way to tell it not to do this? :) I have plenty of data space (6 GB), but my log space is restricted (750 MB with limited growth). As usual, tempdb is set up with simple recovery, so running into the log file space limit has never been a problem before... but I've never used global temp tables the way I'm using them now, either.
Thanks!! Joel
When either form of temporary table is created (local or global) the table is physically created and stored in the tempdb database. Any transactional activity on these tables is therefore logged in the tempdb transaction log file.
There is no setting for this per se; however, you could use a physical table, as opposed to a temporary table, to store your data within a user database, thereby using that database's data and transaction log files instead.
If you really want to get stuck in and learn about the tempdb database, take a look at the following resources.
Everything you ever wanted to know about the tempdb database
What is the lifespan of one of these global temp tables? Are they dropped in a reasonable time? "Regular" temp tables get dropped when the user disconnects, if not manually before then, and "global" (##) temp tables get dropped, if memory serves, when the creating session ends. I can see the log growing if the global temp tables last a long time, because the log records governing the temp table activity may still be marked as active and so don't get freed by log backups (full recovery) or checkpoints (simple).
The length of the session will have an impact as noted above.
Also, temporary tables work within transactions, while table variables work outside the context of transactions. Because of this, a temporary table will log entries in the log file relating to the updates made to it.
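If you need to see where the tempdb space is going, a quick check (the second query lists tempdb page allocations per session):

    -- Log usage for every database, including tempdb.
    DBCC SQLPERF(LOGSPACE);

    -- tempdb page allocations per session.
    SELECT session_id,
           user_objects_alloc_page_count,
           internal_objects_alloc_page_count
    FROM sys.dm_db_session_space_usage
    ORDER BY user_objects_alloc_page_count DESC;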
