What is the purpose of tempdb in SQL Server? - sql-server

I need some clarification about tempdb in SQL Server, specifically on the following things:
What is its purpose?
Can we create our own tempdb, and how do we point our own database at it?

FROM MSDN
The tempdb system database is a global resource that is available to all users connected to the instance of SQL Server and is used to hold the following:
Temporary user objects that are explicitly created, such as: global or local temporary tables, temporary stored procedures, table variables, or cursors.
Internal objects that are created by the SQL Server Database Engine, for example, work tables to store intermediate results for spools or sorting.
Row versions that are generated by data modification transactions in a database that uses read-committed using row versioning isolation or snapshot isolation transactions.
Row versions that are generated by data modification transactions for features, such as: online index operations, Multiple Active Result Sets (MARS), and AFTER triggers.
Operations within tempdb are minimally logged.
This enables transactions to be rolled back. tempdb is re-created every time SQL Server is started so that the system always starts with a clean copy of the database.
Temporary tables and stored procedures are dropped automatically on disconnect, and no connections are active when the system is shut down. Therefore, there is never anything in tempdb to be saved from one session of SQL Server to another. Backup and restore operations are not allowed on tempdb.

Tempdb is a system database, and we can't create system databases. Tempdb is a global resource for all databases, which means temp tables, table variables, and the version store for user databases all use tempdb. This is a pretty basic explanation of tempdb. Refer to the link below for how it is used for other purposes, such as Database Mail.
https://msdn.microsoft.com/en-us/library/ms190768.aspx

1: It is what it says: temporary storage. For example, when you ask for DISTINCT results, SQL Server must remember which rows it has already sent you. The same goes for a temporary table.
2: Makes no sense. Tempdb is not per-database but per-server: there is ONE tempdb regardless of how many databases you have. You can change where it lives and how it is laid out (number of files, size), but it is never tied to one database (except, obviously, if you only have one database on the instance). Having your own tempdb is NOT how SQL Server works. And while we are at it, there is no need to ever back up tempdb: when SQL Server starts, tempdb is reinitialized as empty.
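As a rough sketch of that last point (the D:\TempDB\ path, size, and logical name are placeholders, not recommendations), you can inspect where the tempdb files currently live and move or resize them; a new location only takes effect after the service restarts, because tempdb is re-created on startup:

-- Inspect the current tempdb files.
SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');

-- Relocate the primary data file (takes effect on the next service restart).
-- 'tempdev' is the default logical name; D:\TempDB\ is a placeholder path.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'D:\TempDB\tempdb.mdf');

-- Resize it (only one file property can be changed per MODIFY FILE).
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1024MB);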
And, by the way, this would be obvious if you bothered to read the documentation of every major technology you work with at least once. You should consider adopting that habit, because it is the only way to know what you are doing.

Related

Physical tables in TempDB getting deleted automatically

In our solution we create some physical tables in tempdb for an activity. But recently we have been facing an issue where these physical tables get deleted automatically. We would like to know the possible reasons/scenarios behind this issue.
edit:
Yes, I get that creating physical tables in tempdb is not advisable, but here I am only looking for possible reasons why they are getting deleted.
Wow - that is a really interesting thing to do. I am curious why you implemented it like that.
I take it that originally this strategy worked for you but now it doesn't? SQL Server will grow tempdb to an optimal size and then delete data from it, but it will not shrink it, so tempdb may be mostly empty at any given point in time.
Maybe your tempdb is now running at capacity and something has to give. Possibly some change in the load (the type of queries being run, etc.) means that your tables are being wiped. Try giving tempdb more space, or adding a second tempdb data file on another disk (you cannot create a second tempdb); see the sketch below.
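A minimal sketch of that last suggestion, assuming a second disk mounted at E:\ (the path, logical file name, and sizes are placeholders):

-- Spread tempdb over an additional data file on another disk
-- instead of trying to create a "second tempdb" (which is not possible).
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'E:\TempDB\tempdb2.ndf', SIZE = 2048MB, FILEGROWTH = 256MB);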
From the docs:
tempdb is re-created every time SQL Server is started so that the system always starts with a clean copy of the database. Temporary tables and stored procedures are dropped automatically on disconnect, and no connections are active when the system is shut down. Therefore, there is never anything in tempdb to be saved from one session of SQL Server to another. Backup and restore operations are not allowed on tempdb.
This means that not only physical tables but also other objects like triggers, permissions, views, etc. will also be gone after a service restart. This is why you shouldn't use tempdb for user objects.
You can create a schema in your own database and keep a SQL Agent job that drops all of its tables every once in a while, so you can mimic a "temporary" physical table space as a workaround.
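A minimal sketch of that idea (the schema and procedure names are placeholders); a SQL Agent job would simply call the procedure on a schedule:

-- A "scratch" schema in your own database for short-lived physical tables.
CREATE SCHEMA scratch;
GO
-- Cleanup procedure: drops every table currently in the scratch schema.
CREATE PROCEDURE dbo.usp_CleanScratchSchema
AS
BEGIN
    DECLARE @sql nvarchar(max) = N'';
    SELECT @sql += N'DROP TABLE scratch.' + QUOTENAME(t.name) + N';'
    FROM sys.tables AS t
    WHERE SCHEMA_NAME(t.schema_id) = N'scratch';
    EXEC sys.sp_executesql @sql;
END;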
There are two types of temporary tables in MS SQL - local and global.
The deletion policy is the following:
local temporary tables (prefixed with #): these tables are deleted after the user disconnects from the instance of SQL Server
global temporary tables (prefixed with ##): these are deleted when all users referencing the table disconnect from the instance of SQL Server
The tempDB database tables are cleared out on startup as well.
There are other kinds of objects stored in tempdb as well. One of them is table variables (prefixed with @), and another is persisted "physical" tables created in tempdb without any prefix.
Persisted tables in tempdb are deleted only when the SQL Server service is restarted (or when they are dropped explicitly).
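A minimal sketch of the four kinds of objects discussed above (the table and column names are placeholders):

CREATE TABLE #LocalTemp (id int);                 -- local temp table: dropped when the creating session disconnects
CREATE TABLE ##GlobalTemp (id int);               -- global temp table: dropped when the last referencing session disconnects
DECLARE @TableVar TABLE (id int);                 -- table variable: scoped to the current batch/procedure
CREATE TABLE tempdb.dbo.PersistedTable (id int);  -- "physical" table in tempdb: gone after a service restart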

db replication vs mirroring

Can anyone explain the differences from a replication db vs a mirroring db server?
I have huge reports to run. I want to use a secondary database server to run my reports so I can offload work from the primary server.
Should I setup a replication server or a mirrored server and why?
For your requirements, replication is the way to go (assuming you're talking about transactional replication). As stated before, mirroring will "mirror" the whole database, but you won't be able to query it unless you create snapshots from it.
The good point of replication is that you can select which objects to publish and you can also filter them, and since the database will be open you can delete info that's not required (just be careful, as this can lead to problems maintaining the replication itself), or create specific indexes for the reports that are not needed in "production". I maintained this kind of solution for a long time with no issues.
(Assuming you are referring to Transactional Replication)
The biggest differences are: 1) Replication operates on an object-by-object basis whereas mirroring operates on an entire database. 2) You can't query a mirrored database directly - you have to create snapshots based on the mirrored copy.
In my opinion, mirroring is easier to maintain, but the constant creation of snapshots may prove to be a hassle.
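As a rough illustration of the snapshot point (the database name, logical file name, and path are placeholders): reading from a mirror is done through a database snapshot created on the mirror server, along these lines:

-- Run on the mirror server: a snapshot is the only way to query a mirrored database.
-- NAME must be the logical data file name of the source database.
CREATE DATABASE SalesDB_Report_Snapshot
ON (NAME = SalesDB_Data, FILENAME = 'D:\Snapshots\SalesDB_Report.ss')
AS SNAPSHOT OF SalesDB;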
As mentioned here
Database mirroring and database replication are two high data availability techniques for database servers. In replication, data and database objects are copied and distributed from one database to another. It reduces the load from the original database server, and all the servers on which the database was copied are as active as the master server. On the other hand, database mirroring creates copies of a database in two different server instances (principal and mirror). These mirror copies work as standby copies and are not always active like in the case of data replication.
This question can also be helpful, or have a look at the MS documentation.

Can you insert into a replicated SQL Server DB?

I need to store some data in a SQL DB for DataWarehousing purposes.
We will be using a replicated SQL Server Database.
Is it possible to insert into only the replicated DB (and not the main DB) so that we do not affect the main DB and still allow reporting and extraction of data out of the replicated DB?
Yes, but I would advise against it. Specifically, I tend to treat replication subscribers as expendable, which is to say that I choose not to back them up. What you're suggesting means that there is data in the system that exists only at the subscriber, which implies that the subscriber should be backed up. You're now re-backing up data that has already been backed up at the publisher.
Also, I'd completely advise against putting that data in the same table that is being subscribed to. On an article re-initialization, there's too much risk of it being deleted.

Copying Large Amounts of Data to Replicated Database

I have a local SQL Server database that I copy large amounts of data from and into a remote SQL Server database. Local version is 2008 and remote version is 2012.
The remote DB has transactional replication set-up to one local DB and another remote DB. This all works perfectly.
I have created an SSIS package that empties the destination tables (the remote DB) and then uses a Data Flow object to add the data from the source. For flexibility, I have each table in its own Sequence Container (this allows me to run one or many tables at a time). The data flow settings are set to Keep Identity.
Currently, prior to running the SSIS package, I drop the replication settings and then run the package. Once the package completes, I then re-create the replication settings and reinitialise the subscribers.
I do it this way (deleting the replication and then re-creating) for fear of overloading the server with replication commands. Although most tables are between 10s and 1000s of rows, a couple of them are in excess of 35 million.
Is there a recommended way of emptying and re-loading the data of a large replicated database?
I don't want to replicate my local DB to the remote DB as that would not always be appropriate and doing a back and restore of the local DB would also not work due to the nature of the more complex permissions, etc. on the remote DB.
It's not the end of the world to drop and re-create the replication settings each time as I have it all scripted. I'm just sure that there must be a recommended way of managing this...
Don't do it. Empty/reload is bad: every row you drop and re-insert turns into two replicated operations. Try to update the tables via MERGE instead, so you can avoid the delete-and-reload. Load the new data into staging tables on the other server (not replicated), then MERGE them into the replicated tables. If a lot of data is unchanged, this will seriously reduce the replication load; see the sketch below.
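A minimal sketch of that approach (the schema, table, and column names are placeholders): bulk-load into a non-replicated staging table, then MERGE into the replicated table so only actual changes generate replication commands.

MERGE dbo.TargetTable AS t                      -- the replicated table
USING staging.TargetTable AS s                  -- the freshly loaded, non-replicated staging copy
    ON t.Id = s.Id
WHEN MATCHED AND t.SomeColumn <> s.SomeColumn   -- update only rows that actually changed
    THEN UPDATE SET t.SomeColumn = s.SomeColumn
WHEN NOT MATCHED BY TARGET
    THEN INSERT (Id, SomeColumn) VALUES (s.Id, s.SomeColumn)
WHEN NOT MATCHED BY SOURCE
    THEN DELETE;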

SQL Azure Backup: What does transactionally consistent mean?

I'm using redgate's sql azure backup tool: http://www.red-gate.com/products/dba/sql-azure-backup/
It looks like if you check "Make Backup Transactionally Consistent" you get charged a full day's use for the copied database. I'm wondering if I need to check this.
I do daily backups to blob storage and I backup the database to my local machine to work with every 3 days or so.
If I don't check the Transactionally Consistent box, am I going to run into any problems?
Well as the person who wrote SQL Azure Backup at Red Gate I can say that the only way to create a guaranteed transactionally consistent backup in Azure currently is indeed to use CREATE DATABASE ... AS COPY OF. This copy only exists for the duration of us taking the backup and is then dropped immediately afterwards.
If you don't check the box, you'll only hit problems if there is a risk of transactions leaving the data in an inconsistent state while it is read from each table in turn. CREATE DATABASE ... AS COPY OF can take a very long time and may also cost money for the copy.
If you're backing up to a BLOB, you're using the Microsoft Import/Export service rather than SQL Compare and SQL Data Compare technology, but that also reads data from the tables in turn, so it could be inconsistent too.
Hope this helps
Richard
AFAIK, transactionally consistent means that you get a snapshot of the database at a single point in time (which presumably means SQL Azure locks the db while it, quickly we hope, makes a copy of the entire database; hence your one-day charge for a db that exists for only a few minutes).
This is better illustrated by a non-transactionally consistent backup, where you begin by copying table X. While you are doing that, someone amends table Y (it's a live database), and Y later gets copied to the backup. The foreign keys between X and Y might now not match, because X is from an earlier point in time than Y.
I have used Sql Azure Backup and I did go for transactional consistency because the backups are for an emergency and the last thing I want in that scenario is inconsistencies in the data.
edit: now that I think about it, Redgate should really state that if you back up every day you are effectively paying twice the rate for your database. I've been waiting for the sync framework, which I think is there now...
To answer the question in the title: a SQL Azure database copy (the 'backup') is a SQL Azure database that is copied (fully online) from the source database and contains no uncommitted transactions (ie. is transactionally consistent). This is achieved the same way database snapshots or backup restores achieve consistency on the standalone SQL Server product: all pending transactions at the moment of 'separation' are rolled back.
As to why or how RedGate's product utilizes this, I don't know. I would venture a guess that in order to achieve a transactionally consistent backup they do a CREATE DATABASE ... AS COPY OF ... (which creates the desired transactional consistency) and then use the technology from SQL Compare and Data Compare to copy out the schema and data.
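A minimal sketch of that technique (the database names are placeholders), which is also what you would run to take a consistent copy yourself:

-- Run against the master database of the Azure SQL server.
CREATE DATABASE MyDb_Backup AS COPY OF MyDb;

-- The copy is asynchronous; monitor its progress from master.
SELECT * FROM sys.dm_database_copies;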
