Why is a global temporary table deleted on my SQL Server? - sql-server

We use global temporary tables in our program, which is written in C++ using ATL as the DB interface. Sessions are established with the SQL OLE DB provider.
These global temporary tables are held for a long time, possibly for the entire duration of a session. We explicitly drop each temporary table when the specific action/activity ends, so we always clean up the tables.
Now, for users on a slow or unstable VPN connection, we see that the global temporary table is deleted. A query that should read some content returns an error:
##tblTemp... is not a valid object name
To me this is an indicator that SQL Server terminated the session.
But how can that be? Our program has internal functions that access the server at least every 5 minutes (even if the user is inactive). Usually the SQL Server is accessed much more frequently, though the program may be minimized in the background.
What timeout causes SQL Server to terminate a session and delete its temporary tables?
I see the Remote Query Timeout in the server settings, but that seems wrong to me because we have no open query here; the queries against the table are also very simple: insert a record, delete a record.
Questions:
Where do I find the setting for this session timeout?
Is there a way for the client to find out that the session was terminated? What is strange to me is that the SQL query itself was transferred to SQL Server and failed only because the temporary table no longer existed; we got a different error on the client.
Is there a way to log this on the server?
EDIT:
Here are more details on how we work with these tables.
The tables are created in my main thread. This thread has a SQL session that is created at program start and ends when the program ends.
Other threads use the temporary tables; we pass the table names to them.
So, given that the creating SQL session is still alive and shows no error when executing a statement that uses the temporary table, the session appears to be alive. My problem is that the object nevertheless seems to be deleted.
Again: we only see this problem on machines with a slow/bad VPN connection to the server!

To quote the manual:
Global temporary tables are automatically dropped when the session that created the table ends and all other tasks have stopped referencing them. The association between a task and a table is maintained only for the life of a single Transact-SQL statement. This means that a global temporary table is dropped at the completion of the last Transact-SQL statement that was actively referencing the table when the creating session ended.
source
So it's not about the server being accessed every few minutes, but about the specific object being referenced.
If the session that created the global temporary table ends (e.g. due to a timeout) and nothing else is actively referencing the table, it is dropped!
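If you need the client to detect this before a query fails, a minimal sketch (using the table name from the question; any session can run this check):

    -- OBJECT_ID returns NULL once the global temp table has been dropped,
    -- e.g. because the creating session timed out.
    IF OBJECT_ID(N'tempdb..##tblTemp') IS NULL
        RAISERROR (N'##tblTemp is gone; the creating session probably ended.', 16, 1);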

Related

Physical tables in TempDB getting deleted automatically

In our solution we are creating some physical tables in "tempDB" for an activity. But recently we are facing an issue where these physical tables are getting deleted automatically. We would like to know the possible reasons/scenario behind this issue.
EDIT:
Yes, I get that creating physical tables in tempdb is not advisable, but here I am only looking for possible reasons why they are getting deleted.
Wow - that is a really interesting thing to do. I am curious why you implemented it like that.
I take it that this strategy originally worked for you but now doesn't? SQL Server will grow tempdb to an optimal size and then delete data from it but not shrink it, so tempdb may be mostly empty at any given point in time.
Maybe your tempdb is now running at capacity and something has to give. Possibly some change in the load - the type of queries being run, etc. - means that your tables are being wiped. Try giving tempdb more space, or adding another tempdb data file on a different disk (you cannot create a second tempdb).
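To test the capacity theory, a quick look at how much of tempdb is actually free right now (a standard DMV query; pages are 8 KB):

    SELECT SUM(unallocated_extent_page_count) * 8 / 1024 AS free_space_mb
    FROM tempdb.sys.dm_db_file_space_usage;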
From the docs:
tempdb is re-created every time SQL Server is started so that the system always starts with a clean copy of the database. Temporary tables and stored procedures are dropped automatically on disconnect, and no connections are active when the system is shut down. Therefore, there is never anything in tempdb to be saved from one session of SQL Server to another. Backup and restore operations are not allowed on tempdb.
This means that not only physical tables but also other objects such as triggers, permissions, views, etc. will be gone after a service restart. This is why you shouldn't use tempdb for user objects.
You can create a schema in your own database and keep a SQL Agent job that deletes all of its tables every once in a while, so you can mimic a "temporary" physical table space as a workaround.
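A minimal sketch of that workaround; the schema name scratch and the procedure name are made up for illustration:

    CREATE SCHEMA scratch;
    GO
    -- Drops every table in the scratch schema; call this from a scheduled SQL Agent job.
    CREATE PROCEDURE dbo.usp_CleanScratchSchema
    AS
    BEGIN
        DECLARE @sql nvarchar(max) = N'';
        SELECT @sql += N'DROP TABLE scratch.' + QUOTENAME(t.name) + N';'
        FROM sys.tables AS t
        WHERE SCHEMA_NAME(t.schema_id) = N'scratch';
        EXEC sys.sp_executesql @sql;
    END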
There are two types of temporary tables in MS SQL - local and global.
The deletion policy is the following:
local temporary tables (prefixed with #): these are deleted when the connection that created them closes
global temporary tables (prefixed with ##): these are deleted when the creating session ends and all other sessions referencing the table have stopped doing so
The tempDB database tables are cleared out on startup as well.
There are other kinds of objects stored in tempdb as well. One is table variables (prefixed with @, not #) and another is persisted tables (created in tempdb without any prefix).
Persisted tables in tempdb are deleted only when the SQL Server service is restarted.
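For illustration, the naming conventions side by side (table and variable names are arbitrary):

    CREATE TABLE #local (id int);    -- local temp table: dropped when the creating connection closes
    CREATE TABLE ##global (id int);  -- global temp table: dropped when the creating session ends
                                     -- and no other task still references it
    DECLARE @tv TABLE (id int);      -- table variable: scoped to the batch or procedure
    -- A plain CREATE TABLE run while connected to tempdb (no prefix) persists until the service restarts.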

Faster SQL temp table and table variable by using memory optimization

Scenario C in This Microsoft Doc describes how temp tables scoped to a connection can be replaced with Memory-Optimized Tables. The scheme uses a Filter Security Policy which calls a function to determine whether @@SPID matches the SpidFilter column in the Memory-Optimized table.
Will this work with .NET connection pooling? I would expect @@SPID to return the same number as a connection is re-used over and over again. .NET clears the session-scoped temp tables by calling sp_reset_connection, but that will not clear Memory-Optimized tables or change @@SPID. Maybe sys.dm_exec_sessions' session_id could be added to make it work in a connection-pooling environment?
With the help of Microsoft Support, I was able to get the necessary details about ASP.NET Connection Pooling to answer this concern. It is true that ASP.NET threads will share the same SPID, but never at the same time. A thread gets assigned a connection only after the connection is no longer being used by the previous thread. Connection Pooling does not reduce the number of connections needed, it only reduces the number of times connections need to be opened and closed.
This is good documentation on Connection Pooling, although it does not make that distinction. https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql-server-connection-pooling
Notice that Scenario C has special note: "Replace the CREATE TABLE #tempSessionC statements in your code with DELETE FROM dbo.soSessionC, to ensure a session is not exposed to table contents inserted by a previous session with the same session_id." – i-one
Since only one thread will be using the connection at a time, this is sufficient. If the table is not also deleted after being used, it will continue consuming memory (especially precious in Azure) until another thread happens to use a connection with the same SPID.
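For reference, the filter pattern from Scenario C looks roughly like this (reconstructed from the linked doc, so treat the details as approximate):

    -- Natively compiled predicate: a row is visible only to the session whose
    -- @@SPID was stamped into its SpidFilter column.
    CREATE FUNCTION dbo.fn_SpidFilter (@SpidFilter smallint)
    RETURNS TABLE
    WITH SCHEMABINDING, NATIVE_COMPILATION
    AS
    RETURN SELECT 1 AS fn_SpidFilter_Result
           WHERE @SpidFilter = @@SPID;
    GO
    CREATE SECURITY POLICY dbo.soSessionC_SpidFilter_Policy
    ADD FILTER PREDICATE dbo.fn_SpidFilter(SpidFilter)
    ON dbo.soSessionC
    WITH (STATE = ON);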

Connection pool with single connections

I would like to get opinions on a situation I'm facing regarding connection pools.
I'm a SW developer working on a multitenant application. We have one DB and each tenant (client) has its own schema. For each connected tenant a dedicated process is started, and it gets its own DB connection. In the near future I will need to run this in a 300+ simultaneous-tenant environment.
From what I read, using a lot of connections (100+) to Postgres is not advised. One solution is to use a connection pool; another is to use more DB servers.
I was thinking about a connection pooler (pgBouncer, pgPool), but in the current state of our application that is a bit problematic. Here is the list of "problematic" items and proposed solutions:
A single connection to the server for the whole lifetime of the process - because we use temp tables and prepared statements heavily. The temp tables' lifetimes vary, but in most cases they span multiple transactions.
Because a connection pooler returns a "free" connection, I cannot be sure that a given temp table was created on the returned connection. I think I could work around this by creating the "temp tables" in a predefined schema (but then I would need a background task to clean up orphaned tables from processes that weren't cleanly closed or crashed - Postgres temp tables are dropped automatically on connection close). For prepared statements I haven't found a workaround.
Use of "set search_path="custom_schema",public;" - this is done at application start for each tenant, so the correct tables are used.
This could be fixed by issuing the set search_path=... command in each transaction; it should be cheap/fast (see the sketch after this list).
Use of triggers that depend on temp tables - this is used for automatic logging of some things. It uses temp tables that are created automatically at application start.
I don't have a solution for this yet; I cannot use the "custom table approach" mentioned above because the table name would have to be constant (multiple tenants would then try to create the same table more than once, which is bad).
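A minimal sketch of the per-transaction search_path fix mentioned above (the schema name is a placeholder); SET LOCAL reverts automatically at commit or rollback, which is exactly what you want behind a pooler:

    BEGIN;
    SET LOCAL search_path = custom_schema, public;  -- applies only to this transaction
    -- ... tenant queries here ...
    COMMIT;  -- search_path reverts to the connection default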
So, I don't know whether I should start thinking about a redesign or not. If I could just add more DB servers and everything would run, then there is no need to redesign.
What do you think?
Thank You
These things are not a problem in pgbouncer with pool_mode = session.

What is the purpose of tempdb in SQL Server?

I need some clarification about tempdb in SQL Server, specifically on the following:
What is its purpose?
Can we create our own tempdb, and how do we associate such a tempdb with our own database?
From MSDN:
The tempdb system database is a global resource that is available to all users connected to the instance of SQL Server and is used to hold the following:
Temporary user objects that are explicitly created, such as: global or local temporary tables, temporary stored procedures, table variables, or cursors.
Internal objects that are created by the SQL Server Database Engine, for example, work tables to store intermediate results for spools or sorting.
Row versions that are generated by data modification transactions in a database that uses read-committed using row versioning isolation or snapshot isolation transactions.
Row versions that are generated by data modification transactions for features, such as: online index operations, Multiple Active Result Sets (MARS), and AFTER triggers.
Operations within tempdb are minimally logged. This enables transactions to be rolled back. tempdb is re-created every time SQL Server is started so that the system always starts with a clean copy of the database. Temporary tables and stored procedures are dropped automatically on disconnect, and no connections are active when the system is shut down. Therefore, there is never anything in tempdb to be saved from one session of SQL Server to another. Backup and restore operations are not allowed on tempdb.
Tempdb is a system database, and we can't create system databases. Tempdb is a global resource for all databases, which means temp tables, table variables, the version store for user databases... all of it uses tempdb. This is a pretty basic explanation of tempdb. Refer to the link below for how it is used for other purposes, such as database mail:
https://msdn.microsoft.com/en-us/library/ms190768.aspx
1: It is what it says: temporary storage. For example, when you ask for DISTINCT results, SQL Server must remember which rows it has already sent you. The same goes for a temporary table.
2: Makes no sense. Tempdb is not per-database but per-server - ONE tempdb regardless of how many databases there are. You can change where it is and how it is laid out (file count, size), but it is never tied to one database (except, obviously, if you only have one database on the SQL Server instance). Having your own tempdb is NOT how SQL Server works. And while we are at it: there is no need to ever back up tempdb. When SQL Server starts, tempdb is reinitialized as empty.
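For example, resizing or relocating a tempdb file; tempdev is the default logical name of the primary data file, the path here is a placeholder, and a move takes effect only after the next service restart:

    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf', SIZE = 1024MB);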
And, btw., this would be obvious if you bothered with such things as being borderline competent, which for me includes reading the documentation of every major technology I work with at least once. You should consider adopting that habit, because it is the only way to know what you are doing.

SQL Server Subscription Initialization Restarts Endlessly, Never Finishes

I'm trying to set up transactional pull replication between two SQL Server 2005 instances, with a third acting as distributor. When the subscription is initialized, it bulk inserts properly, giving the message that the snapshot was successfully loaded. Then it creates the primary key indexes as usual.
At this point the job starts over, dropping all the tables and bulk inserting again. It loops endlessly and never finishes, until the snapshot expires and a new one has to be made. I need help diagnosing this problem; I have checked all the error logs I know of and didn't see anything of relevance.
Check to see if there are any tables with corrupted primary keys in the publication. I have seen instances where that causes SQL Server transactional replication to behave in bizarre ways.
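A starting point for that check (the database name is a placeholder):

    -- Verify the integrity of the published database, including its primary key indexes:
    DBCC CHECKDB ('PublishedDb') WITH NO_INFOMSGS;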
