Unallocate unused space in tempdb sql server - sql-server

Is there any script in SQL Server to find the space used by temporary tables, plus the database name from which each temp table was created in tempdb?
My tempdb has grown to 100 GB, I am not able to recover the space, and I am unsure what is occupying so much of it.
Thanks for any help.

Temporary tables are always created in TempDb. However, the size of TempDb is not necessarily due only to temporary tables. TempDb is used in various ways:
Internal objects (sort and spool, CTEs, index rebuilds, hash joins, etc.)
User objects (temporary tables, table variables)
Version store (AFTER/INSTEAD OF triggers, MARS)
So, since it is used in various SQL operations, its size can grow for other reasons as well.
You can check what is causing TempDb to grow with the query below:
SELECT
SUM (user_object_reserved_page_count)*8 as usr_obj_kb,
SUM (internal_object_reserved_page_count)*8 as internal_obj_kb,
SUM (version_store_reserved_page_count)*8 as version_store_kb,
SUM (unallocated_extent_page_count)*8 as freespace_kb,
SUM (mixed_extent_page_count)*8 as mixedextent_kb
FROM sys.dm_db_file_space_usage
If the above query shows:
A high number of user object pages, there is heavy use of temp tables, cursors, or table variables.
A high number of internal object pages, query plans are making heavy use of tempdb, e.g. for sorting, GROUP BY, or hash operations.
A high number of version store pages, there are long-running transactions or high transaction throughput.
Based on that you can configure the TempDb file size. I've recently written an article about TempDB configuration best practices; you can read it here.

Perhaps you can use the following SQL command on the tempdb files separately:
DBCC SHRINKFILE
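For example, a minimal sketch assuming the default logical data file name tempdev (confirm the actual names in sys.database_files first):
USE tempdb;
SELECT name, size * 8 / 1024 AS size_mb FROM sys.database_files;  -- list logical file names and current sizes
DBCC SHRINKFILE (tempdev, 10240);  -- shrink the data file to roughly 10 GB (target size is in MB)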
Please refer to https://support.microsoft.com/en-us/kb/307487 for more information

Related

temp DB advice - Using Temporary tables

I am working on a report where the result is a combination of multiple #temp tables. The structure is as below:
Stored procedure 1 has a temp table that yields 0.5 million rows.
Stored procedure 2 has a temp table that yields 0.1 million rows.
Finally, I need to combine the result sets of the above two stored procedures, again using a temp table, into one final result set for the report. Now I am worried about performance: if the data increases later, will it affect tempdb? We usually stage the data monthly; in a month the database may contain about 1 million rows. How much is the maximum capacity tempdb accommodates? Will it be affected by the above approach?
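A minimal sketch of that structure (hypothetical procedure and column names; INSERT ... EXEC requires the temp tables to match the procedures' result columns):
CREATE TABLE #result1 (id INT, amount DECIMAL(18, 2));
CREATE TABLE #result2 (id INT, amount DECIMAL(18, 2));
INSERT INTO #result1 EXEC dbo.usp_ReportPart1;   -- ~0.5 million rows
INSERT INTO #result2 EXEC dbo.usp_ReportPart2;   -- ~0.1 million rows
SELECT id, SUM(amount) AS amount
INTO #final
FROM (SELECT id, amount FROM #result1
      UNION ALL
      SELECT id, amount FROM #result2) AS combined
GROUP BY id;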
First, it is not about the number of rows; it is about the size of each row. If you have 7 KB per row, then 0.6 million rows would come to roughly 4 GB. And that is not the end: SQL Server uses TempDb for storing internal objects, version store objects, and user objects, which also include intermediate results. You can expect the size to grow beyond 4 GB in your case.
There are two possible ways to overcome this:
Tune your queries: minimize the use of temp tables, table variables, CTEs, large objects like VARCHAR(MAX), and cursors.
Increase your tempdb file size: calculate the maximum size based on observation (observing an index rebuild is a good baseline).
In real-world scenarios there is almost always room for improvement in the query itself. Check whether you can avoid tempdb by joining tables correctly or by using views.
The size of tempdb is limited only by the size of the disk on which it is stored (or it can be limited in the properties of the database).
As for 1 million rows: nowadays that is not much, even "a little", especially if we are talking about data for a report.
But I would check whether you really need those temp tables. Getting rid of them (if they are unnecessary) can speed up the query and decrease tempdb usage.

DB2 - Reclaiming disk space used by dropped tables

I have an application that logs to a DB2 database. Each log is stored in a daily table, meaning that I have several tables, one per day.
Since the application has been running for quite some time, I dropped some of the older daily tables, but the disk space was not reclaimed.
I understand this is normal in DB2, so I googled and found that the following command can be used to reclaim space:
db2 alter tablespace <table space> reduce max
Since the tablespace that stores the daily log tables is called USERSPACE1, I executed the following command successfully:
db2 alter tablespace userspace1 reduce max
Unfortunately the disk space used by DB2 instance is still the same...
I've read somewhere that the REORG command can be executed, but from what I've seen it is used to reorganize tables. Since I dropped the tables, how can I use REORG?
Is there any other way to do this?
Thanks
Reducing the size of a tablespace is very complex. The extents (sets of contiguous pages; the unit of tablespace allocation) belonging to a given table are not laid out sequentially. When you reorg a table, its rows are reorganized into pages, and the new pages are normally written at the end of the tablespace. Sometimes the high watermark increases and your tablespace gets bigger.
You need to reorg all tables in a tablespace in order to "defrag" them. Then you have to perform a second reorg so that the space freed at the end of the tablespace can actually be reused.
However, many factors affect how tables are organized in a tablespace: new extents are created (new rows, row overflow due to updates), and compression may only take effect after a reorg.
What you can do is assign just one table (or a few) per tablespace; however, you will waste a lot of space (overhead, empty pages, etc.).
The command that you are using is an automatic way to do that, but it does not always work as desired: http://www-01.ibm.com/support/knowledgecenter/SSEPGG_10.5.0/com.ibm.db2.luw.admin.dbobj.doc/doc/c0055392.html
If you want to see the distribution of the tables in your tablespace, you can use db2dart. Then, you can have an idea of which table to reorg (move).
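A rough sketch of that sequence, assuming a hypothetical database mydb, schema myschema, and remaining daily tables:
db2 connect to mydb
db2 reorg table myschema.dailylog_current      -- repeat for every remaining table in the tablespace
db2 "alter tablespace userspace1 reduce max"   -- then try lowering the high watermark again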
Sorry guys,
The command that I mentioned on the original post works after all, but the space was retrieved very slowly.
Thanks for the help

Should I specify a fill factor on my tables?

I am working on a new system with a SQL Server 2005 database which will soon be going into production. A colleague recently mentioned to me that I should always be specifying the fill factor on my tables. Currently I don't specify fill factor on any of my tables.
My application is OLTP with a mix of reads and writes. A couple of my tables are "reference" tables i.e. read-only but most are read-write. The read-only tables are low volume ( < 50000 rows ).
From what I've read in the SQL Server documentation I should be sticking with the default fill-factor unless the table is read only.
Can anyone comment on this, both for read-only and read-write tables?
No, you shouldn't specify a fill factor on your tables. The fill factor is ignored by the engine except for one operation: an index build (which includes the initial build of an index on a populated table and the rebuild of an existing index). So it only makes sense to specify a fill factor in ALTER TABLE ... REBUILD and ALTER INDEX ... REBUILD operations.
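For example, a rebuild that applies a fill factor (hypothetical index and table names):
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH (FILLFACTOR = 90);   -- leave 10% free space in leaf pages during this rebuild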
See also A SQL Server DBA myth a day: (25/30) fill factor.

Oracle 10g temp tables

I'm trying to convert the permanent tables used in a stored procedure to global temporary tables. I've looked at the stats on these permanent tables, and some have tens of millions of rows of data and are on the order of gigabytes in size (up to 10 GB).
So,
CREATE TABLE my_table (
column1 NUMBER,
column2 NUMBER,
etc...
)
TABLESPACE BIGTABLESPACE
NOLOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
should become
CREATE GLOBAL TEMPORARY TABLE my_table (
column1 NUMBER,
column2 NUMBER,
etc..
)
ON COMMIT PRESERVE ROWS;
I'm creating an equivalent global temporary table with rows that should be preserved until the end of the session for each existing permanent table. This global temp table will be used in the procedure instead of the permanent table.
(EXECUTE IMMEDIATE 'TRUNCATE ...' at the start, and INSERT /*+ APPEND */ INTO at some later point)
All of the permanent tables have been created in a big tablespace BIGTABLESPACE
The Oracle docs state that the global temporary table will be created in the user's temp tablespace (I assume this is TEMP). The problem with this is that the TEMP tablespace is small and the extents are not set to grow to the size I need them to grow during the procedure.
The TEMP tablespace was created during the database creation
create database "$oracle_sid"
user sys identified by "$sys_password"
user system identified by "$system_password"
set default bigfile tablespace
controlfile reuse
maxdatafiles 256
maxinstances $maxinstances
maxlogfiles 16
maxlogmembers 3
maxloghistory 1600
noarchivelog
character set WE8MSWIN1252
national character set AL16UTF16
datafile
'$oracle_home/oradata/$oracle_sid/system01.dbf' size 512M
logfile
'$oracle_home/oradata/$oracle_sid/redo01.log' size 1G,
'$oracle_home/oradata/$oracle_sid/redo02.log' size 1G,
'$oracle_home/oradata/$oracle_sid/redo03.log' size 1G
sysaux datafile
'$oracle_home/oradata/$oracle_sid/sysaux01.dbf' size 256M
default temporary tablespace temp tempfile
'$oracle_home/oradata/$oracle_sid/temp01.dbf' size 5G
undo tablespace "UNDOTBS1" datafile
'$oracle_home/oradata/$oracle_sid/undotbs01.dbf' size 5G;
The permanent tables (that I'm planning to replace) were originally created in tablespace BIGTABLESPACE
-- 50G bigfile datafile size
create bigfile tablespace "BIGTABLESPACE"
datafile '$oracle_home/oradata/$oracle_sid/bts01.dbf' size 50G
extent management local
segment space management auto;
The permanent table indexes were originally created in tablespace BIGTABLESPACE
-- 20G bigfile datafile size
create bigfile tablespace "BIGINDXSPACE"
datafile '$oracle_home/oradata/$oracle_sid/btsindx01.dbf' size 20G
extent management local
segment space management auto;
Is replacing these permanent tables with global temporary tables feasible?
The procedure will run into problems extending the TEMP tablespace. Is there a way to create global temporary tables and their indexes in the tablespaces BIGTABLESPACE and BIGINDXSPACE?
If not, how can I make the TEMP tablespace behave like a bigfile tablespace and achieve index/table separation?
Can I create two TEMP bigfile tablespaces and create indexes in one and tables in the other?
I want to use global temporary tables, but the volume of data I am handling in the procedure seems to be above and beyond the intended design of global temporary tables.
Any suggestions?
There is no benefit to separating data and indexes into separate tablespaces other than potentially making DBAs more comfortable that similar objects are grouped together. There is a long-standing myth that separating indexes and data was beneficial for performance reasons-- that is not correct.
Temporary objects should (and must) be stored in a temporary tablespace. You could increase the size of your TEMP tablespace or create a separate temporary tablespace just for the user(s) that will own these objects if you wanted to segregate these large temporary tables into a separate tablespace. You can't (and wouldn't want to) store them in your permanent tablespaces.
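A minimal sketch of that approach, with hypothetical names and sizes (the tempfile path just mirrors the ones above):
-- a dedicated bigfile temporary tablespace, sized for the large intermediate results
CREATE BIGFILE TEMPORARY TABLESPACE temp_big
  TEMPFILE '$oracle_home/oradata/$oracle_sid/temp_big01.dbf' SIZE 30G;
-- point the owning/session user at it
ALTER USER report_user TEMPORARY TABLESPACE temp_big;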
Architecturally, though, I would be very curious about why temporary tables were necessary in your system. If you have sessions that are writing 10's of GB into temporary tables, then presumably reading those 10's of GB out again in order to write the data somewhere else, I would tend to suspect that there were more efficient solutions. It is very rare in Oracle to even need temporary tables-- it is far more common in other databases where readers can block writers to need to copy data out of tables before working on it. Oracle has no such limitations.
I don't think that there's anything in your description that makes GTT's unattractive. You obviously need very large temporary tablespaces but you're not consuming more space overall unless you've been making heavy use of table compression (unavailable in GTT's at least up to 10gR2). Look into the use of tablespace groups: http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/tspaces.htm#ADMIN01103
Using a tablespace group, rather than a single temporary tablespace, can alleviate problems caused where one tablespace is inadequate to hold the results of a sort, particularly on a table that has many partitions. A tablespace group enables parallel execution servers in a single parallel operation to use multiple temporary tablespaces.
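A rough sketch of a tablespace group, with hypothetical names (each member is an ordinary temporary tablespace):
CREATE TEMPORARY TABLESPACE temp2
  TEMPFILE '$oracle_home/oradata/$oracle_sid/temp02.dbf' SIZE 10G
  TABLESPACE GROUP temp_grp;
-- add the existing TEMP tablespace to the same group
ALTER TABLESPACE temp TABLESPACE GROUP temp_grp;
-- the group can then be assigned like a single temporary tablespace
ALTER USER report_user TEMPORARY TABLESPACE temp_grp;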
Also, don't neglect the use of subquery factoring clauses. They can often replace the use of temporary tables. However, they might still require just as much temporary storage space, because a large result set from a SQFC can spill to disk to avoid consuming too much memory, so you may still have to go ahead with the increase in TEMP space. They're very handy for not having to deploy a new database object every time you need a new temporary table.
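A small sketch of a subquery factoring clause standing in for a temp table (hypothetical table and column names):
WITH staged AS (
  SELECT order_id, customer_id, amount
  FROM   orders
  WHERE  order_date >= DATE '2010-01-01'
)
SELECT customer_id, SUM(amount) AS total_amount
FROM   staged
GROUP  BY customer_id;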
I looked at large Global Temporary Tables for a migration exercise. It worked, but for debugging and rejection handling I eventually went with plain tables.
If the GTTs don't work out, consider either Row-Level Security / VPD (or even views).
You can have a column derived from sys_context('USERENV','SESSIONID') and use that to ensure that the user can only see their own data.
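A sketch of that session-filtering idea, with hypothetical table, view, and column names:
-- staging table tags each row with the writer's session id
CREATE TABLE report_stage (
  session_id VARCHAR2(30) DEFAULT SYS_CONTEXT('USERENV', 'SESSIONID'),
  order_id   NUMBER,
  amount     NUMBER
);
-- view exposes only the current session's rows
CREATE OR REPLACE VIEW report_stage_v AS
SELECT order_id, amount
FROM   report_stage
WHERE  session_id = SYS_CONTEXT('USERENV', 'SESSIONID');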
Still the thought of multiple sessions dealing with multi-gigabyte datasets concurrently is a bit scary.
PS: I believe that GTTs used through a procedure use the temp tablespace of the session user rather than the temp tablespace of the procedure owner. If you can run the sessions as separate Oracle users, then you have a chance of spreading your file I/O over different tablespaces.

SQL Server 2000 temp table vs table variable

What would be more efficient for storing some temp data (50k rows in one table and 50k in another) to perform some calculation? I'll be doing this process once, nightly.
How do you check the efficiency when comparing something like this?
The results will vary depending on where it is easier to store the data: on disk (#temp table) or in memory (table variable).
A few excerpts from the references below
A temporary table is created and populated on disk, in the system database tempdb.
A table variable is created in memory, and so performs slightly better than #temp tables (also because there is even less locking and logging in a table variable). A table variable might still perform I/O to tempdb (which is where the performance issues of #temp tables make themselves apparent), though the documentation is not very explicit about this.
Table variables result in fewer recompilations of a stored procedure as compared to temporary tables.
[Y]ou can create indexes on the temporary table to increase query performance.
Regarding your specific case with 50k rows:
As your data size gets larger, and/or the repeated use of the temporary data increases, you will find that the use of #temp tables makes more sense
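A minimal sketch of the two options (hypothetical names):
-- temporary table: created in tempdb, supports additional indexes
CREATE TABLE #calc_stage (id INT PRIMARY KEY, amount DECIMAL(18, 2));
CREATE INDEX IX_calc_stage_amount ON #calc_stage (amount);
-- table variable: fewer recompilations, but indexes only via PRIMARY KEY/UNIQUE constraints
DECLARE @calc_stage TABLE (id INT PRIMARY KEY, amount DECIMAL(18, 2));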
References:
Should I use a #temp table or a #table variable?
MSKB 305977 - SQL Server 2000 - Table Variables
There can be a big performance difference between using table variables and temporary tables. In most cases, temporary tables are faster than table variables. I took the following tip from the private SQL Server MVP newsgroup and received permission from Microsoft to share it with you. One MVP noticed that although queries using table variables didn't generate parallel query plans on a large SMP box, similar queries using temporary tables (local or global) and running under the same circumstances did generate parallel plans.
More from SQL Mag (subscription required unfortunately, I'll try and find more resources momentarily)
EDIT: Here is some more in-depth information from CodeProject.
