I want to completely remove any data from the file system after a DROP TABLE.
Is there any way to do that?
If you are referring to FILESTREAM data, you can execute
EXEC sp_filestream_force_garbage_collection
Note that you need to execute it twice before the files are actually removed from disk.
More info here:
https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/filestream-and-filetable-sp-filestream-force-garbage-collection?view=sql-server-ver16
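A minimal sketch, assuming a database named YourFilestreamDb (the procedure also accepts an optional @filename argument to target a single FILESTREAM container):

EXEC sp_filestream_force_garbage_collection @dbname = N'YourFilestreamDb';
GO
-- a second run is typically needed before the files disappear from disk
EXEC sp_filestream_force_garbage_collection @dbname = N'YourFilestreamDb';
GO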
I know a lot has already been written about what GO does and when to use it, but I haven't seen anything that explains the following scenario.
We have been supplied with some stored procedures alongside an application to populate a set of static warehouse tables, but they were producing error messages. Each procedure has the following structure:
exec usp_CreateDWTable
insert into dbo.DWTable
Select [somefields]
from [SomeLiveTables]
The usp_CreateDWTable stored procedure consists of:
if exists (select * from sysobjects where id = object_id('dbo.DWTable') and sysstat & 0xf = 3)
drop table dbo.DWTable
CREATE TABLE dbo.DWTable(CONSTRAINT [field1] PRIMARY KEY (Alert_ID),[field2],,,)
Running as above returns the error message:
Msg 213, Level 16, State 1, Line 15
Column name or number of supplied values does not match table definition.
I have eventually worked out that adding GO between the EXEC and the INSERT fixes the issue, but if SQL Server processes commands sequentially anyway, what difference does this make? Now I know a CREATE TABLE batch has to be sent to the server before you can send an INSERT, but I have read that SQL Server can handle this scenario implicitly; is the fact that the CREATE TABLE is in a stored procedure interfering with this?
Even if my assumption above is correct, an additional source of confusion is that we have two instances of this database on the server and this error does not happen on the other database. The structures of both databases are virtually identical, although the data is different. Is there a setting within the database that is causing them to behave differently?
Thanks
When the table already exists, the entire batch is compiled using the existing table definition. So when one batch includes the drop, create, and insert, the INSERT is validated against the existing schema before the new table is created, and compilation fails when the existing and new tables have a different number of columns.
When you execute the script as separate batches (split by GO), the drop/create statements are executed first. The INSERT batch is then compiled and run using the new table definition.
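A minimal sketch of the fix, using the names from the question; the GO ends the batch, so the INSERT is compiled only after usp_CreateDWTable has rebuilt the table:

exec usp_CreateDWTable
GO
insert into dbo.DWTable
select [somefields]
from [SomeLiveTables]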
Hi everyone, I have a small doubt about an SSIS package:
I am using a stored procedure that returns a set of records; these records are ultimately saved in a temp table.
The thing is, now I want these records to be exported to Excel, so I planned to use an SSIS package to do that. The problem is how to define the OLE DB source in SSIS: because I am using a #temptable created at runtime in the stored procedure, it will not be displayed in the SSIS source.
Kindly suggest how to export the temp table to Excel files.
The fact that you're using a temp table is not likely to matter too much. Is your temp table used in the stored procedure logic, which then outputs a SELECT statement? If yes, then in your OLE DB source set Data Access Mode to SQL Command and call the stored procedure (EXEC myStoredProcedure).
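If the source still cannot derive the column metadata because of the temp table, one common workaround on SQL Server 2012 and later is to declare the result shape explicitly in the SQL command; the procedure and column names here are placeholders:

EXEC dbo.myStoredProcedure
WITH RESULT SETS ((FieldA INT, FieldB NVARCHAR(50)));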
Using a #temptable can be problematic in SSIS. If you need to access a temp table across different data flows or transformations, then you will need to use a global ##temptable.
Review this question; the answer goes into great detail on using temp tables.
Another Reference: SSIS: Using temporary tables
You can use an Execute SQL Task to execute the stored procedure and load the data into the temp table.
You can then access the temp table in the OLE DB source with a SQL command instead of selecting a table from the drop-down list.
Remember to set the connection manager's RetainSameConnection property to TRUE, so both tasks run on the same session.
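A minimal sketch of the two pieces, with a hypothetical procedure name and a global temp table:

-- Execute SQL Task (connection manager has RetainSameConnection = TRUE):
EXEC dbo.usp_LoadRecords;  -- hypothetical proc that fills ##ExportData

-- OLE DB Source, Data Access Mode = SQL Command:
SELECT FieldA, FieldB
FROM ##ExportData;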
I sometimes have a problem when running a script. The problem occurs when using an application (that I didn't write and therefore cannot debug) that launches the scripts. This app isn't returning the full error from SQL Server, just the error description, so I don't know exactly where the error comes from.
I get the error only when using this tool (it is a tool that sends the queries directly to SQL Server, using a DAC component); if I run the query manually in Management Studio I don't get the error. (Moreover, this error occurs only on one particular database.)
My query is something like:
SELECT * INTO #TEMP_TABLE
FROM ANOTHER_TABLE
GO
--some other commands here
GO
INSERT INTO SOME_OTHER_TABLE(FIELD1,FIELD2)
SELECT FIELDA, FIELDB
FROM #TEMP_TABLE
GO
DROP TABLE #TEMP_TABLE
GO
The error I get is that #TEMP_TABLE is not a valid object.
So somehow I suspect that the DROP statement is executed before the INSERT statement.
But AFAIK, when a GO is there, the next statement is not executed until the previous one has completed.
Now I suspect that this is not true with temp tables... Or do you have other ideas?
Your problem is most likely caused by one of two things: either the session ends before the DROP TABLE, causing SQL Server to drop the table automatically, or the DROP TABLE is executed in a different session than the code that created and used the temporary table, so the table is not visible to it.
I am assuming that stored procedures are not involved here, because it looks like you are just executing batches; local temporary tables are also dropped when a stored procedure exits.
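To see the session scoping in action, here is a minimal repro, assuming the tool spreads the batches across two connections (for example via a connection pool):

-- Connection 1 (creates the temp table; visible only to this session):
SELECT * INTO #TEMP_TABLE FROM ANOTHER_TABLE;

-- Connection 2 (a different session):
INSERT INTO SOME_OTHER_TABLE (FIELD1, FIELD2)
SELECT FIELDA, FIELDB
FROM #TEMP_TABLE;  -- fails: the table only exists in connection 1's session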
There is a good description of local temporary table behavior in this article on Temporary Tables in SQL Server:
You get housekeeping with Local Temporary tables; they are
automatically dropped when they go out of scope, unless explicitly
dropped by using DROP TABLE. Their scope is more generous than a table
Variable so you don't have problems referencing them within batches or
in dynamic SQL. Local temporary tables are dropped automatically at
the end of the current session or procedure. Dropping it at the end of
the procedure that created it can cause head-scratching: a local
temporary table that is created within a stored procedure or session
is dropped when it is finished so it cannot be referenced by the
process that called the stored procedure that created the table. It
can, however, be referenced by any nested stored procedures executed
by the stored procedure that created the table. If the nested
procedure references a temporary table and two temporary tables with
the same name exist at that time, which table is the query resolved
against?
I would start up SQL Profiler and verify whether your tool uses one connection to execute all batches, or whether it disconnects and reconnects. It could also be using a connection pool.
Anyway, executing SQL batches from a file is so simple that you might develop your own tool very quickly and be better off.
I have an application that uses a SQL Server database with several instances of the database...test, prod, etc... I am making some application changes and one of the changes involves changing a column from a nvarchar(max) to a nvarchar(200) so that I can add a unique constraint on it. SQL Server tells me that this requires dropping the table and recreating it.
I want to put together a script that will do the table drop, recreate it with the new schema, and then reinsert the data that was there previously all in one go, if possible, just to keep things simple for use when I migrate this change to production.
There is probably a good SQL Server way to do this but I'm just not aware of it. If I were using MySQL, I would mysqldump the table and its contents and use that as my script for applying the change to production. I can't find any export functionality in SQL Server that will give me a text file consisting of INSERTs for all the data in a table.
Use SQL Server's Generate Scripts command
right click on the database; Tasks -> Generate Scripts
select your tables, click Next
click the Advanced button
find Types of data to script - choose Schema and Data.
you can then choose to save to file, or put in new query window.
this results in INSERT statements for all the table data selected in step 2.
No need to script.
Here are two ways:
1. Use ALTER TABLE ... ALTER COLUMN
Example (you have to do one column at a time):
create table Test(SomeColumn nvarchar(max))
go
alter table Test alter column SomeColumn nvarchar(200)
go
2. Dump into a new table while converting the column:
select <columns except for the columns you want to change>,
convert(nvarchar(200),YourColumn) as YourColumn
into SomeNewTable
from OldTable
Then drop the old table and rename the new one to the old table's name:
DROP TABLE OldTable;
EXEC sp_rename 'SomeNewTable', 'OldTable';
Now add your index
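A sketch of that last step, using the sample names from above (the constraint name is a placeholder):

alter table Test add constraint UQ_Test_SomeColumn unique (SomeColumn)
go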
In SQL Server 2008, I want to move ALL non-clustered indexes in a DB to a secondary filegroup. What's the easiest way to do this?
Run this updated script to create a stored procedure called MoveIndexToFileGroup. This procedure moves all the non-clustered indexes on a table to a specified file group. It even supports the INCLUDE columns that some other scripts do not. In addition, it will not rebuild or move an index that is already on the desired file group. Once you've created the procedure, call it like this:
EXEC MoveIndexToFileGroup @DBName = '<your database name>',
  @SchemaName = '<schema name that defaults to dbo>',
  @ObjectNameList = '<a table or list of tables>',
  @IndexName = '<an index or NULL for all of them>',
  @FileGroupName = '<the target file group>';
To create a script that will run this for each table in your database, switch your query output to text, and run this:
SELECT 'EXEC MoveIndexToFileGroup '''
+TABLE_CATALOG+''','''
+TABLE_SCHEMA+''','''
+TABLE_NAME+''',NULL,''the target file group'';'
+char(13)+char(10)
+'GO'+char(13)+char(10)
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
ORDER BY TABLE_SCHEMA, TABLE_NAME;
Please refer to the original blog for more details. I did not write this procedure, but updated it according to the blog's responses and confirmed it works on both SQL Server 2005 and 2008.
Updates
@psteffek modified the script to work on SQL Server 2012. I merged his changes.
The procedure fails when your table has the IGNORE_DUP_KEY option on. No fix for this yet.
@srutzky pointed out the procedure does not guarantee to preserve the order of an index and made suggestions on how to fix it. I updated the procedure accordingly.
ojiNY noted the procedure left out index filters (for compatibility with SQL 2005). Per his suggestion, I added them back in.
Script them, change the ON clause, drop them, re-run the new script. There is no alternative really.
Luckily, there are scripts on the Interwebs such as this one that will deal with scripting for you.
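For a single index, the rewritten script boils down to something like this sketch; the index, table, and filegroup names are placeholders, and WITH (DROP_EXISTING = ON) folds the drop and re-create into one step:

CREATE NONCLUSTERED INDEX IX_MyTable_MyColumn
ON dbo.MyTable (MyColumn)
WITH (DROP_EXISTING = ON)
ON [SECONDARY];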
Update: step 2 below will take a long time to do manually if you are using SSMS 2008 R2 or earlier. I used SSMS 2014, which works well, because the way it exports the DROP and CREATE index statements is easy to modify.
I tried to run the script in SQL Server 2014 and ran into some issues. I was too lazy to track down the problems, so I came up with another solution that does not depend on the version of SQL Server you are running.
1. Export your indexes (with DROP and CREATE).
2. Update your script: remove everything related to dropping and creating tables, keep only what belongs to the indexes, and replace your original filegroup with the new one (in my case, I replaced ON [PRIMARY] with ON [SECONDARY]).
3. Run the script and wait until it is done.
(You may want to save the script to run in other environments.)