I'm dropping/creating a temp table many times in a single script
IF OBJECT_ID('tempdb..#uDims') IS NOT NULL
DROP TABLE #uDims
select * into #uDims from table1
.... do something else
IF OBJECT_ID('tempdb..#uDims') IS NOT NULL
DROP TABLE #uDims
select * into #uDims from table2 -- >> I get error here
.... do something else
IF OBJECT_ID('tempdb..#uDims') IS NOT NULL
DROP TABLE #uDims
select * into #uDims from table3 -- >> and here
.... do something else
When trying to run the script, I get
There is already an object named '#uDims' in the database.
on the second and third "select into..."
That is obviously a compile-time error. If I run the script section by section, everything works fine.
There are many workarounds for this issue, but I want to know why SSMS is upset about it.
You can't create the same temp table more than once inside a single stored procedure or batch.
Per the documentation (in the Remarks section),
If more than one temporary table is created inside a single stored
procedure or batch, they must have different names.
So, you either have to use different temp table names or you have to do this outside a stored procedure and use GO.
Ivan Starostin is correct. I tested this T-SQL on my SQL Server and it works fine.
IF OBJECT_ID('tempdb..#uDims') IS NOT NULL
DROP TABLE #uDims
select top 10 * into #uDims from tblS
go
IF OBJECT_ID('tempdb..#uDims') IS NOT NULL
DROP TABLE #uDims
select top 10 * into #uDims from Waters
Without the GO I get the same error as you (FLICKER).
For a script, as others have said, using GO is the fix.
However, if this is actually code in a stored procedure, you've got a different problem. It's not SSMS that doesn't like the syntax, it's the SQL compiler. It sees and chokes on those three SELECT ... INTO ... statements, and is not clever enough to realize that you are dropping the table between creation statements. (Even if you take out the IF statements, you still get the problem.)
The fix is to use different temp table names. (A fringe benefit: since the temp tables are based on three different source tables, this will also make it clearer that the table structures differ.) If you are worried about space in tempdb, you can still drop each temp table once you're done with it.
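A minimal sketch of that approach, reusing the placeholder names from the question (table1/table2/table3 stand for the real source tables):

IF OBJECT_ID('tempdb..#uDims1') IS NOT NULL
    DROP TABLE #uDims1
select * into #uDims1 from table1
.... do something else

IF OBJECT_ID('tempdb..#uDims2') IS NOT NULL
    DROP TABLE #uDims2
select * into #uDims2 from table2
DROP TABLE #uDims1 -- optional: reclaim tempdb space as soon as a table is no longer needed

IF OBJECT_ID('tempdb..#uDims3') IS NOT NULL
    DROP TABLE #uDims3
select * into #uDims3 from table3
DROP TABLE #uDims2

Because each temp table now has a distinct name, the batch compiles even though all three SELECT ... INTO statements are visible to the compiler at once.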
I'm using SQL Server 2012 Express. I want to create a synonym (or similar inline solution) to substitute in multiple standard column names across many tables.
For example, almost every table in my database has 3 identical columns: ID, DateAdded and TenantID. I want to have a way to select these without having to list them all out every time.
I tried some simple code, as below, to achieve this, but the syntax isn't correct in the CREATE SYNONYM section. I've googled but can't find anything that gives me what I'm after as an inline solution.
So for example, rather than:
SELECT [ID], [DateAdded], [TenantID]
FROM TableName
instead, I hoped to use this code to create a synonym:
CREATE SYNONYM [dbo].[Fields] FOR [ID], [DateAdded], [TenantID]
then I can repeatedly write the query:
SELECT dbo.[Fields] FROM TableName
and have the TableName be different every time.
I need this to work across many tables, so creating a view for each table won't be satisfactory.
Maybe synonyms aren't the right solution, but if not then I'd be happy to hear of some other way that provides an inline solution.
I've been following this post for a while, as the topic seems interesting and a learning opportunity to me. :) I'm not sure whether this is possible or not, but you can consider dynamic query execution as an alternative, as below:
DECLARE @C_Names VARCHAR(MAX)
DECLARE @T_Name VARCHAR(MAX)
SET @C_Names = '[ID], [DateAdded], [TenantID]'
SET @T_Name = 'your_table_name'
--As column names are fixed, you can now change the table name
--only and execute the script to get your desired output
EXEC('SELECT ' + @C_Names + ' FROM ' + @T_Name)
Hope this will at least give you a glimmer of hope.
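Building on that idea: if the table name can ever come from user input, a slightly safer variant (a sketch, assuming the same fixed column list) wraps the name in QUOTENAME and runs the statement through sp_executesql:

DECLARE @T_Name SYSNAME = N'your_table_name'
DECLARE @Sql NVARCHAR(MAX) =
    N'SELECT [ID], [DateAdded], [TenantID] FROM ' + QUOTENAME(@T_Name)
EXEC sp_executesql @Sql

QUOTENAME brackets the identifier, so a malicious value cannot break out of the table name and inject extra SQL.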
I have a table with a simple identity column primary key. I have written a 'For Update' trigger that, among other things, is supposed to log the changes of certain columns to a log table. Needless to say, this is the first time I've tried this.
Essentially as follows:
Declare Cursor1 Cursor for
select a.*, b.*
from inserted a
inner join deleted b on a.OrderItemId = b.OrderItemId
(where OrderItemId is the actual name of the primary identity key).
I then open the cursor as usual and go into a fetch-next loop. For the columns I want to test, I do:
if Update(Field1)
begin
..... do some logging
end
The columns include varchars, bits, and datetimes. It works, sometimes. The problem is that the logging code writes the a (after) and b (before) values of the field to the log, and in some cases it appears that the before and after values are identical.
I have 3 questions:
Am I using the Update function correctly?
Am I accessing the before and after values correctly?
Is there a better way?
If you are using SQL Server 2016 or higher, I would recommend skipping this trigger entirely and instead using system-versioned temporal tables.
Not only will it eliminate the need for (and performance issues around) the trigger, it'll be easier to query the historical data.
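For illustration, a minimal sketch of a system-versioned table (the table and column names are placeholders loosely modeled on the question):

CREATE TABLE dbo.OrderItems
(
    OrderItemId INT IDENTITY PRIMARY KEY,
    Field1      VARCHAR(50),
    ValidFrom   DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo     DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.OrderItemsHistory))

SQL Server then maintains dbo.OrderItemsHistory automatically on every update, and the before/after values the trigger was trying to capture can be read back with a query such as:

SELECT * FROM dbo.OrderItems
FOR SYSTEM_TIME ALL
WHERE OrderItemId = 42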
I have the following scenario, for which I will have to write a stored procedure:
1. Header table containing invoice_ID and invoice_line_ID.
2. Address Line table containing invoice_line_ID and 'Ship_From' and 'Ship_To' corresponding to each invoice_line_ID in the header table.
3. Address Header table containing invoice_ID and 'Ship_From' and 'Ship_To' corresponding to each invoice_ID in the header table.
The cases are such that the 'Ship_From' and 'Ship_To' information will not always all be present in the Address Line table; in that case the information needs to be selected from the Address Header table.
So I will write a CASE structure and two joins:
1. One that joins the Header table and the Address Line table.
2. One that joins the Header table and the Address Header table, used when the entire information for a particular invoice_line_ID is not available in the line table.
My question here is: where should I store the information? I will use a cursor to drive the above CASE structure, but should I use a ref cursor or a temp table?
Please note that my customer does not like the idea of extra database objects, so I might have to delete the temp table after I am done displaying the data. I need help on that, as well as on whether there is any alternative to a temp table, and whether a ref cursor takes up extra space in the database.
In your case you shouldn't use temporary tables. An Oracle temporary table does not differ much from an ordinary table: it is an object that persists in the database permanently. If you want to create and drop it every time, you need to solve a number of problems. If two users work with the database at the same time, you need to check whether the table has already been created by another user, or make sure that every user creates a table with a unique name. You need a mechanism to delete a table that was not dropped properly, for example when a user session was aborted due to a network problem. And many other problems. That is not the way Oracle temporary tables are meant to be used.
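To make that concrete: an Oracle global temporary table is created once, up front, as a permanent object, and only its data is private to each session, so there is nothing to create or drop at runtime. A sketch with made-up columns based on the question:

CREATE GLOBAL TEMPORARY TABLE invoice_work (
    invoice_line_id NUMBER,
    ship_from       VARCHAR2(100),
    ship_to         VARCHAR2(100)
) ON COMMIT DELETE ROWS;

Each session sees only its own rows, and they vanish automatically at commit (or at session end, with ON COMMIT PRESERVE ROWS).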
UPD
About refcursors.
declare
    my_cursor sys_refcursor;
    num number;
begin
    open my_cursor for select rownum from dual;
    loop
        fetch my_cursor into num;
        exit when my_cursor%notfound;
        -- do something else
    end loop;
    close my_cursor; -- release the cursor when finished
end;
/
This is a simple example of using ref cursors. As for me, it fits your situation better than a temporary table.
I am getting an error like 'Bad int8 external representation "6*725"' in Netezza while executing a stored procedure. The stored procedure takes data from one table, does some transformations, and loads the result into another table.
Can anyone please help me?
Thanks,
Brajendra
FYI: there may be multiple answers to this question, because we do not have the query that you ran to get the error.
If you did a direct INSERT command like the one below, the order of the columns of the table in the SELECT clause does not match the order of the columns of the table in the INSERT clause. Most database management systems don't care what the order is, but Netezza does. The fact that it threw "Bad int8" just means the first column it couldn't match in the SELECT clause has that data type, and the corresponding column in the INSERT target has a different data type.
INSERT INTO DB1..TABLE1
SELECT * FROM DB1..TABLE2;
You can fix it with one of two methods: either change the order of the columns by dropping and recreating the table, or use explicit column names in the INSERT INTO/SELECT command.
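A sketch of the second method, with made-up column names; listing the columns on both sides makes the mapping explicit, so the physical column order of either table no longer matters:

INSERT INTO DB1..TABLE1 (COL_A, COL_B, COL_C)
SELECT COL_A, COL_B, COL_C
FROM DB1..TABLE2;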
I want to do some calculations when my table data is changed. However, I am updating my table manually, copy-pasting about 3000 rows at once. That makes my trigger fire 3000 times, but I want the trigger to run only once.
Is there a way to do that ?
Thanks.
What's your statement?
If you have multiple inserts then it will fire for each insert you run. If you want it to execute only once for multiple inserts, you must:
- Write your insert as a single statement, such as insert into foo select * from bar
- Make sure your trigger isn't a for-each-row trigger
Another possibility might be:
- Disable the trigger
- Perform your insertions
- Put the trigger code in a stored procedure
- Run your stored procedure
If by 'manually' you mean you are copying and pasting into some user-interface tool (like an Access data grid or something similar), then the tool may be issuing one insert statement per row, and in that case you are out of luck: the database trigger will be executed once per insert statement. As other answers have mentioned, if you can insert the rows directly into the database using a single insert statement, then the trigger will only fire once.
The issue is caused by your manually pasting the 3000 rows. You really have two solutions. You can turn off the trigger by doing this:
ALTER TABLE tablename DISABLE TRIGGER ALL
-- do work here
ALTER TABLE tablename ENABLE TRIGGER ALL
and then run the contents of your trigger at the end, or you can put your 3000 rows into a temp table and then insert them all at once. That way the trigger only fires once. If this isn't enough, please give us more info on what you are trying to do.
Your trigger will not fire 3000 times if you are modifying 3000 rows in a single statement. Your trigger will fire once, and there will be 3000 rows in the virtual 'inserted' table (and, for updates, in the virtual 'deleted' table as well).
If you are limited to importing the data through a tool that does a single insert per row, I would suggest that you import the data into an intermediate table and then do the insert into the final table from the intermediate one.
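A sketch of that approach, with placeholder table and column names: the intermediate (staging) table has no trigger, so the row-by-row paste is cheap, and the trigger on the final table fires just once for the single closing statement:

INSERT INTO dbo.FinalTable (ColA, ColB)
SELECT ColA, ColB
FROM dbo.StagingTable

TRUNCATE TABLE dbo.StagingTable -- ready for the next batch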
There is a better way.
Re-join the virtual tables your trigger receives against the table it inserts into or alters, compare them against what's expected to be changed or added, and skip the redundant work with a simple WHERE clause.
E.g., this is a trigger I use to INSERT a record, but only once, based on a column value existing (@ack).
DECLARE @ack INT
SELECT @ack = (SELECT TOP 1 i.CUSTOM_BOOL_1 AS [agent_acknowledged] FROM inserted AS i)

IF @ack = 1
BEGIN
    INSERT INTO TABLEA (
        COLA, COLB, etc
    )
    SELECT
        COLA, COLB, etc
    FROM inserted AS i
    LEFT JOIN TABLEA AS chk --relink to the target table to see if the record already exists
        ON chk.COLA = i.COLA
        AND chk.COLB = i.COLB
        AND etc
    WHERE chk.ID IS NULL --and here we say: if NOT found, then continue to insert
END