Procedure for Altering Table and updating keeps failing with invalid column - sql-server

I'm trying to add new columns to a table and then update the table, setting each new column to a date-format conversion of the corresponding old column.
I have my procedure set out as follows:
begin
alter table [dbo].[mytable]
add New_Field1 varchar(24)
end
......
update [dbo].[SMR06_TARGET]
set New_Field1 = convert(varchar(24),Old_Field1,103)
.....
I have multiple alter table statements at the top of the procedure and update statements at the bottom, one for each new column. I think this is a rule with SQL: keep DDL at the top and DML at the bottom.
OK, so every time I execute this to create the procedure, it fails with "incorrect column name New_Field1". I really can't pin down what is causing this. I've tried different variations of BEGIN....END and tried commenting out the apparent offending statement; then it runs, then it fails again on the next statement.
I reckon it's something to do with the way the statement(s) are terminated. I'm not sure, as I haven't done this type of procedure before with mixed DDL/DML.
Any hints would be most welcome.
Thanks
Andrew

You need to batch the statement that adds the column separately from the statement that updates it.
BEGIN TRANSACTION
GO
ALTER TABLE [dbo].[mytable]
ADD New_Field1 varchar(24) NULL
GO
UPDATE [dbo].[mytable]
SET New_Field1 = convert(varchar(24),Old_Field1,103)
GO
COMMIT

The entire batch is reviewed by the parser before the first line starts executing. The statement that adds New_Field1 is in the same batch as the statement that references New_Field1. At the time the parser considers the statement containing New_Field1, the statement that adds New_Field1 has not been executed, so the column does not yet exist.
If you're running in SSMS, include GO between each statement to force multiple batches. If you're running this in another tool that can't use GO, you'll need to submit each statement individually to ensure that they are fully executed before the next step is parsed.
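If neither of those is possible (for example, inside a stored procedure body, where GO cannot appear), you can defer parsing of the update with dynamic SQL, the same approach shown in the last answer on this page. A minimal sketch, reusing the names from the question:
ALTER TABLE [dbo].[mytable]
ADD New_Field1 varchar(24) NULL
-- the string is not parsed until EXEC runs, by which point the column exists
EXEC (N'UPDATE [dbo].[mytable] SET New_Field1 = convert(varchar(24), Old_Field1, 103);')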

Related

Drop or not drop temporary tables in stored procedures

I have seen this question quite a few times, but I couldn't find an answer that satisfied me. Basically what people and books say is "Although temporary tables are deleted when they go out of scope, you should explicitly delete them when they are no longer needed to reduce resource requirements on the server".
It is quite clear to me that when you are working in Management Studio and creating tables, then until you close your window or disconnect you will use some resources for that table, so it is logical that it is better to drop them.
But when you work with a procedure, if you want to clean up tables you will most probably do that at the very end of it (I am not talking about the situation where you drop a table as soon as you truly no longer need it within the procedure). So the workflow is something like this:
When you drop in SP:
Start of SP execution
Doing some stuff
Drop tables
End of execution
And, as far as I understand it, this is what happens when you do not drop:
Start of SP execution
Doing some stuff
End of execution
Drop tables
What's the difference here? I can only imagine that some resources are needed to identify the temporary tables. Any other thoughts?
UPDATE:
I ran a simple test with two stored procedures:
create procedure test as
begin
create table #temp (a int)
insert into #temp values (1);
drop table #temp;
end
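And a second one without the drop statement; a minimal sketch (procedure name assumed):
create procedure test_nodrop as
begin
create table #temp (a int)
insert into #temp values (1);
-- no drop table: #temp goes out of scope when the procedure ends
end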
I've enabled user statistics and ran the tests:
declare #i int = 0;
while #i < 10000
begin
exec test;
SET #i= #i + 1;
end
That's what I've got (trials 1-3 drop the table in the SP; trials 4-6 do not):
As the picture shows, all stats are the same or slightly lower when I do not drop the temporary table.
UPDATE2:
I ran this test a second time, but now with 100k calls and with SET NOCOUNT ON added. These are the results:
The second run confirmed that if you do not drop the table in the SP, you actually save some user time, as the cleanup is done by another internal process outside of user time.
You can read more about it in Paul White's article: Temporary Tables in Stored Procedures
CREATE and DROP, Don’t
I will talk about this in much more detail in my next post, but the
key point is that CREATE TABLE and DROP TABLE do not create and drop
temporary tables in a stored procedure, if the temporary object can be
cached. The temporary object is renamed to an internal form when DROP
TABLE is executed, and renamed back to the same user-visible name when
CREATE TABLE is encountered on the next execution. In addition, any
statistics that were auto-created on the temporary table are also
cached. This means that statistics from a previous execution remain
when the procedure is next called.
Technically, a locally scoped temp table (one with a single # before its name) will automatically drop out of scope when your SPID is closed. There are some very odd cases where a temp table definition gets cached somewhere and there is then no real way to remove it. Usually that happens when you have a nested stored procedure call that contains a temp table by the same name.
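A minimal sketch of that nested shape (hypothetical procedure names):
create procedure dbo.inner_proc as
begin
create table #t (b int); -- same name as the caller's temp table
insert into #t values (2); -- inside inner_proc, this #t shadows the outer one
end
go
create procedure dbo.outer_proc as
begin
create table #t (a int);
insert into #t values (1);
exec dbo.inner_proc;
end
go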
It's a good habit to get into dropping your tables when you're done with them, but unless something unexpected happens they should be de-scoped anyway once the proc finishes.

Add temporary column for insert stored procedure in oracle

I am trying to add a temporary column to a target table and use that column in a WHERE clause to insert new data into a parent table via a stored procedure, for a one-to-one relationship from parent to target table (see code below). I am getting an error with the ALTER TABLE ADD COLUMN statement, which results in IMPORT_NUMBER being an invalid identifier. Any help would be much appreciated.
EXECUTE IMMEDIATE
'ALTER TABLE TARGET_TABLE ADD IMPORT_NUMBER NUMBER';
INSERT
INTO
TARGET_TABLE(
existing_col_1,
existing_col_2,
existing_col_3,
IMPORT_NUMBER
)
SELECT
STAGED_TABLE.value1,
STAGED_TABLE.value2,
STAGED_TABLE.value3,
STAGED_TABLE.IMPORT_NUMBER
FROM
STAGED_TABLE
GROUP BY
IMPORT_NUMBER;
UPDATE
PARENT_TABLE
SET
target_table_id =(
SELECT
TARGET_TABLE.id
FROM
TARGET_TABLE
WHERE
TARGET_TABLE.IMPORT_NUMBER = PARENT_TABLE.IMPORT_NUMBER
)
WHERE
PARENT_TABLE.IMPORT_NUMBER IS NOT NULL;
EXECUTE IMMEDIATE 'ALTER TABLE TARGET_TABLE DROP COLUMN IMPORT_NUMBER';
If this is a stored procedure, the entire procedure is parsed and validated at create or replace procedure time. At the time the procedure is created, the column IMPORT_NUMBER does not exist, so the insert and update statements cannot be validated.
I would try to find a solution that does not include DDL if possible. Can the column be made a permanent part of the table?
If you must follow this path, the insert and update statements will need to be put into strings and passed to EXECUTE IMMEDIATE or DBMS_SQL so that they are parsed and validated at run time, after the column is created.
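A minimal sketch of that approach inside the procedure body, reusing the statements from the question (the GROUP BY from the original is dropped here, since a plain row-by-row copy is what the column list implies):
EXECUTE IMMEDIATE 'ALTER TABLE TARGET_TABLE ADD IMPORT_NUMBER NUMBER';
EXECUTE IMMEDIATE
'INSERT INTO TARGET_TABLE (existing_col_1, existing_col_2, existing_col_3, IMPORT_NUMBER)
SELECT value1, value2, value3, IMPORT_NUMBER FROM STAGED_TABLE';
EXECUTE IMMEDIATE
'UPDATE PARENT_TABLE
SET target_table_id = (SELECT t.id FROM TARGET_TABLE t
WHERE t.IMPORT_NUMBER = PARENT_TABLE.IMPORT_NUMBER)
WHERE PARENT_TABLE.IMPORT_NUMBER IS NOT NULL';
EXECUTE IMMEDIATE 'ALTER TABLE TARGET_TABLE DROP COLUMN IMPORT_NUMBER';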

Let SQL wait until previous statement is done

I have been searching around but I cannot find the correct answer; probably I'm searching wrong because I don't know what to look for :)
Anyway, I have a T-SQL script with a begin and commit transaction. In the transaction I add some columns and also rename some columns.
Just after the renames and the added-column statements I also run some update statements to load data into the newly created columns.
Now the problem is that for some reason the update gives an error that it cannot update the given column as it does not exist (YET???).
My idea is that the statement is still working on the rename and the adding of the columns but already goes ahead with the update statements. The table is very big and has a few million records, so I can imagine it takes some time to add and rename the columns.
If I first run the rename and add statements and then separately run the update statements, it does work. So it has to do with some wait time.
Is it possible to force SQL to execute step by step and wait until each complete statement is done before going to the next?
If you modify columns (e.g. add them), you have to finish the batch before you can continue with updating them. Insert the GO keyword between the table structure changes and the updates.
To illustrate that, the following code won't work:
create table sometable(col1 int)
go
alter table sometable add col2 varchar(10)
insert into sometable(col2) values ('a')
But inserting go will make the insert recognise the new column:
create table sometable(col1 int)
go
alter table sometable add col2 varchar(10)
go
insert into sometable(col2) values ('a')
If you do it in code, you may want to run the structure changes and the data migration as separate batches. You can still wrap them in one transaction for data integrity.
It doesn't have anything to do with wait time. The queries are run in order; it's because all the queries are submitted at once, so when SQL Server tries to validate your update, the column doesn't exist at that point in time. To get around it, you need to send the update in a separate batch. The following keyword needs to be added between your alter and update statements:
GO
You can try using SELECT ... FOR UPDATE:
http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_10002.htm#i2130052
This will ensure that your query waits for the lock, but it is recommended to specify WAIT to instruct the database to wait integer seconds, so that it will not wait for an indefinite time.
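A minimal sketch of that Oracle syntax (table and column names assumed):
SELECT id FROM some_table WHERE id = 42 FOR UPDATE WAIT 5; -- wait at most 5 seconds for the row lock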

SQL Server - after insert trigger - update another column in the same table

I've got this database trigger:
CREATE TRIGGER setDescToUpper
ON part_numbers
AFTER INSERT,UPDATE
AS
DECLARE #PnumPkid int, #PDesc nvarchar(128)
SET #PnumPkid = (SELECT pnum_pkid FROM inserted)
SET #PDesc = (SELECT UPPER(part_description) FROM inserted)
UPDATE part_numbers set part_description_upper = #PDesc WHERE pnum_pkid=#PnumPkid
GO
Is this a bad idea? That is to update a column on the same table. I want it to fire for both insert and update.
It works, I'm just afraid of a cyclical situation. The update, inside the trigger, fires the trigger, and again and again. Will that happen?
Please, don't nitpick at the upper case thing. Crazy situation.
It depends on the recursion level for triggers currently set on the DB.
If you do this:
SP_CONFIGURE 'nested triggers', 0
GO
RECONFIGURE
GO
Or this:
ALTER DATABASE db_name
SET RECURSIVE_TRIGGERS OFF
That trigger above won't be called again, and you would be safe (unless you get into some kind of deadlock; that could be possible but maybe I'm wrong).
Still, I do not think this is a good idea. A better option would be using an INSTEAD OF trigger. That way you would avoid executing the first (manual) update over the DB. Only the one defined inside the trigger would be executed.
An INSTEAD OF INSERT trigger would be like this:
CREATE TRIGGER setDescToUpper ON part_numbers
INSTEAD OF INSERT
AS
BEGIN
INSERT INTO part_numbers (
colA,
colB,
part_description
) SELECT
colA,
colB,
UPPER(part_description)
FROM
INSERTED
END
GO
This would automagically "replace" the original INSERT statement by this one, with an explicit UPPER call applied to the part_description field.
An INSTEAD OF UPDATE trigger would be similar (and I don't advise you to create a single trigger, keep them separated).
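A sketch of what the UPDATE counterpart could look like (trigger name and column list assumed; keyed on pnum_pkid from the question; INSTEAD OF triggers are not fired recursively by their own statements against the same table):
CREATE TRIGGER setDescToUpperUpd ON part_numbers
INSTEAD OF UPDATE
AS
BEGIN
UPDATE p
SET colA = i.colA,
colB = i.colB,
part_description = UPPER(i.part_description)
FROM part_numbers p
JOIN INSERTED i ON p.pnum_pkid = i.pnum_pkid
END
GO
Note that every updatable column has to be written back here, or updates to columns not listed would be silently lost.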
Also, this addresses @Martin's comment: it works for multirow inserts/updates (your example does not).
Another option would be to enclose the update statement in an IF statement and call TRIGGER_NESTLEVEL() to restrict the update being run a second time.
CREATE TRIGGER Table_A_Update ON Table_A AFTER UPDATE
AS
IF ((SELECT TRIGGER_NESTLEVEL()) < 2)
BEGIN
UPDATE a
SET Date_Column = GETDATE()
FROM Table_A a
JOIN inserted i ON a.ID = i.ID
END
When the trigger initially runs, TRIGGER_NESTLEVEL returns 1, so the update statement is executed. That update statement in turn fires the same trigger, except this time TRIGGER_NESTLEVEL returns 2 and the update statement is not executed.
You could also check the TRIGGER_NESTLEVEL first and, if it's greater than 1, call RETURN to exit the trigger.
IF ((SELECT TRIGGER_NESTLEVEL()) > 1) RETURN;
Use a computed column instead. It is almost always a better idea to use a computed column than a trigger.
See Example below of a computed column using the UPPER function:
create table #temp (test varchar (10), test2 AS upper(test))
insert #temp (test)
values ('test')
select * from #temp
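Applied to the question's table, the change could look like this (a sketch, assuming part_description_upper can be dropped and re-created as a computed column):
alter table part_numbers drop column part_description_upper
alter table part_numbers add part_description_upper AS upper(part_description)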
And not to sound like a broken record or anything, but this is critically important: never write a trigger that will not work correctly on multi-record inserts/updates/deletes. This is an extremely poor practice, as sooner or later one of these will happen, and your trigger will cause data integrity problems: it won't fail outright, it will just run the process on only one of the records. This can go on for a long time until someone discovers the mess, and by then it is often impossible to correctly fix the data.
It might be safer to exit the trigger when there is nothing to do. Checking the nesting level or altering the database to switch off recursive triggers can be prone to issues.
SQL Server provides a simple way, in a trigger, to see whether specific columns have been updated. Use the UPDATE() function to see if certain columns have been updated, such as UPDATE(part_description_upper).
IF UPDATE(part_description_upper)
return
Yes, it will recursively call your trigger unless you turn the recursive triggers setting off:
ALTER DATABASE db_name SET RECURSIVE_TRIGGERS OFF
MSDN has a good explanation of the behavior at http://msdn.microsoft.com/en-us/library/aa258254(SQL.80).aspx under the Recursive Triggers heading.
Yea... having an additional step to update a table when you can set the value in the initial insert is probably an extra, avoidable process.
Do you have access to the original insert statement, where you could just insert the UPPER(part_description) value into the part_description_upper column?
After thinking about it, you probably don't have access, or you would probably have done that already, so here are some options as well...
1) Depending on the need for this part_description_upper column: if it's just for "viewing", you can take the returned part_description value and "ToUpper()" it (depending on the programming language).
2) If you want to avoid "realtime" processing, you can create a SQL job that goes through your values once a day during low-traffic periods and updates that column to the UPPER part_description value for any rows where it is not currently set (see the sketch after this list).
3) go with your trigger (and watch for recursion as others have mentioned)...
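A minimal sketch of the catch-up update for option 2 (assuming unset rows are NULL):
update part_numbers
set part_description_upper = UPPER(part_description)
where part_description_upper IS NULL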
HTH
Dave
create or replace
TRIGGER triggername BEFORE INSERT ON
tablename FOR EACH ROW
BEGIN
/*
Write any select condition if you want to get the data from other tables
*/
:NEW.COLUMNA := UPPER(:NEW.COLUMNA);
--:NEW.COLUMNA := NULL;
END;
The above trigger will update the column value before the row is inserted.
For example, if we assign NULL to :NEW.COLUMNA (the commented-out line), the column will be inserted as NULL for each insert statement.

Why does this update throw an error even though the Alter Table command should be finished?

This has been a nagging issue for me for some time and I would love to know the reason why these SQL Batch commands aren't working.
I have a table that I use to hold configuration settings for a system. When a new setting is added, we add a new field to the table. During an update, I need to change a slew of databases on the server with the same script. Generally, they are all in the same state and I can just do the following:
Alter Table Configuration Add ShowClassesInCheckin bit;
GO
Update Configuration Set ShowClassesInCheckin=ShowFacilitiesInCheckin;
GO
This works fine. However, sometimes one or two databases have already been updated, so I want to write conditional logic to make these changes only if the field doesn't already exist:
if Not Exists(select * from sys.columns where Name = N'ShowClassesInCheckin' AND Object_ID = Object_ID(N'Configuration'))
BEGIN
Alter Table Configuration Add ShowClassesInCheckin bit;
Update Configuration Set ShowClassesInCheckin=ShowFacilitiesInCheckin;
END;
GO
In this case, I get an error: "Invalid column name 'ShowClassesInCheckin'". Now, this makes sense in that the ALTER TABLE isn't committed in the batch before the UPDATE is called (it doesn't work without the "GO" between the ALTER and the UPDATE). But that doesn't help... I still don't know how to achieve what I am after...
The entire SQL script is parsed before it's executed. During the parsing phase, the column will not exist, so the parser generates an error. The error is raised before the first line of the script is executed.
The solution is dynamic SQL:
exec (N'Update Configuration Set ShowClassesInCheckin=ShowFacilitiesInCheckin;')
This won't get parsed before the exec is reached, and by then, the column will exist.
An alternative that should work is to re-introduce the GO. This means that you need to use something else as the condition for the update, possibly based on the database name.
if Not Exists(select * from sys.columns where Name = N'ShowClassesInCheckin' AND Object_ID = Object_ID(N'Configuration'))
BEGIN
Alter Table Configuration Add ShowClassesInCheckin bit;
END;
GO
if *new condition here*
BEGIN
Update Configuration Set ShowClassesInCheckin=ShowFacilitiesInCheckin;
END;
GO
