SQL Server trigger - copy row before updating

I'd like to copy a table's row before updating and I'm trying to do it like this:
CREATE TRIGGER first_trigger_test
on Triggertest
FOR UPDATE
AS
insert into Triggertest select * from Inserted
Unfortunately, I get the error message
Msg 8101, Level 16, State 1, Procedure first_trigger_test, Line 6
An explicit value for the identity column in table 'Triggertest' can only be specified when a column list is used and IDENTITY_INSERT is ON.
I assume it's because of the id-column; can't I do something like 'except' id? I do not want to list all the columns in the trigger as it should be as dynamic as possible...

You can't, basically. You'll either have to specify the columns, or use a separate table:
CREATE TRIGGER first_trigger_test
on Triggertest
FOR UPDATE
AS
insert into Triggertest_audit select * from deleted
(where Triggertest_audit is a second table that looks like Triggertest, but without the primary key/identity/etc - commonly with multiple rows per logical source row; note I assumed you actually wanted to copy the old values, not the new ones)
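As a minimal sketch of that separate-table approach (the column definitions here are my assumption - the question never shows Triggertest's schema):
-- Hypothetical audit table: same shape as Triggertest, but no IDENTITY/primary key,
-- so it can hold many historical rows per source row.
CREATE TABLE Triggertest_audit (
    id   int         NOT NULL,   -- plain int, not an identity column
    name varchar(50) NULL
);
GO
CREATE TRIGGER first_trigger_test
ON Triggertest
FOR UPDATE
AS
    -- "deleted" holds the pre-update values for every row in the batch
    INSERT INTO Triggertest_audit (id, name)
    SELECT id, name FROM deleted;
GO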

The problem happens because you are trying to set an identity column in Triggertest.
Is that your plan?
If you want to copy the new identity values from INSERTED into Triggertest, then define the column in Triggertest without IDENTITY.
If Triggertest has its own IDENTITY column, use this:
insert into Triggertest (col1, col2, col3) select col1, col2, col3 from Inserted
After comment:
No, you can't without dynamic SQL to detect which table it is and find all the non-identity columns.
However, if you add or remove columns you'll then have a mismatch between the trigger's table and Triggertest and you'll get a different error.
If you really want it that dynamic, you'd have to concatenate all the columns into one, or use XML to ignore the schema.
Finally:
Do all your tables have exactly the same number of columns, datatypes and nullability as TriggerTest? Because that is the assumption here...
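For illustration, a sketch of how such dynamic SQL could discover the non-identity columns; this is my own example of the idea, not code from the answer:
-- Build a comma-separated list of Triggertest's non-identity columns
DECLARE @cols nvarchar(max);
SELECT @cols = STUFF((
    SELECT ', ' + QUOTENAME(name)
    FROM sys.columns
    WHERE object_id = OBJECT_ID('dbo.Triggertest')
      AND is_identity = 0
    ORDER BY column_id
    FOR XML PATH('')), 1, 2, '');
PRINT @cols;  -- ready to splice into a dynamic INSERT ... SELECT via sp_executesql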

IF you want the table to be built each time the trigger runs then you have no choice but to use the system tables to find the columns and create a table with those column definitions. Of course your first step will have to be to drop the existing table, or the trigger won't work the second time someone updates a record.
However, I think you need to rethink this process. Dropping a table then creating a new one every time you change a record is a seriously bad idea. How is this table in any way useful when it may get wiped out and rebuilt every second or so?
What you might consider doing instead is creating a dynamic process that generates the CREATE TRIGGER scripts with the correct information for each table, but where the triggers themselves are not dynamic. Then your configuration people need to run this process every time table changes are made.
Remember it is critical for triggers to do two things: run as fast as humanly possible, and account for processing all the records in the batch (triggers should never have row-by-row processing or other slow processes, or assume only one row will be in the inserted or deleted tables). Dynamic SQL in a trigger is probably also a bad idea, as you can't test out all the possibilities beforehand and it can bring your whole production server to a screaming halt when something unexpected happens.

Related

Using Dynamic SQL in a trigger to identify changes

I'm in the process of building a brand-new database, and want every table I create to have a corresponding audit table which would track any data changes.
In order to avoid having to hard-code every table column, what I would like to do is use Dynamic SQL to review each column in the table (with the exception of the Identity column) and work out whether or not the column has been changed, using the Inserted and Deleted tables to do so. By doing that, I could then theoretically add columns to the tables without having to re-create the triggers associated with the tables.
Is such a thing possible or am I running down a blind alley?

Behind the scene operations for ALTER COLUMN statement in SQL Server

I am altering the column datatype for a table with around 100 Million records using the below query:
ALTER TABLE dbo.TARGETTABLE
ALTER COLUMN XXX_DATE DATE
The column values are in the right date format as I inserted original date from a valid data source.
However, the query has been running for a long time, and even when I attempt to cancel it, that seems to take forever.
Can anyone explain what is happening behind the scenes in SQL Server when an ALTER TABLE statement is executed, and why it requires such resources?
There are a lot of variables that will make these ALTER statements make multiple passes through your table and make heavy use of TempDB, and depending on the efficiency of TempDB it could be very slow. Examples include whether or not the column you are changing is in an index (especially the clustered index, since every non-clustered index carries the clustering key).
Instead of altering the table, I will give you one simple example, so you can try this:
Suppose your table name is tblTarget1.
Create another table (tblTarget2) with the same structure.
Change the datatype of the column in tblTarget2.
Copy the data from tblTarget1 to tblTarget2 using an INSERT INTO query.
Drop the original table (tblTarget1).
Rename tblTarget2 to tblTarget1.
The main reason is that changing the data type involves a lot of data transfer and data page realignment.
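A minimal sketch of those steps, assuming the XXX_DATE column from the question is being changed from datetime to date (the ID column is my assumption):
CREATE TABLE dbo.tblTarget2 (ID int NOT NULL, XXX_DATE date NULL);   -- the new datatype
INSERT INTO dbo.tblTarget2 (ID, XXX_DATE)
SELECT ID, CAST(XXX_DATE AS date) FROM dbo.tblTarget1;
DROP TABLE dbo.tblTarget1;
EXEC sp_rename 'dbo.tblTarget2', 'tblTarget1';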
Another approach is the following:
Add a new column to the table: [_date] date.
Using batch updates, transfer the values from the old column to the new one without blocking the table for other users.
Then, in one transaction, do the following:
update any values inserted after the batch copy finished,
drop the old column,
rename the new column.
Note: if you have an index on this field, you need to drop it before deleting the old column and recreate it after renaming the new one.
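A rough sketch of that batch-update idea (the batch size and the CAST are assumptions for illustration):
-- Step 1: add the new column
ALTER TABLE dbo.TARGETTABLE ADD [_date] date NULL;
GO
-- Step 2: copy values in small batches so locks stay short-lived
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) dbo.TARGETTABLE
    SET [_date] = CAST(XXX_DATE AS date)
    WHERE [_date] IS NULL AND XXX_DATE IS NOT NULL;
    IF @@ROWCOUNT = 0 BREAK;
END
GO
-- Step 3: swap the columns in one transaction
BEGIN TRAN;
    UPDATE dbo.TARGETTABLE                  -- catch rows added after the batch copy
    SET [_date] = CAST(XXX_DATE AS date)
    WHERE [_date] IS NULL AND XXX_DATE IS NOT NULL;
    ALTER TABLE dbo.TARGETTABLE DROP COLUMN XXX_DATE;
    EXEC sp_rename 'dbo.TARGETTABLE._date', 'XXX_DATE', 'COLUMN';
COMMIT;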

are there any options for doing bulk-insert to multiple related tables with entity framework (sql server 2008 r2 target)?

There are existing options for doing bulk insert into a single table with EF entities. Specifically, this SO question and using this class from David Browne.
In the case of trying to bulk insert rows into both a parent and child table, however, nothing jumps out as an option at that same level of convenience.
The 'hacks' I can think of (but I'm hoping there's at least one better option out there) include:
generate the PKs and set the FKs before insert (in this scenario, we know nothing else is inserting at the same time), then do the bulk inserts of both (turning on IDENTITY_INSERT during the parent insert if necessary)
bulk insert (using the linked SO question's approach) the parent rows, select them back (enough columns to identify which parent row is which), generate the child rows, bulk insert those
generate the sql necessary to insert all the rows in a single batch, doing each parent and then all related children, using @@identity to fill in the FK for the child inserts
The 'pregenerate PK values' approach (I haven't actually tried it) seems fine, but is fragile (it requires that nothing else inserts into at least the parent table during the operation) and depends on either an empty table or selecting max(pk)+1 beforehand.
Since SqlBulkCopy seems to be built around inserting a table at a time (like bcp), anything that still lets sql generate the PK/identity column would seem to be built around 'dropping down' to ado.net and building the sql.
Is there an option outside of 'generate the tons of sql' that I'm missing? If not, is there something out there that already generates the sql for mass-insert into related tables?
Thanks!!
The first rule of any foreign key constraint is that the referenced value must exist, as a primary key or unique constraint, in another table before being inserted into the foreign key table.
This works great when you are adding a few rows at a time (a traditional transaction processing environment). However, you are trying to bulk insert into both at the same time; I'd term this batch processing. Basically, the bulk update lock on the parent table is going to block the child table from reading it to check that the FK linkage is valid.
I'd say your 2 options would be: 1) leave the FK out entirely, or 2) set the FK to NOCHECK before the bulk insert, then turn the check back on after the bulk insert is complete with an ALTER TABLE.
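A sketch of option 2, using a hypothetical constraint name FK_Child_Parent:
-- Disable FK checking for the duration of the bulk insert
ALTER TABLE dbo.Child NOCHECK CONSTRAINT FK_Child_Parent;
-- ... bulk insert the parent rows, then the child rows ...
-- Re-enable and re-validate existing rows (WITH CHECK keeps the constraint trusted)
ALTER TABLE dbo.Child WITH CHECK CHECK CONSTRAINT FK_Child_Parent;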

How to insert empty record in a table

----------
ID NAME
3 A
4 B
5 C
----------
When I delete all the records, the ID continues after number five's record, but I want new rows to be inserted starting from the first index of this table again. Can anyone help me?
I assume you've got your ID column as an IDENTITY column, and you want to reset it to start again at 1, after having removed all rows from the table.
First, I'd say that having such a need (that the ID value start at 1) tends to mean there's something wrong with what you're doing - IDENTITY columns can always have gaps in the numbering, and should be treated as opaque blobs. The fact that they appear to be integers, and tend to be easy to remember, is just an implementation detail.
Second, if you want to do such a reset, you'd use DBCC CHECKIDENT
Edit
If you really do depend on these ID values (say, because they're also used in an application), it's a good indicator that the column shouldn't have the IDENTITY property in the first place. Unfortunately, you can't directly remove this property - you'd have to create a copy of the table without this property, copy all rows across, delete the original table, and rename the copy. Management Studio will pretend you can just remove the property, but will do what I've just described behind the scenes.
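A sketch of that copy-and-rename, with hypothetical table and column names:
-- Same shape as the original, but ID has no IDENTITY property
CREATE TABLE dbo.myTable_copy (ID int NOT NULL, NAME varchar(50) NULL);
INSERT INTO dbo.myTable_copy (ID, NAME)
SELECT ID, NAME FROM dbo.myTable;   -- no IDENTITY_INSERT needed; ID is now a plain int
DROP TABLE dbo.myTable;
EXEC sp_rename 'dbo.myTable_copy', 'myTable';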
A simple way would be to
TRUNCATE TABLE mytable;
instead of
DELETE FROM mytable
From TRUNCATE TABLE (Transact-SQL)
If the table contains an identity column, the counter for that column is reset to the seed value defined for the column. If no seed was defined, the default value 1 is used. To retain the identity counter, use DELETE instead.
Looks like your ID column is an IDENTITY column - these will always add the next value (regardless of deletes).
The requirement to have a specific ID does sound like your application design relies on it, which is not good practice. ID fields do have gaps (which is normal) - your application shouldn't rely on them.
Regardless, here are a couple of ways of doing this:
For a one off, use SET IDENTITY_INSERT ON:
SET IDENTITY_INSERT dbo.myTable ON
INSERT INTO myTable
(ID, NAME)
VALUES
(1, 'H')
SET IDENTITY_INSERT dbo.myTable OFF
To reset the seeding, you need to use a DBCC CHECKIDENT command, using RESEED:
DBCC CHECKIDENT('myTable', RESEED, 0)
I think you want to restart the value of the autogenerated id column with 1 again?
if it is an IDENTITY column, you can reset the seed value with the following command:
DBCC CHECKIDENT('YourTableNameHere', RESEED, 0)
A database's IDENTITY feature is intended to make sure different records will NEVER have the same ID. This is not only valid for records existing at the same time, but also for new records inserted after another one was deleted. This is extremely useful for avoiding conflicts. Although it sometimes seems to offend people's sense of taste, don't work around it.
If you need to assign self-chosen numbers to the records, add another column. Auto-generated ID columns should be used all the time. The other users have told you how to fiddle with the seed, but use that feature very carefully.

SQL Server Add Column

I want to add a column to one of my tables in SQL Server. I don't want it to be at the end of the column listing in the table... I actually want it to be somewhere else (location-wise) in the table. Is there another option besides dropping and rebuilding (populating) the table to accomplish this? I obviously don't want to lose any of my data, but I would prefer not to have the column at the end of the table definition.
Thanks,
S
Column order in the table is irrelevant -- it's purely cosmetic.
There is no extension to ALTER TABLE that allows you to specify the ordinal position of a new column (either for adding a new column or moving an existing column).
For more on the subject, see:
Change Order of Column In Database Tables
Short answer:
I agree with OMG Ponies, column order isn't important. If you don't have a clustered index, rather drop and recreate the table than run an ALTER TABLE x ADD col.
Long answer:
If your table has a fair bit of data (50Mb comes to mind) then you will be better off recreating the table rather than running ALTER TABLE x ADD col. The data page allocation plan for the table is calculated at table creation time, so when you add a column, SQL Server will typically put your new column's data in separate pages and put forward pointers from your existing data pages to the new data pages for the column you added. If you're going to use the new column extensively then your table IO will be quite poor, since reading even 1 row will require reading at least 2 pages. Table scans will also perform poorly, since forward pointers will always be followed, causing table scans that are normally sequential to jump back and forth on your disk during a read.
In this case it's better to rename the existing table, recreate your table with the new column, insert into table_name select col1, col2, 'null or default for new col', col3 from temp_renamed_table, and finally drop the old table that you renamed. The data pages will be much better organised and your IO will be faster, despite looking the same from a SQL developer's point of view as when ALTER TABLE is used. If you have a clustered index, the table will be reorganised when you add the column and page splits are less likely. You could also run ALTER TABLE x REBUILD if you have SQL Server 2008, don't have a clustered index, and have lots of time when users aren't using your table. It's hard to comment on your indexing strategy without knowing much more.
This is a much better reason for recreating the table than something cosmetic like column order.
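A sketch of that rename-and-recreate, with hypothetical column names and the new column deliberately placed second:
EXEC sp_rename 'dbo.my_table', 'my_table_old';
CREATE TABLE dbo.my_table (
    col1    int NOT NULL,
    new_col varchar(50) NULL,   -- the added column, in the desired position
    col2    int NULL,
    col3    int NULL
);
INSERT INTO dbo.my_table (col1, new_col, col2, col3)
SELECT col1, NULL, col2, col3 FROM dbo.my_table_old;   -- NULL (or a default) for the new column
DROP TABLE dbo.my_table_old;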
It is an extremely bad practice to rearrange columns in a table. Do not even consider trying to do such a thing. The column order is irrelevant if you have used correct coding practices, such as never (and I do mean never) using select *.
If you have used select * and you change the order of the columns, you are even more at risk of breaking code, because the query may not be expecting Price as the third column but as the second column, and that could seriously mess up a lot of things.
Further, the only way to do this is to create another table, move your data, and then drop the old table and rename the new one. Of course if you have FKs, they too have to be dropped and recreated. This takes a lot of time if you have a large data set and could cause problems for users.
There is no circumstance where you would ever consider doing this for a table that is in production, as it is just too risky. If you are in the early stages of design, you could consider doing it.
ALTER TABLE my_table ADD COLUMN column_name VARCHAR(50) AFTER col_name;
substituting whatever def you want for VARCHAR(50)
http://dev.mysql.com/doc/refman/5.1/en/alter-table.html
Edit This is of course the right answer for a MySQL server... but this is not what the OP wants.
