Hello, I have an application that works, but insert performance becomes slow once the table holds more than 700 rows. I know a stored procedure would work better, but I am very new to creating them and need some help.
In the grid I give the user the option to insert a row anywhere they want. I prompt them for the TASK_ID (displayOrder) they want the new row to have, and then in the app I renumber all the TASK_IDs below the insert point to "make room", so that the new row appears in the correct position when the grid refreshes.
How can I accomplish this insert via a stored procedure?
Table structure
ID (PK, int), TASK_ID (displayOrder, int), project_id (int), other columns
The advantage of a stored procedure is that you can run batches of operations, so logically connected changes can all occur in the context of a single transaction.
In your case you should write a procedure that accepts all the parameters you need. Next, run an UPDATE, something like:
UPDATE [Table] SET Task_ID = Task_ID + 1 WHERE Task_ID >= @DesiredTaskId
after which you can run your insert.
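A minimal sketch of such a procedure, assuming a hypothetical table dbo.Tasks whose ID column is an IDENTITY, and assuming the display order should be scoped per project (all names here are placeholders for illustration):

CREATE PROCEDURE dbo.InsertTaskAt
    @ProjectId     int,
    @DesiredTaskId int
    -- plus parameters for the other columns
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    -- Make room: shift every task at or after the insert point down one slot.
    UPDATE dbo.Tasks
    SET    TASK_ID = TASK_ID + 1
    WHERE  project_id = @ProjectId
      AND  TASK_ID >= @DesiredTaskId;

    -- Insert the new row into the slot that was freed up.
    INSERT INTO dbo.Tasks (TASK_ID, project_id)  -- plus the other columns
    VALUES (@DesiredTaskId, @ProjectId);

    COMMIT TRANSACTION;
END

Because the renumbering and the insert run as one set-based transaction on the server, this avoids the per-row round trips that make the in-app renumbering slow.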
I have a table that uses an INSTEAD OF INSERT trigger. The trigger manipulates the data for each inserted row before inserting it. Because the inserted table is not modifiable, this was implemented using a temp (#) table. At the end of the trigger, a SELECT from the temp table returns the inserted data to the calling client. When I do an insert in SSMS, I can see the data that is returned, and the columns all have names and values. My trigger looks like this:
CREATE TRIGGER [dbo].[RealTableInsteadOfInsert] ON [dbo].[RealTable]
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @lastDigit int;

    IF EXISTS (SELECT * FROM inserted)
    BEGIN
        SELECT *
        INTO   #tempInserted
        FROM   inserted

        .... Logic to add and manipulate column data .....

        INSERT INTO RealTable (id, col1, col2, col3, ....)
        SELECT *
        FROM   #tempInserted;

        SELECT id AS insertId, *
        FROM   #tempInserted;
    END
END
My question is: how can I capture the output of the INSTEAD OF trigger into a table for further processing of the returned data? I can't use an OUTPUT clause on the INSERT statement, since the temp table no longer exists and the calculated/modified data was never in the inserted table. Is my only option to change the trigger to use a global temp table instead of a local one?
You have two main options:
Send data through Service Broker. This is complicated and potentially slow. On the plus side, it lets you do your further processing decoupled from the original transaction, which is nice in certain use cases.
Write the data to a real table. This can also give you transactional decoupling, but unlike Service Broker you don't get automatic activation of the decoupled processing logic, if you want that.
OK, but if you write to a real table and a lot of processes are causing the trigger to fire, how do you find "your" data? Putting aside the fact that a global temp table has the same problem (unless you want to get dynamic), one solution is to use a process-keyed table, as described by Erland Sommarskog in his data-sharing article.
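A minimal sketch of that approach, with hypothetical table and column names; each session tags its rows with its own @@SPID, so concurrent callers never see each other's data:

-- Permanent table, keyed by the session that wrote the rows.
CREATE TABLE dbo.RealTableTriggerOutput
(
    spid     int NOT NULL,   -- @@SPID of the writing session
    insertId int NOT NULL,
    col1     int NULL        -- mirror whichever columns you need downstream
);

-- In the trigger, replace the final SELECT from #tempInserted with:
INSERT INTO dbo.RealTableTriggerOutput (spid, insertId, col1)
SELECT @@SPID, id, col1
FROM   #tempInserted;

-- The caller reads and then clears "its" rows:
SELECT insertId, col1 FROM dbo.RealTableTriggerOutput WHERE spid = @@SPID;
DELETE FROM dbo.RealTableTriggerOutput WHERE spid = @@SPID;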
Returning data from triggers (i.e., to a client) is a bad idea. The ability to do this at all will soon be removed. I know you don't need to do this in your solution; just giving you a heads-up.
I have a table that is pushed to me from another SQL Server. The table is dropped after it is rotated to a "Current Day" table (current data is rotated to a prev-day table before this).
Currently we have jobs, set to run at specific times, that do the "rotating". I had originally created a trigger, but clearly a trigger won't work (as I figured out from the comments), since the DDL operation won't continue its flow until after the trigger completes. It also looks like this is just not possible, since I don't have control over the group that is pushing the data to us.
Resolution: I went to the org that pushes the data and requested they add a step that inserts a record into a TableLog table, and I now fire my trigger off that insert instead.
CREATE TRIGGER InsertTest
ON [pace].[Table_Load_Log]
AFTER INSERT
AS
    IF EXISTS (SELECT Table_name FROM inserted WHERE inserted.Table_name = 'POE_Task_Details_SE_TEMP')
    BEGIN
        --drop table dbo.newtable
        EXEC msdb.dbo.sp_start_job N'Make Pace Tables From Temp Table Push'
    END
GO
There is no way to do this with a trigger. If you need to know when a CRUD operation on a table is complete, you would need to execute a command after the CRUD operation in the same process that launches it.
I'm trying to write a trigger that inserts the data inserted into my local table into the linked server's table. This is what I did:
use [medb]
ALTER TRIGGER [dbo].[trigger1] ON [dbo].[tbl1]
AFTER INSERT
AS
BEGIN
INSERT into openquery(DEV, 'tbl_remotetbl') select * from inserted
END
but it is giving this error:
Cannot process the object "tbl_remotetbl". The OLE DB provider
"MSDASQL" for linked server "DEV" indicates that either the object has
no columns or the current user does not have permissions on that
object.
What seems to be my problem?
Note: I am using SQL Server 2008 R2
Did you try running the command outside the trigger? Did it work?
Here is the syntax I'm using in my OPENQUERY:
INSERT INTO OPENQUERY(LinkedServer_Name,
'select remote_field_1,remote_field_2 from remote_Schema.remote_table')
select local_column1,local_column2
FROM local_Table
Now, with that being said, making this statement work inside a trigger is something I couldn't do. The above statement works perfectly when executed by itself, but once it is placed in a trigger, the entire transaction related to that trigger fails. I mean even the INSERT statement that fires the trigger does not go through, and the main table does not get the data that was meant to be inserted into it.
I ran into the same issue and spent many hours trying to figure out how to make an OPENQUERY statement work inside update/insert/delete triggers, with no success...
So here's an alternate solution; maybe this can fix your issue. This is from a scenario where I needed to pass data from MSSQL to a MySQL DB.
First, create a holding table to store the temporary info: it will only hold the data that needs to be passed to MySQL.
create table holding_table (ID int, value2 int)
The trigger will insert data into the holding table, instead of sending it directly to MySQL:
ALTER TRIGGER [dbo].[temp_data_to_mysql]
ON [dbo].[source_table]
FOR INSERT
AS
BEGIN
    INSERT INTO holding_table (ID, value2)
    SELECT a, b FROM inserted
END
GO
After that, you can just create a job in SQL Server Agent that executes your stored procedure every N minutes.
Hope it helps. I'm aware this is a workaround, but after some investigation and testing, I was unable to make OPENQUERY work when called within a trigger.
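For the Agent job step, a sketch of the procedure it could run; the linked server name MYSQL_LINK and the remote table remote_table(ID, value2) are placeholders, not the poster's actual names:

CREATE PROCEDURE dbo.push_holding_to_mysql
AS
BEGIN
    SET NOCOUNT ON;

    -- Forward everything currently sitting in the holding table.
    INSERT INTO OPENQUERY(MYSQL_LINK, 'select ID, value2 from remote_table')
    SELECT ID, value2
    FROM   holding_table;

    -- Clear what was sent. (Rows arriving between the two statements could
    -- be missed; batching by a key column would close that gap.)
    DELETE FROM holding_table;
END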
I have been searching around but cannot find the correct answer; probably I'm searching wrong because I don't know what to look for :)
Anyway, I have a T-SQL script with a begin and commit transaction. In the transaction I add some columns and also rename some columns.
Just after the rename and add-column statements, I also run some update statements to load data into the newly created columns.
Now the problem is that, for some reason, the update gives an error that it cannot update the given column, as it does not exist (yet???).
My idea is that the script is still working out the rename and the adding of the columns but already goes ahead with the update statements. The table is very big and has a few million records, so I can imagine it takes some time to add and rename the columns.
If I first run the rename and add statements and then run the update statements separately, it does work. So it has to do with some wait time.
Is it possible to force SQL to execute step by step and wait until each statement is completely done before going on to the next?
If you modify columns (e.g. add them), you have to finish the batch before you can continue with updating them. Insert the GO batch separator between the table structure changes and the updates.
To illustrate that, the following code won't work:
create table sometable(col1 int)
go
alter table sometable add col2 varchar(10)
insert into sometable(col2) values ('a')
But inserting GO will make the insert recognise the new column:
create table sometable(col1 int)
go
alter table sometable add col2 varchar(10)
go
insert into sometable(col2) values ('a')
If you do it in code, you may want to put the structure changes and the data migration in separate batches. You can still wrap them in one transaction for data integrity.
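For example (a sketch building on the table above), a transaction spans batches within a session, so the structure change and the data load can still commit or roll back together:

begin transaction
go
alter table sometable add col2 varchar(10)
go
-- new batch: the insert now compiles against the new column
insert into sometable(col2) values ('a')
go
commit transaction
go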
It doesn't have anything to do with wait time; the queries are run in order. It's because all the queries are submitted at once, so when SQL Server tries to validate your update, the column doesn't exist at that point in time. To get around it, you need to send the update in a separate batch, by adding the following between your alter and update statements:
GO
You can try using SELECT FOR UPDATE:
http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_10002.htm#i2130052
This will ensure that your query waits for the lock, but it is recommended to specify WAIT to instruct the database to wait an integer number of seconds, so that it will not wait indefinitely.
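For example, in Oracle (tasks and its id column are hypothetical names), a five-second cap on the wait would look like:

select * from tasks where id = 42 for update wait 5;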
We currently have a C# console app that creates a sync table from triggers, which allows the app to send an insert, update or delete statement to another copy of the database at another site. This keeps the tables in sync; however, the system regularly crashes and we do not have the source code.
I am currently trying to recreate this in SQL Server.
The sync table is populated by a trigger on each table and holds rows like:

TableName   Action   RowNumber   columns_updated
test        Insert   10          0x3
test2       Delete   2
test        update   15          0x7
From this I can generate an insert, update or delete statement which can be run on the remote server, but with thousands of rows it would be far too slow:

INSERT INTO server2.test (column1, column2, column3, column4)
SELECT column1, column2, column3, column4
FROM server1.test
WHERE row = RowNumber
What I would like to do instead is generate the insert statement on server1, then simply run it on server2:

"INSERT INTO table1 (column1, column2, column3, column4) VALUES (110000, 'New Order', 99.00, 'John Smith')"
So, does anybody have a way to write the insert statement for a table row as a string, ready for processing on server2? This select does not have to happen in the trigger.
I.e., read any row in any table and convert it into an INSERT statement?
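As a starting point, a sketch for one known table (all names are illustrative; note that string values need their embedded quotes doubled):

-- Build a runnable INSERT statement for one row as a string.
declare @RowNumber int = 10;
declare @sql nvarchar(max);

select @sql = 'INSERT INTO table1 (column1, column2, column3, column4) VALUES ('
            + cast(column1 as nvarchar(20)) + ', '
            + '''' + replace(column2, '''', '''''') + ''', '
            + cast(column3 as nvarchar(20)) + ', '
            + '''' + replace(column4, '''', '''''') + ''')'
from dbo.test
where row = @RowNumber;

-- @sql can now be written to the sync table as text and executed on server2.

Making this work for any table would mean generating the column list and the casts dynamically from sys.columns.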