I have a self-referencing table. When I delete or update a row's ID, I want the change to cascade to all directly and indirectly affected records.
SQL Server does not allow this type of cyclic cascading, so I decided to use triggers instead, but the triggers fire recursively and get terminated at the 32-level nesting limit, and I don't know the depth of the records in advance. Even disabling the trigger and re-enabling it after the process completes doesn't solve this. How can I construct a SQL statement that achieves this logic?
Because your table has circular references, a single SQL statement won't suffice. Instead, you could write SQL using the following recipe (a sketch follows the list):
Create a temporary table for the IDs you want to process.
Write a query that inserts the referenced IDs into the temp table, making sure you do not insert duplicates.
Put the query in a loop and use a counter to determine whether any rows were added; if no new rows were added, exit the loop.
Run your update (or delete) statement against the IDs in the temp table.
Drop the temp table.
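A minimal sketch of that recipe in T-SQL, assuming a hypothetical table dbo.MyTable(Id, ParentId) where ParentId references Id in the same table (all names here are made up):

-- Collect the starting ID plus everything it reaches, directly or indirectly.
DECLARE @StartId int = 12345;          -- the ID being deleted/updated
CREATE TABLE #Affected (Id int PRIMARY KEY);
INSERT INTO #Affected (Id) VALUES (@StartId);

-- Walk the references level by level until no new IDs turn up.
DECLARE @added int = 1;
WHILE @added > 0
BEGIN
    INSERT INTO #Affected (Id)
    SELECT t.Id
    FROM dbo.MyTable AS t
    INNER JOIN #Affected AS a ON t.ParentId = a.Id
    WHERE NOT EXISTS (SELECT 1 FROM #Affected AS x WHERE x.Id = t.Id);

    SET @added = @@ROWCOUNT;           -- 0 means the set is complete
END;

-- Apply the delete (or update) to every affected row in one statement.
DELETE FROM dbo.MyTable
WHERE Id IN (SELECT Id FROM #Affected);

DROP TABLE #Affected;

The NOT EXISTS guard is what keeps the loop from running forever when the references form a cycle.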
You should use a construct like nested sets for that kind of data: http://mikehillyer.com/articles/managing-hierarchical-data-in-mysql/
Edit: OK, with circular references you'll have a problem... but depending on your data it may still help?
I'm working on updating a legacy stored procedure (which calls several other child stored procedures). Within a transaction, it manipulates data in about a dozen tables and performs lots of calculations in the process, sometimes triggering lock escalation up to a table lock. This process can take 20 minutes or more to complete in some cases. Obviously, locking tables for that long is a big no-no. So I'm working on a two-phase plan: reduce the blocking caused by this sproc in phase 1, then completely rewrite it in phase 2 so that it is more efficient and doesn't take an inordinate amount of time.
In order to reduce the blocking, wherever there is manipulation of the database tables, I plan to move that manipulation into a temporary table. By doing all of the work in the temporary table and then updating the real tables with the final results at the very end of the process, I should be able to significantly reduce the time spent blocking other users. (That's the "quick fix" for phase 1.)
Here's my issue: some of these temp tables might have 100,000 rows or more in them while I use them for various calculations. Because of this, I would like to create indexes on the temp tables to keep performance up. And since these are temp tables that are created within a stored procedure, they need to have unique names to avoid errors if multiple users execute the sproc at the same time. I know that I can manually declare the temp tables using CREATE TABLE statements, and if I do that I can specify an index without a name and let SQL Server create the name for me. What I'm hoping to be able to do is use SELECT * INTO to generate the temp table and find another way to get SQL Server to auto-generate index names.

I'm sure you're asking "why?" My company has several changes in store for the system that I'm working with. If I can manage to use the SELECT INTO method, then if a column gets added or resized or whatever, there won't be an issue with developers needing to know to go back into these stored procedures and change their temp table definitions to match. Using SELECT INTO will automatically keep the temp tables matching the layout of the "real" tables.
So, does anyone know of a way to get SQL Server to auto-generate the name for an index on a temp table (aside from doing it as part of the CREATE TABLE syntax)?
Thank you!
And since these are temp tables that are created within a stored procedure, they need to have unique names to avoid errors if multiple users execute the sproc at the same time.
No, they don't. Each session will have its own temp tables, and they will be cleaned up automatically.
And index names don't have global scope, so each temp table can have the same index names, e.g.
create procedure TempTest
as
begin
    -- #t is private to this session; concurrent executions each get their own copy
    select * into #t from sys.objects
    -- index names are scoped to their table, so 'foo' never collides across sessions
    create index foo on #t(name)
    -- pause long enough for executions from other sessions to overlap
    waitfor delay '00:00:10'
    select * from #t
end
And you can run
exec temptest
go 10
from multiple sessions.
I'm looking for an efficient way of detecting deleted records in production and updating the data warehouse to reflect those deletes because the table is > 12M rows and contains transactional data used for accounting purposes.
Originally, everything was done in a stored procedure by somebody before me and I've been tasked with moving the process to SSIS.
Here is what my test pattern looks like so far:
Inside the Data Flow Task:
I'm using MD5 hashes to speed up the ETL process as demonstrated in this article.
This should give a huge speed boost to the process by not having to store so many rows in memory for comparison purposes and by removing the bulk of conditional split processing at the same time.
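For illustration, the same hash idea expressed in plain T-SQL (a sketch only; the linked article computes the hash inside the data flow instead, and the table/column names below are made up):

-- One MD5 hash over the comparable columns turns change detection into a
-- single column comparison. Note: CONCAT needs SQL Server 2012+, and before
-- SQL Server 2016 HASHBYTES input is capped at 8000 bytes.
SELECT Id,
       HASHBYTES('MD5', CONCAT(Col1, '|', Col2, '|', Col3)) AS RowHash
FROM dbo.SourceTable;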
But the issue is it doesn't account for records that are deleted in production.
How should I go about doing this? It may be simple to you but I'm new to SSIS so I'm not sure how to ask correctly.
Thank you in advance.
The solution I ended up using was to add another Data Flow Task and use the Lookup transformation to find records in our fact table that no longer exist in production. This task comes after all of the inserts and updates, as shown in my question above.
Then we batch-delete the missing records in an Execute SQL Task.
Inside Data Flow Task:
Inside Lookup Transformation:
(note the "Redirect rows to no match output" setting)
So, if the IDs don't match, those rows are redirected to the no-match output, which we point at our staging table. Then we join staging to the fact table and apply the deletions inside an Execute SQL Task, as sketched below.
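Here is a minimal sketch of that Execute SQL Task statement (the fact and staging table names are made up), deleting in batches to keep locking and the transaction log under control:

-- dbo.FactTable is the warehouse fact table; staging.MissingRows holds
-- the no-match output from the Lookup.
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (10000) f
    FROM dbo.FactTable AS f
    INNER JOIN staging.MissingRows AS s ON s.ID = f.ID;

    SET @rows = @@ROWCOUNT;          -- stop once no matching rows remain
END;

TRUNCATE TABLE staging.MissingRows;  -- clear staging for the next run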
I think you'll need to adapt your data flow to use a Merge Join instead of a Lookup.
That way you can see what's new, changed, and deleted.
You'll need to sort both flows by the same joining key (in this case your hash column).
Personally, I'm not sure I'd bother. Instead, I'd simply stage all my prod data and then do a three-way SQL MERGE statement to handle inserts, updates, and deletes in one pass. You can keep your hash column as a joining key if you like.
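A rough sketch of that three-way MERGE (all table and column names are made up; this version joins on the business key and uses the hash only to detect changed rows):

-- staging.Prod is the staged production extract, dbo.FactTable the target,
-- and RowHash is the MD5 hash column discussed in the question.
MERGE dbo.FactTable AS tgt
USING staging.Prod AS src
    ON src.ID = tgt.ID
WHEN MATCHED AND src.RowHash <> tgt.RowHash THEN
    UPDATE SET tgt.Col1 = src.Col1,
               tgt.Col2 = src.Col2,
               tgt.RowHash = src.RowHash
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ID, Col1, Col2, RowHash)
    VALUES (src.ID, src.Col1, src.Col2, src.RowHash)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;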
Just out of curiosity:
Does an UPDATE statement delete and re-insert the row with the amended values in MS SQL Server?
The reason I'm asking is that the OUTPUT clause suggests to me that a row is deleted and re-inserted:
UPDATE dbo.table
SET field1 = 'value'
OUTPUT DELETED.field1, INSERTED.field1
WHERE ID = 12345;
GO
If so, would a separate DELETE FROM followed by an INSERT INTO the same table work at more or less the same speed?
Even if it were physically a delete/insert pair, you could never tell. SQL Server gives you the specified behavior; how SQL Server physically performs the update is irrelevant to the semantics (it's only relevant for performance).
That said, the DELETED and INSERTED tables are specified this way precisely so that you can use both the old and the new values.
Physically, an update is performed in place if possible. If an index must be updated and the key of that index row changes, you get a delete/insert pair. Again, this is undetectable to you.
I am trying to create a generic trigger in SQL Server that can copy all column data from Table A and insert it into the corresponding fields in Table B.
There are a few problems I am facing.
I need this copy to occur under three conditions: INSERT, DELETE, and UPDATE.
The trigger needs to fire after CUD operations, but using AFTER throws a SQL error saying that ntext etc. are not supported in inserted. How do I resolve this error?
An INSTEAD OF trigger, if used, can work for INSERT but not for DELETE. Is there a way to do this for delete operations?
Is there a way I can write generic code inside the trigger that works for all sorts of tables? (We can assume that every column in Table A exists in Table B.)
I am not well versed with triggers, or for that matter DDL, in SQL Server.
I'd appreciate it if someone can provide some solutions.
Thanks
Ben
CREATE TRIGGER (Transact-SQL)
Use nvarchar(max) instead of ntext.
You can have an INSTEAD OF trigger for delete.
You can have one trigger that handles insert/update/delete for one table, but you cannot attach a trigger to more than one table.
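A minimal sketch of such a trigger, assuming hypothetical tables dbo.TableA and dbo.TableB with identical columns (Id, Col1, Col2):

-- One AFTER trigger covers all three operations, because both the
-- 'inserted' and 'deleted' pseudo-tables are available inside it.
CREATE TRIGGER trg_TableA_CopyToB
ON dbo.TableA
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Remove deleted rows and the old image of updated rows from B.
    DELETE b
    FROM dbo.TableB AS b
    INNER JOIN deleted AS d ON d.Id = b.Id;

    -- Add freshly inserted rows and the new image of updated rows.
    INSERT INTO dbo.TableB (Id, Col1, Col2)
    SELECT Id, Col1, Col2
    FROM inserted;
END;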
I erroneously deleted all the rows from an MS SQL 2000 table that is used in merge replication (the table is on the publisher). I then compounded the issue by using a DTS operation to retrieve the rows from a backup database and repopulate the table.
This has created the following issue:
The delete operation marked the rows for deletion on the clients, but the DTS operation bypasses the replication triggers, so the imported rows are not marked for insertion on the subscribers. In effect, the subscribers lose the data although it is on the publisher.
So I thought, "no worries", I will just delete the rows again and then add them correctly via an insert statement, and they will then be marked for insertion on the subscribers.
This is my problem:
I cannot delete the DTSed rows because I get a "Cannot insert duplicate key row in object 'MSmerge_tombstone' with unique index 'uc1MSmerge_tombstone'." error. What I would like to do is somehow delete the rows from the table while bypassing the merge replication trigger. Is this possible? I don't want to remove and redo the replication because the subscribers are 50+ Windows Mobile devices.
Edit: I have tried the TRUNCATE TABLE command. It gives the following error: "Cannot truncate table xxxx because it is published for replication."
Have you tried truncating the table?
You may have to truncate the table and reset the identity field back to 0 if you need the inserted rows to have the same IDs. If not, just truncate and it should be fine.
You could also look into temporarily dropping the unique index and adding it back when you're done.
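A rough sketch of the index suggestion (SQL 2000 syntax; script the existing index definition out first so it can be recreated exactly afterwards):

-- Drop the unique index that is rejecting the duplicate tombstone rows.
DROP INDEX MSmerge_tombstone.uc1MSmerge_tombstone
-- ...perform the DELETE against the published table here...
-- Then clean up any duplicate tombstone rows and recreate the index
-- from the definition you scripted out beforehand.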
Look into sp_mergedummyupdate
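A hedged usage sketch (the table name and key value below are made up):

-- sp_mergedummyupdate makes the merge metadata register a change for one
-- row without actually modifying the data.
DECLARE @guid uniqueidentifier
SELECT @guid = rowguid FROM dbo.MyTable WHERE Id = 12345
EXEC sp_mergedummyupdate @source_object = N'dbo.MyTable', @rowguid = @guid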
Would creating a second table be an option? You could create a second table, populate it with the needed data, add the constraints/indexes, then drop the first table and rename your second table. This should give you the data with the right keys... and it should all consist of SQL statements that are allowed to trickle down through replication. It probably isn't the best for performance... and it definitely imposes some risk.
I haven't tried this first-hand in a replicated environment... but it may at least be worth trying out.
Thanks for the tips... I eventually found a solution:
I deleted the merge delete trigger from the table.
Deleted the DTSed rows.
Recreated the merge delete trigger.
Added my rows correctly using an insert statement.
I was a little worried about fiddling with the merge triggers, but everything appears to be working correctly.
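For reference, a rough T-SQL sketch of those steps (every object name is hypothetical, and the real merge delete trigger must be scripted out before step 1 so it can be recreated exactly in step 3):

-- 1. Remove the merge delete trigger so the deletes bypass tombstoning.
DROP TRIGGER MSmerge_del_MyTable

-- 2. Delete the rows that the DTS import created.
DELETE FROM dbo.MyTable
WHERE ImportedByDts = 1            -- hypothetical way to spot the DTSed rows

-- 3. Recreate the merge delete trigger from the saved script (not shown).

-- 4. Re-add the rows through a normal INSERT so the merge insert trigger
--    marks them for propagation to the subscribers.
INSERT INTO dbo.MyTable (Id, Col1, Col2)
SELECT Id, Col1, Col2
FROM BackupDb.dbo.MyTable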