Best practices for moving data using triggers in SQL Server 2000 - sql-server

I am having trouble moving data from SQL Server 2000 (SP4) to Oracle 10g. The linked server is ready and working; my issue is how to move detail data. My case is the following:
Table A is the master.
Table B is the detail.
Both are related and covered by FOR INSERT triggers.
When the trigger fires on the first insert into the master, everything works. Next, the user inserts one or more details into Table B, and the trigger fires once for each inserted record. My problem is the number of rows sent, for example:
1 master - 1 detail = 2 rows (works normally)
1 master - 2 details = 4 rows (trouble)
In the second case, the SELECT run for each insert duplicates data: with 2 details there should be 2 SELECTs returning 1 row each, but the second SELECT returns doubled rows, because it also picks up the first detail that was inserted.
How can I move one row per insert using triggers on Table B?

Most of the time this boils down to a coding error, and I blogged about it here:
http://www.brentozar.com/archive/2009/01/triggers-need-to-handle-multiple-records/
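In this question's shape, the usual coding error is that the detail trigger re-queries the detail table (which already holds the earlier rows) instead of reading only the inserted pseudo-table. A minimal multi-row-safe sketch; the linked server name (ORA10G), the Oracle schema (ORAUSER), and all table and column names are hypothetical stand-ins:
CREATE TRIGGER dbo.trg_Detail_ToOracle
ON dbo.TableB
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- exactly one output row per inserted detail, however many rows the
    -- INSERT touched; the cross-server insert runs as a distributed
    -- transaction via MSDTC
    INSERT INTO ORA10G..ORAUSER.DETAIL_COPY (MASTER_ID, DETAIL_ID, AMOUNT)
    SELECT i.MasterID, i.DetailID, i.Amount
    FROM inserted AS i;
END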
However, I'm concerned about what's going to happen with rollbacks. If you have a program on your SQL Server that does several things in a row to different tables, and they're encapsulated in different transactions, I can envision scenarios where data will get inserted into Oracle but it won't be in SQL Server. I would advise against using triggers for cross-server data synchronization.
Instead, consider using something like DTS or SSIS to synchronize the two servers regularly.

Related

How to flip tables in SQL Server 2014

I have a requirement wherein there are 2 tables (Staging and Target) in the same database.
Data is always loaded into the Staging table first, and on every subsequent run it is again loaded into Staging first. I then want to flip the tables with a SQL query, so that after data is loaded into Staging:
Staging becomes (flips to) Target
Target becomes (flips to) Staging
So ideally both tables always exist, but at any given time only one of them holds the latest data.
Before opting for the flip-tables approach I tried sp_rename, but that results in deadlocks if someone queries the Target table while it is being dropped and renamed.
Example,
IF OBJECT_ID('[dbo].[Target]','U') IS NOT NULL DROP TABLE [dbo].[Target];
EXEC sp_rename '[dbo].[Staging]','Target';
With the flip approach the chance of a lock should be minimal. I have tried to understand this flip-tables concept, and one approach I have seen is to do it with some kind of flag in SQL, but I am not sure how. Any help on this would be really appreciated.
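One common way to implement that "flag" is a synonym that always points at whichever table holds the latest load; flipping is then a short metadata-only operation. A minimal sketch, with hypothetical tables dbo.Target_A and dbo.Target_B standing in for the two physical copies:
BEGIN TRANSACTION;
IF EXISTS (SELECT 1 FROM sys.synonyms WHERE [name] = N'Target')
    DROP SYNONYM dbo.[Target];
-- repoint at whichever copy the load just finished filling
CREATE SYNONYM dbo.[Target] FOR dbo.Target_B;
COMMIT TRANSACTION;
Queries always read from dbo.Target; during the flip they briefly wait on the schema lock instead of failing against a dropped table.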

Many operations time out and block process in SQL Server

We have a lot of operations that time out in our site log.
After installing Redgate SQL Monitor on the server, we found many blocked processes and occasional deadlocks.
With Redgate we realized the problem is a stored procedure. It's a simple stored procedure that just increments a product's view count (a simple UPDATE):
ALTER PROCEDURE [dbo].[SP_IncreaseProductView]
    @PID int
AS
BEGIN
    UPDATE dbo.stProduct
    SET ViewCount = ViewCount + 1
    WHERE ID = @PID
END
When that stored procedure is disabled everything is fine, but sometimes the blocked-process error comes back.
This table (product) has no triggers, but it is the heart of the system and has 12,000 records.
It has 3 indexes, 1 clustered and 2 non-clustered, and many statistics.
We don't use explicit transactions. The blocking mostly happens on the UPDATE query.
How can I figure out where the problem is?
Sorry for my bad English.
Thanks
Edit:
I think the problem isn't the SP itself but the UPDATE on the product table (that's my opinion). It's a large table. I still get blocked processes when the SP is off, just fewer of them.
We have a lot of SELECTs and UPDATEs on this table.
I also rewrote the view-count increment with LINQ to SQL, but I still get blocked processes, just as when the SP is on.
Edit 2:
I ran Profiler and captured all queries on the product table:
530 SELECTs (most of them joining 2 other tables) and 25 UPDATEs per minute (on the product table alone).
For now, [SP_IncreaseProductView] is off, because when it's on we get blocked processes and operation timeouts about every 10 seconds and the web site stops.
With the SP off, blocked processes still occur, but only roughly 50 per day.
I would go with Ingaz' second solution, but further optimizations or simplifications can be made:
1) Store the view count in a 1:1 companion table for products. This is particularly useful when some queries do not need the view count.
2) Make the view count redundant:
- keep the view count in the product table and read it from there
- also track the view count in another table (just ProductID and ViewCount columns)
- the application updates the second table directly, asynchronously
- a job updates the product table based on data from the second table
This ensures that locking affects the product table much less than independent updates would.
It is expected: I suppose you have a lot of updates hitting a single small (12K rows) table.
You can work around your problem:
Put a ROWLOCK hint in your UPDATE
Change the database option to READ_COMMITTED_SNAPSHOT
Be warned though: both can create other problems.
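As a minimal sketch of those two workarounds (YourDb stands in for the real database name):
-- row-level locking hint inside the existing procedure
UPDATE dbo.stProduct WITH (ROWLOCK)
SET ViewCount = ViewCount + 1
WHERE ID = @PID
-- row versioning for the whole database; needs a moment with no other
-- active connections to switch on
ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON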
More complex recipe to eliminate blocking completely
Not for the faint of heart (a sketch of the whole recipe follows below):
Create a table dbo.stProduct_Increment.
Modify [SP_IncreaseProductView] to INSERT into the increment table.
Create a periodic task that UPDATEs dbo.stProduct and clears the increment table.
Create a view that combines stProduct and stProduct_Increment.
Point all SELECT statements for stProduct at the created view.
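Under assumptions about the schema (ID and ViewCount come from the procedure above; every other name is hypothetical, and only the relevant columns are shown), the recipe could look like this:
CREATE TABLE dbo.stProduct_Increment (ProductID int NOT NULL);
GO
-- the procedure becomes append-only, so it no longer contends with readers
ALTER PROCEDURE [dbo].[SP_IncreaseProductView]
    @PID int
AS
BEGIN
    INSERT INTO dbo.stProduct_Increment (ProductID) VALUES (@PID);
END
GO
-- readers see the base count plus the not-yet-folded-in increments
CREATE VIEW dbo.vw_stProduct
AS
SELECT p.ID,
       p.ViewCount + ISNULL(i.Pending, 0) AS ViewCount
FROM dbo.stProduct AS p
LEFT JOIN (SELECT ProductID, COUNT(*) AS Pending
           FROM dbo.stProduct_Increment
           GROUP BY ProductID) AS i
    ON i.ProductID = p.ID;
GO
-- periodic job: consume each increment exactly once, then fold it in
CREATE TABLE #batch (ProductID int NOT NULL);
BEGIN TRANSACTION;
DELETE FROM dbo.stProduct_Increment
OUTPUT deleted.ProductID INTO #batch (ProductID);
UPDATE p
SET p.ViewCount = p.ViewCount + b.Pending
FROM dbo.stProduct AS p
JOIN (SELECT ProductID, COUNT(*) AS Pending
      FROM #batch
      GROUP BY ProductID) AS b
    ON b.ProductID = p.ID;
COMMIT TRANSACTION;
The DELETE ... OUTPUT pattern matters here: it captures exactly the rows being removed, so increments inserted between the job's statements are never lost.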

How to create a trigger to populate a table from another table in a different database

Basically what I'm trying to do is create a trigger so that when a new record is inserted into a table in database 1, and it falls into the category of data I need for database 2, it automatically populates the table in database 2 without me having to update it manually.
Right now I go into the table in database 1, sort by the category I need, and copy the data into the table in database 2.
I tried to make this process easier with a SELECT query that pulls the columns I need from database 1 into database 2. That works fine, but it overwrites what I already have, and I basically have to recreate the table every time.
So after all that rambling, here is exactly what I need to know: is there a way to create a trigger so that when a new line item is inserted into database 1 with a tag matching the type of material I need, it is transferred to database 2? On top of that, I only need to transfer 2 columns from database 1 to database 2.
I would post sample code, but I have no idea where to start on this.
I suggest you look into Service Broker messaging. We use it quite a bit and it works quite well. You can send messages to the other database with the data that needs to be inserted and allow the second database to do all the work. This will alleviate the worries about the second database being offline or causing an error which rolls back into your trigger. If the second database is unavailable the messages will queue up in your database until it can send them. This isn't the easiest thing to set up but is a way to keep the two databases from being so closely tied together.
Service Broker
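Heavily condensed, the initiating side could look something like this (every name here is hypothetical, the target database's queue, service, and activation procedure are omitted, and a real setup needs error handling):
-- one-time setup in the sending database
CREATE MESSAGE TYPE RowMsg VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT RowContract (RowMsg SENT BY INITIATOR);
CREATE QUEUE dbo.SenderQueue;
CREATE SERVICE SenderService ON QUEUE dbo.SenderQueue (RowContract);

-- inside the trigger, instead of a cross-database INSERT:
DECLARE @h uniqueidentifier, @body xml;
SET @body = (SELECT col1, col2 FROM inserted FOR XML PATH('row'));
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE SenderService
    TO SERVICE 'TargetService'
    ON CONTRACT RowContract
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h MESSAGE TYPE RowMsg (@body);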
I am unclear about the logic in your selection, but if you want to save a copy of what was just inserted into table1 into a table (table2) in another database, using a trigger, you can try this:
create trigger trig1 on dbo.table1
after insert as
insert into database2.dbo.table2 (col1, col2)
select col1, col2 from inserted
You could use an AFTER INSERT trigger like this (note that a trigger must live in the same database as its table, so run this in FirstDB rather than giving the trigger a three-part name):
CREATE TRIGGER [dbo].[YourTrigger]
ON [dbo].[Table]
AFTER INSERT
AS
BEGIN
    INSERT INTO [OtherDB].[dbo].[Table]
    SELECT ... FROM inserted
END
I recommend you consider non-trigger alternatives as well, though. Cross-database triggers can be risky (what if the other database is offline, etc.).
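Folding the question's category filter into that shape, a minimal sketch (the table, column, and tag names are hypothetical stand-ins for yours):
CREATE TRIGGER dbo.trg_CopyByCategory
ON dbo.SourceTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- copy only the matching rows, and only the two needed columns
    INSERT INTO Database2.dbo.TargetTable (ColA, ColB)
    SELECT i.ColA, i.ColB
    FROM inserted AS i
    WHERE i.MaterialTag = 'CategoryYouNeed';
END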

Alternative Method to Polling/Trigger a Table in Oracle?

I have a database on Oracle 11g with a table that is updated by external users. I want to catch inserts, updates, and deletes on this table in order to replicate the changes to a table in another database, and I am researching different methods. So far I have tested polling (a job that checks every minute for updates, inserts, or deletes on the table) and triggers (fired on each update, insert, or delete). Are there alternative methods?
I found AQ (Oracle Advanced Queuing), DBMS_PIPE, and the Oracle SNMP Agent Integrator polling activity, but I don't know whether they fit this case.
It depends.
Polling or triggers are often all you need, depending on the volume of data involved and the frequency of inserts/updates/deletes.
For example, the polling method might be as simple as adding a column which is set to 1 by default, and updated to NULL when the row is "consumed" by the replication code. A trigger on the table would set it back to 1 if a row is updated. An index on this column would be lightweight (the index would only include entries for rows where the column is 1) and therefore fast to query. You'd need another table to handle deletes, though.
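A sketch of that flag approach (table and column names are hypothetical; the index stays small because Oracle B-tree indexes do not store entries whose key columns are all NULL):
ALTER TABLE source_table ADD (pending NUMBER(1) DEFAULT 1);
CREATE INDEX ix_source_pending ON source_table (pending);

CREATE OR REPLACE TRIGGER trg_source_repending
BEFORE UPDATE ON source_table
FOR EACH ROW
BEGIN
  :NEW.pending := 1;  -- any update makes the row "unconsumed" again
END;
/

-- replication job: after copying the rows WHERE pending = 1, mark them consumed
UPDATE source_table SET pending = NULL WHERE pending = 1;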
The trigger method would merely write insert/update/delete rows into a log table of some sort, which would then get purged periodically by a job.
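The log-table variant could look roughly like this, again with hypothetical names and assuming a numeric primary key id:
CREATE TABLE source_change_log (
  id           NUMBER      NOT NULL,   -- PK of the changed row
  change_type  VARCHAR2(1) NOT NULL,   -- 'I', 'U' or 'D'
  changed_at   DATE        DEFAULT SYSDATE NOT NULL
);

CREATE OR REPLACE TRIGGER trg_source_log
AFTER INSERT OR UPDATE OR DELETE ON source_table
FOR EACH ROW
BEGIN
  IF DELETING THEN
    INSERT INTO source_change_log (id, change_type) VALUES (:OLD.id, 'D');
  ELSIF UPDATING THEN
    INSERT INTO source_change_log (id, change_type) VALUES (:NEW.id, 'U');
  ELSE
    INSERT INTO source_change_log (id, change_type) VALUES (:NEW.id, 'I');
  END IF;
END;
/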
For heavier volumes, solutions include Oracle GoldenGate and Oracle Streams: http://www.oracle.com/technetwork/database/focus-areas/data-integration/index.html

MS SQL Server performance: UPDATEing a single column vs all columns

MS SQL Srvr 2005/2008 on Win Srvr 2003+
I am only updating 1 row at a time, the UPDATE is in response to a user change on a web form.
I am updating a few columns in a table using the PK. The table has 95 columns. Typically 1 FK column and 1 or 2 other columns will be updated. The table has 6 FK's.
Is it of benefit for me to dynamically generate the UPDATE statement so that only the changed columns appear in the SET portion, or should I stick with the current stored procedure, which uses a parameterized update of all the columns?
Currently, and not subject to immediate change, the data from the web form is posted back to the server and is available for the update. I can't jump to an AJAX scenario where only changed data is posted back to the server from the client browser at this point.
Thanx,
G
SQL Server reads and writes "pages" that consist of 8KB of data. Typically, a page will contain one or more rows.
Since disk I/O is the expensive part of an update, the cost of updating half the columns is roughly the same as the cost of updating all of them: either way, the same 8KB page gets written to disk.
There's another aspect, that usually doesn't come into play because SQL Server writes in 8kb pages. But imagine your row looks like this:
id int identity
col1 varchar(50)
col2 varchar(50)
Now if you update col1 to be 5 bytes longer, col2 has to be moved forward by five bytes. So even if you don't update col2, it will still have to be written to disk.
In terms of performance, it is better to update multiple columns in a single UPDATE than to update single columns in multiple UPDATEs. Once the database has locked a row for updating, the time spent changing the values is not a performance issue; on the other hand, the time it takes to acquire the lock on a row can cause performance problems, and it gets worse when multiple connections try to access the same information. I would recommend staying as you are, with the parameterized stored procedure, rather than updating single columns one at a time.
