I have a SQL Server 2012 UPDATE trigger that fires when data is updated in a table of the Microsoft Project Server database. :)
I am using the following statement inside the trigger to get the updated row:
SELECT @dataId = INSERTED.Id FROM INSERTED;
The problem is that, due to the behaviour of Microsoft Project Server, ALL tasks within a project are updated even if only one task was changed (each task is one row in the database).
I know how to get the id of the item that should be updated, but I don't know how to compare the updated rows with the data still in the database from within the UPDATE trigger.
To find which rows actually changed, use this inside the trigger:
SELECT *
FROM INSERTED AS I
JOIN DELETED AS D
    ON I.KeyCol = D.KeyCol
WHERE NOT EXISTS (
    SELECT I.Col1, I.Col2, I.Col3, I.Col4 ...
    INTERSECT
    SELECT D.Col1, D.Col2, D.Col3, D.Col4 ...
)
The NOT EXISTS ... INTERSECT pattern already handles NULL comparisons for you.
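For context, here is a minimal sketch of a complete trigger built around this pattern. The table, key, and column names (dbo.Tasks, Id, Name, StartDate, FinishDate) are placeholders, not the actual Project Server schema:

```sql
-- Sketch: collect only the rows whose compared columns actually changed.
CREATE TRIGGER trg_Tasks_Update ON dbo.Tasks
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    SELECT I.Id
    INTO #ChangedTasks
    FROM INSERTED AS I
    JOIN DELETED AS D ON I.Id = D.Id
    WHERE NOT EXISTS (
        SELECT I.Name, I.StartDate, I.FinishDate
        INTERSECT
        SELECT D.Name, D.StartDate, D.FinishDate
    );

    -- ...act only on the ids in #ChangedTasks here...
END
```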
I have a table with almost 45 million rows. I was updating a field of it with the query:
update tableName set columnX = Right(columnX, 10)
I didn't open a transaction or commit; I just ran the query directly. After about an hour of execution, a power failure occurred. Now when I try to run a SELECT it takes a very long time and returns nothing. Even DROP TABLE doesn't work. I don't know what the problem is.
SQL Server is rolling back your UPDATE statement. You can monitor the status of the rollback in several ways:
1. KILL <session_id> WITH STATUSONLY (this only reports rollback progress; it does not kill the session again)
2. By querying the DMV sys.dm_exec_requests:
select
    der.session_id,
    der.command,
    der.status,
    der.percent_complete
from sys.dm_exec_requests as der
where der.command in ('KILLED/ROLLBACK', 'ROLLBACK')
Don't restart SQL Server, as this may prolong the rollback.
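The first option looks like this in practice (the session id 54 here is purely hypothetical; use the session_id returned by the DMV query above):

```sql
-- Reports rollback progress for a session that is already rolling back,
-- as a message along the lines of "Estimated rollback completion: 80%.
-- Estimated time remaining: 10 seconds." It does not issue a new kill.
KILL 54 WITH STATUSONLY;
```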
Yesterday I ran into this strange (never seen before) situation while applying a patch to several databases from SQL Server Management Studio. I needed to update a large table (5 million+ records) in each of several databases. The statement I issued per database was something like:
set rowcount 100000
select 1
while @@rowcount > 0
    update table
    set column = newval
    where column is null
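As an aside, SET ROWCOUNT no longer affects INSERT/UPDATE/DELETE on newer SQL Server versions, so the same batching idea is usually written with UPDATE TOP. A minimal sketch, with placeholder table and column names:

```sql
-- Batch the update 100K rows at a time so each transaction stays small.
WHILE 1 = 1
BEGIN
    UPDATE TOP (100000) dbo.MyTable
    SET MyColumn = 'newval'
    WHERE MyColumn IS NULL;

    -- Stop once a batch updates nothing.
    IF @@ROWCOUNT = 0 BREAK;
END
```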
Those statements ran for several seconds (anywhere between 2 and 30 seconds). On a second tab I ran this query:
select count(*)
from table
where column is null
Just to check how many rows were already processed. Impatient as I am, I pressed F5 for the count(*) statement every 20 seconds or so. The expected behavior was that I had to wait anywhere from 0 to 30 seconds before the count(*) was calculated. I expected this, since the update statement was running and the count(*) had to wait in line behind it.
But then, after ten databases or so, this strange thing happened. I opened the next database (switched via USE) and after pressing F5 the count(*) responded immediately. Pressing F5 again gave another immediate result, but without any progress; again and again, the count(*) did not change. However, the update was running.
Then, after the n-th press, the count(*) dropped by exactly 100K records. I pressed F5 again: immediate result, and so on and so on. WHY wasn't the count(*) waiting? I had not enabled dirty reads, or...
Only this database gave me this behavior. And I have really no clue what is causing this.
Between switching databases I did not close or open any other tab, so the only thing I can imagine is that the connection parameters for that one database are different. However, I have looked them over... and I can't find a difference.
As I understand your description, you are not getting the count() result promptly on one specific database. If that is the case, please use the script below instead of your count():
SELECT COUNT(1)
FROM Table WITH (NOLOCK)
WHERE Column IS NULL
The blocking you see is caused by locks held by the open transaction. WITH (NOLOCK) performs a dirty read, so the count returns immediately instead of waiting for those locks to be released, at the cost of reading uncommitted data. (COUNT(1) and COUNT(*) produce the same plan, so that part of the change makes no real difference.)
I am new to SQL, so bear with me. I want to insert a record into another table when a datetime column equals the system time. I know this is effectively an infinite loop, and I am not sure how else to handle what I am trying to solve.
INSERT INTO dbo.Que
(Name, Time)
SELECT ProspectName, ProspectDate
FROM myProspects where ProspectDate = CURRENT_TIMESTAMP
I need to place a phone call at a certain time. I need to insert the record into another table to execute the call when the time = now. If you have a better way of handling this, please tell me.
Thanks
If you are using SQL Server, you can use the getdate() function.
You would need to store the create date in a column of the call on insert, also using getdate(), or System.DateTime.Now if using .NET.
e.g. to select all calls that happened in the last x amount of time in SQL Server (subtracting .1 subtracts a tenth of a day, i.e. 2.4 hours):
select * from calls where createdate > getdate() - .1
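Since an exact equality match on a datetime will almost never fire, one common approach is to poll on a schedule (e.g. a SQL Server Agent job every minute) and pick up everything whose time has arrived but is not yet queued. A sketch using the question's table names (the NOT EXISTS de-duplication check is my assumption, not part of the original schema):

```sql
-- Queue every prospect whose scheduled time has passed
-- and that has not already been queued.
INSERT INTO dbo.Que (Name, Time)
SELECT p.ProspectName, p.ProspectDate
FROM dbo.myProspects AS p
WHERE p.ProspectDate <= GETDATE()
  AND NOT EXISTS (SELECT 1
                  FROM dbo.Que AS q
                  WHERE q.Name = p.ProspectName
                    AND q.Time = p.ProspectDate);
```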
June 29, 2010 - I had an uncommitted action from a previous delete statement. I committed the action and then got another error about conflicting primary IDs, which I can fix. So the moral of the story: commit your actions.
Original Question -
I'm trying to run this query:
with spd_data as (
select *
from openquery(IRPROD,'select * from budget_user.spd_data where fiscal_year = 2010')
)
insert into [IRPROD]..[BUDGET_USER].[SPD_DATA_BUD]
(REC_ID, FISCAL_YEAR, ENTITY_CODE, DIVISION_CODE, DEPTID, POSITION_NBR, EMPLID,
spd_data.NAME, JOB_CODE, PAY_GROUP_CODE, FUND_CODE, FUND_SOURCE, CLASS_CODE,
PROGRAM_CODE, FUNCTION_CODE, PROJECT_ID, ACCOUNT_CODE, SPD_ENC_AMT, SPD_EXP_AMT,
SPD_FB_ENC_AMT, SPD_FB_EXP_AMT, SPD_TUIT_ENC_AMT, SPD_TUIT_EXP_AMT,
spd_data.RUNDATE, HOME_DEPTID, BUD_ORIG_AMT, BUD_APPR_AMT)
SELECT REC_ID, FISCAL_YEAR, ENTITY_CODE, DIVISION_CODE, DEPTID, POSITION_NBR, EMPLID,
spd_data.NAME, JOB_CODE, PAY_GROUP_CODE, FUND_CODE, FUND_SOURCE, CLASS_CODE,
PROGRAM_CODE, FUNCTION_CODE, PROJECT_ID, ACCOUNT_CODE, SPD_ENC_AMT, SPD_EXP_AMT,
SPD_FB_ENC_AMT, SPD_FB_EXP_AMT, SPD_TUIT_ENC_AMT, SPD_TUIT_EXP_AMT,
spd_data.RUNDATE, HOME_DEPTID, lngOrig_amt, lngAppr_amt
from spd_data
left join Budgets.dbo.tblAllPosDep on project_id = projid
and job_code = jcc and position_nbr = psno
and emplid = empid
where OrgProjTest = 'EQUAL';
Basically I'm selecting a table from IRPROD (an oracle db), joining it with a local table, and inserting the results back on IRPROD.
The problem I'm having is that while the query runs, it never stops. I've let it run for an hour and it keeps going until I cancel it. I can see on a bandwidth monitor on the SQL Server data going in and out. Also, if I just run the select part of the query it returns the results in 4 seconds.
Any ideas why it's not finishing? I've got other queries set up in a similar manner and do not have any problems (granted, those insert from local tables and not a remote table).
You didn't include any volume metrics, but I would recommend using a temporary table to gather the results.
Then try to insert the first couple of rows. If this succeeds, you have a strong indicator that everything is fine.
Try breaking each insert down by project_id or emplid to avoid large transaction logs.
You should also consider crafting a bulk batch process.
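A minimal sketch of the temp-table suggestion, reusing the question's OPENQUERY (the staging-table name #spd_stage is mine):

```sql
-- Pull the remote Oracle rows into a local staging table first,
-- so the join and the remote insert are two separate, measurable steps.
SELECT *
INTO #spd_stage
FROM OPENQUERY(IRPROD,
    'select * from budget_user.spd_data where fiscal_year = 2010');

-- Then run the original INSERT INTO [IRPROD]..[BUDGET_USER].[SPD_DATA_BUD]
-- selecting FROM #spd_stage instead of the CTE, ideally filtered to a
-- single project_id at first as a small test batch.
```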
If you run just the select without the insert, how many records are returned? Does the data look right or are there multiple records due to the join?
Are there triggers on the table you are inserting into? If many records are returned and the table has triggers designed to run row-by-row, this could be slowing things down. You are also sending data to another server, so the network pipeline could be the bottleneck. It might be better to send the budget data to the Oracle server and do the insert from there, rather than from the SQL Server side.