I have a table with almost 45 million rows. I was updating a field of it with the query:
update tableName set columnX = Right(columnX, 10)
I didn't open a transaction or commit; I just ran the query directly. About an hour into the execution a power failure occurred, and now when I try to run a SELECT it takes too long and returns nothing. Even DROP TABLE doesn't work. I don't know what the problem is.
SQL Server is rolling back your UPDATE statement. You can monitor the status of the rollback in several ways:
1. With KILL ... WITH STATUSONLY, which only reports progress for a session that is already being killed or rolled back:
KILL <session_id> WITH STATUSONLY
2. By using a DMV:
select
der.session_id,
der.command,
der.status,
der.percent_complete
from sys.dm_exec_requests as der
where command IN ('KILLED/ROLLBACK', 'ROLLBACK')
Don't try to restart SQL Server, as that may prolong the rollback.
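If you also want a rough idea of how long the rollback will take, sys.dm_exec_requests exposes estimated_completion_time (in milliseconds) alongside percent_complete. A sketch using those standard DMV columns; the estimate is approximate and can fluctuate:

```sql
-- Progress and remaining-time estimate for a rolling-back session.
SELECT
    session_id,
    command,
    status,
    percent_complete,
    estimated_completion_time / 1000.0 AS est_seconds_remaining
FROM sys.dm_exec_requests
WHERE command IN ('KILLED/ROLLBACK', 'ROLLBACK');
```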
Problem:
A .NET application during business transaction executes a query like
UPDATE Order
SET Description = 'some new description'
WHERE OrderId = @p1 AND RowVersion = @p2
This query hangs until timeout (several minutes) and then I get an exception:
SqlException: Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
It reproduces when the database is under heavy load (several times per day).
I need to detect the cause of the lock of the query.
What I've tried:
Exploring Activity Monitor - it shows that the query is hanging on a lock. Filtering by head blocker does not give much; it changes frequently.
Analyzing a SQL script that returns data similar to Activity Monitor - almost the same result as looking at Activity Monitor. Chasing blocking_session_id leads to some session that is awaiting a command or executing SQL I can't relate to the Order table. Running the same script a second later gives a different session. I also tried some other queries/stored procedures from this article, with no result.
Building the standard SQL Server report for locked/problem transactions results in errors like "Max recursion exhausted" or a local OutOfMemory exception (I have 16 GB RAM).
Database Details
Version: SQL Server 2016
Approximate number of concurrent queries per second by apps to database: 400
Database size: 1.5 TB
Transaction isolation level: Read Uncommitted for read-only transactions, Serializable for transactions with modifications
I'm absolutely new to this kind of problems, so I have missed a lot for sure.
Any help or direction would be great!
In case anyone is interested, I have found this particular query especially useful:
SELECT tl.resource_type
,OBJECT_NAME(p.object_id) AS object_name
,tl.request_status
,tl.request_mode
,tl.request_session_id
,tl.resource_description
,(select text from sys.dm_exec_sql_text(r.sql_handle))
FROM sys.dm_tran_locks tl
INNER JOIN sys.dm_exec_requests r ON tl.request_session_id=r.session_id
LEFT JOIN sys.partitions p ON p.hobt_id = tl.resource_associated_entity_id
WHERE tl.resource_database_id = DB_ID()
AND OBJECT_NAME(p.object_id) = '<YourTableName>'
ORDER BY tl.request_session_id
It shows the transactions that have acquired locks on <YourTableName> and the query each one is executing now.
Try the sys.dm_exec_requests view, filtering on the blocking_session_id and wait_time columns.
On my test server there is a large query that is running (which is okay), but it is a multi-line query. E.g. in SSMS I told it to run something like:
begin transaction;
query;
query;
query;
query;
commit;
I want to see which query within the list is executing. Selecting text from sys.dm_exec_sql_text returns the entire statement, not the particular command that is executing within the list. Is there a way to view the individual commands that are being processed?
In case it matters (sometimes it does), this is running on a SQL Azure instance.
Use DBCC INPUTBUFFER
DBCC INPUTBUFFER(<session_id>)
It will display the last batch submitted on that session.
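Note that DBCC INPUTBUFFER returns the whole batch, not the individual statement. To see which statement within the batch is currently executing, sys.dm_exec_requests exposes statement_start_offset and statement_end_offset (byte offsets into the batch text), which also works on Azure SQL. A sketch of the standard SUBSTRING pattern:

```sql
-- Extract the currently executing statement from the full batch text.
-- Offsets are byte offsets into NVARCHAR text, hence the division by 2;
-- statement_end_offset = -1 means "to the end of the batch".
SELECT
    r.session_id,
    SUBSTRING(
        t.text,
        r.statement_start_offset / 2 + 1,
        (CASE WHEN r.statement_end_offset = -1
              THEN DATALENGTH(t.text)
              ELSE r.statement_end_offset
         END - r.statement_start_offset) / 2 + 1
    ) AS current_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id = 57;  -- the session id you are watching
```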
Here you can find my complete set of queries useful for showing running transactions, wait events, and open transactions with plans and SQL text:
http://zaboilab.com/sql-server-toolbox/queries-to-see-rapidly-what-your-sql-server-is-doing-now
Yesterday I hit this strange (never seen before) situation when applying a patch to several databases from within SQL Server Management Studio. I needed to update a large table (5 million+ records) in several databases. The statement I issued per database was something like:
set rowcount 100000
select 1
while @@rowcount > 0
update table
set column = newval
where column is null
Those statements ran for several seconds (anywhere between 2 and 30 seconds). On a second tab I ran this query:
select count(*)
from table
where column is null
Just to check how many rows were already processed. Impatient as I am, I pressed F5 for the count(*) statement every 20 seconds or so. The expected behavior was that I had to wait anywhere from 0 to 30 seconds before the count(*) was calculated. I expected this, since the update statement was running and the count(*) was next in line.
But then, after ten databases or so, this strange thing happened. I switched to the next database (via USE) and after pressing F5 the count(*) responded immediately. Pressing F5 again gave an immediate result but no progress; again and again the count(*) did not change, even though the update was running.
Then, after the n-th press, the count(*) dropped by exactly 100K records. I pressed F5 again: immediate result, and so on. Why wasn't the count(*) waiting? I hadn't allowed dirty reads, or...
Only this database gave me this behavior. And I have really no clue what is causing this.
Between switching databases I did not close or open any other tab, so the only thing I can imagine is that the connection parameters for that one database are different. However, looking them over, I can't find a difference.
As I understand your description, you were not getting the count() result promptly on one specific database. If that is the case, please use the script below instead of your count():
SELECT
COUNT(1)
FROM Table WITH(NOLOCK)
WHERE Column IS NULL
COUNT(1) and COUNT(*) are equivalent here; the part that matters is the WITH(NOLOCK) hint. If your session is blocked by locks held by the running transaction, NOLOCK lets the read skip them and return immediately. Be aware this is a dirty read, so the count may include uncommitted changes.
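Side note on the batching loop in the question above: SET ROWCOUNT is deprecated for affecting INSERT/UPDATE/DELETE statements, and the same chunked update can be written with UPDATE TOP, which keeps each transaction (and its log usage) small. A sketch with placeholder table/column names from the question:

```sql
-- Chunked backfill: repeat until no more rows qualify.
-- dbo.[table], [column], and 'newval' are placeholders.
WHILE 1 = 1
BEGIN
    UPDATE TOP (100000) dbo.[table]
    SET [column] = 'newval'
    WHERE [column] IS NULL;

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to update
END
```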
I have this query:
UPDATE Table SET Field = @value WHERE id = @id
id is the primary key.
When I execute this query against an arbitrary record, it works fine and returns almost immediately. But when I execute it against id 178413 specifically, it runs forever, until a timeout is triggered.
No queries should be locking this record for more than a few milliseconds.
The server runs SQL Server 2012.
What can be happening?
I found the problem.
Apparently one of the clients had crashed and kept its database connection open, probably in the middle of a transaction.
As soon as I restarted the faulty program, the record became updatable again.
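For future reference, this kind of orphaned session can usually be identified without restarting anything: DBCC OPENTRAN reports the oldest active transaction in the current database (including its SPID and start time), and joining sys.dm_exec_sessions to sys.dm_tran_session_transactions shows sessions that are idle yet still hold an open transaction, the classic signature of a crashed client. A sketch:

```sql
-- Oldest active transaction in the current database.
DBCC OPENTRAN;

-- Sessions holding an open transaction while not running anything:
-- likely a client that never committed or rolled back.
SELECT s.session_id, s.host_name, s.program_name, s.last_request_end_time
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_tran_session_transactions AS st
    ON st.session_id = s.session_id
WHERE s.status = 'sleeping';
```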
June 29, 2010 - I had an uncommitted action from a previous delete statement. I committed the action and then got another error about conflicting primary IDs, which I can fix. So, moral of the story: commit your actions.
Original Question -
I'm trying to run this query:
with spd_data as (
select *
from openquery(IRPROD,'select * from budget_user.spd_data where fiscal_year = 2010')
)
insert into [IRPROD]..[BUDGET_USER].[SPD_DATA_BUD]
(REC_ID, FISCAL_YEAR, ENTITY_CODE, DIVISION_CODE, DEPTID, POSITION_NBR, EMPLID,
spd_data.NAME, JOB_CODE, PAY_GROUP_CODE, FUND_CODE, FUND_SOURCE, CLASS_CODE,
PROGRAM_CODE, FUNCTION_CODE, PROJECT_ID, ACCOUNT_CODE, SPD_ENC_AMT, SPD_EXP_AMT,
SPD_FB_ENC_AMT, SPD_FB_EXP_AMT, SPD_TUIT_ENC_AMT, SPD_TUIT_EXP_AMT,
spd_data.RUNDATE, HOME_DEPTID, BUD_ORIG_AMT, BUD_APPR_AMT)
SELECT REC_ID, FISCAL_YEAR, ENTITY_CODE, DIVISION_CODE, DEPTID, POSITION_NBR, EMPLID,
spd_data.NAME, JOB_CODE, PAY_GROUP_CODE, FUND_CODE, FUND_SOURCE, CLASS_CODE,
PROGRAM_CODE, FUNCTION_CODE, PROJECT_ID, ACCOUNT_CODE, SPD_ENC_AMT, SPD_EXP_AMT,
SPD_FB_ENC_AMT, SPD_FB_EXP_AMT, SPD_TUIT_ENC_AMT, SPD_TUIT_EXP_AMT,
spd_data.RUNDATE, HOME_DEPTID, lngOrig_amt, lngAppr_amt
from spd_data
left join Budgets.dbo.tblAllPosDep on project_id = projid
and job_code = jcc and position_nbr = psno
and emplid = empid
where OrgProjTest = 'EQUAL';
Basically I'm selecting a table from IRPROD (an oracle db), joining it with a local table, and inserting the results back on IRPROD.
The problem I'm having is that while the query runs, it never stops. I've let it run for an hour and it keeps going until I cancel it. I can see on a bandwidth monitor on the SQL Server data going in and out. Also, if I just run the select part of the query it returns the results in 4 seconds.
Any ideas why it's not finishing? I've got other queries set up in a similar manner and don't have any problems (granted, those insert from local tables and not a remote table).
You didn't include any volume metrics, but I would recommend using a temporary table to gather the results.
Then try to insert the first couple of rows. If this succeeds, you'll have a strong indicator that everything is fine.
Try to break each insert task down by project_id or emplid to avoid large transaction logs.
You should also think about crafting a bulk batch process.
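The staging-table idea can be sketched like this, reusing the OPENQUERY text from the question (untested against the actual schema):

```sql
-- Pull the remote rows once into a local temp table, so the join and
-- the insert back to IRPROD are decoupled from the remote read.
SELECT *
INTO #spd_data
FROM OPENQUERY(IRPROD,
     'select * from budget_user.spd_data where fiscal_year = 2010');

-- Sanity check the volume before inserting anything back:
SELECT COUNT(*) FROM #spd_data;
```

The original INSERT ... SELECT can then read from #spd_data instead of the CTE, optionally filtered by project_id or emplid so each batch sent back to the linked server stays small.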
If you run just the select without the insert, how many records are returned? Does the data look right or are there multiple records due to the join?
Are there triggers on the table you are inserting into? If you are returning many records and there are triggers on the table designed to run row by row, this could be slowing things down. You are also sending to another server, so the network pipeline could be what is slowing you down. Maybe it would be better to send the budget data to the Oracle server and do the insert from there rather than from SQL Server.