Finding what is writing to the Transaction Log in SQL Server?

Is there a way to see what is writing to the transaction log?
I have a log file that has grown 15 Gigs in the last 20 minutes. Is there a way for me to track down what is causing this?

Activity monitor will show you what is executing.
DBCC OPENTRAN will show the oldest open transaction.
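For example, a minimal check of the oldest open transaction (the database name below is just a placeholder):
-- Oldest active transaction in the current database
DBCC OPENTRAN;
-- Or for a specific database, with the output as a result set
DBCC OPENTRAN ('YourDatabase') WITH TABLERESULTS;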
There is also the dynamic management view sys.dm_tran_active_transactions. For example, here's a query that shows you log file usage by process:
-- This query returns log file space used by all running transactions.
select
SessionTrans.session_id as [SPID],
enlist_count as [Active Requests],
ActiveTrans.transaction_id as [ID],
ActiveTrans.name as [Name],
ActiveTrans.transaction_begin_time as [Start Time],
case transaction_type
when 1 then 'Read/Write'
when 2 then 'Read-Only'
when 3 then 'System'
when 4 then 'Distributed'
else 'Unknown - ' + convert(varchar(20), transaction_type)
end as [Transaction Type],
case transaction_state
when 0 then 'Uninitialized'
when 1 then 'Not Yet Started'
when 2 then 'Active'
when 3 then 'Ended (Read-Only)'
when 4 then 'Committing'
when 5 then 'Prepared'
when 6 then 'Committed'
when 7 then 'Rolling Back'
when 8 then 'Rolled Back'
else 'Unknown - ' + convert(varchar(20), transaction_state)
end as 'State',
case dtc_state
when 0 then NULL
when 1 then 'Active'
when 2 then 'Prepared'
when 3 then 'Committed'
when 4 then 'Aborted'
when 5 then 'Recovered'
else 'Unknown - ' + convert(varchar(20), dtc_state)
end as 'Distributed State',
DB.Name as 'Database',
database_transaction_begin_time as [DB Begin Time],
case database_transaction_type
when 1 then 'Read/Write'
when 2 then 'Read-Only'
when 3 then 'System'
else 'Unknown - ' + convert(varchar(20), database_transaction_type)
end as 'DB Type',
case database_transaction_state
when 1 then 'Uninitialized'
when 3 then 'No Log Records'
when 4 then 'Log Records'
when 5 then 'Prepared'
when 10 then 'Committed'
when 11 then 'Rolled Back'
when 12 then 'Committing'
else 'Unknown - ' + convert(varchar(20), database_transaction_state)
end as 'DB State',
database_transaction_log_record_count as [Log Records],
database_transaction_log_bytes_used / 1024 as [Log KB Used],
database_transaction_log_bytes_reserved / 1024 as [Log KB Reserved],
database_transaction_log_bytes_used_system / 1024 as [Log KB Used (System)],
database_transaction_log_bytes_reserved_system / 1024 as [Log KB Reserved (System)],
database_transaction_replicate_record_count as [Replication Records],
command as [Command Type],
total_elapsed_time as [Elapsed Time],
cpu_time as [CPU Time],
wait_type as [Wait Type],
wait_time as [Wait Time],
wait_resource as [Wait Resource],
reads as [Reads],
logical_reads as [Logical Reads],
writes as [Writes],
SessionTrans.open_transaction_count as [Open Transactions(SessionTrans)],
ExecReqs.open_transaction_count as [Open Transactions(ExecReqs)],
open_resultset_count as [Open Result Sets],
row_count as [Rows Returned],
nest_level as [Nest Level],
granted_query_memory as [Query Memory],
SUBSTRING(SQLText.text, (ExecReqs.statement_start_offset/2) + 1, ((CASE WHEN ExecReqs.statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(max), SQLText.text)) * 2 ELSE ExecReqs.statement_end_offset END - ExecReqs.statement_start_offset)/2) + 1) AS query_text
from
sys.dm_tran_active_transactions ActiveTrans (nolock)
inner join sys.dm_tran_database_transactions DBTrans (nolock)
on DBTrans.transaction_id = ActiveTrans.transaction_id
inner join sys.databases DB (nolock)
on DB.database_id = DBTrans.database_id
left join sys.dm_tran_session_transactions SessionTrans (nolock)
on SessionTrans.transaction_id = ActiveTrans.transaction_id
left join sys.dm_exec_requests ExecReqs (nolock)
on ExecReqs.session_id = SessionTrans.session_id
and ExecReqs.transaction_id = SessionTrans.transaction_id
outer apply sys.dm_exec_sql_text(ExecReqs.sql_handle) AS SQLText
where SessionTrans.session_id is not null -- comment this out to see SQL Server internal processes

If your transaction log has grown that much in such a short time, it means that a lot of statements making data or structure changes have been executed. If your database works with large blob records, you can try looking there first.
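As a quick complementary check (not part of the query above), you can also see how full each log is and what is preventing it from truncating:
-- Percentage of each transaction log currently in use
DBCC SQLPERF (LOGSPACE);
-- Why the log cannot be truncated (ACTIVE_TRANSACTION, LOG_BACKUP, REPLICATION, etc.)
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;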
Profiler won't help you much in finding out what happened previously but it can help you if this is still going on.
If you want to read the transaction log itself, you will need a third-party transaction log reader. The best solution on the market is ApexSQL Log, which has saved me a couple of times in similar situations.
However, if your database is running on SQL Server 2000, you can try SQL Log Rescue from Red Gate, because it's free. A third option is to try to find Lumigent Log Explorer (the product is discontinued, but you may still find it somewhere online).
Try them all and see which one works best for you.

You can use SQL Server Profiler, which shows every transaction executed along with its start time, end time, and many other details; I think you'll be able to see what is causing your problem.
I hope this helps.

Related

What is causing CPU spike in Azure SQL?

Below shows the CPU spike over a 24-hour period for one of our Azure SQL databases (compute utilization chart).
In Query Performance, the top 5 queries by CPU over the same 24-hour period are shown (query performance chart).
However, the spikes in the overview above are more frequent than the top query by CPU. Where can we find what is causing the spikes, besides Query Performance, since it seems to be something else altogether?
I know it is a bit late, but it might be useful for others: the following query shows the top 10 active CPU-consuming queries in Azure:
SELECT TOP 10
GETDATE() runtime,
*
FROM
(
SELECT query_stats.query_hash,
SUM(query_stats.cpu_time) 'Total_Request_Cpu_Time_Ms',
SUM(logical_reads) 'Total_Request_Logical_Reads',
MIN(start_time) 'Earliest_Request_start_Time',
COUNT(*) 'Number_Of_Requests',
SUBSTRING(REPLACE(REPLACE(MIN(query_stats.statement_text), CHAR(10), ' '), CHAR(13), ' '), 1, 256) AS "Statement_Text"
FROM
(
SELECT req.*,
SUBSTRING( ST.text,
(req.statement_start_offset / 2) + 1,
((CASE statement_end_offset
WHEN -1 THEN
DATALENGTH(ST.text)
ELSE
req.statement_end_offset
END - req.statement_start_offset
) / 2
) + 1
) AS statement_text
FROM sys.dm_exec_requests AS req
CROSS APPLY sys.dm_exec_sql_text(req.sql_handle) AS ST
) AS query_stats
GROUP BY query_hash
) AS t
ORDER BY Total_Request_Cpu_Time_Ms DESC;

SQL Server - Queries to check Memory Settings

How can I check the following current settings in SQL Server from one of the databases (via queries)?
In-memory OLTP storage (GB)
Target IOPS (64 KB)
Number of replicas
Thanks
One of the best sets of Dynamic Management View queries was created by Glenn Berry. Naturally, the DMVs vary based on the version of SQL Server, so you'll have to run the right query for your 2008 and 2012 versions, respectively.
Storage:
SELECT physical_memory_in_use_kb/1024 AS [SQL Server Memory Usage (MB)],
locked_page_allocations_kb/1024 AS [SQL Server Locked Pages Allocation (MB)],
large_page_allocations_kb/1024 AS [SQL Server Large Pages Allocation (MB)],
page_fault_count, memory_utilization_percentage, available_commit_limit_kb,
process_physical_memory_low, process_virtual_memory_low
FROM sys.dm_os_process_memory WITH (NOLOCK) OPTION (RECOMPILE);
All database properties, including whether they are a replica
-- Recovery model, log reuse wait description, log file size, log usage size (Query 31) (Database Properties)
-- and compatibility level for all databases on instance
SELECT db.[name] AS [Database Name], SUSER_SNAME(db.owner_sid) AS [Database Owner], db.recovery_model_desc AS [Recovery Model],
db.state_desc, db.containment_desc, db.log_reuse_wait_desc AS [Log Reuse Wait Description],
CONVERT(DECIMAL(18,2), ls.cntr_value/1024.0) AS [Log Size (MB)], CONVERT(DECIMAL(18,2), lu.cntr_value/1024.0) AS [Log Used (MB)],
CAST(CAST(lu.cntr_value AS FLOAT) / CAST(ls.cntr_value AS FLOAT)AS DECIMAL(18,2)) * 100 AS [Log Used %],
db.[compatibility_level] AS [DB Compatibility Level], db.page_verify_option_desc AS [Page Verify Option],
db.is_auto_create_stats_on, db.is_auto_update_stats_on, db.is_auto_update_stats_async_on, db.is_parameterization_forced,
db.snapshot_isolation_state_desc, db.is_read_committed_snapshot_on, db.is_auto_close_on, db.is_auto_shrink_on,
db.target_recovery_time_in_seconds, db.is_cdc_enabled, db.is_published, db.group_database_id, db.replica_id,
db.is_encrypted, de.encryption_state, de.percent_complete, de.key_algorithm, de.key_length
FROM sys.databases AS db WITH (NOLOCK)
INNER JOIN sys.dm_os_performance_counters AS lu WITH (NOLOCK)
ON db.name = lu.instance_name
INNER JOIN sys.dm_os_performance_counters AS ls WITH (NOLOCK)
ON db.name = ls.instance_name
LEFT OUTER JOIN sys.dm_database_encryption_keys AS de WITH (NOLOCK)
ON db.database_id = de.database_id
WHERE lu.counter_name LIKE N'Log File(s) Used Size (KB)%'
AND ls.counter_name LIKE N'Log File(s) Size (KB)%'
AND ls.cntr_value > 0
ORDER BY db.[name] OPTION (RECOMPILE);
I/O
-- Get I/O utilization by database (Query 35) (IO Usage By Database)
WITH Aggregate_IO_Statistics
AS (SELECT DB_NAME(database_id) AS [Database Name],
CAST(SUM(num_of_bytes_read + num_of_bytes_written) / 1048576 AS DECIMAL(12, 2)) AS [ioTotalMB],
CAST(SUM(num_of_bytes_read ) / 1048576 AS DECIMAL(12, 2)) AS [ioReadMB],
CAST(SUM(num_of_bytes_written) / 1048576 AS DECIMAL(12, 2)) AS [ioWriteMB]
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS [DM_IO_STATS]
GROUP BY database_id)
SELECT ROW_NUMBER() OVER (ORDER BY ioTotalMB DESC) AS [I/O Rank],
[Database Name], ioTotalMB AS [Total I/O (MB)],
CAST(ioTotalMB / SUM(ioTotalMB) OVER () * 100.0 AS DECIMAL(5, 2)) AS [Total I/O %],
ioReadMB AS [Read I/O (MB)],
CAST(ioReadMB / SUM(ioReadMB) OVER () * 100.0 AS DECIMAL(5, 2)) AS [Read I/O %],
ioWriteMB AS [Write I/O (MB)],
CAST(ioWriteMB / SUM(ioWriteMB) OVER () * 100.0 AS DECIMAL(5, 2)) AS [Write I/O %]
FROM Aggregate_IO_Statistics
ORDER BY [I/O Rank] OPTION (RECOMPILE);
If you want to get even more fancy, and really see things about your server with some warnings about things that could be wrong, then I'd use Brent Ozar's sp_Blitz. It's heavily documented there, so just give it a run and drool over the results.
For In-Memory OLTP, I wrote a script to evaluate memory-optimized instances and databases, called sp_BlitzInMemoryOLTP, which is part of Brent Ozar's First Responder Kit.
https://www.brentozar.com/thanks/welcome-sql-server-medical-school/
Documentation here: http://nedotter.com/archive/2018/06/new-kid-on-the-block-sp_blitzinmemoryoltp/
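A minimal run of both procedures, assuming they have already been installed in the database you are connected to (parameters vary by version of the First Responder Kit, so check the documentation):
-- General server health check
EXEC dbo.sp_Blitz;
-- In-Memory OLTP evaluation
EXEC dbo.sp_BlitzInMemoryOLTP;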

Azure SQL Database - queries significantly slower than SQL Database on Azure VM

We moved our SQL Server from an Azure VM to an Azure SQL Database. The Azure VM was a DS2_V2 (2 cores, 7 GB RAM, 6400 max IOPS). The Azure SQL Database is Standard S3 (100 DTU). I chose this tier after running the Azure DTU Calculator tool against the Azure VM for 24 hours - it suggested this tier for me.
The problem is that queries (mostly SELECT and UPDATE) are painfully slow now, compared to how they were on the Azure VM. One thing I noticed is that while running a query, I went to the Resource Utilization graph under Monitoring in the Azure Portal, and it is pinned at 100% the whole time any query is being run. Does this mean my tier is in fact too low? I would hope not, because the next tier up is a pretty big jump in cost.
Just for information, the Azure SQL Database is identical in schema and data to the Azure VM database, and I rebuilt all indexes (including Full-Text) after the migration.
In my research thus far I've read everything from making sure my Azure SQL DB is in the right region on Azure (it is) to network latency (non-existent on Azure VM) causing the issue.
How long has this system been running as an Azure SQL Database? Presumably, if it's more than a few hours old (i.e. some "production" queries have hit it), it has generated some useful statistics.
Analyzing this and determining the source of your problem will be a multi-pronged strategy.
Service Tier Check
Try the following queries, which determine whether you are at the correct service level:
-----------------------
---- SERVICE TIER CHECK
-----------------------
-- The following query outputs the fit percentage per resource dimension, based on a threshold of 20%.
-- IF the query below returns values greater than 99.9 for all three resource dimensions, your workload is very likely to fit into the lower performance level.
SELECT
(COUNT(end_time) - SUM(CASE WHEN avg_cpu_percent >= 20 THEN 1 ELSE 0 END) * 1.0) / COUNT(end_time) AS 'CPU Fit Percent'
,(COUNT(end_time) - SUM(CASE WHEN avg_log_write_percent >= 20 THEN 1 ELSE 0 END) * 1.0) / COUNT(end_time) AS 'Log Write Fit Percent'
,(COUNT(end_time) - SUM(CASE WHEN avg_data_io_percent >= 20 THEN 1 ELSE 0 END) * 1.0) / COUNT(end_time) AS 'Physical Data Read Fit Percent'
FROM sys.dm_db_resource_stats
-- Look at how many times your workload reaches 100% and compare it to your database workload SLO.
-- IF the query below returns a value less than 99.9 for any of the three resource dimensions, you should consider either moving to the next higher performance level or use application tuning techniques to reduce the load on the Azure SQL Database.
SELECT
(COUNT(end_time) - SUM(CASE WHEN avg_cpu_percent >= 100 THEN 1 ELSE 0 END) * 1.0) / COUNT(end_time) AS 'CPU Fit Percent'
,(COUNT(end_time) - SUM(CASE WHEN avg_log_write_percent >= 100 THEN 1 ELSE 0 END) * 1.0) / COUNT(end_time) AS 'Log Write Fit Percent'
,(COUNT(end_time) - SUM(CASE WHEN avg_data_io_percent >= 100 THEN 1 ELSE 0 END) * 1.0) / COUNT(end_time) AS 'Physical Data Read Fit Percent'
FROM sys.dm_db_resource_stats
Resource Consumption Levels
It would also be useful to check the resource consumption, which you can do using the following query. This will report things like DTU consumption and IO.
-----------------
-- Resource Usage
-----------------
select *
from sys.dm_db_resource_stats
order by end_time desc
Indexes
It's also worth a quick check whether you have missing indexes or whether some of your existing indexes are getting in the way.
The missing index query is a doozy, but should be taken with a grain of salt. I generally see it as advice on how the db is being used, and I make my own judgement on which indexes to add, and how. For example, as a general rule of thumb, all foreign keys should have non-clustered indexes to facilitate the inevitable JOINs they're involved in.
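As an illustration, a typical non-clustered index on a foreign key column might look like this (table and column names here are purely hypothetical):
-- Support joins from Orders to Customers on the foreign key column
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId);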
--------------------
-- Find poor indexes
--------------------
DECLARE @dbid int
SELECT @dbid = db_id()
SELECT 'Table Name' = object_name(s.object_id), 'Index Name' =i.name, i.index_id,
'Total Writes' = user_updates, 'Total Reads' = user_seeks + user_scans + user_lookups,
'Difference' = user_updates - (user_seeks + user_scans + user_lookups)
FROM sys.dm_db_index_usage_stats AS s
INNER JOIN sys.indexes AS i
ON s.object_id = i.object_id
AND i.index_id = s.index_id
WHERE objectproperty(s.object_id,'IsUserTable') = 1
AND s.database_id = @dbid
AND user_updates > (user_seeks + user_scans + user_lookups)
ORDER BY 'Difference' DESC, 'Total Writes' DESC, 'Total Reads' ASC;
------------------
-- Missing Indexes
------------------
declare @improvementMeasure int = 100
SELECT
CONVERT (decimal (28,1),
migs.avg_total_user_cost *
migs.avg_user_impact *
(migs.user_seeks + migs.user_scans))
AS improvement_measure,
OBJECT_NAME(mid.object_id, mid.database_id) as table_name,
mid.equality_columns as index_column,
mid.inequality_columns,
mid.included_columns as include_columns,
'CREATE INDEX IX_' +
OBJECT_NAME(mid.object_id, mid.database_id) +
'_' +
REPLACE(REPLACE(mid.equality_columns, '[', ''), ']', '') +
' ON ' +
mid.statement +
' (' + ISNULL (mid.equality_columns,'') +
CASE WHEN mid.equality_columns IS NOT NULL
AND mid.inequality_columns IS NOT NULL
THEN ','
ELSE ''
END + ISNULL (mid.inequality_columns, '') +
')' +
ISNULL (' INCLUDE (' + mid.included_columns + ')',
'') AS create_index_statement,
migs.user_seeks,
migs.unique_compiles,
migs.avg_user_impact,
migs.avg_total_user_cost
FROM sys.dm_db_missing_index_groups mig
INNER JOIN sys.dm_db_missing_index_group_stats migs
ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid
ON mig.index_handle = mid.index_handle
WHERE CONVERT (decimal (28,1),
migs.avg_total_user_cost *
migs.avg_user_impact *
(migs.user_seeks + migs.user_scans)) > @improvementMeasure
ORDER BY migs.avg_total_user_cost *
migs.avg_user_impact *
(migs.user_seeks + migs.user_scans) DESC
Maintenance
A maintenance plan should also be set up, whereby you rebuild indexes and statistics on a somewhat regular basis. Unfortunately there is no SQL Agent in an Azure SQL environment, but PowerShell together with either an Azure Function or an Azure WebJob can help you schedule and execute this. For our on-prem and Azure servers, we do this weekly.
Note that WebJobs would only help if you have a pre-existing App Service to run them within.
For scripts to help you with index and statistics maintenance, check out Ola Hallengren's script offering.
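As a sketch of how such a weekly job might call Ola Hallengren's IndexOptimize procedure (the thresholds and options below are just illustrative defaults; check the documentation for your version):
-- Reorganize or rebuild fragmented indexes and update statistics in all user databases
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,
    @FragmentationLevel2 = 30,
    @UpdateStatistics = 'ALL';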

Azure SQL frequent connection timeouts

We're running a web app (2 instances) on Azure, backed by a SQL Azure database. At any given time there are 50-150 users using the website. The database runs at S2 performance level. The DTU is around 20% on average.
However, a few times every day I suddenly get hundreds of errors in my logs with timeouts, like this:
An error occurred while executing the command definition. See the inner exception for details.
The wait operation timed out.
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. This failure occurred while attempting to connect to the routing destination. The duration spent while attempting to connect to the original server was - [Pre-Login] initialization=1; handshake=21; [Login] initialization=0; authentication=0; [Post-Login] complete=1;
We're using EF6 for queries with the default command timeout. I've configured this execution strategy:
SetExecutionStrategy("System.Data.SqlClient",
() => new SqlAzureExecutionStrategy(10, TimeSpan.FromSeconds(15)));
The database (about 15GB total) is heavily indexed. These errors occur all over the place, usually dozens to hundreds within 1-2 minutes.
What steps can I take to prevent this from happening?
The fact that it happens within 1-2 minutes might mean a burst in activity or some process that is locking up tables.
If your DTU during those times is at 20%, it is not a CPU issue, but you can always find the bottlenecks by running this query on the DB:
SELECT TOP 10
total_worker_time/execution_count AS Avg_CPU_Time
,execution_count
,total_elapsed_time/execution_count as AVG_Run_Time
,(SELECT
SUBSTRING(text, (statement_start_offset/2) + 1, (CASE
WHEN statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(max), text)) * 2
ELSE statement_end_offset
END - statement_start_offset)/2 + 1
) FROM sys.dm_exec_sql_text(sql_handle)
) AS query_text
FROM sys.dm_exec_query_stats
ORDER BY Avg_CPU_Time DESC
Even if the DB is heavily indexed, indexes get fragmented; I'd advise running this to check the current fragmentation:
select a.*,b.AverageFragmentation from
( SELECT tbl.name AS [Table_Name], tbl.object_id, i.name AS [Name], i.index_id, CAST(CASE i.index_id WHEN 1 THEN 1 ELSE 0 END AS bit) AS [IsClustered],
CAST(case when i.type=3 then 1 else 0 end AS bit) AS [IsXmlIndex], CAST(case when i.type=4 then 1 else 0 end AS bit) AS [IsSpatialIndex]
FROM
sys.tables AS tbl
INNER JOIN sys.indexes AS i ON (i.index_id > 0 and i.is_hypothetical = 0) AND (i.object_id=tbl.object_id))a
inner join
( SELECT tbl.object_id, i.index_id, fi.avg_fragmentation_in_percent AS [AverageFragmentation]
FROM
sys.tables AS tbl
INNER JOIN sys.indexes AS i ON (i.index_id > 0 and i.is_hypothetical = 0) AND (i.object_id=tbl.object_id)
INNER JOIN sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS fi ON fi.object_id=CAST(i.object_id AS int) AND fi.index_id=CAST(i.index_id AS int)
)b
on a.object_id=b.object_id and a.index_id=b.index_id
order by AverageFragmentation desc
You can also use Azure Automation to schedule automatic rebuilding of fragmented indexes; see the answer to: Why my Azure SQL Database indexes are still fragmented?
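Once you know which indexes are fragmented, the actual maintenance is an ALTER INDEX statement, for example (index and table names are placeholders; a common rule of thumb is REORGANIZE between roughly 5-30% fragmentation and REBUILD above that):
-- Light-touch defragmentation
ALTER INDEX IX_MyIndex ON dbo.MyTable REORGANIZE;
-- Full rebuild for heavily fragmented indexes (ONLINE is supported in Azure SQL Database)
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD WITH (ONLINE = ON);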

List the queries running on SQL Server

Is there a way to list the queries that are currently running on MS SQL Server (either through the Enterprise Manager or SQL) and/or who's connected?
I think I've got a very long-running query being executed on one of my database servers, and I'd like to track it down and stop it (or the person who keeps starting it).
This will show you the longest running SPIDs on a SQL 2000 or SQL 2005 server:
select
P.spid
, right(convert(varchar,
dateadd(ms, datediff(ms, P.last_batch, getdate()), '1900-01-01'),
121), 12) as 'batch_duration'
, P.program_name
, P.hostname
, P.loginame
from master.dbo.sysprocesses P
where P.spid > 50
and P.status not in ('background', 'sleeping')
and P.cmd not in ('AWAITING COMMAND'
,'MIRROR HANDLER'
,'LAZY WRITER'
,'CHECKPOINT SLEEP'
,'RA MANAGER')
order by batch_duration desc
If you need to see the SQL running for a given spid from the results, use something like this:
declare
@spid int
, @stmt_start int
, @stmt_end int
, @sql_handle binary(20)
set @spid = XXX -- Fill this in
select top 1
@sql_handle = sql_handle
, @stmt_start = case stmt_start when 0 then 0 else stmt_start / 2 end
, @stmt_end = case stmt_end when -1 then -1 else stmt_end / 2 end
from sys.sysprocesses
where spid = @spid
order by ecid
SELECT
SUBSTRING( text,
COALESCE(NULLIF(@stmt_start, 0), 1),
CASE @stmt_end
WHEN -1
THEN DATALENGTH(text)
ELSE
(@stmt_end - @stmt_start)
END
)
FROM ::fn_get_sql(@sql_handle)
If you're running SQL Server 2005 or 2008, you could use the DMVs to find this...
SELECT *
FROM sys.dm_exec_requests
CROSS APPLY sys.dm_exec_sql_text(sql_handle)
More about sys.dm_exec_requests
More about sys.dm_exec_sql_text
You can run the sp_who command to get a list of all the current users, sessions and processes. You can then run the KILL command on any spid that is blocking others.
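For example (the spid value below is only a placeholder; use whatever sp_who reports as the blocker):
EXEC sp_who;      -- or: EXEC sp_who 'active' to hide idle sessions
KILL 64;          -- replace 64 with the blocking spid from the output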
I would suggest querying the sys views, something similar to:
SELECT *
FROM
sys.dm_exec_sessions s
LEFT JOIN sys.dm_exec_connections c
ON s.session_id = c.session_id
LEFT JOIN sys.dm_db_task_space_usage tsu
ON tsu.session_id = s.session_id
LEFT JOIN sys.dm_os_tasks t
ON t.session_id = tsu.session_id
AND t.request_id = tsu.request_id
LEFT JOIN sys.dm_exec_requests r
ON r.session_id = tsu.session_id
AND r.request_id = tsu.request_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) TSQL
This way you can get a TotalPagesAllocated, which can help you figure out the spid that is taking all the server resources. There have been lots of times when I couldn't even bring up Activity Monitor and used these sys views to see what's going on.
I would recommend reading the following article; I got this reference from here.
As a note, the SQL Server Activity Monitor for SQL Server 2008 can be found by right-clicking your current server and going to "Activity Monitor" in the context menu. I found this was the easiest way to kill processes if you are using SQL Server Management Studio.
There are various management views built into the product. On SQL 2000 you'd use sysprocesses. On SQL 2K5 there are more views like sys.dm_exec_connections, sys.dm_exec_sessions and sys.dm_exec_requests.
There are also procedures like sp_who that leverage these views. In 2K5 Management Studio you also get Activity Monitor.
And last but not least, there are community-contributed scripts like Who Is Active by Adam Machanic.
SELECT
p.spid, p.status, p.hostname, p.loginame, p.cpu, r.start_time, r.command,
p.program_name, text
FROM
sys.dm_exec_requests AS r,
master.dbo.sysprocesses AS p
CROSS APPLY sys.dm_exec_sql_text(p.sql_handle)
WHERE
p.status NOT IN ('sleeping', 'background')
AND r.session_id = p.spid
Actually, running EXEC sp_who2 in Query Analyzer / Management Studio gives more info than sp_who.
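Both forms are handy (the 'active' argument filters out sessions that are just waiting for a command):
EXEC sp_who2;            -- all sessions
EXEC sp_who2 'active';   -- only sessions currently doing work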
Beyond that you could set up SQL Profiler to watch all of the in and out traffic to the server. Profiler also lets you narrow down exactly what you are watching for.
For SQL Server 2008:
START - All Programs - Microsoft SQL Server 2008 - Performance Tools - SQL Server Profiler
Keep in mind that the profiler is truly a logging and watching app. It will continue to log and watch as long as it is running. It could fill up text files or databases or hard drives, so be careful what you have it watch and for how long.
In the Object Explorer, drill-down to: Server -> Management -> Activity Monitor. This will allow you to see all connections on to the current server.
You can use the query below to find the currently running requests:
SELECT
der.session_id
,est.TEXT AS QueryText
,der.status
,der.blocking_session_id
,der.cpu_time
,der.total_elapsed_time
FROM sys.dm_exec_requests AS der
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS est
Using the script below, you can also find the number of connections per database:
SELECT
DB_NAME(DBID) AS DataBaseName
,COUNT(DBID) AS NumberOfConnections
,LogiName
FROM sys.sysprocesses
WHERE DBID > 0
GROUP BY DBID, LogiName
For more details please visit:
http://www.dbrnd.com/2015/06/script-to-find-running-process-session-logged-user-in-sql-server/
Here is a query that will show any queries that are blocking others. I am not entirely sure whether it will also catch queries that are merely slow:
SELECT p.spid
,convert(char(12), d.name) db_name
, program_name
, convert(char(12), l.name) login_name
, convert(char(12), hostname) hostname
, cmd
, p.status
, p.blocked
, login_time
, last_batch
, p.spid
FROM master..sysprocesses p
JOIN master..sysdatabases d ON p.dbid = d.dbid
JOIN master..syslogins l ON p.sid = l.sid
WHERE p.blocked = 0
AND EXISTS ( SELECT 1
FROM master..sysprocesses p2
WHERE p2.blocked = p.spid )
The right script would be like this:
select
p.spid, p.status,p.hostname,p.loginame,p.cpu,r.start_time, t.text
from sys.dm_exec_requests as r, sys.sysprocesses p
cross apply sys.dm_exec_sql_text(p.sql_handle) t
where p.status not in ('sleeping', 'background')
and r.session_id=p.spid
SELECT
p.spid, p.status, p.hostname, p.loginame, p.cpu, r.start_time, t.text
FROM
sys.dm_exec_requests as r,
master.dbo.sysprocesses as p
CROSS APPLY sys.dm_exec_sql_text(p.sql_handle) t
WHERE
p.status NOT IN ('sleeping', 'background')
AND r.session_id = p.spid
And then kill the offending session (note that KILL takes a literal session id rather than a variable):
KILL 64 -- replace 64 with the spid returned above
In 2005 you can right-click on a database, go to Reports, and there's a whole list of reports on transactions, locks, etc...
Trying to put things together (hoping it's helpful):
SELECT
p.spid,
RIGHT(CONVERT(varchar, DATEADD(ms, DATEDIFF(ms, p.last_batch, GETDATE()), '1900-01-01'), 121), 12) AS [batch_duration],
p.[program_name],
p.hostname,
MAX(p.loginame) AS loginame,
(SELECT SUBSTRING(text, COALESCE(NULLIF(spid.stmt_start, 0), 1) + 1, CASE spid.stmt_end WHEN -1 THEN DATALENGTH(text) ELSE (spid.stmt_end - spid.stmt_start) END) FROM ::fn_get_sql(spid.[sql_handle])) AS [sql]
FROM
master.dbo.sysprocesses p
LEFT JOIN (
SELECT
ROW_NUMBER() OVER(PARTITION BY spid ORDER BY ecid) AS i,
spid,
[sql_handle],
CASE stmt_start WHEN 0 THEN 0 ELSE stmt_start / 2 END AS stmt_start,
CASE stmt_end WHEN -1 THEN -1 ELSE stmt_end / 2 END AS stmt_end
FROM sys.sysprocesses
) spid ON p.spid = spid.spid AND spid.i = 1
WHERE
p.spid > 50
AND p.status NOT IN ('background', 'sleeping')
AND p.cmd NOT IN ('AWAITING COMMAND', 'MIRROR HANDLER', 'LAZY WRITER', 'CHECKPOINT SLEEP', 'RA MANAGER')
GROUP BY
p.spid,
p.last_batch,
p.[program_name],
p.hostname,
spid.stmt_start,
spid.stmt_end,
spid.[sql_handle]
ORDER BY
batch_duration DESC,
p.spid
;
Use SQL Server Profiler (Tools menu) to monitor executing queries, and use Activity Monitor in Management Studio to see who is connected and whether their connection is blocking other connections.
You should try the very useful procedure sp_WhoIsActive, which can be found here: http://whoisactive.com - and it is free.
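A minimal call looks like this (the parameter shown is optional; a plain EXEC works too):
EXEC sp_WhoIsActive;
-- or include the execution plan for each running statement
EXEC sp_WhoIsActive @get_plans = 1;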
