View a list of actual running queries - sql-server

On my test server there is a large query that is running (which is okay), but it is a multi-line query. E.g. in SSMS I told it to run something like:
begin transaction;
query;
query;
query;
query;
commit;
I want to see which query within the list is currently executing. Selecting the text from sys.dm_exec_sql_text returns the entire batch, not the particular command that is executing within the list. Is there a way to view the individual commands that are being processed?
In case it matters (sometimes it does), this is running on a SQL Azure instance.

Use DBCC INPUTBUFFER
DBCC INPUTBUFFER(your session id)
It will display the query that is executing in your session
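For example (a minimal sketch; 53 is just a placeholder session id, which you can look up in sys.dm_exec_requests or sp_who2):
-- List sessions that currently have an active request
SELECT session_id, status, command, start_time
FROM sys.dm_exec_requests
WHERE session_id > 50;   -- typically excludes system sessions
-- Show the last batch submitted by the session of interest
DBCC INPUTBUFFER(53);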

Here you can find my complete set of queries, useful for showing running transactions, wait events, and open transactions with their plans and SQL text:
http://zaboilab.com/sql-server-toolbox/queries-to-see-rapidly-what-your-sql-server-is-doing-now

Related

Executing query from SSDT

I'm using Visual Studio 2015 (SSIS) to run a set of SQL statements in an Execute SQL Task and then transfer data between tables (which I manage in SSMS) by executing the package in SSIS. When we run a series of SQL statements in SSMS, we get a "rows affected" message for every successful statement. Now I want to automate the process using SSIS to reduce the turnaround time, and I would like to get the rows affected for every SQL query (select, insert, delete) inside the Execute SQL Task. How can this be done in SSIS? I don't have db_owner permission for stored procedures, so I'm thinking SSIS would be a quick way. It is very important for me to log the rows affected in order to validate the data, as it is financial data. I have nearly 10 SQL statements (selects and deletes) in each SQL task, but the output is only one table.
For example my sql task is like below
select * from dbo.table1;
select * from dbo.table2 where city = 'Chicago';
create table dbo.table3 (id int, name varchar(50));
insert into dbo.table3 (id, name) values (1, 'a');
select * from dbo.table3;
If I execute this in SSMS, I get a rows-affected count for each statement and the table is also created. If I execute the same statements through a package in SSIS, how will I get those messages for each of them?
I assume your data lives in SQL Server. For the selects, you could use Data Flow Tasks and Row Count transformations instead of Execute SQL Tasks.
For inserts and updates there are a few ways to get the affected row count, like this: https://stackoverflow.com/a/1834264/5605866
or like this: http://microsoft-ssis.blogspot.fi/2011/03/rowcount-for-execute-sql-statement.html
Basically the same thing but with a bit different syntax.
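For example, the pattern behind those links is roughly this (a sketch only; User::RowsAffected is an assumed SSIS variable name): run the statement inside the Execute SQL Task, select @@ROWCOUNT immediately afterwards, set the task's ResultSet to "Single row", and map the result to the package variable so you can log it:
-- T-SQL inside the Execute SQL Task
insert into dbo.table3 (id, name) values (1, 'a');
select @@ROWCOUNT as RowsAffected;  -- map this single-row result to User::RowsAffected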
You can use the Row Count transformation after the data source and save the count to a variable. This gives you the number of rows returned from the source that should be processed.
Hope this helps.

SQL Server drop table not working

I have a table with almost 45 million rows. I was updating a field of it with the query:
update tableName set columnX = Right(columnX, 10)
I didn't begin a transaction or commit; I just ran the query directly. About an hour into the execution a power failure unfortunately occurred, and now when I try to run a select query it takes too much time and returns nothing. Even drop table doesn't work. I don't know what the problem is.
SQL Server is rolling back your update statement. You can monitor the status of the rollback in a couple of ways:
1. KILL <session id> WITH STATUSONLY (this only reports rollback progress; it does not kill anything further)
2. By using a DMV:
select
    der.session_id,
    der.command,
    der.status,
    der.percent_complete
from sys.dm_exec_requests as der
where der.command in ('KILLED/ROLLBACK', 'ROLLBACK')
Don't try to restart SQL Server, as this may prolong the rollback.
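For example (53 is just a placeholder session id):
-- Reports the rollback progress of session 53; it does not affect the session any further
KILL 53 WITH STATUSONLY;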

Connection scoped temp tables across stored procedures

I'm working on a data virtualization solution. The user is able to write his own SQL queries as filters for a query I make. I would like to avoid running this filter query every time I select something from the database (it will likely be a complex series of joins).
My idea was to use a #temp table created at script level and keep the connection alive. This #temp table would then be selected from, but updated only when the user changes the filter. The idea is that I can actually use it from stored procedures and the table is scoped to that connection.
I got the idea from someone who suggested using dynamic SQL and ##global temp tables named with the connection's process ID, so that each connection has a unique global temp table. This was to overcome the problem of sharing temp tables across stored procedures, but it seems a bit clumsy.
I did a quick test with the code below and it seemed to work fine:
-- Run script at connection open from some app
SELECT * INTO #test
FROM dataTable
-- Now we can use stored procedures with #test table
EXECUTE selectFromTempTable
EXECUTE updateTempTable @sqlFilterString
EXECUTE selectFromTempTable
The only real problem I can see is that the connection has to be kept alive for the duration, which could be a few hours. A single user can have multiple connections running at the same time. The number of users on a single database server would be at most around 20.
If it's a huge issue I could make the application close and open connections as needed, so each user only has one connection open at a time. Maybe even close it when not in use and reopen it when needed again, with the delay of having to wait for the query to run.
Would this be bad practice, or would it kill any performance benefit of not re-running the filter query? This is on SQL Server 2008 and up.
I think I would create a permanent table, using the spid (process ID) as a key value. Each connection has its own process ID, so anyone can use it to identify their entries in the table:
create table filter(
    spid int,
    filternum int,
    filterstring varchar(255),
    <other cols> );
create unique index filterindx on filter(spid, filternum);
Then when a user creates filter entries:
delete from filter where spid = @@SPID
insert into filter(spid, filternum, filterstring) select @@SPID, 1, 'some sql thing'
insert into filter(spid, filternum, filterstring) select @@SPID, 2, 'some other sql thing'
Then you can access each user's filter values by selecting where spid = @@SPID etc.
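A minimal sketch of how a stored procedure could then read the current connection's filters (selectWithFilter is just a hypothetical name):
create procedure selectWithFilter
as
begin
    -- @@SPID is unique per session, so each connection sees only its own filter rows
    select filternum, filterstring
    from filter
    where spid = @@SPID
    order by filternum;
end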

'LINQ query plan' horribly inefficient but 'Query Analyser query plan' is perfect for same SQL!

I have a LINQ to SQL query that generates the following SQL :
exec sp_executesql N'SELECT COUNT(*) AS [value]
FROM [dbo].[SessionVisit] AS [t0]
WHERE ([t0].[VisitedStore] = @p0) AND (NOT ([t0].[Bot] = 1)) AND
([t0].[SessionDate] > @p1)',N'@p0 int,@p1 datetime',
@p0=1,@p1='2010-02-15 01:24:00'
(This is the actual SQL taken from SQL Profiler on SQL Server 2008.)
The query plan generated when I run this SQL from within Query Analyser is perfect.
It uses an index containing VisitedStore, Bot, SessionDate.
The query returns instantly.
However when I run this from C# (with LINQ) a different query plan is used that is so inefficient it doesn't even return in 60 seconds. This query plan is trying to do a key lookup on the clustered primary key which contains a couple million rows. It has no chance of returning.
What I just can't understand though is that the EXACT same SQL is being run - either from within LINQ or from within Query Analyser yet the query plan is different.
I've run the two queries many, many times and they're now running in isolation from any other queries. The date is DateTime.Now.AddDays(-7), but I've even hardcoded that date to eliminate caching problems.
Is there anything I can change in LINQ to SQL to affect the query plan, or a way to debug this further? I'm very, very confused!
This is a relatively common problem that surprised me too when I first saw it. The first thing to do is ensure your statistics are up to date. You can check the age of statistics with:
SELECT
    object_name = OBJECT_NAME(ind.object_id),
    IndexName = ind.name,
    StatisticsDate = STATS_DATE(ind.object_id, ind.index_id)
FROM sys.indexes ind
ORDER BY STATS_DATE(ind.object_id, ind.index_id) DESC
Statistics should be updated in a weekly maintenance plan. For a quick fix, issue the following command to update all statistics in your database:
exec sp_updatestats
Apart from the statistics, another thing you can check is the SET options. They can be different between Query Analyzer and your Linq2Sql application.
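One quick way to compare them (a sketch; run it while both the application and SSMS are connected) is to look at sys.dm_exec_sessions, which exposes the effective SET options per session:
-- Compare SET options between the Linq2Sql session and the SSMS session
SELECT session_id, program_name, arithabort, ansi_nulls, ansi_padding, quoted_identifier
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;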
Another possibility is that SQL Server is using an old cached plan for your Linq2Sql query. Plans can be cached on a per-user basis, so if you run Query Analyser as a different user, that can explain different plans. Normally you could add Option (RECOMPILE) to the application query, but I guess that's hard with Linq2Sql. You can clear the entire cache with DBCC FREEPROCCACHE and see if that speeds up the Linq2Sql query.
Switched to a stored procedure and the same SQL works fine. I would really like to know what's going on, but I can't spend any more time on this now. Fortunately, in this instance the query was not too dynamic.
Hopefully this at least helps anyone in the same boat as me.

SqlDataAdapter.Fill method slow

Why would a stored procedure that returns a table with 9 columns and 89 rows take 60 seconds to execute using this code (.NET 1.1), when it takes < 1 second to run in SQL Server Management Studio? It's being run on the local machine, so there's little/no network latency, and it's a fast dev machine.
Dim command As SqlCommand = New SqlCommand(procName, CreateConnection())
command.CommandType = CommandType.StoredProcedure
command.CommandTimeout = _commandTimeOut
Try
    Dim adapter As New SqlDataAdapter(command)
    Dim i As Integer
    For i = 0 To parameters.Length - 1
        command.Parameters.Add(parameters(i))
    Next
    adapter.Fill(tableToFill)
    adapter.Dispose()
Finally
    command.Dispose()
End Try
My parameter array is typed (for this SQL it's only a single parameter):
parameters(0) = New SqlParameter("@UserID", SqlDbType.BigInt, 0, ParameterDirection.Input, True, 19, 0, "", DataRowVersion.Current, userID)
The stored procedure is only a select statement, like so:
ALTER PROC [dbo].[web_GetMyStuffFool]
(@UserID BIGINT)
AS
SELECT Col1, Col2, Col3, Col3, Col3, Col3, Col3, Col3, Col3
FROM [Table]
First, make sure you are profiling the performance properly. For example, run the query twice from ADO.NET and see if the second time is much faster than the first time. This removes the overhead of waiting for the app to compile and the debugging infrastructure to ramp up.
Next, check the default settings in ADO.NET and SSMS. For example, if you run SET ARITHABORT OFF in SSMS, you might find that it now runs as slow as when using ADO.NET.
What I found once was that SET ARITHABORT OFF in SSMS caused the stored proc to be recompiled and/or different statistics to be used, and suddenly both SSMS and ADO.NET were reporting roughly the same execution time. Note that ARITHABORT is not itself the cause of the slowdown; it triggers a recompilation, and you end up with two different plans due to parameter sniffing. It is likely that parameter sniffing is the actual problem that needs to be solved.
To check this, look at the execution plans for each run, specifically in the sys.dm_exec_cached_plans DMV. They will probably be different.
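If parameter sniffing does turn out to be the culprit, one common mitigation (a sketch only; the WHERE clause here is an assumption, since the posted procedure is presumably trimmed) is to add OPTION (RECOMPILE) so each execution gets a plan for the actual parameter value:
ALTER PROC [dbo].[web_GetMyStuffFool]
    (@UserID BIGINT)
AS
SELECT Col1, Col2, Col3
FROM [Table]
WHERE UserID = @UserID   -- assumed filter column; the posted proc omits its WHERE clause
OPTION (RECOMPILE)       -- build a fresh plan for the sniffed value on every execution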
Running 'sp_recompile' on a specific stored procedure will drop the associated execution plan from the cache, which then gives SQL Server a chance to create a possibly more appropriate plan at the next execution of the procedure.
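For example (using the procedure name from the question):
-- Drops the cached plan so a new one is compiled on the next execution
EXEC sp_recompile N'dbo.web_GetMyStuffFool';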
Finally, you can try the "nuke it from orbit" approach of cleaning out the entire procedure cache and memory buffers using SSMS:
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
Doing so before you test your query prevents usage of cached execution plans and previous results cache.
Here is what I ended up doing:
I executed the following SQL statement to rebuild the indexes on all tables in the database:
EXEC <databasename>..sp_MSforeachtable @command1='DBCC DBREINDEX (''*'')', @replacechar='*'
-- Replace <databasename> with the name of your database
If I wanted to see the same behavior in SSMS, I ran the proc like this:
SET ARITHABORT OFF
EXEC [dbo].[web_GetMyStuffFool] @UserID=1
SET ARITHABORT ON
Another way to bypass this is to add this to your code:
MyConnection.Execute "SET ARITHABORT ON"
I ran into the same issue, but when I rebuilt the indexes on the SQL table it worked fine, so you might want to consider rebuilding the indexes on the SQL Server side.
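A minimal sketch, using [Table] from the procedure above as a placeholder:
-- Rebuild all indexes on the table (ALTER INDEX ... REBUILD is SQL Server 2005+ syntax)
ALTER INDEX ALL ON dbo.[Table] REBUILD;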
Why not use a DataReader instead of a DataAdapter? It looks like you have a single result set, and if you aren't going to push changes back to the DB and don't need constraints applied in .NET code, you shouldn't use the adapter.
EDIT:
If you need it to be a DataTable, you can still pull the data from the DB via a DataReader and then, in .NET code, use the DataReader to populate a DataTable. That should still be faster than relying on the DataSet and DataAdapter.
I don't know "Why" it's so slow per se - but as Marcus is pointing out - comparing Mgmt Studio to filling a dataset is apples to oranges. Datasets contain a LOT of overhead. I hate them and NEVER use them if I can help it.
You may be having issues with mismatches of old versions of the SQL stack or some such (esp given you are obviously stuck in .NET 1.1 as well) The Framework is likely trying to do database equivilant of "Reflection" to infer schema etc etc etc
One thing to consider try with your unfortunate constraint is to access the database with a datareader and build your own dataset in code. You should be able to find samples easily via google.
