Why would a stored procedure that returns a table of 9 columns and 89 rows take 60 seconds to execute via this code (.NET 1.1) when it takes < 1 second to run in SQL Server Management Studio? It's being run on the local machine, so there is little/no network latency, and it's a fast dev machine.
Dim command As SqlCommand = New SqlCommand(procName, CreateConnection())
command.CommandType = CommandType.StoredProcedure
command.CommandTimeout = _commandTimeOut
Try
    Dim adapter As New SqlDataAdapter(command)
    Dim i As Integer
    For i = 0 To parameters.Length - 1
        command.Parameters.Add(parameters(i))
    Next
    adapter.Fill(tableToFill)
    adapter.Dispose()
Finally
    command.Dispose()
End Try
My parameter array is typed (for this SQL it's only a single parameter):
parameters(0) = New SqlParameter("@UserID", SqlDbType.BigInt, 0, ParameterDirection.Input, True, 19, 0, "", DataRowVersion.Current, userID)
The stored procedure is only a select statement, like so:
ALTER PROC [dbo].[web_GetMyStuffFool]
(@UserID BIGINT)
AS
SELECT Col1, Col2, Col3, Col3, Col3, Col3, Col3, Col3, Col3
FROM [Table]
First, make sure you are profiling the performance properly. For example, run the query twice from ADO.NET and see if the second time is much faster than the first time. This removes the overhead of waiting for the app to compile and the debugging infrastructure to ramp up.
Next, check the default settings in ADO.NET and SSMS. For example, if you run SET ARITHABORT OFF in SSMS, you might find that it now runs as slow as when using ADO.NET.
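A quick way to compare the settings: run the command below in SSMS, and capture the equivalent for the application's connection (for example, from the ExistingConnection/Audit Login events in SQL Profiler, which list a session's SET options) to see which ones differ.

-- Lists the SET options in effect for the current session
DBCC USEROPTIONS;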
What I found once was that SET ARITHABORT OFF in SSMS caused the stored proc to be recompiled and/or different statistics to be used. And suddenly both SSMS and ADO.NET were reporting roughly the same execution time. Note that ARITHABORT is not itself the cause of the slowdown, it's that it causes a recompilation, and you are ending up with two different plans due to parameter sniffing. It is likely that parameter sniffing is the actual problem needing to be solved.
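If parameter sniffing does turn out to be the culprit, one classic workaround is to copy the parameter into a local variable inside the procedure, so the optimizer builds a plan for a "typical" value rather than the first sniffed one. A minimal sketch against the proc from the question, assuming the real procedure filters on @UserID (the simplified version shown above omits the WHERE clause); the UserID column name is illustrative:

ALTER PROC [dbo].[web_GetMyStuffFool]
    (@UserID BIGINT)
AS
BEGIN
    -- Copy the parameter into a local variable so the cached plan is not built
    -- around a sniffed @UserID value (UserID column is an assumption)
    DECLARE @LocalUserID BIGINT
    SET @LocalUserID = @UserID

    SELECT Col1, Col2, Col3
    FROM [Table]
    WHERE UserID = @LocalUserID
END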
To check whether you really are getting two different plans, look at the execution plans for each run, specifically via the sys.dm_exec_cached_plans DMV. They will probably be different.
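For example, on SQL Server 2005 or later you can pull both plans straight out of the cache and compare them; the proc name comes from the question, everything else is standard DMVs:

SELECT cp.usecounts, cp.size_in_bytes, cp.objtype, qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE st.text LIKE '%web_GetMyStuffFool%'
  AND st.text NOT LIKE '%dm_exec_cached_plans%'  -- exclude this query itself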
Running 'sp_recompile' on a specific stored procedure will drop the associated execution plan from the cache, which then gives SQL Server a chance to create a possibly more appropriate plan at the next execution of the procedure.
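For example, using the proc name from the question:

EXEC sp_recompile N'dbo.web_GetMyStuffFool';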
Finally, you can try the "nuke it from orbit" approach of cleaning out the entire procedure cache and memory buffers using SSMS:
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
Doing so before you test your query prevents the use of cached execution plans and previously cached data pages.
Here is what I ended up doing:
I executed the following SQL statement to rebuild the indexes on all tables in the database:
EXEC <databasename>..sp_MSforeachtable @command1='DBCC DBREINDEX (''*'')', @replacechar='*'
-- Replace <databasename> with the name of your database
If I wanted to see the same behavior in SSMS, I ran the proc like this:
SET ARITHABORT OFF
EXEC [dbo].[web_GetMyStuffFool] @UserID=1
SET ARITHABORT ON
Another way to bypass this is to add this to your code:
MyConnection.Execute "SET ARITHABORT ON"
I ran into the same issue, but when I rebuilt the indexes on the SQL table it worked fine, so you might want to consider rebuilding the indexes on the SQL Server side.
Why not use a DataReader instead of a DataAdapter? It looks like you have a single result set, and if you aren't going to push changes back to the DB and don't need constraints applied in .NET code, you shouldn't use the adapter.
EDIT:
If you need it to be a DataTable, you can still pull the data from the DB via a DataReader and then, in .NET code, use the DataReader to populate a DataTable. That should still be faster than relying on the DataSet and DataAdapter.
I don't know "Why" it's so slow per se - but as Marcus is pointing out - comparing Mgmt Studio to filling a dataset is apples to oranges. Datasets contain a LOT of overhead. I hate them and NEVER use them if I can help it.
You may be having issues with mismatched old versions of the SQL stack or some such (especially given that you are obviously stuck on .NET 1.1 as well). The Framework is likely trying to do the database equivalent of "reflection" to infer the schema, etc.
One thing to consider trying, given your unfortunate constraint, is to access the database with a DataReader and build your own DataSet in code. You should be able to find samples easily via Google.
I am trying to create a global temp table using the results from one query, which can then be selected as a table and manipulated further several times without having to reprocess the data over and over.
This works perfectly in SQL management studio, but when I try to add the table through an Excel query, the table can be referenced at that time, but it is not created in Temporary Tables in the tempdb database.
I have broken it down into a simple example.
If I run this in SQL management studio, the result of 1 is returned as expected, and the table ##testtable1 is created in Temporary Tables
set nocount on;
select 1 as 'Val1', 2 as 'Val2' into ##testtable1
select Val1 from ##testtable1
I can then run another select on this table, even in a different session, as you'd expect. E.g.
Select Val2 from ##testtable1
If I don't drop ##testtable1, running the below in a query in Excel returns the result of 2 as you'd expect.
Select Val2 from ##testtable1
However, if I run the same Select... into ##testtable1 query directly in Excel, it correctly returns the result of 1, but the temp table is not created.
If I then try to run
Select Val2 from ##testtable1
As a separate query, it errors, saying "Invalid object name '##testtable1'".
The table is not listed within Temporary Tables in SQL management studio.
It is as if it is performing a drop on the table after the query has finished executing, even though I am not calling a drop.
How can I resolve this?
Read up on global temp tables (GTTs). They persist as long as there is a session referencing them. In SSMS, if you close the session that created the GTT prior to using it in another session, the GTT is discarded. This is what is happening in Excel. Excel creates a connection, executes, and disconnects. Since there are no sessions using the GTT when Excel disconnects, the GTT is discarded.
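To see this for yourself, you can check from any session whether the table is still alive; a quick sketch using the table from the question:

IF OBJECT_ID('tempdb..##testtable1') IS NOT NULL
BEGIN
    SELECT Val1, Val2 FROM ##testtable1
END
ELSE
BEGIN
    PRINT '##testtable1 no longer exists - no session references it any more'
END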
I would highly recommend you create a normal table rather than use a GTT. Because of their temporary nature and dependence on an active session, you may get inconsistent results when using a GTT. If you create a normal table instead, you can be certain it will still exist when you try to use it later.
The code to create/clean the table is pretty simple.
IF OBJECT_ID('db.schema.tablename') IS NOT NULL
    TRUNCATE TABLE [tablename]
ELSE
    CREATE TABLE [tablename] (...)
GO
You can change the truncate to a delete to clean up a specific set of data and place it at the start of each one of your queries.
Is it possible you could use a view? Assuming that you are connecting to 5 DBs on the same server, you can union the data together in a view:
CREATE VIEW [dbo].[testView]
AS
SELECT *
FROM database1.dbo.myTable
UNION
SELECT *
FROM database2.dbo.myTable
Then in Excel:
Data > New Query > From Database > From SQL Server Database
Enter the DB server
Select the view from the appropriate DB - done :)
Or call the view however you are doing it (e.g. VBA, etc.)
Equally, you could use a stored procedure and call that from VBA... basically anything that moves more of the complexity to the server side to make your life easier :D
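For what it's worth, a hypothetical wrapper procedure around the view above might look like this (the proc name is illustrative); calling a proc from Excel/VBA keeps all the UNION logic on the server:

CREATE PROCEDURE [dbo].[usp_GetTestViewData]
AS
BEGIN
    SET NOCOUNT ON;
    SELECT * FROM [dbo].[testView];
END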
You can absolutely do this. Notice how I'm building a temp table from a SQL string called 'TmpSql'... this could be any query you want. Then I set it to recordset 1. Then I create another recordset, 2, that goes and gets the temp table data.
Imagine if you were looping on the first cn.Execute while TmpSql is changing. This allows you to build a temporary table coming from many sources or changing variables. This is a powerful solution.
Dim cn As New ADODB.Connection
Dim rs1 As ADODB.Recordset
Dim rs2 As New ADODB.Recordset
Dim sql As String, GetTmp As String

cn.Open "Provider= ..."
' Build the temp table on this connection; TmpSql can be any SELECT statement
sql = "Select t.* Into #TTable From (" & TmpSql & ") t "
Set rs1 = cn.Execute(sql)
' Read the temp table back on the same connection (same session, so #TTable is still visible)
GetTmp = "Select * From #TTable"
rs2.Open GetTmp, cn, adOpenDynamic, adLockBatchOptimistic
If Not rs2.EOF Then Call Sheets("Data").Range("A2").CopyFromRecordset(rs2)
rs2.Close
rs1.Close
cn.Close
So I have a stored procedure in SQL Server. I've simplified its code (for this question) to just this:
CREATE PROCEDURE dbo.DimensionLookup AS
BEGIN
    SELECT DimensionID, DimensionField FROM DimensionTable
    INNER JOIN Reference ON Reference.ID = DimensionTable.ReferenceID
END
In SSIS on SQL Server 2012, I have a Lookup component with the following source command:
EXECUTE dbo.DimensionLookup WITH RESULT SETS (
(DimensionID int, DimensionField nvarchar(700) )
)
When I run this procedure in Preview mode in BIDS, it returns the two columns correctly. When I run the package in BIDS, it runs correctly.
But when I deploy it out to the SSIS catalog (the same server the database is on), point it to the same data sources, etc. - it fails with the message:
EXECUTE statement failed because its WITH RESULT SETS clause specified 2 column(s) for result set number 1, but the statement sent
3 column(s) at run time.
Steps Tried So Far:
Adding a third column to the result set - I get a different error, VS_NEEDSNEWMETADATA - which makes sense; it's more or less proof that there's no third column.
SQL Profiler - I see this:
exec sp_prepare @p1 output,NULL,N'EXECUTE dbo.DimensionLookup WITH RESULT SETS ((
DimensionID int, DimensionField nvarchar(700)))',1
SET FMTONLY ON exec sp_execute 1 SET FMTONLY OFF
So it's trying to use FMTONLY to get the result set metadata... needless to say, running SET FMTONLY ON and then running the command in SSMS myself yields... just the two columns.
SET NOCOUNT ON - Nothing changed.
So, two other interesting things:
I deployed it out to my local SQL 2012 install and it worked fine, same connections, etc. So it may be a server/database configuration issue. Not sure what, if anything, it is; I didn't install the dev server, and my own install was pretty much click-through vanilla.
Perhaps the most interesting thing. If I remove the join from the procedure's statement so it just becomes
select DimensionID, DimensionField from DimensionTable
It goes back to just sending 2 columns in the result set! So adding a join, without adding any additional output columns, ups the result set to 3 columns. Even if I add 6 more joins, it's still just 3 columns. So one guess is it's some sort of metadata column that only gets activated when there's a join.
Anyway, as you can imagine, it's driving me kind of mad. I have a workaround to load the data into a temp table and just return that, but why won't this work? What extra column is being sent back? Why only when I add a join?
Gah!
So, all credit to billinkc: the reason comes down to a patch level.
In Version 11.0.2100.60, SSIS Lookup SQL command metadata is gathered using the old SET FMTONLY method. Unfortunately, this doesn't work in 2012, as the Books Online entry on SET FMTONLY helpfully notes:
Do not use this feature. This feature has been replaced by sp_describe_first_result_set.
Too bad they didn't follow their own advice!
This has been patched as of version 11.0.2218.0. Metadata is correctly gathered using the sp_describe_first_result_set system stored procedure.
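For reference, you can run the newer metadata call yourself in SSMS (SQL Server 2012+) to see exactly what shape SSIS will be told the result set has; the proc name comes from the question:

EXEC sp_describe_first_result_set @tsql = N'EXEC dbo.DimensionLookup';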
This can happen if the WITH RESULT SETS clause specified in SSIS declares more columns than the stored proc being called actually returns. Check your stored proc and ensure that its output columns match the WITH RESULT SETS definition.
I'm working on upsizing a suite of MS Access backend databases to SQL Server. I've scripted the SQL to create the table schemas in SQL Server. Now I am trying to populate the tables. Most of the tables have autonumber primary keys. Here's my general approach:
For Each TblName In LinkedTableNames
    'Create linked table "temp_From" that links to the existing mdb
    'Create linked table "temp_To" that links to the new SQL Server table
    ExecutePassThru "SET IDENTITY_INSERT " & TblName & " ON"
    db.Execute "INSERT INTO temp_To SELECT * FROM temp_From", dbFailOnError
    ExecutePassThru "SET IDENTITY_INSERT " & TblName & " OFF"
Next TblName
The first insert happens immediately. Subsequent insert attempts fail with the error: "Cannot insert explicit value for identity column in table 'TblName' when IDENTITY_INSERT is set to OFF."
I added a Resume statement for that specific error and also a timer. It turns out that the error continues for exactly 600 seconds (ten minutes) and then the insert proceeds successfully.
Does MS Access automatically refresh its ODBC sessions every 10 minutes? Is there a way to force that to happen faster? Am I missing something obvious?
Background info for those who will immediately want to say "Use the Upsizing Wizard":
I'm not using the built-in upsizing wizard because I need to be able to script the whole operation from start to finish. The goal is to get this running in a test environment before executing the switch at the client location.
I found an answer to my first question. The ten minutes is a setting buried in the registry under the Jet engine key:
'Jet WinXP/ Win7 32-bit:'
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\ODBC\ConnectionTimeout
'Jet Win7 64-bit:'
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Jet\4.0\Engines\ODBC\ConnectionTimeout
'ACE WinXP/ Win7 32-bit:'
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Access Connectivity Engine\Engines\ODBC\ConnectionTimeout
'ACE Win7 64-bit:'
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Access Connectivity Engine\Engines\ODBC\ConnectionTimeout
It is documented here for ACE:
ConnectionTimeout: The number of seconds a cached connection can remain idle before timing out. The default is 600 (values are of type REG_DWORD).
This key was set to the default of 600. That's 600 seconds or 10 minutes. I reduced that to ten seconds and the code sped up accordingly.
This is by no means the full solution, because setting the default that low is sure to cause issues elsewhere. In fact, Tony Toews once recommended that the default might better be increased when using DSN-less connections.
I'm still hoping to find an answer to the second part of my question, namely, is there a way to force the refresh to happen faster.
UPDATE: The reason this is even necessary is that the linked tables use a different session than ADO pass-through queries. I ran a test using SQL Profiler. Here are some brief results:
TextData SPID
-------------------------------------------
SET IDENTITY_INSERT dbo.TblName ON 50
SET IDENTITY_INSERT "dbo"."TblName" ON 49
exec sp_executesql N'INSERT INTO "d... 49
SET IDENTITY_INSERT dbo.TblName OFF 50
SET IDENTITY_INSERT dbo.NextTbl ON 50
SET IDENTITY_INSERT "dbo"."NextTbl" ON 49
exec sp_executesql N'INSERT INTO "d... 49
What's going on here is that my ADO commands are running in a different session (#49) than my linked tables (#50). Access sees that I'm setting the value for an identity column so it helpfully sets IDENTITY_INSERT ON for that table. However, it never sets IDENTITY_INSERT OFF. I turn it off manually, but that's happening in a different session.
This explains why setting the ODBC session timeout low works. It's just an ugly workaround for the fact that Access never turns off IDENTITY_INSERT on a table once it turns it on. Since IDENTITY_INSERT is session-specific, creating a new session is like hitting the reset button on IDENTITY_INSERT. Access can then turn it on for the next table, and the setting will take effect because it's a brand new session.
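If you can push the whole operation through a single pass-through batch, instead of mixing linked-table inserts with separate pass-through commands, the session mismatch goes away, because the SET, the INSERT and the reset all run in one session. A minimal sketch, with illustrative table and column names:

SET IDENTITY_INSERT dbo.TblName ON;

INSERT INTO dbo.TblName (ID, SomeColumn)
SELECT ID, SomeColumn
FROM dbo.SourceTable;

SET IDENTITY_INSERT dbo.TblName OFF;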
Two thoughts, though not sure either will be useful because this is unfamiliar territory for me.
"Does MS Access automatically refresh its ODBC sessions every 10 minutes? Is there a way to force that to happen faster? Am I missing something obvious?"
In the Access 2003 Options dialog, on the Advanced tab, there is a setting for "ODBC refresh interval" and also settings for retries. Does adjusting those help ... or have any effect at all?
I wonder if you could avoid this problem by creating the SQL Server columns as plain numbers rather than autonumber, INSERT your data, then ALTER TABLE ... ALTER COLUMN to change them after the data has been inserted.
Access won't let me convert a numeric column to an autonumber if the table contains data, but ISTR SQL Server is more flexible on that score.
I found a convenient, though not so beautiful, solution to export many Access tables to SQL Server and avoid the IDENTITY_INSERT problem:
I open a local table-recordset which lists all tables to be exported and I loop through the records (each table). In each loop I...
create an access application object
use the transfer database method on application object
terminate / quit the application object and loop again
Here is the sample code:
Public Sub exporttables()
    Dim rst As DAO.Recordset
    Dim access_object As Object

    'First create a local access table which lists all tables to be exported
    Set rst = CurrentDb.OpenRecordset("Select txt_tbl from ####your_table_of_tables####")
    With rst
        While Not .EOF
            'Generate a new Access instance to avoid the IDENTITY_INSERT problem
            Set access_object = CreateObject("Access.Application")
            'With the new instance, open the database which holds the tables to be exported
            access_object.OpenCurrentDatabase "####C:\yoursourceaccessdb####.accdb"
            access_object.DoCmd.TransferDatabase acExport, "ODBC Database", "ODBC;DSN=####your connection string to target SQL DB;", acTable, .Fields("txt_tbl"), .Fields("txt_tbl"), False, False
            Debug.Print .Fields("txt_tbl") & " exported"
            access_object.CloseCurrentDatabase
            access_object.Application.Quit
            Set access_object = Nothing
            .MoveNext
        Wend
    End With
    Set rst = Nothing
End Sub
I'm 100% sure that this question is a duplicate but I searched for a few hours and I didn't find anything.
My environment: Windows Server 2003, SQL Server 2005, .NET 2.0 (C#)
My problem:
When I run 5 requests at the same time, one of my stored procs raises a time-out.
If, while the 5 requests are waiting, I try to call this stored proc with the same arguments in Management Studio, I get my results in 0 seconds :)
I tried to see if I had too many connections open, but I can't see anything in Activity Monitor (I can see 14 items with "awaiting command").
So what is my problem? I think some configuration setting is missing; if so, can you explain how I should choose the value for it?
Thanks
You can also try altering the isolation level of the select statement in the SP using a table hint.
For instance:
SELECT col1, col2, col3 FROM Table1 WITH (READUNCOMMITTED)
There are several other isolation levels but READ UNCOMMITTED is the lowest and will read from a table that is exclusively locked. The downside is you can get dirty reads.
If the issue is with locking, this might help.
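If you'd rather not add a hint to every table, the same effect can be set at the session level; a minimal sketch using the columns from the example above:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT col1, col2, col3 FROM Table1;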
I have a LINQ to SQL query that generates the following SQL :
exec sp_executesql N'SELECT COUNT(*) AS [value]
FROM [dbo].[SessionVisit] AS [t0]
WHERE ([t0].[VisitedStore] = @p0) AND (NOT ([t0].[Bot] = 1)) AND
([t0].[SessionDate] > @p1)',N'@p0 int,@p1 datetime',
@p0=1,@p1='2010-02-15 01:24:00'
(This is the actual SQL taken from SQL Profiler on SQL Server 2008.)
The query plan generated when I run this SQL from within Query Analyser is perfect.
It uses an index containing VisitedStore, Bot, SessionDate.
The query returns instantly.
However when I run this from C# (with LINQ) a different query plan is used that is so inefficient it doesn't even return in 60 seconds. This query plan is trying to do a key lookup on the clustered primary key which contains a couple million rows. It has no chance of returning.
What I just can't understand, though, is that the EXACT same SQL is being run - whether from LINQ or from Query Analyser - yet the query plan is different.
I've run the two queries many, many times, and they're now running in isolation from any other queries. The date is DateTime.Now.AddDays(-7), but I've even hardcoded that date to eliminate caching problems.
Is there anything I can change in LINQ to SQL to affect the query plan, or any way to debug this further? I'm very, very confused!
This is a relatively common problem that surprised me too when I first saw it. The first thing to do is ensure your statistics are up to date. You can check the age of statistics with:
SELECT
object_name = Object_Name(ind.object_id),
IndexName = ind.name,
StatisticsDate = STATS_DATE(ind.object_id, ind.index_id)
FROM SYS.INDEXES ind
order by STATS_DATE(ind.object_id, ind.index_id) desc
Statistics should be updated in a weekly maintenance plan. For a quick fix, issue the following command to update all statistics in your database:
exec sp_updatestats
Apart from the statistics, another thing you can check is the SET options. They can be different between Query Analyzer and your Linq2Sql application.
Another possibility is that SQL Server is using an old cached plan for your Linq2Sql query. Plans can be cached on a per-user basis, so if you run Query Analyser as a different user, that can explain different plans. Normally you could add Option (RECOMPILE) to the application query, but I guess that's hard with Linq2Sql. You can clear the entire cache with DBCC FREEPROCCACHE and see if that speeds up the Linq2Sql query.
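One way to test the recompile idea without touching the Linq2Sql code is to replay the captured statement in SSMS with a forced recompile and see whether a freshly compiled plan is fast; the values below come from the sp_executesql call in the question:

DECLARE @p0 int = 1;
DECLARE @p1 datetime = '2010-02-15T01:24:00';

SELECT COUNT(*) AS [value]
FROM [dbo].[SessionVisit] AS [t0]
WHERE ([t0].[VisitedStore] = @p0)
  AND (NOT ([t0].[Bot] = 1))
  AND ([t0].[SessionDate] > @p1)
OPTION (RECOMPILE);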
I switched to a stored procedure and the same SQL works fine. I would really like to know what's going on, but I can't spend any more time on this right now. Fortunately, in this instance the query was not too dynamic.
Hopefully this at least helps anyone in the same boat as me.