Stored procedure not working in VB6

I have a stored procedure that is called very often as it is used to retrieve an account statement. The actual stored procedure takes around 10ms in a query window in MSSMS, and works generally well, but SOMETIMES decides to time out (timeout set to 120 sec) in my VB6 application. The SP joins tables in between 2 databases, one containing the current transactions (DB #1) and the other containing archived transactions (DB #2). Using 'sp_who2', no SPID seems to be hogging or blocking the system.
This is the SQL variable I set:
DECLARE @rtnRecs int;
strSQL = "EXEC spA_StatementData
#sAccountNr = '123abc',
#bIncludeHistory = 1,
#bShowAllTransactions = 1,
#iValidRecords = #rtnRecs OUTPUT"
The method I use in VB6 is:
rs.Open strSQL, con, adOpenStatic
where rs is the ADODB.Recordset and con is a connection to the database.
This code works well for a long while, say 2 months, and is used by several operators. It then suddenly, for no apparent reason, stops working - but still works fine in MSSMS.
I am emphasizing VB6 as that's where the problem first appeared, but the same thing is happening in my VB.net code.
One thing of note is that the '@bIncludeHistory' parameter is the condition that sets the JOIN to the archive database (DB #2). When '@bIncludeHistory' is set to 0, no timeout occurs.
Resetting the service does the trick, but only as a last resort.
Is there anything else I can try?
Thanks

Beware of parameter sniffing in your stored proc. Try this:
CREATE PROC spA_StatementData (
@sAccountNr VARCHAR(1000)
, @bIncludeHistory BIT
, ...
) AS
SET NOCOUNT ON
DECLARE @_sAccountNr VARCHAR(1000)
, @_bIncludeHistory BIT
, ...
-- prevent parameter sniffing
SELECT @_sAccountNr = @sAccountNr
, @_bIncludeHistory = @bIncludeHistory
, ...
-- use the local @_sAccountNr, @_bIncludeHistory, etc. instead of the parameter variables

The same problem happened to me; I had missed the following code in the stored procedure:
SET NOCOUNT ON
Make sure your SP includes this line. Hope this helps.

Related

Passing a table recordset to SQL Server

Good afternoon experts!
I have encountered a very different type of problem than usual. In the past I have passed a single line to the server via a pass-through query, and at times when I need to pass more than a single record, I use a loop to send the data to the server multiple times. However, if I have over 40 records, that loop takes a considerable amount of time to complete. I am just wondering if there is a way to send a table set to the server in one move instead of X number of moves using a loop.
This is the code I am using on the Access side, attached to a button within the form (the record source is a local Access table):
Dim db As dao.Database
Dim rs As dao.Recordset
Dim qdf As dao.QueryDef
Dim strSQL As String
Set db = CurrentDb
Set rs = Me.Recordset
rs.MoveLast
rs.MoveFirst
Do While Not rs.EOF
Set qdf = db.QueryDefs("Qry_Send_ClientData") 'local pass through query
strSQL = "EXEC dbo.SP_Client_Referral #JunctionID='" & Me.ClientID & "', #Note='" & Replace(Me.Txt_Note, "'", "") & "', #Value1='" & Txt_Value1 & "', #Value2='" & Txt_Value2 & "'"
qdf.SQL = strSQL
db.Execute "Qry_Send_ClientData"
rs.MoveNext
Loop
Msgbox "All Client Added!", , "Add client"
Now on the SQL Server side I have the following stored procedure (dbo.SP_Client_Referral) that receives the data from the pass-through query and inserts the record into a specific table:
CREATE PROCEDURE dbo.SP_Client_Referral
@ClientID AS NVARCHAR(15),
@Note AS NVARCHAR(500),
@Value1 AS NVARCHAR(50),
@Value2 AS NVARCHAR(50)
AS
BEGIN
SET NOCOUNT ON;
BEGIN
INSERT INTO dbo.Client_Data(ClientID, Note, Value_1, Value_2)
SELECT #ClientID, #Note, #Value1, #Value2
END
END
For a single record, or even up to maybe 10 records, this method is relatively fast. However, as the number of records increases, the amount of time required can be quite long. If there is a way to pass a table (i.e. from the Access side using SELECT * FROM LocalTable) to SQL Server, as opposed to line by line, it would definitely save quite a lot of time. I am just wondering if this method exists, and if so, how I would send a table and what I must use on the SQL Server side in the SP to receive a table record. Alternatively, I may have to continue using this single-line method and possibly make it more efficient so that it executes faster.
Many thanks in advance for your assistance!
Actually, the fastest approach?
Well, it is one that seems VERY counter-intuitive, and I could give a VERY long explanation as to why. However, try this; you will find it runs 10 times or better than what you have now. In fact, it may well be closer to 100x.
We shall assume that we have a standard linked table to dbo.Client_Data. Likely the link is named Client_Data, or even dbo_Client_Data.
So, use this:
Dim rs As DAO.Recordset
Dim rsIns As DAO.Recordset
If Me.Dirty = True Then Me.Dirty = False ' write any pending data
Set rsIns = CurrentDb.OpenRecordset("dbo_Client_Data", dbOpenDynaset, dbSeeChanges)
Set rs = Me.RecordsetClone
rs.MoveFirst
Do While Not rs.EOF
With rsIns
.AddNew
!ClientID = rs!ClientID
!Note = Me.Txt_Note
!Value_1 = Me.Txt_Value1
!Value_2 = Me.Txt_Value2
.Update
End With
rs.MoveNext
Loop
rsIns.Close
MsgBox "All Client Added!", , "Add client"
Note a number of bonuses in the above. Our code is clean - we do NOT have to worry about data types such as dates, or your messy quotes issue. If dates were involved, we again could just assign without having to worry about delimiters. We also get the bonus of injection protection to boot!
We also used Me.RecordsetClone. This is not a must-do. It will help performance, but MOST significant is that when you move the record pointer, the form's record position does not attempt to follow along. This will get rid of a lot of potential flicker. It can also eliminate HUGE issues if an On Current event exists on that form.
So, while a VERY good idea, RecordsetClone is not the main reason for the huge performance increase you will see here. RecordsetClone is the same as Me.Recordset, but you can "move" and traverse the recordset without the main form following.
So, really, the most "basic" code approach - one that would work in Access without SQL Server - turns out to be the best approach. It is less code, less messy code, and it saves you all the trouble of setting up and building a SQL Server stored procedure. All your concepts were not necessary; worse, they cause a performance penalty. Try the above concept.
Access will bulk up and manage the multiple inserts in one go. The idea that SQL update/insert commands always beat recordsets is a REALLY HUGE urban myth that many Access developers fall for. It is not true. The REAL win is when you can replace a VBA loop of many separately executed updates with ONE single SQL update statement; then yes, you are miles ahead using one SQL update over a VBA loop.
However, if you have to do multiple operations and each operation is on a single row? In place of many separate SQL updates - which is the vast majority of cases - a recordset will run circles around a whole bunch of separate update/insert commands that achieve the same goal. It's not even close; you get 10x if not 100x better performance by using the above concepts.
You could try passing XML data to the stored procedure.
DECLARE @rowData XML
SELECT @rowData = '<data><record clientid="01" Notes="somenotes" Value1="val1" Value2="val2" /><record clientid="02" Notes="somenotes 2" Value1="val1-2" Value2="val2-2" /></data>'
SELECT X.custom.value('@clientid', 'varchar(max)'),
X.custom.value('@Notes', 'varchar(max)'),
X.custom.value('@Value1', 'varchar(max)'),
X.custom.value('@Value2', 'varchar(max)')
FROM @rowData.nodes('/data/record') X(custom)
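For completeness, here is a minimal sketch of what the receiving procedure might look like, assuming the dbo.Client_Data table from the question; the procedure name and parameter sizes here are illustrative, not from the original post:
CREATE PROCEDURE dbo.SP_Client_Referral_Batch -- hypothetical name
@rowData XML
AS
BEGIN
SET NOCOUNT ON;
-- Shred the XML once and insert the whole batch in a single statement
INSERT INTO dbo.Client_Data (ClientID, Note, Value_1, Value_2)
SELECT X.custom.value('@clientid', 'NVARCHAR(15)'),
X.custom.value('@Notes', 'NVARCHAR(500)'),
X.custom.value('@Value1', 'NVARCHAR(50)'),
X.custom.value('@Value2', 'NVARCHAR(50)')
FROM @rowData.nodes('/data/record') AS X(custom);
END
One EXEC carrying the XML string then replaces the whole per-record loop on the Access side.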

Why might a stored procedure take longer to execute from a VB.net application than from SSMS? [duplicate]

Here is the SQL
SELECT tal.TrustAccountValue
FROM TrustAccountLog AS tal
INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID
INNER JOIN Users usr ON usr.UserID = ta.UserID
WHERE usr.UserID = 70402 AND
ta.TrustAccountID = 117249 AND
tal.trustaccountlogid =
(
SELECT MAX (tal.trustaccountlogid)
FROM TrustAccountLog AS tal
INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID
INNER JOIN Users usr ON usr.UserID = ta.UserID
WHERE usr.UserID = 70402 AND
ta.TrustAccountID = 117249 AND
tal.TrustAccountLogDate < '3/1/2010 12:00:00 AM'
)
Basically, there is a Users table, a TrustAccount table and a TrustAccountLog table.
Users: Contains users and their details
TrustAccount: A User can have multiple TrustAccounts.
TrustAccountLog: Contains an audit of all TrustAccount "movements". A TrustAccount is associated with multiple TrustAccountLog entries.
Now this query executes in milliseconds inside SQL Server Management Studio, but for some strange reason it takes forever in my C# app and sometimes even times out (120 s).
Here is the code in a nutshell. It gets called multiple times in a loop and the statement gets prepared.
cmd.CommandTimeout = Configuration.DBTimeout;
cmd.CommandText = @"SELECT tal.TrustAccountValue FROM TrustAccountLog AS tal
INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID
INNER JOIN Users usr ON usr.UserID = ta.UserID
WHERE usr.UserID = @UserID1 AND
ta.TrustAccountID = @TrustAccountID1 AND
tal.trustaccountlogid =
(
SELECT MAX (tal.trustaccountlogid) FROM TrustAccountLog AS tal
INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID
INNER JOIN Users usr ON usr.UserID = ta.UserID
WHERE usr.UserID = @UserID2 AND
ta.TrustAccountID = @TrustAccountID2 AND
tal.TrustAccountLogDate < @TrustAccountLogDate2
)";
cmd.Parameters.Add("@TrustAccountID1", SqlDbType.Int).Value = trustAccountId;
cmd.Parameters.Add("@UserID1", SqlDbType.Int).Value = userId;
cmd.Parameters.Add("@TrustAccountID2", SqlDbType.Int).Value = trustAccountId;
cmd.Parameters.Add("@UserID2", SqlDbType.Int).Value = userId;
cmd.Parameters.Add("@TrustAccountLogDate2", SqlDbType.DateTime).Value = TrustAccountLogDate;
// And then...
reader = cmd.ExecuteReader();
if (reader.Read())
{
double value = (double)reader.GetValue(0);
if (System.Double.IsNaN(value))
return 0;
else
return value;
}
else
return 0;
In my experience, the usual reason a query runs fast in SSMS but slow from .NET is differences in the connection's SET options. When a connection is opened by either SSMS or SqlConnection, a set of SET commands is automatically issued to configure the execution environment. Unfortunately, SSMS and SqlConnection have different SET defaults.
One common difference is SET ARITHABORT. Try issuing SET ARITHABORT ON as the first command from your .NET code.
SQL Profiler can be used to monitor which SET commands are issued by both SSMS and .NET so you can find other differences.
The following code demonstrates how to issue a SET command but note that this code has not been tested.
using (SqlConnection conn = new SqlConnection("<CONNECTION_STRING>")) {
conn.Open();
using (SqlCommand comm = new SqlCommand("SET ARITHABORT ON", conn)) {
comm.ExecuteNonQuery();
}
// Do your own stuff here but you must use the same connection object
// The SET command applies to the connection. Any other connections will not
// be affected, nor will any new connections opened. If you want this applied
// to every connection, you must do it every time one is opened.
}
If this is parameter sniffing, try adding OPTION (RECOMPILE) to the end of your query.
I would recommend creating a stored procedure to encapsulate the logic in a more manageable way. Also agreed - why do you pass five parameters if you only need three, judging by the example?
Can you use this query instead?
select TrustAccountValue from
(
SELECT MAX(tal.trustaccountlogid) AS MaxLogId, tal.TrustAccountValue
FROM TrustAccountLog AS tal
INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID
INNER JOIN Users usr ON usr.UserID = ta.UserID
WHERE usr.UserID = 70402 AND
ta.TrustAccountID = 117249 AND
tal.TrustAccountLogDate < '3/1/2010 12:00:00 AM'
group by tal.TrustAccountValue
) q
And, for what it's worth, you are using an ambiguous date format that depends on the language settings of the user executing the query. For example, for me this is the 3rd of January, not the 1st of March. Check this out:
set language us_english
go
select @@language --us_english
select convert(datetime, '3/1/2010 12:00:00 AM')
go
set language british
go
select @@language --british
select convert(datetime, '3/1/2010 12:00:00 AM')
The recommended approach is to use 'ISO' format yyyymmdd hh:mm:ss
select convert(datetime, '20100301 00:00:00') --midnight 00, noon 12
I had the same issue in a test environment, although the live system (on the same SQL Server) was running fine. Adding OPTION (RECOMPILE) and also OPTION (OPTIMIZE FOR (@p1 UNKNOWN)) did not help.
I used SQL Profiler to catch the exact query that the .NET client was sending, and found that it was wrapped with exec sp_executesql N'select ... and that the parameters had been declared as nvarchar - even though the columns being compared are plain varchar.
Putting the captured query text into SSMS confirmed it runs just as slowly as it does from the .NET client.
I found that changing the type of the parameters to DbType.AnsiString cleared up the problem:
p = cm.CreateParameter();
p.ParameterName = "@company";
p.Value = company;
p.DbType = DbType.AnsiString;
cm.Parameters.Add(p);
I could never explain why the test and live environments had such marked difference in performance.
Hope your specific issue is resolved by now since it is an old post.
The following SET options have the potential to affect plan reuse (complete list at the end):
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
SET ARITHABORT ON
GO
The following two statements are from MSDN - SET ARITHABORT:
Setting ARITHABORT to OFF can negatively impact query optimization leading to performance issues.
The default ARITHABORT setting for SQL Server Management Studio is ON. Client applications setting ARITHABORT to OFF can receive different query plans making it difficult to troubleshoot poorly performing queries. That is, the same query can execute fast in management studio but slow in the application.
Another interesting topic to understand is Parameter Sniffing as outlined in Slow in the Application, Fast in SSMS? Understanding Performance Mysteries - by Erland Sommarskog
Still another possibility is with conversion (internally) of VARCHAR columns into NVARCHAR while using Unicode input parameter as outlined in Troubleshooting SQL index performance on varchar columns - by Jimmy Bogard
OPTIMIZE FOR UNKNOWN
In SQL Server 2008 and above, consider OPTIMIZE FOR UNKNOWN . UNKNOWN: Specifies that the query optimizer use statistical data instead of the initial value to determine the value for a local variable during query optimization.
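A minimal sketch of the hint, using a simplified version of this question's query for illustration:
SELECT tal.TrustAccountValue
FROM TrustAccountLog AS tal
WHERE tal.TrustAccountID = @TrustAccountID
AND tal.TrustAccountLogDate < @TrustAccountLogDate
OPTION (OPTIMIZE FOR (@TrustAccountLogDate UNKNOWN)); -- plan is built from column statistics, not the first sniffed value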
OPTION (RECOMPILE)
Use "OPTION (RECOMPILE)" instead of "WITH RECOMPILE" if recompiliing is the only solution. It helps in Parameter Embedding Optimization. Read Parameter Sniffing, Embedding, and the RECOMPILE Options - by Paul White
SET Options
The following SET options can affect plan reuse, based on MSDN - Plan Caching in SQL Server 2008:
1. ANSI_NULL_DFLT_OFF
2. ANSI_NULL_DFLT_ON
3. ANSI_NULLS
4. ANSI_PADDING
5. ANSI_WARNINGS
6. ARITHABORT
7. CONCAT_NULL_YIELDS_NULL
8. DATEFIRST
9. DATEFORMAT
10. FORCEPLAN
11. LANGUAGE
12. NO_BROWSETABLE
13. NUMERIC_ROUNDABORT
14. QUOTED_IDENTIFIER
Most likely the problem lies in the criterion
tal.TrustAccountLogDate < @TrustAccountLogDate2
The optimal execution plan will be highly dependent on the value of the parameter: passing 1910-01-01 (which returns no rows) will almost certainly produce a different plan than 2100-12-31 (which returns all rows).
When the value is specified as a literal in the query, SQL Server knows which value to use during plan generation. When a parameter is used, SQL Server generates the plan only once and then reuses it; if the value in a subsequent execution differs too much from the original one, the plan will not be optimal.
To remedy the situation, you can specify OPTION(RECOMPILE) in the query. Adding the query to a stored procedure won't help you with this particular issue, unless
you create the procedure WITH RECOMPILE.
Others have already mentioned this ("parameter sniffing"), but I thought a simple explanation of the concept wouldn't hurt.
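A minimal sketch of that remedy, applied to a simplified version of the query (parameter names as in the question):
SELECT tal.TrustAccountValue
FROM TrustAccountLog AS tal
WHERE tal.TrustAccountID = @TrustAccountID2
AND tal.TrustAccountLogDate < @TrustAccountLogDate2
OPTION (RECOMPILE); -- a fresh plan is compiled for the actual parameter values on every execution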
It might be type conversion issues. Are all the IDs really SqlDbType.Int on the data tier?
Also, why have four parameters where two will do?
cmd.Parameters.Add("@TrustAccountID1", SqlDbType.Int).Value = trustAccountId;
cmd.Parameters.Add("@UserID1", SqlDbType.Int).Value = userId;
cmd.Parameters.Add("@TrustAccountID2", SqlDbType.Int).Value = trustAccountId;
cmd.Parameters.Add("@UserID2", SqlDbType.Int).Value = userId;
Could be
cmd.Parameters.Add("@TrustAccountID", SqlDbType.Int).Value = trustAccountId;
cmd.Parameters.Add("@UserID", SqlDbType.Int).Value = userId;
since they are both assigned the same variable.
(This might be causing the server to make a different plan, since it expects four different variables as opposed to four constants - making it two variables could make a difference for the optimization.)
Sounds possibly related to parameter sniffing? Have you tried capturing exactly what the client code sends to SQL Server (use Profiler to catch the exact statement), then running that in Management Studio?
Parameter sniffing: SQL poor stored procedure execution plan performance - parameter sniffing
I haven't seen this in code before, only in procedures, but it's worth a look.
In my case the problem was that Entity Framework was generating queries that use exec sp_executesql.
When the parameters don't exactly match in type, the execution plan does not use indexes, because SQL Server decides to put the conversion into the query itself. As you can imagine, this results in much slower performance.
In my case the column was defined as CHAR(3) and Entity Framework was passing N'str' in the query, which caused a conversion from nchar to char. So for a query that looks like this:
ctx.Events.Where(e => e.Status == "Snt")
It was generating an SQL query that looks something like this:
FROM [ExtEvents] AS [Extent1] ...
WHERE (N'Snt' = [Extent1].[Status]) ...
The easiest solution in my case was to change the column type, alternatively you can wrestle with your code to make it pass the right type in the first place.
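A small hypothetical repro of the effect (this table and index are illustrative, not from the original post):
-- A CHAR(3) column compared against an NVARCHAR literal
CREATE TABLE dbo.EventsDemo (ID INT IDENTITY PRIMARY KEY, Status CHAR(3) NOT NULL);
CREATE INDEX IX_EventsDemo_Status ON dbo.EventsDemo (Status);
SELECT ID FROM dbo.EventsDemo WHERE Status = N'Snt'; -- column is implicitly converted to NVARCHAR, which typically blocks an index seek
SELECT ID FROM dbo.EventsDemo WHERE Status = 'Snt'; -- types match, so a plain index seek is possible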
Since you appear to only ever be returning the value from one column of one row, you can use ExecuteScalar() on the command object instead, which should be more efficient:
object value = cmd.ExecuteScalar();
// Note: ExecuteScalar returns DBNull.Value (not null) for a SQL NULL
if (value == null || value == DBNull.Value)
return 0;
else
return (double)value;
I had this problem today and this solved it for me:
https://www.mssqltips.com/sqlservertip/4318/sql-server-stored-procedure-runs-fast-in-ssms-and-slow-in-application/
I put SET ARITHABORT ON at the beginning of my SP.
Hope this helps you!
You don't seem to be closing your data reader - this might start to add up over a number of iterations...
I had a problem with a different root cause that exactly matched the title of this question's symptoms.
In my case the problem was that the result set was held open by the application's .NET code while it looped through every returned record and executed another three queries against the database! Over several thousand rows this misleadingly made the original query look like it had been slow to complete based on timing information from SQL Server.
The fix was therefore to refactor the .NET code making the calls so that it doesn't hold the result set open while processing each row.
I realise the OP doesn't mention the use of stored procedures but there is an alternative solution to parameter sniffing issues when using stored procedures that is less elegant but has worked for me when OPTION(RECOMPILE) doesn't appear to do anything.
Simply copy your parameters to variables declared in the procedure and use those instead.
Example:
ALTER PROCEDURE [ExampleProcedure]
#StartDate DATETIME,
#EndDate DATETIME
AS
BEGIN
--reassign to local variables to avoid parameter sniffing issues
DECLARE #MyStartDate datetime,
#MyEndDate datetime
SELECT
#MyStartDate = #StartDate,
#MyEndDate = #EndDate
--Rest of procedure goes here but refer to #MyStartDate and #MyEndDate
END
I have just had this exact issue. A select running against a view returned a sub-second response in SSMS, but run through sp_executesql it took 5 to 20 seconds. Why? Because when I looked at the query plan when run through sp_executesql, it did not use the correct indexes; it was also doing index scans instead of seeks. The solution for me was simply to create a simple SP that executed the query with the passed parameter. When run through sp_executesql, it then used the correct indexes and did seeks, not scans. If you want to improve it even further, use command.CommandType = CommandType.StoredProcedure when you have an SP; then it does not use sp_executesql, just EXEC, but this only shaved milliseconds off the result.
This code ran sub-second on a DB with millions of records:
public DataTable FindSeriesFiles(string StudyUID)
{
DataTable dt = new DataTable();
using (SqlConnection connection = new SqlConnection(connectionString))
{
connection.Open();
using (var command = new SqlCommand("VNA.CFIND_SERIES", connection))
{
command.CommandType = CommandType.StoredProcedure;
command.Parameters.AddWithValue("@StudyUID", StudyUID);
using (SqlDataReader reader = command.ExecuteReader())
{
dt.Load(reader);
}
return dt;
}
}
}
Where the stored procedure simply contained
CREATE PROCEDURE [VNA].[CFIND_SERIES]
@StudyUID NVARCHAR(MAX)
AS BEGIN
SET NOCOUNT ON
SELECT *
FROM CFIND_SERIES_VIEW WITH (NOLOCK)
WHERE [StudyInstanceUID] = @StudyUID
ORDER BY SeriesNumber
END
This took 5 to 20 seconds (even though the select is exactly the same as the contents of the VNA.CFIND_SERIES stored procedure):
public DataTable FindSeriesFiles(string StudyUID)
{
DataTable dt = new DataTable();
using (SqlConnection connection = new SqlConnection(connectionString))
{
connection.Open();
using (var command = connection.CreateCommand())
{
command.CommandText = "SELECT * FROM CFIND_SERIES_VIEW WITH (NOLOCK) WHERE [StudyInstanceUID] = @StudyUID ORDER BY SeriesNumber";
command.Parameters.AddWithValue("@StudyUID", StudyUID);
using (SqlDataReader reader = command.ExecuteReader())
{
dt.Load(reader);
}
return dt;
}
}
}
I suggest you try creating a stored procedure, which can be compiled and cached by SQL Server and thus improve performance.

Issue with parameters in SQL Server stored procedures

I remember reading a while back that SQL Server can randomly slow down and/or take a stupidly long time to execute a stored procedure when it is written like this:
CREATE PROCEDURE spMyExampleProc
(
@myParameter INT
)
AS
BEGIN
SELECT something FROM myTable WHERE myColumn = @myParameter
END
The way to fix this is to do the following:
CREATE PROCEDURE spMyExampleProc
(
@myParameter INT
)
AS
BEGIN
DECLARE @newParameter INT
SET @newParameter = @myParameter
SELECT something FROM myTable WHERE myColumn = @newParameter
END
Now my question is: firstly, is it bad practice to follow the second example for all my stored procedures? This seems like a bug that could easily be prevented with little work, but would there be any drawbacks to doing this, and if so, why?
When I read about this, the problem was that the same proc would take varying times to execute depending on the value in the parameter. If anyone can tell me what this problem is called and why it occurs, I would be really grateful; I can't seem to find the link to the post anywhere, and it seems like a problem that could occur at our company.
The problem is "parameter sniffing" (SO search).
The pattern with @newParameter is called "parameter masking" (also SO search).
You could always use this masking pattern, but it isn't always needed. For example, a simple select by unique key, with no child tables or other filters, should behave as expected every time.
Since SQL Server 2008, you can also use OPTIMIZE FOR UNKNOWN (SO). Also see Alternative to using local variables in a where clause and Experience with when to use OPTIMIZE FOR UNKNOWN.

Force SET IDENTITY_INSERT to take effect faster from MS Access

I'm working on upsizing a suite of MS Access backend databases to SQL Server. I've scripted the SQL to create the table schemas in SQL Server. Now I am trying to populate the tables. Most of the tables have autonumber primary keys. Here's my general approach:
For each TblName in LinkedTableNames
'Create linked table "temp_From" that links to the existing mdb'
'Create linked table "temp_To" that links to the new SQL server table
ExecutePassThru "SET IDENTITY_INSERT " & TblName & " ON"
db.Execute "INSERT INTO temp_To SELECT * FROM temp_From", dbFailOnError
ExecutePassThru "SET IDENTITY_INSERT " & TblName & " OFF"
Next TblName
The first insert happens immediately. Subsequent insert attempts fail with the error: "Cannot insert explicit value for identity column in table 'TblName' when IDENTITY_INSERT is set to OFF."
I added a Resume statement for that specific error and also a timer. It turns out that the error continues for exactly 600 seconds (ten minutes) and then the insert proceeds successfully.
Does MS Access automatically refresh its ODBC sessions every 10 minutes? Is there a way to force that to happen faster? Am I missing something obvious?
Background info for those who will immediately want to say "Use the Upsizing Wizard":
I'm not using the built-in upsizing wizard because I need to be able to script the whole operation from start to finish. The goal is to get this running in a test environment before executing the switch at the client location.
I found an answer to my first question. The ten minutes is a setting buried in the registry under the Jet engine key:
'Jet WinXP/ Win7 32-bit:'
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\ODBC\ConnectionTimeout
'Jet Win7 64-bit:'
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Jet\4.0\Engines\ODBC\ConnectionTimeout
'ACE WinXP/ Win7 32-bit:'
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Access Connectivity Engine\Engines\ODBC\ConnectionTimeout
'ACE Win7 64-bit:'
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Access Connectivity Engine\Engines\ODBC\ConnectionTimeout
It is documented here for ACE:
ConnectionTimeout: The number of seconds a cached connection can remain idle before timing out. The default is 600 (values are of type REG_DWORD).
This key was set to the default of 600. That's 600 seconds or 10 minutes. I reduced that to ten seconds and the code sped up accordingly.
This is by no means the full solution, because setting the default that low is sure to cause issues elsewhere. In fact, Tony Toews once recommended that the default might better be increased when using DSN-less connections.
I'm still hoping to find an answer to the second part of my question, namely, is there a way to force the refresh to happen faster.
UPDATE: The reason this is even necessary is that the linked tables use a different session than ADO pass-through queries. I ran a test using SQL Profiler. Here are some brief results:
TextData SPID
-------------------------------------------
SET IDENTITY_INSERT dbo.TblName ON 50
SET IDENTITY_INSERT "dbo"."TblName" ON 49
exec sp_executesql N'INSERT INTO "d... 49
SET IDENTITY_INSERT dbo.TblName OFF 50
SET IDENTITY_INSERT dbo.NextTbl ON 50
SET IDENTITY_INSERT "dbo"."NextTbl" ON 49
exec sp_executesql N'INSERT INTO "d... 49
What's going on here is that my ADO commands are running in a different session (#49) than my linked tables (#50). Access sees that I'm setting the value for an identity column so it helpfully sets IDENTITY_INSERT ON for that table. However, it never sets IDENTITY_INSERT OFF. I turn it off manually, but that's happening in a different session.
This explains why setting the ODBC session timeout low works. It's just an ugly workaround for the fact that Access never turns off IDENTITY_INSERT on a table once it turns it on. Since IDENTITY_INSERT is session-specific, creating a new session is like hitting the reset button on IDENTITY_INSERT. Access can then turn it on for the next table, and the setting will take effect because it's a brand new session.
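A sketch of that session scoping with a hypothetical table, which is exactly what goes wrong across SPIDs 49 and 50 in the trace above:
-- IDENTITY_INSERT must be turned OFF by the same session (SPID) that turned it ON,
-- and only one table per session can have it ON at a time.
SET IDENTITY_INSERT dbo.TblName ON;
INSERT INTO dbo.TblName (ID, SomeColumn) VALUES (42, 'explicit key');
SET IDENTITY_INSERT dbo.TblName OFF; -- has no effect if issued from a different session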
Two thoughts, though not sure either will be useful because this is unfamiliar territory for me.
"Does MS Access automatically refresh its ODBC sessions every 10 minutes? Is there a way to force that to happen faster? Am I missing something obvious?"
In the Access 2003 Options dialog, on the Advanced tab, there is a setting for "ODBC refresh interval" and also settings for retries. Does adjusting those help ... or have any effect at all?
I wonder if you could avoid this problem by creating the SQL Server columns as plain numbers rather than autonumber, INSERT your data, then ALTER TABLE ... ALTER COLUMN to change them after the data has been inserted.
Access won't let me convert a numeric column to an autonumber if the table contains data, but I seem to recall SQL Server is more flexible on that score.
I found a convenient, though not so beautiful, solution to export many Access tables to SQL Server and avoid the IDENTITY_INSERT problem:
I open a local table recordset which lists all the tables to be exported, and I loop through the records (one per table). In each loop I...
create an Access application object
use the TransferDatabase method on the application object
terminate/quit the application object and loop again
Here is the sample code:
Public Sub exporttables()
Dim rst As Recordset
Dim access_object As Object
'First create a local Access table which lists all tables to be exported
Set rst = CurrentDb.OpenRecordset("Select txt_tbl from ####your_table_of_tables####")
With rst
While Not .EOF
'Generate a new object to avoid the IDENTITY_INSERT problem
Set access_object = CreateObject("Access.Application")
'With the Access object, open the database which holds the tables to be exported
access_object.OpenCurrentDatabase "####C:\yoursourceaccessdb####.accdb"
access_object.DoCmd.TransferDatabase acExport, "ODBC Database", "ODBC;DSN=####your connection string to target SQL DB;", acTable, .Fields("txt_tbl"), .Fields("txt_tbl"), False, False
Debug.Print .Fields("txt_tbl") & " exported"
access_object.CloseCurrentDatabase
access_object.Application.Quit
Set access_object = Nothing
.MoveNext
Wend
End With
Set rst = Nothing
End Sub

SqlDataAdapter.Fill method slow

Why would a stored procedure that returns a table with 9 columns and 89 rows using this code take 60 seconds to execute (.NET 1.1), when it takes < 1 second to run in SQL Server Management Studio? It's being run on the local machine, so there's little/no network latency, and it's a fast dev machine.
Dim command As SqlCommand = New SqlCommand(procName, CreateConnection())
command.CommandType = CommandType.StoredProcedure
command.CommandTimeout = _commandTimeOut
Try
Dim adapter As new SqlDataAdapter(command)
Dim i as Integer
For i=0 to parameters.Length-1
command.Parameters.Add(parameters(i))
Next
adapter.Fill(tableToFill)
adapter.Dispose()
Finally
command.Dispose()
End Try
my parameter array is typed (for this SQL it's only a single parameter)
parameters(0) = New SqlParameter("@UserID", SqlDbType.BigInt, 0, ParameterDirection.Input, True, 19, 0, "", DataRowVersion.Current, userID)
The Stored procedure is only a select statement like so:
ALTER PROC [dbo].[web_GetMyStuffFool]
(#UserID BIGINT)
AS
SELECT Col1, Col2, Col3, Col3, Col3, Col3, Col3, Col3, Col3
FROM [Table]
First, make sure you are profiling the performance properly. For example, run the query twice from ADO.NET and see if the second time is much faster than the first time. This removes the overhead of waiting for the app to compile and the debugging infrastructure to ramp up.
Next, check the default settings in ADO.NET and SSMS. For example, if you run SET ARITHABORT OFF in SSMS, you might find that it now runs as slow as when using ADO.NET.
What I found once was that SET ARITHABORT OFF in SSMS caused the stored proc to be recompiled and/or different statistics to be used, and suddenly both SSMS and ADO.NET were reporting roughly the same execution time. Note that ARITHABORT is not itself the cause of the slowdown; it causes a recompilation, and you end up with two different plans due to parameter sniffing. Parameter sniffing is likely the actual problem needing to be solved.
To check this, look at the execution plans for each run, specifically via the sys.dm_exec_cached_plans DMV. They will probably be different.
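For instance, a query along these lines can pull up both cached plans for comparison (the LIKE filter is just an illustration using the proc name from the question):
SELECT cp.usecounts, cp.objtype, st.text, qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE st.text LIKE '%web_GetMyStuffFool%';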
Running 'sp_recompile' on a specific stored procedure will drop the associated execution plan from the cache, which then gives SQL Server a chance to create a possibly more appropriate plan at the next execution of the procedure.
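For example, using the proc name from the question for illustration:
EXEC sp_recompile N'dbo.web_GetMyStuffFool'; -- evicts the cached plan; the next execution compiles a new one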
Finally, you can try the "nuke it from orbit" approach of cleaning out the entire procedure cache and memory buffers using SSMS:
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
Doing so before you test your query prevents usage of cached execution plans and previous results cache.
Here is what I ended up doing:
I executed the following SQL statement to rebuild the indexes on all tables in the database:
EXEC <databasename>..sp_MSforeachtable @command1='DBCC DBREINDEX (''*'')', @replacechar='*'
-- Replace <databasename> with the name of your database
If I wanted to see the same behavior in SSMS, I ran the proc like this:
SET ARITHABORT OFF
EXEC [dbo].[web_GetMyStuffFool] @UserID=1
SET ARITHABORT ON
Another way to bypass this is to add this to your code:
MyConnection.Execute "SET ARITHABORT ON"
I ran into the same issue, but when I rebuilt the indexes on the SQL table it worked fine, so you might want to consider rebuilding the indexes on the SQL Server side.
Why not make it a DataReader instead of a DataAdapter? It looks like you have a single result set, and if you aren't going to be pushing changes back into the DB and don't need constraints applied in .NET code, you shouldn't use the adapter.
EDIT:
If you need it to be a DataTable, you can still pull the data from the DB via a DataReader and then use the DataReader to populate the DataTable in .NET code. That should still be faster than relying on the DataSet and DataAdapter.
I don't know "Why" it's so slow per se - but as Marcus is pointing out - comparing Mgmt Studio to filling a dataset is apples to oranges. Datasets contain a LOT of overhead. I hate them and NEVER use them if I can help it.
You may be having issues with mismatches of old versions of the SQL stack or some such (esp given you are obviously stuck in .NET 1.1 as well) The Framework is likely trying to do database equivilant of "Reflection" to infer schema etc etc etc
One thing to consider try with your unfortunate constraint is to access the database with a datareader and build your own dataset in code. You should be able to find samples easily via google.
