Force SET IDENTITY_INSERT to take effect faster from MS Access

I'm working on upsizing a suite of MS Access backend databases to SQL Server. I've scripted the SQL to create the table schemas in SQL Server. Now I am trying to populate the tables. Most of the tables have autonumber primary keys. Here's my general approach:
For Each TblName In LinkedTableNames
    'Create linked table "temp_From" that links to the existing mdb
    'Create linked table "temp_To" that links to the new SQL Server table
    ExecutePassThru "SET IDENTITY_INSERT " & TblName & " ON"
    db.Execute "INSERT INTO temp_To SELECT * FROM temp_From", dbFailOnError
    ExecutePassThru "SET IDENTITY_INSERT " & TblName & " OFF"
Next TblName
The first insert happens immediately. Subsequent insert attempts fail with the error: "Cannot insert explicit value for identity column in table 'TblName' when IDENTITY_INSERT is set to OFF."
I added a Resume statement for that specific error and also a timer. It turns out that the error continues for exactly 600 seconds (ten minutes) and then the insert proceeds successfully.
Does MS Access automatically refresh its ODBC sessions every 10 minutes? Is there a way to force that to happen faster? Am I missing something obvious?
Background info for those who will immediately want to say "Use the Upsizing Wizard":
I'm not using the built-in upsizing wizard because I need to be able to script the whole operation from start to finish. The goal is to get this running in a test environment before executing the switch at the client location.

I found an answer to my first question. The ten minutes is a setting buried in the registry under the Jet/ACE engine keys:
Jet (WinXP / Win7 32-bit):
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\ODBC\ConnectionTimeout
Jet (Win7 64-bit):
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Jet\4.0\Engines\ODBC\ConnectionTimeout
ACE (WinXP / Win7 32-bit):
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Access Connectivity Engine\Engines\ODBC\ConnectionTimeout
ACE (Win7 64-bit):
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Access Connectivity Engine\Engines\ODBC\ConnectionTimeout
It is documented here for ACE:
ConnectionTimeout: The number of seconds a cached connection can remain idle before timing out. The default is 600 (values are of type REG_DWORD).
This key was set to the default of 600. That's 600 seconds or 10 minutes. I reduced that to ten seconds and the code sped up accordingly.
This is by no means the full solution, because setting the default that low is sure to cause issues elsewhere. In fact, Tony Toews once recommended that the default might better be increased when using DSN-less connections.
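One way to contain that risk is to narrow the timeout only for the duration of the migration and restore the default afterward. Below is a minimal VBA sketch of that idea, assuming the 32-bit Jet key from above, an account that can write to HKLM, and a hypothetical MigrateAllTables routine wrapping the loop shown earlier (whether Jet re-reads the value without restarting Access is worth verifying):

Public Sub RunMigrationWithShortTimeout()
    'Sketch only: shrink Jet's cached-connection timeout for the migration
    'run, then put the 600-second default back.
    Const REG_VALUE As String = _
        "HKLM\SOFTWARE\Microsoft\Jet\4.0\Engines\ODBC\ConnectionTimeout"
    Dim sh As Object
    Set sh = CreateObject("WScript.Shell")
    sh.RegWrite REG_VALUE, 10, "REG_DWORD"    'drop idle ODBC sessions after 10 seconds
    On Error GoTo Restore
    MigrateAllTables                          'hypothetical: the For Each loop above
Restore:
    sh.RegWrite REG_VALUE, 600, "REG_DWORD"   'restore the documented default
    If Err.Number <> 0 Then Err.Raise Err.Number
End Sub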
I'm still hoping to find an answer to the second part of my question, namely, is there a way to force the refresh to happen faster.
UPDATE: The reason this is even necessary is that the linked tables use a different session than ADO pass-through queries. I ran a test using SQL Profiler. Here are some brief results:
TextData                                   SPID
-----------------------------------------  ----
SET IDENTITY_INSERT dbo.TblName ON           50
SET IDENTITY_INSERT "dbo"."TblName" ON       49
exec sp_executesql N'INSERT INTO "d...       49
SET IDENTITY_INSERT dbo.TblName OFF          50
SET IDENTITY_INSERT dbo.NextTbl ON           50
SET IDENTITY_INSERT "dbo"."NextTbl" ON       49
exec sp_executesql N'INSERT INTO "d...       49
What's going on here is that my ADO commands are running in a different session (#49) than my linked tables (#50). Access sees that I'm setting the value for an identity column so it helpfully sets IDENTITY_INSERT ON for that table. However, it never sets IDENTITY_INSERT OFF. I turn it off manually, but that's happening in a different session.
This explains why setting the ODBC session timeout low works. It's just an ugly workaround for the fact that Access never turns off IDENTITY_INSERT on a table once it turns it on. Since IDENTITY_INSERT is session-specific, creating a new session is like hitting the reset button on IDENTITY_INSERT. Access can then turn it on for the next table and the setting will take effect because it's a brand new session.

Two thoughts, though not sure either will be useful because this is unfamiliar territory for me.
"Does MS Access automatically refresh its ODBC sessions every 10 minutes? Is there a way to force that to happen faster? Am I missing something obvious?"
In the Access 2003 Options dialog, on the Advanced tab, there is a setting for "ODBC refresh interval" and also settings for retries. Does adjusting those help ... or have any effect at all?
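For what it's worth, those Advanced-tab settings can also be flipped in code while experimenting. A small sketch, assuming the option names match your Access version (verify them against the Options dialog before relying on this):

Application.SetOption "ODBC Refresh Interval (sec)", 10   'Advanced tab: ODBC refresh interval
Application.SetOption "Number of Update Retries", 2       'Advanced tab: update retries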
I wonder if you could avoid this problem by creating the SQL Server columns as plain numbers rather than autonumber, INSERTing your data, then running ALTER TABLE ... ALTER COLUMN to change them after the data has been inserted.
Access won't let me convert a numeric column to an autonumber if the table contains data, but ISTR SQL Server is more flexible on that score.

I found a convenient, though not exactly beautiful, solution to export many Access tables to SQL Server and avoid the IDENTITY_INSERT problem:
I open a local table-recordset which lists all tables to be exported and I loop through the records (each table). In each loop I...
create an Access application object
use the TransferDatabase method on the application object
terminate / quit the application object and loop again
Here is the sample code:
Public Sub exporttables()
    Dim rst As DAO.Recordset
    Dim access_object As Object

    'First create a local Access table which lists all tables to be exported
    Set rst = CurrentDb.OpenRecordset("SELECT txt_tbl FROM ####your_table_of_tables####")
    With rst
        While Not .EOF
            'Generate a new Access instance to avoid the IDENTITY_INSERT problem
            Set access_object = CreateObject("Access.Application")
            'With the new instance, open the database which holds the tables to be exported
            access_object.OpenCurrentDatabase "####C:\yoursourceaccessdb####.accdb"
            access_object.DoCmd.TransferDatabase acExport, "ODBC Database", _
                "ODBC;DSN=####your connection string to target SQL DB####;", _
                acTable, .Fields("txt_tbl"), .Fields("txt_tbl"), False, False
            Debug.Print .Fields("txt_tbl") & " exported"
            access_object.CloseCurrentDatabase
            access_object.Quit
            Set access_object = Nothing
            .MoveNext
        Wend
        .Close
    End With
    Set rst = Nothing
End Sub

Related

Using SESSION_CONTEXT in SQL from an MS Access VBA frontend

I am migrating an MS Access back-end database into SQL Server.
The existing MS Access front end needs to be retained.
I am connecting the Access front-end to the SQL database using a service account so that individual users have no direct access to SQL.
I want to record UserIds on record Add and Update actions, but I do not want to have to specify the fields on every call.
I have a hidden table open in Access to maintain a persistent connection to the SQL database.
I created a Session Context object with the UserId in Access using a Sub I call on Access startup, and I have even called the Sub directly before running the record insert.
Sub SqlSetUser()
    Dim qdef As DAO.QueryDef
    Set qdef = CurrentDb.CreateQueryDef("")
    qdef.Connect = CurrentDb.TableDefs("dbo_User").Connect
    qdef.SQL = "EXEC sys.sp_set_session_context @key = N'UserId', @value = '" & GetUser() & "';"
    qdef.ReturnsRecords = False  'avoid 3065 error
    qdef.Execute
End Sub
I created a trigger on a SQL table to extract the UserId and add it to the record being inserted, with a similar trigger to handle updates:
CREATE TRIGGER [dbo].[ReferenceItemAdd]
ON [dbo].[ReferenceItem]
FOR INSERT
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @UserId AS int = TRY_CAST((SELECT SESSION_CONTEXT(N'UserId')) AS int);
    -- Join through the alias so only the rows just inserted are stamped
    UPDATE a
    SET AddDate = GETDATE(), AddUserId = @UserId
    FROM ReferenceItem a
    INNER JOIN INSERTED i ON i.ReferenceItemId = a.ReferenceItemId;
    SET NOCOUNT OFF;
END
It only works if I stop the code at a breakpoint and then continue. If I allow the code to run freely, the record is inserted and AddDate is set correctly by the trigger, but the UserId comes back NULL.
How can the UserId be made accessible for a trigger in MS-SQL from an MS-Access front-end?
I don’t grasp your notes about increased security here?
If you have a client web browser, and a web server, then you certainly have a web server that can update the SQL database with a service account because you have a WHOLE web server between the client and the SQL database.
You have NONE of the above.
Eg:
qdef.Connect = CurrentDb.TableDefs("dbo_User").Connect
qdef.SQL = "EXEC
Right, so you have linked tables, and above is a connection string that is directly hitting the database, and is even able to execute stored procedures. I assume this connection is the SAME one used by the linked tables? (Anyway, we can leave the supposed security issue for another day – what you have here are plain-Jane linked tables; they are directly updating the database, and even able to execute stored procedure code, as per your above example.)
Next up:
We assume one SQL logon is being used here?
Your code should work; what looks wrong is this:
declare @UserId as int = try_cast((Select SESSION_CONTEXT(N'UserId')) as int)
Why are you casting the above to an int? Isn't the value you set from GetUser() a string? (Your example code has quotes around the text, so it is assumed to be a character type.)
And it is a direct variable assignment – you don’t need the select.
You should be using:
DECLARE @UserID as varchar(25) = CAST(SESSION_CONTEXT(N'UserId') AS varchar(25))
I don’t know if the session will remain constant for all the linked tables. I would 100% ensure that all linked tables have the exact same connection string. You should be able to execute your code one time on startup to set that session value. However, I am not 100% sure that a single session will always be used here (you can come back and confirm this, as I am rather interested to know if this is the case).
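One quick way to confirm it is to ask SQL Server for its session ID over the same Connect string and compare the number across calls. A hedged sketch (SqlSpid is a name invented here; the dbo_User link comes from the code above):

Public Function SqlSpid() As Long
    'Returns the SPID of whatever ODBC connection this pass-through lands on.
    Dim qdef As DAO.QueryDef
    Set qdef = CurrentDb.CreateQueryDef("")
    qdef.Connect = CurrentDb.TableDefs("dbo_User").Connect
    qdef.SQL = "SELECT @@SPID AS spid;"
    qdef.ReturnsRecords = True
    SqlSpid = qdef.OpenRecordset()!spid
End Function

Calling Debug.Print SqlSpid(), SqlSpid() right after SqlSetUser should print the same number twice; if a later call prints a different SPID, the session context was set on a connection your inserts are no longer using.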

Excel - SQL Query - ## Temp Table

I am trying to create a global temp table using the results from one query, which can then be selected as a table and manipulated further several times without having to reprocess the data over and over.
This works perfectly in SQL management studio, but when I try to add the table through an Excel query, the table can be referenced at that time, but it is not created in Temporary Tables in the tempdb database.
I have broken it down into a simple example.
If I run this in SQL management studio, the result of 1 is returned as expected, and the table ##testtable1 is created in Temporary Tables
set nocount on;
select 1 as 'Val1', 2 as 'Val2' into ##testtable1
select Val1 from ##testtable1
I can then run another select on this table, even in a different session, as you'd expect. E.g.
Select Val2 from ##testtable1
If I don't drop ##testtable1, running the below in a query in Excel returns the result of 2 as you'd expect.
Select Val2 from ##testtable1
However, if I run the same Select... into ##testtable1 query directly in Excel, that correctly returns the result of 1, but the temp table is not created.
If I then try to run
Select Val2 from ##testtable1
As a separate query, it errors saying "Invalid object name '##testtable1'".
The table is not listed within Temporary Tables in SQL management studio.
It is as if it is performing a drop on the table after the query has finished executing, even though I am not calling a drop.
How can I resolve this?
Read up on global temp tables (GTT). They persist as long as there is a session referencing them. In SSMS, if you close the session that created the GTT before using it in another session, the GTT is discarded. This is what is happening in Excel: Excel creates a connection, executes, and disconnects. Since there are no sessions using the GTT when Excel disconnects, the GTT is discarded.
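You can watch the lifetime rule from VBA by holding one ADO connection open for as long as you need the table. A sketch, assuming a reference to the Microsoft ActiveX Data Objects library and a placeholder connection string:

Sub GlobalTempTableDemo()
    'Sketch: the ##table lives exactly as long as some session references it.
    Dim cn As ADODB.Connection, rs As ADODB.Recordset
    Set cn = New ADODB.Connection
    cn.Open "Provider=SQLOLEDB;Data Source=MyServer;" & _
            "Initial Catalog=MyDb;Integrated Security=SSPI"   'placeholder
    cn.Execute "select 1 as 'Val1', 2 as 'Val2' into ##testtable1"
    Set rs = cn.Execute("Select Val2 from ##testtable1")      'same session: works
    Debug.Print rs.Fields("Val2").Value                       'prints 2
    rs.Close
    cn.Close    'last referencing session gone: ##testtable1 is dropped here
End Sub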
I would highly recommend you create a normal table rather than use a GTT. Because of their temporary nature and dependence on an active session, you may get inconsistent results when using a GTT. If you create a normal table instead, you can be certain it will still exist when you try to use it later.
The code to create/clean the table is pretty simple.
IF OBJECT_ID('db.schema.tablename') IS NOT NULL
    TRUNCATE TABLE [tablename]
ELSE
    CREATE TABLE [tablename] (...)
GO
You can change the truncate to a delete to clean up a specific set of data and place it at the start of each one of your queries.
Is it possible you could use a view? Assuming that you are connecting to 5 DBs on the same server, can you union the data together in a view:
CREATE VIEW [dbo].[testView]
AS
SELECT *
FROM database1.dbo.myTable
UNION
SELECT *
FROM database2.dbo.myTable
Then in Excel:
Data > New Query > From Database > From SQL Server Database
Enter the DB server
Select the view from the appropriate DB - done :)
OR call the view however you are doing it (e.g. VBA etc.)
Equally, you could use a stored procedure and call that from VBA ... basically anything that moves more of the complexity to the server side to make your life easier :D
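To make that last suggestion concrete, here is a minimal ADO sketch from Excel VBA (the server, database, and usp_GetUnionedData names are placeholders invented for the example; it assumes a reference to the ADO library):

Sub CallProcFromExcel()
    Dim cn As ADODB.Connection, cmd As ADODB.Command, rs As ADODB.Recordset
    Set cn = New ADODB.Connection
    cn.Open "Provider=SQLOLEDB;Data Source=MyServer;" & _
            "Initial Catalog=MyDb;Integrated Security=SSPI"   'placeholder
    Set cmd = New ADODB.Command
    Set cmd.ActiveConnection = cn
    cmd.CommandType = adCmdStoredProc
    cmd.CommandText = "dbo.usp_GetUnionedData"    'hypothetical proc wrapping the UNION
    Set rs = cmd.Execute
    Sheets("Data").Range("A2").CopyFromRecordset rs
    rs.Close
    cn.Close
End Sub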
You can absolutely do this. Notice how I'm building a temp table from a SQL string called TmpSql ... this could be any query you want. Then I set it to recordset 1. Then I create another recordset, 2, that goes and gets the temp table data.
Imagine if you were looping on the first cn.Execute where TmpSql is changing... This allows you to build a temporary table coming from many sources or changing variables. This is a powerful solution.
cn.open "Provider= ..."
sql = "Select t.* Into #TTable From (" & TmpSql & ") t "
Set rs1 = cn.Execute(sql)
GetTmp = "Select * From #TTable"
rs2.Open GetTmp, cn, adOpenDynamic, adLockBatchOptimistic
If Not rs2.EOF Then Call Sheets("Data").Range("A2").CopyFromRecordset(rs2)
rs2.Close
rs1.Close
cn.Close

Turn off IDENTITY_INSERT for Dataset insert

I am using a dataset to insert data being converted from an older database. The requirement is to maintain the current Order_ID numbers.
I've tried using:
SET IDENTITY_INSERT orders ON;
This works when I'm in SQL Server Management Studio; I am able to successfully run:
INSERT INTO orders (order_Id, ...) VALUES ( 1, ...);
However, it does not allow me to do it via the dataset insert that I'm using in my conversion script, which looks basically like this:
dsOrders.Insert(oldorderId, ...);
I've run the SQL (SET IDENTITY_INSERT orders ON) during the process too. I know that I can only do this against one table at a time and I am.
I keep getting this exception:
Exception when attempting to insert a value into the orders table
System.Data.SqlClient.SqlException: Cannot insert explicit value for identity column in table 'orders' when IDENTITY_INSERT is set to OFF.
Any ideas?
Update
AlexS & AlexKuznetsov have mentioned that Set Identity_Insert is a connection level setting, however, when I look at the SQL in SqlProfiler, I notice several commands.
First - SET IDENTITY_INSERT DEAL ON
Second - exec sp_reset_connection
Third to n - my various SQL commands, including selects & inserts
There is always an exec sp_reset_connection between the commands, though; I believe that this is responsible for the loss of the IDENTITY_INSERT setting.
Is there a way to stop my dataset from doing the connection reset?
You have the options mixed up:
SET IDENTITY_INSERT orders ON
will turn ON the ability to insert specific values (that you specify) into a table with an IDENTITY column.
SET IDENTITY_INSERT orders OFF
Turns that behavior OFF again and the normal behavior (you can't specify values for IDENTITY columns since they are auto-generated) is reinstated.
Marc
You want to do SET IDENTITY_INSERT ON to allow you to insert into identity columns.
It seems a bit backwards, but that's the way it works.
It seems that you're doing everything right: SET IDENTITY_INSERT orders ON is the right way on SQL Server's side. But the problem is that you're using datasets. From the code you've provided, I can say that you're using a typed dataset - the one that was generated in Visual Studio based on the database.
If this is the case (most likely) then this dataset contains a constraint that does not allow you to set values for the orderId field, i.e. it's the code that does not allow specifying an explicit value, not SQL Server. You should go to the dataset designer and edit the properties of the orderId field: set AutoIncrement and ReadOnly to false. The same changes can also be performed at run time. This will allow you to add a row with an explicit value for orderId to the dataset and later save it to the SQL Server table (you will still need SET IDENTITY_INSERT).
Also note that IDENTITY_INSERT is a connection-level setting, so you need to be sure that you're executing the corresponding SET on exactly the same connection that you will be using to save your changes to the database.
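The same-connection rule is easy to demonstrate outside of datasets. A minimal ADO sketch in VBA (the connection string and the order_Desc column are hypothetical), with all three statements sharing one connection:

Dim cn As New ADODB.Connection
cn.Open "Provider=SQLOLEDB;Data Source=MyServer;" & _
        "Initial Catalog=MyDb;Integrated Security=SSPI"   'placeholder
cn.Execute "SET IDENTITY_INSERT orders ON"
'The explicit value is accepted because the INSERT runs on the same session:
cn.Execute "INSERT INTO orders (order_Id, order_Desc) VALUES (1, 'converted')"
cn.Execute "SET IDENTITY_INSERT orders OFF"
cn.Close

If any of those statements lands on a different pooled connection (as the sp_reset_connection calls in the Profiler trace above suggest), the INSERT fails with exactly the error from the question.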
I would use Profiler to determine whether your SET IDENTITY_INSERT orders ON;
is issued from the same connection as your subsequent inserts, as well as the exact SQL being executed during inserts.
AlexS was correct: the Identity_Insert setting worked, but it is a connection-level setting, so I needed to set Identity_Insert within a transaction.
I used Ryan Whitaker's TableAdapterHelper code
and I created an update command on my TableAdapter that ran the Identity_Insert. I then had to create a new Insert command with the identity column specified. I then ran this code:
SqlTransaction transaction = null;
try
{
    using (myTableAdapter myAdapter = new myTableAdapter())
    {
        transaction = TableAdapterHelper.BeginTransaction(myAdapter);
        myAdapter.SetIdentityInsert();
        myAdapter.Insert(myPK, myColumn1, myColumn2, ...);
    }
    transaction.Commit();
}
catch (Exception ex)
{
    // Guard: the exception may have been thrown before the transaction was created
    if (transaction != null) transaction.Rollback();
}
finally
{
    if (transaction != null) transaction.Dispose();
}
In case you still have problems with IDENTITY_INSERT, you can try using a complete insert statement like:
insert into User(Id, Name) values (1,'jeff')

Inserting NULL in an nvarchar fails in MSAccess

I'm experiencing something a bit strange.
I have a table on SQL Server 2008, say StockEvent, that contains a Description field defined as nvarchar(MAX).
The field is set to be Nullable, has no default value and no index on it.
That table is linked into an Access 2007 application, but if I explicitly insert a NULL into the field, I'm systematically getting:
Run-time Error '3155' ODBC--insert on a linked table 'StockEvent' failed.
So the following bits of code in Access both reproduce the error:
Public Sub testinsertDAO()
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Set db = CurrentDb
    Set rs = db.OpenRecordset("StockEvent", _
                              dbOpenDynaset, _
                              dbSeeChanges + dbFailOnError)
    rs.AddNew
    rs!Description = Null
    rs.Update
    rs.Close
    Set rs = Nothing
    Set db = Nothing
End Sub
Public Sub testinsertSQL()
    Dim db As DAO.Database
    Set db = CurrentDb
    db.Execute "INSERT INTO StockEvent (Description) VALUES (NULL);", _
               dbSeeChanges
    Set db = Nothing
End Sub
However, if I do the same thing from the SQL Server Management Studio, I get no error and the record is correctly inserted:
INSERT INTO StockEvent (Description) VALUES (NULL);
It doesn't appear to be machine-specific: I tried on 3 different SQL Server installations and 2 different PCs and the results are consistent.
I initially though that the problem may be in my Access application somewhere, but I isolated the code above into its own Access database, with that unique table linked to it and the results are consistent.
So, is there some known issue with Access, or ODBC and inserting NULL values to nvarchar fields?
Update.
Thanks for the answers so far.
Still no luck understanding why though ;-(
I tried with an even smaller set of assumptions: I created a new database in SQL Server with a single table StockEvent defined as such:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[StockEvent](
[ID] [int] IDENTITY(1,1) NOT NULL,
[Description] [nvarchar](max) NULL
) ON [PRIMARY]
GO
Then linked that table though ODBC into the test Access 2007 application.
That application contains no forms, nothing except the exact 2 subroutines above.
If I click on the linked table, I can edit data and add new records in datasheet mode.
Works fine.
If I try any of the 2 subs to insert a record, they fail with the 3155 error message.
(The table is closed and not referenced anywhere else and the edit datasheet is closed.)
If I try the SQL insert query in SQL Server Management Studio, it works fine.
Now for the interesting bit:
It seems that anything as big as or bigger than nvarchar(256), including nvarchar(MAX), will fail.
Anything at or below nvarchar(255) works.
It's like Access considers nvarchar a simple string rather than a memo if its size is larger than 255.
Even stranger is that varchar(MAX) (without the n) actually works!
What I find annoying is that Microsoft's own converter from Access to SQL Server 2008 converts Memo fields into nvarchar(MAX), so I would expect this to work.
The problem now is that I need nvarchar as I'm dealing with Unicode...
OK, I may have found a related answer: Ms Access linking table with nvarchar(max).
I tried using the standard SQL Server driver instead of the SQL Server Native Client driver and nvarchar(MAX) works as expected with that older driver.
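For anyone who wants to script that driver swap, a small DAO sketch (server and database names are placeholders; {SQL Server} is the older driver that worked here):

Dim tdf As DAO.TableDef
Set tdf = CurrentDb.TableDefs("StockEvent")
tdf.Connect = "ODBC;DRIVER={SQL Server};SERVER=MyServer;" & _
              "DATABASE=MyDb;Trusted_Connection=Yes"   'placeholder server/db
tdf.RefreshLink   're-link the table using the older driver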
It really annoys me that this seems to be a long-standing, unfixed bug.
There is no valid reason why nvarchar should be erroneously interpreted as a string by one driver and as a memo when using another.
In both cases, they appear as memo when looking at the datatype under the table design view in Access.
If someone has any more information, please leave it on this page. I'm sure others will be glad to find it.
That should be legal syntax. Is it possible that the field you are trying to give a null value is linked to other fields that don't allow null values?
Potential concurrency problem... Is the record open by another instance of Access on the same or a different machine, or does a form bound to the table have the record open in the same instance of Access on the same machine?
Renaud, try putting something in one of the other fields when you do the insert.
Also, try inserting an empty string ("") instead of a null.
Renaud,
Did you try running a SQL Profiler trace? If you look at the Errors and Warnings category it should kick out an error if your insert failed as a result of a SQL Server constraint.
If you don't see any errors, you can safely assume that the problem is in your application.
Also, are you sure you're actually connected to SQL Server? Is CurrentDB not the same variable you're using in your Access test loop?
I got another issue (here is my post: link text).
In some very rare cases, an error arises when saving a row with a changed memo field - the same construct explained in my former post, but driving SQL 2000 servers and the appropriate ODBC driver (SQL SERVER).
The only weird fix is to expand the table structure on SQL Server with a column of datatype [timestamp] and refresh the ODBC links. That works and releases the show-stopper in this column on this one row...
Maybe this info can help someone - for me it's history, as I am moving on to ODBC with SQL 2008 and changing the datatypes [text] to [varchar(max)].

SqlDataAdapter.Fill method slow

Why would a stored procedure that returns a table with 9 columns and 89 rows using this code take 60 seconds to execute (.NET 1.1), when it takes < 1 second to run in SQL Server Management Studio? It's being run on the local machine, so there is little/no network latency, and it's a fast dev machine.
Dim command As SqlCommand = New SqlCommand(procName, CreateConnection())
command.CommandType = CommandType.StoredProcedure
command.CommandTimeout = _commandTimeOut
Try
    Dim adapter As New SqlDataAdapter(command)
    Dim i As Integer
    For i = 0 To parameters.Length - 1
        command.Parameters.Add(parameters(i))
    Next
    adapter.Fill(tableToFill)
    adapter.Dispose()
Finally
    command.Dispose()
End Try
my parameter array is typed (for this SQL it's only a single parameter)
parameters(0) = New SqlParameter("@UserID", SqlDbType.BigInt, 0, ParameterDirection.Input, True, 19, 0, "", DataRowVersion.Current, userID)
The stored procedure is only a select statement, like so:
ALTER PROC [dbo].[web_GetMyStuffFool]
(@UserID BIGINT)
AS
SELECT Col1, Col2, Col3, Col3, Col3, Col3, Col3, Col3, Col3
FROM [Table]
First, make sure you are profiling the performance properly. For example, run the query twice from ADO.NET and see if the second time is much faster than the first time. This removes the overhead of waiting for the app to compile and the debugging infrastructure to ramp up.
Next, check the default settings in ADO.NET and SSMS. For example, if you run SET ARITHABORT OFF in SSMS, you might find that it now runs as slow as when using ADO.NET.
What I found once was that SET ARITHABORT OFF in SSMS caused the stored proc to be recompiled and/or different statistics to be used, and suddenly both SSMS and ADO.NET were reporting roughly the same execution time. Note that ARITHABORT is not itself the cause of the slowdown; it's that it causes a recompilation, and you end up with two different plans due to parameter sniffing. It is likely that parameter sniffing is the actual problem needing to be solved.
To check this, look at the execution plans for each run, specifically the sys.dm_exec_cached_plans table. They will probably be different.
Running 'sp_recompile' on a specific stored procedure will drop the associated execution plan from the cache, which then gives SQL Server a chance to create a possibly more appropriate plan at the next execution of the procedure.
Finally, you can try the "nuke it from orbit" approach of cleaning out the entire procedure cache and memory buffers using SSMS:
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
Doing so before you test your query prevents usage of cached execution plans and previous results cache.
Here is what I ended up doing:
I executed the following SQL statement to rebuild the indexes on all tables in the database:
EXEC <databasename>..sp_MSforeachtable #command1='DBCC DBREINDEX (''*'')', #replacechar='*'
-- Replace <databasename> with the name of your database
If I wanted to see the same behavior in SSMS, I ran the proc like this:
SET ARITHABORT OFF
EXEC [dbo].[web_GetMyStuffFool] @UserID=1
SET ARITHABORT ON
Another way to bypass this is to add this to your code:
MyConnection.Execute "SET ARITHABORT ON"
I ran into the same issue, but when I rebuilt the indexes on the SQL table, it worked fine, so you might want to consider rebuilding the indexes on the SQL Server side.
Why not make it a DataReader instead of a DataAdapter? It looks like you have a single result set, and if you aren't going to be pushing changes back to the DB and don't need constraints applied in .NET code, you shouldn't use the Adapter.
EDIT:
If you need it to be a DataTable, you can still pull the data from the DB via a DataReader and then, in .NET code, use the DataReader to populate a DataTable. That should still be faster than relying on the DataSet and DataAdapter.
I don't know "Why" it's so slow per se - but as Marcus is pointing out - comparing Mgmt Studio to filling a dataset is apples to oranges. Datasets contain a LOT of overhead. I hate them and NEVER use them if I can help it.
You may be having issues with mismatches of old versions of the SQL stack or some such (esp. given you are obviously stuck in .NET 1.1 as well). The Framework is likely trying to do the database equivalent of "Reflection" to infer schema, etc.
One thing to consider trying, given your unfortunate constraint, is to access the database with a DataReader and build your own DataSet in code. You should be able to find samples easily via Google.
