SQL Server 2005 Linked Server to DB2 Performance issue - sql-server

I have a SQL Server 2005 machine with a JDE DB2 set up as a linked server.
For some reason, the performance of any query from this box to the DB2 box is horrible.
For example, the following takes 7 minutes to run from Management Studio:
SELECT *
FROM F42119
WHERE SDUPMJ >= 107256
The same query takes only seconds to run in iSeries Navigator.
Any thoughts? I'm assuming some config issue.

For certain queries, SQL Server will decide to pull the entire remote table down to itself and sort and filter the data locally instead of sending the query to the remote server. This is usually caused by collation settings.
Make sure the linked server has the following options set:
Data Access,
Collation Compatible,
Use Remote Collation
Then create a new linked server using the provider and select the following provider options:
Dynamic Parameters,
Nested Queries,
Allow In Process
After setting the options, change the query slightly to force a new query plan.
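A sketch of how these can be set from T-SQL; the linked server name DB2LINK and provider name DB2OLEDB are assumptions, and sp_MSset_oledb_prop is undocumented, so verify it on your version before relying on it:

EXEC sp_serveroption 'DB2LINK', 'data access', 'true';
EXEC sp_serveroption 'DB2LINK', 'collation compatible', 'true';
EXEC sp_serveroption 'DB2LINK', 'use remote collation', 'true';

-- provider-level options (undocumented procedure; assumed provider name)
EXEC master.dbo.sp_MSset_oledb_prop N'DB2OLEDB', N'DynamicParameters', 1;
EXEC master.dbo.sp_MSset_oledb_prop N'DB2OLEDB', N'NestedQueries', 1;
EXEC master.dbo.sp_MSset_oledb_prop N'DB2OLEDB', N'AllowInProcess', 1;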

It might be a memory issue on your SQL Server machine. I recently learned that linked server queries use memory allocated by the OS, whereas native SQL Server queries use memory pre-allocated by SQL Server. If your SQL Server machine is configured to use 90% or more of the server's memory, I would scale that back a bit. Maybe 60% is the right place to be.
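A sketch of how to cap it; the 9830 MB value assumes a 16 GB box and the 60% figure above, so adjust to your hardware:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 9830;  -- roughly 60% of 16 GB (assumed)
RECONFIGURE;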
Another thing to check is the SQL Server processor priority. Make sure "Boost SQL Server priority" is not enabled.
I assume you are going through ODBC for access. Remember that you are not writing native DB2 queries here, but ODBC SQL queries. If you only need read-only data, you may want to try configuring your ODBC data source in read-only mode (if that is an option).

In a project with DB2 integration, I replaced every query done via direct select or view with stored procedures calling the OPENQUERY function.
My interpretation is that SQL Server fetches the whole table before applying the WHERE conditions, whereas OPENQUERY passes the SQL statement directly to the DB driver.
Anyway, performance was acceptable after the modifications.
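A sketch using the query from the question; DB2LINK is an assumed linked server name, and on a JDE box the table may need library qualification (e.g. PRODDTA.F42119):

SELECT *
FROM OPENQUERY(DB2LINK, 'SELECT * FROM F42119 WHERE SDUPMJ >= 107256');

Everything inside the quoted string is executed by DB2 itself, so only the matching rows cross the wire.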

My first thought would go to the drivers. Years ago I had to link DB2 to SQL Server 2000 and it was extremely difficult to find the correct combination of drivers and setup parameters that would work...
So maybe I'm biased because of that, but I would try upgrading or downgrading the driver or changing the setup so that the DB2 driver can run INPROC (if it's not already doing so).

I've had several issues with DB2 as a linked server. I do not know if it will address your problems, but here is what fixed mine:
1) Enable lazy close support and pre-fetch during EXECUTE in the ODBC settings
2) Add "FOR FETCH ONLY" to all selects
3) Query using the SELECT * FROM OPENQUERY(LinkedServerName, 'SQL Command') method
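A sketch of items 2) and 3) combined; DB2LINK is an assumed linked server name, and FOR FETCH ONLY is DB2 syntax, so it belongs inside the pass-through string:

SELECT *
FROM OPENQUERY(DB2LINK, 'SELECT * FROM F42119 WHERE SDUPMJ >= 107256 FOR FETCH ONLY');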

Related

Delphi with SQL Server: OLEDB vs. Native Client drivers

I have been told that SQL Native Client is supposed to be faster than the OLEDB drivers. So I put together a utility to do a load-test between the two - and am getting mixed results. Sometimes one is faster, sometimes the other is, no matter what the query may be (simple select, where clause, joining, order by, etc.). Of course the server does the majority of the workload, but I'm interested in the time it takes between the data coming into the PC to the time the data is accessible within the app.
The load tests consist of very small queries which return very large datasets. For example, I do select * from SysTables, and this table has 50,000+ records. After receiving the data, I do another pass, looping through the results (using while not Q.eof ... Q.next ... etc.). I've also tried adding some things to the query, such as order by Val where Val is a varchar(100) field.
Here's a sample of my load tester; the numbers at the very bottom are averages...
So really, what are the differences between the two? I do know that OLE is very flexible and supports many different database engines, whereas Native Client is specific to SQL Server alone. But what else is going on behind the scenes? And how does that affect how Delphi uses these drivers?
This is specifically using ADO via the TADOConnection component and TADOQuery as well.
I'm not necessarily looking or asking for ways to improve performance - I just need to know what are the differences between the drivers.
As stated by Microsoft:
SQL Server Native Client is a stand-alone data access application programming interface (API), used for both OLE DB and ODBC, that was introduced in SQL Server 2005. SQL Server Native Client combines the SQL OLE DB provider and the SQL ODBC driver into one native dynamic-link library (DLL).
From my understanding, ADO is just an object-oriented application-level DB layer over OleDB. It will use OleDB in all cases; what changes is the provider used. If you specify the SQLNCLI10 provider, you'll use the latest version of the protocol. If you specify the SQLOLEDB provider, you'll use the generic SQL Server 2000+ protocol.
As such:
ADO -> OleDB -> SQLNCLI10 provider -> MS SQL Server (MSSQL 2000, 2005 or 2008 protocol)
ADO -> OleDB -> SQLOLEDB provider -> MS SQL Server (MSSQL 2000 protocol)
Regarding performance, I don't think you'll see a big difference; as always, it will depend on the data processed.
But IMHO it is recommended to use the provider best fitted to your database. Some kinds of data (like varchar(max) or Int64) are said to be handled better, and the SQLNCLI10 provider has been updated more recently, so I'd guess it is better tuned.
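In Delphi terms, the provider is picked in the TADOConnection connection string; a sketch, with the server and database names as placeholders:

Provider=SQLNCLI10;Data Source=MYSERVER;Initial Catalog=MyDb;Integrated Security=SSPI;
Provider=SQLOLEDB;Data Source=MYSERVER;Initial Catalog=MyDb;Integrated Security=SSPI;

Swapping the Provider keyword is the only change needed to A/B-test the two stacks.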
In your question you are mixing OLE and SQL Native Client. You probably mean several things at the same time:
OLE -> OLEDB, which is an obsolescent, generic data access technology;
OLE -> "SQL Server OLEDB Provider", which is the SQL Server 2000 OLEDB provider;
SQL Server Native Client, which is the SQL Server 2005 and higher client software. It includes both an OLEDB provider and an ODBC driver.
Regarding OLEDB providers and supported SQL Server versions:
"SQL Server OLEDB Provider" (SQLOLEDB) supports the SQL Server 2000 protocol;
"SQL Server Native Client 9" (SQLNCLI) supports the SQL Server 2000 and 2005 protocols;
"SQL Server Native Client 10" supports the SQL Server 2000, 2005 and 2008 protocols.
You did not say which SQL Server version you are using. In general, it is best to use the SQL Server OLEDB provider corresponding to your SQL Server version; otherwise you can run into incompatibilities between server and client versions.
Comparing abstractly, I can only speculate about the differences between SQLNCLI and SQLOLEDB:
One may use the server protocol more correctly;
One may use more advanced protocol features;
One may perform more processing, which helps to handle more situations;
One may use a more generic / optimized data representation.
Without a correct benchmark application and environment it is hard to accept your comparison results, because they may depend on multiple factors.
I think you should concentrate on optimizing the:
sql server engine and database settings
your queries
your data schema
The difference in speed between connection libraries is so small as to be negligible; it might cause a very tiny slowdown, and only in very specific scenarios.
Short answer:
It doesn't matter.
Long answer:
The difference in performance between the two client libs is relatively negligible compared to the server execution plus network data transfer, which is what you are mostly measuring; hence the inconclusive test data. There is a good chance that you use the same low-level layer in both cases anyway, with only a minor difference in indirection on top of it.
As a matter of fact, if your tests show no visible difference, it just proves that the slowness is not related to the choice of the client lib, and optimization should be sought elsewhere.
For your present test, you should use SQL Profiler to measure query execution time on the server while you run your test; you would see that it also varies quite a bit. Subtracting those numbers from the test end results gives you the timing for the bundle of client time plus network transfer.
Network performance is quite variable and has more impact on your test than you would think. Try having someone stream video at the same time you run your test and you will see... (I had that at my former company; tuning the SQL was not the answer in that case.)
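If you don't have Profiler handy, one lightweight alternative is to let the server report its own timings; a sketch reusing the query from the question:

SET STATISTICS TIME ON;
SELECT * FROM SysTables ORDER BY Val;
SET STATISTICS TIME OFF;

The "SQL Server Execution Times" messages isolate server-side CPU and elapsed time; whatever remains of your measured total is client processing plus network transfer.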
While it certainly could be at the database end, I think there is a lot to look at in the overall system, at least your test system. In general, it is hard to do timing if the work you are asking the database to do is very small compared to the overall work. So in general, is the database task a big job or simply the retrieval of one data item? Are you using stored procedures or simple queries? Is your test preparing any stored procedures before running? Do you get consistent times each time you run any test in succession?
The query execution time tells you how well the database engine (and any schema/query optimization) works. Here, what you use doesn't matter: ODBC/OLEDB/Native whatever just passes the query along to the database, and it is executed there.
The time it takes to read from the first record to the last tells you how well the data access layer and your network perform. Here you time how well data are returned and "cached" on your client. Depending on the data, the network settings may be important. For example, if your tables use "large" records, a larger MTU may require fewer packets (and fewer round trips) to send them to the client.
Anyway, before looking for a solution, you have to identify the problem. Profile your application, both client side and server side (SQL Server has good tools for that), and find what exactly makes it slower. Then, and only then, can you look for the correct solution. Maybe the data access layer is not the problem; 50,000 records is a small dataset today, not a large one.
You cannot use the native clients with ADO, as is.
ADO does not understand the XML SQL Server data type. The field type:
field: ADOField;
field := recordset.Fields.Items['SomeXmlColumn'];
Attempting to access field.Value throws an EOleException:
Source: Microsoft Cursor Engine
ErrorCode: 0x80040E21 (E_ITF_0E21)
Message: Multiple-step operation generated errors. Check each status value
The native client drivers (e.g. SQLNCLI, SQLNCLI10, SQLNCLI11) present an Xml data type to ADO as
field.Type_ = 141
While the legacy SQLOLEDB driver presents an Xml data type to ADO as adLongVarWChar, a Unicode string:
field.Type_ = 203 //adLongVarWChar
And the VARIANT contained in field.Value is a WideString (technically known as a BSTR):
TVarData(field.Value).vtype = 8 //VT_BSTR
The solution, as noted by Microsoft:
Using ADO with SQL Server Native Client
Existing ADO applications can access and update XML, UDT, and large value text and binary field values using the SQLOLEDB provider. The new larger varchar(max), nvarchar(max), and varbinary(max) data types are returned as the ADO types adLongVarChar, adLongVarWChar and adLongVarBinary respectively. XML columns are returned as adLongVarChar, and UDT columns are returned as adVarBinary. However, if you use the SQL Server Native Client OLE DB provider (SQLNCLI11) instead of SQLOLEDB, you need to make sure to set the DataTypeCompatibility keyword to "80" so that the new data types will map correctly to the ADO data types.
They also note:
If you do not need to use any of the new features introduced in SQL Server 2005, there is no need to use the SQL Server Native Client OLE DB provider; you can continue using your current data access provider, which is typically SQLOLEDB.
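For example, a TADOConnection string for the Native Client provider would carry the keyword like this; a sketch, with the server and database names as placeholders:

Provider=SQLNCLI10;Data Source=MYSERVER;Initial Catalog=MyDb;Integrated Security=SSPI;DataTypeCompatibility=80;

With DataTypeCompatibility=80, the newer 2005+ types are surfaced to ADO using the older, ADO-compatible type codes.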
Also, besides the lack of support for the XML data type, Delphi ADO does not recognize columns defined in SQL Server as TIME (DBTYPE_DBTIME2=145) or DATETIMEOFFSET (DBTYPE_DBTIMESTAMPOFFSET=146); trying to use those fields in your application will cause multiple errors like 'Invalid Variant Value', or some controls (like TDBGrid) will simply drop the field entirely.
The lack of support for DBTYPE_DBTIME2=145 seems like a bug/QC issue, since there is already ftTime support (it's also not clear to me why SQL Server doesn't return DBTYPE_DBTIME, which Delphi does support); the XML and offset types have no clear TFieldType mapping.
Data Type Support for OLE DB Date/Time Improvements

Evaluate performance of SQL Server installation

We ported a database server from SQL Server 2005 to SQL Server 2008 (SP1). The new server has more processors (4 quad-cores versus 1 quad-core) and more memory (64 GB versus 4 GB).
Processors are 2.1 GHz (new) versus 2.0 GHz (old).
The new OS is Windows Server 2008 and the old one is Windows Server 2003.
The databases were transferred via backup/restore and run in native SQL Server 2008 mode (not in SQL Server 2005 compatibility mode).
Some queries on the new server run slower than before. These queries use indexed views. The query plan looks the same on both systems.
Most of the queries perform equal.
My task is now to decide whether we have a problem with our SQL Server installation, whether we have a problem with the database, or whether this is an expected result.
I first want to compare the performance of both:
systems
SQL Server installations.
Is there an easy way to do this?
Has anybody had comparable results on new SQL Server installations?
Before you check your hardware/OS, make sure you:
update statistics
rebuild all indexes
and then run your tests again. Also, are the editions of SQL Server identical? There are differences in how you have to write queries against indexed views based on the edition (Standard vs. Enterprise) of SQL Server.
Also, confirm that your indexed views are still indexed properly by selecting 1 row from them and observing the query plan. You should see only one table in the resulting plan.
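A sketch of the maintenance steps and the indexed-view check; the object names are placeholders, and on Standard edition the NOEXPAND hint is required before the optimizer will use the view's index:

EXEC sp_updatestats;                      -- refresh statistics database-wide
ALTER INDEX ALL ON dbo.MyTable REBUILD;   -- repeat per table, or generate per-table statements

SELECT TOP 1 * FROM dbo.MyIndexedView WITH (NOEXPAND);

The plan for the last query should show a single operation against the view's clustered index, with no base tables.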
The easiest way to collect performance data on both systems is to run PAL and collect the appropriate counters.
PAL has extra counter sets for SQL Server. It will collect and analyse the data and let you know where you have an issue.
PAL can be found here:
http://www.codeplex.com/PAL
Another important issue is the location of the filegroups. How is the underlying storage system defined? It usually has a huge impact on SQL Server. (You should talk about spindles here, not raw size...) Make sure your database files are not sharing their resources with anyone else.

Restore SQL Server 2008 database to SQL Server 2000

I have to move an entire database from a SQL Server 2008 machine to a SQL Server 2000 machine.
I created a backup using Management Studio 2008, copied it to the hard drive of the 2000 box, and from within Management Studio 2008 I chose Restore Database on the 2000 box.
I get an error message stating, "The media family on device ... is incorrectly formed. SQL Server cannot restore this media family".
If I use Enterprise Manager 2000 I get the same error.
Is there a way to move a whole database from the newer SQL server to the older?
The only thing I can think of is to recreate the whole structure and then copy data from a live database. So, create scripts that will create the tables, views, and sp's, and then create scripts to copy the data from the existing database.
As others have already said, there is no default way to do this; it's just not supported. Here are more extensive details on how to do it properly and avoid migration issues.
You need to generate scripts for structure and data and then execute them on SQL 2000 (as others have already said), but there are a couple of things to take into account.
Generate scripts in SSMS
Make sure to check the option for scripting data for SQL 2000, to avoid issues when trying to create something like a geography-type column on SQL 2000.
Make sure to review the execution order of the scripts to avoid dependency-based errors.
This is a great option for small to medium-sized databases and requires some knowledge of SQL Server (dependencies, differences between versions, and such).
Third-party tools
The idea is to use third-party database comparison tools such as ApexSQL Diff or Data Diff.
The good side is that these will take care of script execution and differences between versions.
The not-so-good side is that you'll need to pay for them after the trial ends.
I've used these two tools successfully, but you can't go wrong with any other tool on the market. Here is a list of other tools in this category.
You can't move backups from a newer version to an older one. In that case you can script your database, execute the script on the 2000 box, then use the standard data transfer tools to move any data you want.
Provided you have a network connection between the machines, use SSIS. Much easier and a lot less messing around.
You can use the script generator for your database and then, in the Properties form, select: General -> Script for server version: SQL Server 2000.
The script generator will show you anything that is not compatible with the target server version.
I've heard you can only do it by generating the SQL statement dump from the DB administrator tool and re-running those queries on the target older database.
You can generate a script that will recreate all the objects and transfer all the data... as long as everything in the db is valid in SQL 2000. So no ROW_NUMBER(), no PARTITION, no CTEs, no datetime2, hierarchyid, or several other field types, no EXECUTE AS, and lots of other goodness. Basically, there's a pretty good chance it's not possible unless your db is pretty basic.
We had a similar situation. A very low-tech but handy solution is:
back up and truncate the tables on SQL 2000.
create a LINKED server on SQL 2008, pointing to SQL 2000
run a SELECT against sysobjects to generate INSERT INTO LINKEDSERVER.db.dbo.table SELECT * FROM table statements (see the sketch below)
execute the generated script.
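A minimal sketch of the generator step; the linked server name SQL2000LINK and database name TargetDb are assumptions:

SELECT 'INSERT INTO SQL2000LINK.TargetDb.dbo.' + name +
       ' SELECT * FROM dbo.' + name + ';'
FROM sysobjects
WHERE xtype = 'U';  -- user tables only

Run the generated statements on the SQL 2008 box; identity columns, computed columns, and foreign-key load order may need extra handling.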

MS Access query design hangs on connection to SQL Server

Microsoft Access is a slick way to access data in an MS SQL Server backend database, but I've always had problems accessing (so to speak) large tables of data, especially when trying to toggle between results and design mode in Access.
Access gives me a number of nifty things, not the least of which is Crosstabs, but this hung connection to the server drives me a little crazy!
Do any MS Access gurus know how to optimize the ODBC connection so it isn't doing what appear to be full table scans when I just want to tweak and build my queries?
The ODBC driver will pass as much work as possible to SQL Server, but as soon as you use a VBA function like Nz or non-SQL Server syntax like PIVOT, the ODBC driver must pull back more data and indexes to get the work done on the client side.
As per the other answer, either build your views in SQL Server and link to the views, or else use an Access Data Project.
NB: PIVOT queries with an unknown number of columns cannot be handled in SQL Server in the same way that Access handles them natively, so if you run a pivot in Access against SQL Server data you will likely pull the whole table back. Pivot queries must be built in SQL Server using dynamic SQL techniques or pre-saved views that have all the columns hard-coded. Check out this link for one way to do this:
http://www.sqlservercentral.com/articles/Advanced+Querying/pivottableformicrosoftsqlserver/2434/
As others have said, the only way to improve performance on large tables is to have the SQL Server database engine do the work for you. A method of doing this which hasn't been mentioned is to use a pass-through query, which will enable you to keep all your code in MS Access, without having to create objects on the SQL Server:
http://support.microsoft.com/kb/303968
You will have to write SQL Server T-SQL rather than the Access dialect; however, SQL Server 2005 (when running at compatibility level 90) does support a PIVOT command.
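A sketch of what such a pass-through query might look like; the table and column names here are hypothetical:

SELECT Product, [2007], [2008]
FROM (SELECT Product, OrderYear, Amount FROM dbo.Sales) AS src
PIVOT (SUM(Amount) FOR OrderYear IN ([2007], [2008])) AS p;

Because a pass-through query is sent to the server verbatim, the pivoting happens entirely on the server and only the crosstab result comes back to Access.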
My similar problem was that an Oracle ODBC connection hung after selecting the linked table/ODBC connection; Task Manager said "not responding" after tens of minutes. The connection then pings Oracle for all available tables. I had turned on logging in the Oracle ODBC Administrator, so it had to write all of this to the log, slowing any results by perhaps hours. The log was 60 MB an hour later; once I turned logging off, everything was fine!
To turn it off, go to the Oracle installation / Network Administration / MS ODBC Administrator / Tracing tab and turn it OFF!
A good resource on ODBC is here: http://eis.bris.ac.uk/~ccmjs/odbc_section.html
Unfortunately Access is not able to push a lot of that work to the server, and yes, it will do huge table scans when designing queries against multiple tables or views in SQL Server.
You can build and tweak queries (views) in SQL Server using SSMS, store the views in SQL Server for a massive performance boost, and still use Access for your front end.
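A minimal sketch; the view and table names are placeholders. Create the view server-side, then link it into Access through the same ODBC DSN as if it were a table:

CREATE VIEW dbo.vLargeTableSlice
AS
SELECT ID, CustomerName, OrderDate
FROM dbo.LargeTable
WHERE OrderDate >= '20080101';

Access then sees a pre-filtered rowset, and the heavy scanning stays on the server.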

Importing Access data into SQL Server using ColdFusion

This should be simple. I'm trying to import data from Access into SQL Server. I don't have direct access to the SQL Server database - it's on GoDaddy and they only allow web access. So I can't use the Management Studio tools, or other third-party Access upsizing programs that require remote access to the database.
I wrote a query on the Access database and I'm trying to loop through and insert each record into the corresponding SQL Server table. But it keeps erroring out. I'm fairly certain it's because of the HTML and God knows what other weird characters in one of the Access text fields. I tried using CFQUERYPARAM, but that doesn't seem to help either.
Any ideas would be helpful. Thanks.
Try using the GoDaddy SQL backup/restore tool to get a local copy of the database. At that point, use the SQL Server DTS tool to import the data. It's an easy-to-use, drag-and-drop graphical interface.
What error(s) get(s) thrown? What odd characters are you using? Are you referring to HTML markup, or extended (e.g., UTF-8) characters?
If possible, turn on Robust Error Reporting.
If the problem is the page timing out, you can either increase the timeout using the Administrator or the cfsetting tag, or rewrite your script to process a certain number of rows and then forward to itself at the next start point.
You should be able to execute saved DTS packages on MS SQL Server from the application server's command line. Since this is the case, you can use <cfexecute> to issue a request to DTSRun.exe. (See example.) This is of course assuming you are on a server where the command is available.
It's never advisable to loop through records when a SQL Update can be used.
It's not clear from your question which database interface layer you are using, but with the right interfaces it is possible to insert data from a source outside the database, provided the interface supports both types of databases. This can be done in the FROM clause of your SQL statement by specifying not just the table name, but the connect string for the database. Assuming that your web host has ODBC drivers for Jet data (you're not actually using Access, which is the app development part; you're only using the Jet database engine), the connect string should be sufficient.
EDIT: If you use the Jet database engine to do this, you should be able to specify the source table something like this (where tblSQLServer is a table in your Jet MDB that is linked via ODBC to your SQL Server):
INSERT INTO tblSQLServer (ID, OtherField)
SELECT ID, OtherField
FROM [c:\MyDBs\Access.mdb].tblSQLServer
The key point is that you are leveraging the Jet db engine here to do all the heavy lifting for you.