Access database - links to SQL Server and Oracle

The application part of my database is in Access 2003, and I use tables that are linked from SQL Server. Now I have some tables that I have to link from an Oracle database. I link them through an ODBC connection and it works fine. Is it possible to create that Oracle link in SQL Server instead, and then link the table into Access as if it were just another SQL Server table? In other words, I want to use a single ODBC connection to SQL Server, and have SQL Server hold the link to Oracle.

Yes, I believe the double indirection structure you suggest should work "OK". That's because MS-SQL linked server sources are handled very much like local databases and can be queried individually, i.e. within queries not involving local databases.
Do note, however, that it could be much less efficient, since you introduce an extra "hop". Also, look out for possible issues with type mapping: some types in Oracle may get mapped to a slightly different type in SQL Server than when accessed directly from MS Access. Such type-mapping issues are, however, easy to work around.
Edit: To "establish a connection" between MS-SQL and Oracle servers
This concept is known as a "linked server" in MS-SQL lingo. See this MSDN article for an overview and details about the sp_addlinkedserver stored procedure. The latter document provides the connection parameters required for various sources, including Oracle and ODBC (i.e. for Oracle you can either use ODBC, which is generally easier but less efficient, or, for Oracle versions 8 and up, an OLE DB driver, which as implied may be harder to configure but provides better performance).
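As a minimal sketch, assuming the Oracle OLE DB provider is installed (the linked server name, TNS alias, and credentials below are placeholders):

EXEC sp_addlinkedserver
    @server = N'ORA_LINK',           -- arbitrary name for the linked server
    @srvproduct = N'Oracle',
    @provider = N'OraOLEDB.Oracle',  -- Oracle's OLE DB provider
    @datasrc = N'MyTnsAlias';        -- alias from tnsnames.ora

EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'ORA_LINK',
    @useself = 'false',
    @rmtuser = N'some_user',         -- placeholder Oracle account
    @rmtpassword = N'some_password';

-- then query it from SQL Server (and hence from Access) via OPENQUERY:
SELECT * FROM OPENQUERY(ORA_LINK, 'SELECT * FROM some_table');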
Again, even with the gain associated with the Oracle OLE DB driver, the extra hop may hinder the overall performance of your setup...

Related

Does ORACLE have any construct like Sql Server's schema?

I am generally a Sql Server coder, but we have a client who wants to move a system from Sql to ORACLE due to the new licensing model of Sql Server.
I know historically, ORACLE has no logical grouping of objects within a db/schema, along the lines of a Sql Server schema. It's been a while since I've done any real ORACLE work though, so I'm just wondering if somewhere along the line, they may have added such a construct?
The version of ORACLE we are porting the Sql Server database into is ORACLE 11g (11.2).
Traditionally, I've seen Oracle developers do this using just a prefix on table/view/object names. So for example a Sql Server object users.OPTIONS might become USR_OPTIONS in ORACLE. This works, to be sure, but it just feels really kludgy to me: it's not so much an actual hierarchy as one "forced" in by using contorted names.
Oracle does have schemas, including in 11gR2; they just work differently from SQL Server's. An Oracle schema is tied to a user, so you'll have to (somewhat confusingly) create a user for each schema you're creating. This isn't a big deal, but some people find it distasteful.
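A minimal sketch, with placeholder names and password:

CREATE USER usr IDENTIFIED BY some_password;   -- the user doubles as the schema
GRANT CREATE SESSION, CREATE TABLE TO usr;
CREATE TABLE usr.app_options (id NUMBER);      -- referenced as schema.object, much like SQL Server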
Oracle 12c Enterprise Edition has a feature called Multitenant that allows for multiple databases on the same Oracle server in much the same way that SQL Server allows out of the box.
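As a sketch of that feature (the names, password, and file paths below are placeholders):

CREATE PLUGGABLE DATABASE pdb1
    ADMIN USER pdb_admin IDENTIFIED BY some_password
    FILE_NAME_CONVERT = ('/u01/app/oracle/pdbseed/', '/u01/app/oracle/pdb1/');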

Delphi with SQL Server: OLEDB vs. Native Client drivers

I have been told that SQL Native Client is supposed to be faster than the OLEDB drivers. So I put together a utility to do a load-test between the two - and am getting mixed results. Sometimes one is faster, sometimes the other is, no matter what the query may be (simple select, where clause, joining, order by, etc.). Of course the server does the majority of the workload, but I'm interested in the time it takes between the data coming into the PC to the time the data is accessible within the app.
The load tests consist of very small queries which return very large datasets. For example, I do select * from SysTables and this table has 50,000+ records. After receiving the data, I do another load of looping through the results (using while not Q.eof ... Q.next ... etc.). I've also tried adding some things to the query - such as order by Val where Val is a varchar(100) field.
Here's a sample of my load tester; the numbers at the very bottom are averages...
So really, what are the differences between the two? I do know that OLE is very flexible and supports many different database engines, whereas Native Client is specific to SQL Server alone. But what else is going on behind the scenes? And how does that affect how Delphi uses these drivers?
This is specifically using ADO via the TADOConnection component and TADOQuery as well.
I'm not necessarily looking or asking for ways to improve performance - I just need to know what are the differences between the drivers.
As stated by Microsoft:
SQL Server Native Client is a stand-alone data access application programming interface (API), used for both OLE DB and ODBC, that was introduced in SQL Server 2005. SQL Server Native Client combines the SQL OLE DB provider and the SQL ODBC driver into one native dynamic-link library (DLL).
From my understanding, ADO is just an object-oriented, application-level DB layer over OleDB. It will use OleDB in all cases. What changes is the provider used. If you specify the SQLNCLI10 provider, you'll use the latest version of the protocol. If you specify the SQLOLEDB provider, you'll use the generic SQL Server 2000+ protocol.
As such:
ADO -> OleDB -> SQLNCLI10 provider -> MS SQL Server (MSSQL 2000, 2005 or 2008 protocol)
ADO -> OleDB -> SQLOLEDB provider -> MS SQL Server (MSSQL 2000 protocol)
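In Delphi terms, the difference is just the provider named in the TADOConnection connection string. A sketch, with placeholder server and database names:

Provider=SQLOLEDB;Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI;
Provider=SQLNCLI10;Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI;DataTypeCompatibility=80;

(The DataTypeCompatibility=80 keyword is discussed further below: it makes the Native Client provider map the newer data types to the ADO types existing code expects.)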
Performance-wise, I don't think you'll see a big difference; as always, it will depend on the data processed.
But IMHO it is recommended to use the provider best fitted to your database. Some kinds of data (like varchar(max) or Int64) are said to be handled better, and the SQLNCLI10 provider has been updated more recently, so I'd guess it is better tuned.
In your question you are mixing up OLE and SQL Native Client. You probably mean several things at the same time:
OLE -> OLEDB, which is an obsolescent generic data access technology;
OLE -> "SQL Server OLEDB Provider", which is the SQL Server 2000 OLEDB provider;
SQL Server Native Client, which is the SQL Server 2005-and-up client software. It includes both an OLEDB provider and an ODBC driver.
As for OLEDB providers and the SQL Server versions they support:
"SQL Server OLEDB Provider" (SQLOLEDB) supports the SQL Server 2000 protocol;
"SQL Server Native Client 9" (SQLNCLI) supports the SQL Server 2000 and 2005 protocols;
"SQL Server Native Client 10" (SQLNCLI10) supports the SQL Server 2000, 2005 and 2008 protocols.
You did not say which SQL Server version you are using. In general, it is best to use the SQL Server OLEDB provider corresponding to your SQL Server version; otherwise you can run into incompatibilities between server and client versions.
Comparing in the abstract, I can only speculate about differences between SQLNCLI and SQLOLEDB:
one uses the server protocol more correctly;
one uses more advanced protocol features;
one performs more processing, which helps it handle more situations;
one uses a more generic / more optimized data representation.
Without a correct benchmark application and environment it is hard to accept your comparison results, because they may depend on multiple factors.
I think you should concentrate on optimizing the:
sql server engine and database settings
your queries
your data schema
The difference in speed between connection libraries is so small as to be negligible; it may cause a very tiny slowdown, and only in very specific scenarios.
Short answer:
It doesn't matter.
Long answer:
The difference in performance between the 2 client libs is relatively negligible compared to the Server execution + Network data transfer, which is what you are mostly measuring, hence the inconclusive test data. There is a good chance that you use the same low level layer in both cases anyway with only a minor difference in indirection on top of it.
As a matter of fact, if your tests show no visible difference, it just proves that the slowness is not related with the choice of the client lib and optimization should be searched elsewhere.
For your present test, you should use the SQL Profiler to measure query execution time on the server while you run your test; you would see that it also varies quite a bit. Subtracting those numbers from the end results would give you the combined client time + network transfer time.
Network performance is quite variable and has more impact on your test than you would think. Try having someone stream video while you run your test and you will see... (I ran into exactly that at my former company; tuning the SQL was not the answer in that case.)
While it certainly could be at the database end, I think there is a lot to look at in the overall system, at least in your test system. In general, it is hard to do timing if the work you are asking the database to do is very small compared to the overall work. So: is the database task a big job, or simply the retrieval of one data item? Are you using stored procedures or simple queries? Does your test prepare any stored procedures before running? Do you get consistent times each time you run a test in succession?
The query execution time tells you how well the database engine (and any schema/query optimization) works. Here, what client library you use doesn't matter: ODBC, OLEDB, Native Client, whatever, they all just pass the query along to the database, and it is executed there.
The time it takes to read from the first record to the last tells you how well the data access layer and your network perform. Here you time how well data is returned and "cached" on your client. Depending on the data, the network settings may be important; for example, if your tables use "large" records, a larger MTU may require fewer packets (and fewer round trips) to send them to the client.
Anyway, before looking for a solution, you have to identify the problem. Profile your application, both client side and server side (SQL Server has good tools for that), and find out exactly what makes it slower. Then and only then can you look for the correct solution. Maybe the data access layer is not the problem; 20,000 records is a small dataset today, not a large one.
You cannot use the native clients with ADO, as is.
ADO does not understand the XML SQL Server data type. The field type:
field: ADOField;
field := recordset.Fields.Items['SomeXmlColumn'];
Attempting to access field.Value throws an EOleException:
Source: Microsoft Cursor Engine
ErrorCode: 0x80040E21 (E_ITF_0E21)
Message: Multiple-step operation generated errors. Check each status value
The native client drivers (e.g. SQLNCLI, SQLNCLI10, SQLNCLI11) present an Xml data type to ADO as
field.Type_ = 141
While the legacy SQLOLEDB driver presents an Xml data type to ADO as adLongVarWChar, a Unicode string:
field.Type_ = 203 //adLongVarWChar
And the VARIANT contained in field.Value is a WideString (technically known as a BSTR):
TVarData(field.Value).vtype = 8 //VT_BSTR
The solution, as noted by Microsoft:
Using ADO with SQL Server Native Client
Existing ADO applications can access and update XML, UDT, and large value text and binary field values using the SQLOLEDB provider. The new larger varchar(max), nvarchar(max), and varbinary(max) data types are returned as the ADO types adLongVarChar, adLongVarWChar and adLongVarBinary respectively. XML columns are returned as adLongVarChar, and UDT columns are returned as adVarBinary. However, if you use the SQL Server Native Client OLE DB provider (SQLNCLI11) instead of SQLOLEDB, you need to make sure to set the DataTypeCompatibility keyword to "80" so that the new data types will map correctly to the ADO data types.
They also note:
If you do not need to use any of the new features introduced in SQL Server 2005, there is no need to use the SQL Server Native Client OLE DB provider; you can continue using your current data access provider, which is typically SQLOLEDB.
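If you do use the Native Client provider, the DataTypeCompatibility keyword goes in the ADO connection string, e.g. (server and database names are placeholders):

Provider=SQLNCLI11;Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI;DataTypeCompatibility=80;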
Also, besides the lack of support for the XML data type, Delphi ADO does not recognize columns defined in SQL Server as TIME (DBTYPE_DBTIME2 = 145) or DATETIMEOFFSET (DBTYPE_DBTIMESTAMPOFFSET = 146); trying to use those fields in your application will cause multiple errors like 'Invalid Variant Value', or some controls (like TDBGrid) will simply drop the field entirely.
The lack of support for DBTYPE_DBTIME2 = 145 seems like a bug/QC issue, since there is already ftTime support (it's also not clear to me why SQL Server doesn't return DBTYPE_DBTIME, which Delphi does support). The XML and offset types, on the other hand, have no clear TFieldType mapping.
Data Type Support for OLE DB Date/Time Improvements

migrate data from MS SQL to PostgreSQL?

I've looked around and can't seem to find anything that answers this specific question.
What is the simplest way to move data from an MS SQL Server 2005 DB to a Postgres install (8.x)?
I've looked into several utilities like "Full Convert Enterprise", etc, and they all fail for one reason or another, ranging from strange errors that make it blow up to inserting nulls rather than actual data (wth?).
I'm looking at a DB that is all tables except for a single view; no stored procs, functions, etc.
At this point I'm about to write a small utility to do it for me, I just can't believe that's necessary. Surely there's something somewhere that can do this? I'm not even too worried about cost, although free is preferable :)
I don't know why nobody has mentioned the simplest and easiest way, using the robust SQL Server Management Studio.
You just need to use the built-in SSIS Import/Export feature. You can follow these steps:
Firstly, you need to install the PostgreSQL ODBC Driver for Windows. It's very important to install the correct version in terms of CPU arch (x86/x64).
Inside Management Studio, right-click on your database: Tasks -> Export Data
Choose SQL Server Native Client as the data source.
Choose .Net Framework Data Provider for ODBC as the destination driver.
Set the Connection String to your database in the following form:
Driver={PostgreSQL ODBC Driver(UNICODE)};Server=;Port=;Database=;UID=;PWD=
On the next page, you just need to select which tables you want to export. SQL Server will generate a default mapping, and you are free to edit it. You'll probably encounter some type-mismatch problems that take some time to solve; for example, if you have a boolean column in SQL Server, you should export it as int4.
Microsoft Docs hosts a detailed description of connecting to PostgreSQL through ODBC.
PS: if you want to see your installed ODBC drivers, check the ODBC Data Source Administrator.
Take a look at the Software Catalogue. Under Administration/development tools I see DBConvert for MS SQL & PostgreSQL. Probably there are other similar tools listed.
You can use the MS DTS functionality (renamed to SSIS in the latest version, I think). One issue with DTS is that I've been unable to make it do a commit after each row when loading the data into pg. That's fine if you only have a couple of 100k rows or so, but beyond that it's really very slow.
I usually end up writing a small script that dumps the data out of SQL Server in CSV format, and then using COPY ... WITH CSV on the PostgreSQL side; a sketch follows.
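A minimal sketch of that approach; the table, file, and server names below are placeholders (note that bcp does not quote fields, so this only works cleanly for data without embedded commas):

REM on the SQL Server side, dump a table to CSV with bcp:
bcp "SELECT * FROM mydb.dbo.mytable" queryout mytable.csv -c -t, -S myServer -T

-- on the PostgreSQL side, load it:
COPY mytable FROM '/path/to/mytable.csv' WITH CSV;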
Both those only take care of the data though. Taking care of the schema is a bit harder, since datatypes don't necessarily map straight over. But it can easily be scripted together with a static load of the schema. If the schema is simple (just varchar/int datatypes for example), that part can also easily be scripted off the data in INFORMATION_SCHEMA.
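For instance, a rough sketch of pulling the type information to script from; only a couple of type mappings are shown, purely as an illustration:

SELECT TABLE_NAME, COLUMN_NAME,
       CASE DATA_TYPE
           WHEN 'varchar' THEN 'varchar(' + CAST(CHARACTER_MAXIMUM_LENGTH AS varchar(10)) + ')'
           WHEN 'int'     THEN 'integer'
           WHEN 'bit'     THEN 'int4'    -- matches the mapping suggested above
           ELSE DATA_TYPE
       END AS pg_type
FROM INFORMATION_SCHEMA.COLUMNS
ORDER BY TABLE_NAME, ORDINAL_POSITION;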
Well there are .NET bindings for MS SQL Server 2005 (obviously) and also for PostgreSQL. So it would only take a few lines of code to code up a program that could transfer data safely from one to the other. The view would probably have to be done manually as Postgres doesn't use the same language for views as SQL Server.
This answer is here to summarize the current connection string, because someone may have overlooked the comments.
Current version of ODBC connection string is:
For 32-bit system
Driver={PostgreSQL UNICODE};Server=192.168.1.xxx;Port=5432;Database=yourDBname;Uid=postgres;Pwd=admin;
For 64-bit system
Driver={PostgreSQL UNICODE(x64)};Server=192.168.1.xxx;Port=5432;Database=yourDBname;Uid=postgres;Pwd=admin;
You can check the driver name by typing ODBC into the Windows search box and opening the ODBC Data Source Administrator.

Sql Server x64 and x86 Linked Server

I have a Visual FoxPro table that I need to access from Sql Server. In Sql Server x86, I would just create a linked server. Unfortunately, there is no x64 driver for VFP - so Sql Server x64 can't create a linked server to it.
So far, I've come up with following options - none of which I'm particularly fond of:
Set up an x86 Sql Server to be used as a relay, so that queries go from x64 -> x86 -> VFP.
I don't really care for this, as in addition to being dev, I'm also sysadmin. So, this means I need to patch, maintain, and monitor yet another Sql Server - and possibly yet another server (assuming I don't just use a separate instance).
Also, since the VFP provider doesn't work with 4 part syntax, I have to use OPENQUERY. Thinking of all the single quote escaping that'd need to happen to have an OPENQUERY statement embedded into another OPENQUERY statement makes my head spin....
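To illustrate (the relay linked server name is a placeholder), even a single OPENQUERY already doubles every quote:

SELECT * FROM OPENQUERY(VFP_RELAY, 'SELECT * FROM orders WHERE status = ''OPEN''')

Embedding that entire statement as the string argument of an outer OPENQUERY would double them all again.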
Create a CLR Table Valued Function, though the assembly would (presumably?) also be x64 - so I'd have to go out of proc (IPC? Webservice?) to actually run queries
Turns out that TVFs require a schema, so this option isn't as clean as I initially thought. I did a spike to get a WCF client into MSSQL, which returns a single column of XML that can then be parsed with the Sql XML datatype functions. It works, and is actually a little bit nicer to query than OPENQUERY since it actually takes variables as parameters. That saves me most of the single quote and EXEC dance.
Of course, WCF inside Sql is wholly unsupported, and smells like a pretty big hack. I have pretty serious reservations on performance and reliability.
Stop making queries from Sql Server to VFP, and rewrite a good bit of client code
Obviously, this is the "right" answer. But, there is a good deal of client code that relies on joins between Sql Server tables and VFP tables. Rewriting this stuff to populate a temp table or do client side joins seems like a rather large burden.
Here's hoping someone can suggest a better alternative, or some similar experiences.
It's a nasty problem, I agree.
Running SSIS in 32-bit mode to import the data on a regular basis (or on demand, in a job triggered by the same SP) into a native SQL Server table is another option, if you can stand the delay. Whether it's acceptable depends on how frequently the data changes and whether slightly out-of-date data is a problem.
I think I found an alternative. Microsoft has released an updated driver for Access, which comes in both 32bit and 64bit flavors. Like the original Jet OleDB driver, this will allow you to access dBase file formats from SQL Server x64.
The only restriction is that the DBF must be in one of the dBASE formats supported by ISAM. I have done a few tests using a dBASE IV format and it seems to work, using the following connection string.
Provider=Microsoft.ACE.OLEDB.12.0;Data Source=c:\folder;Extended Properties=dBASE IV;User ID=Admin;Password=;
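Using that provider for a linked server might look like this; the linked server name and folder are placeholders:

EXEC sp_addlinkedserver
    @server = N'DBF_LINK',
    @srvproduct = N'ACE',
    @provider = N'Microsoft.ACE.OLEDB.12.0',
    @datasrc = N'c:\folder',
    @provstr = N'dBASE IV';

SELECT * FROM DBF_LINK...mytable   -- each .dbf file in the folder appears as a table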

SQL Server 2005 Linked Server to DB2 Performance issue

I have a SQL Server 2005 machine with a JDE DB2 set up as a linked server.
For some reason the performance of any queries from this box to the db2 box are horrible.
For example, the following takes 7 minutes to run from Management Studio:
SELECT *
FROM F42119
WHERE SDUPMJ >= 107256
Whereas it takes seconds to run in iSeries Navigator
Any thoughts? I'm assuming some config issue.
In certain searches SQL Server will decide to pull the entire table down to itself and sort and search the data within SQL Server instead of sending the query to the remote server. This is usually a problem with collation settings.
Make sure the provider has the following options set:
Data Access,
Collation Compatible,
Use Remote Collation
Then create a new Linked Server using the provider and select the following provider options
Dynamic Parameters,
Nested Queries,
Allow In Process
After setting the options change the query slightly to get a new query plan.
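The server-level options can also be scripted; a sketch, with a placeholder linked server name (the provider options themselves are set on the provider node in Management Studio):

EXEC sp_serveroption N'DB2LINK', 'data access', 'true';
EXEC sp_serveroption N'DB2LINK', 'collation compatible', 'true';
EXEC sp_serveroption N'DB2LINK', 'use remote collation', 'true';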
It might be a memory issue on your SQL Server machine. I recently learned that linked server queries use memory allocated by the OS, whereas native SQL Server queries use memory pre-allocated by SQL Server. If your SQL Server machine is configured to use 90% or more of the server's memory, I would scale that back a bit; maybe 60% is the right place to be.
Another thing to check is the SQL Server processor priority. Make sure "Boost SQL Server priority" is not enabled.
I assume you are going through ODBC for access to DB2. Remember that you are not writing native DB2 queries here, but ODBC SQL queries. If you only need read-only data, you may want to try configuring your ODBC data source in read-only mode (if that is an option).
In a project with DB2 integration, I replaced every query done via a direct select or view with stored procedures calling OPENQUERY.
My interpretation is that SQL Server fetches the whole table before applying the WHERE conditions, whereas OPENQUERY passes the SQL statement directly to the db driver.
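For example, with a placeholder linked server name, a direct query like the one in the question became:

SELECT * FROM OPENQUERY(DB2LINK, 'SELECT * FROM F42119 WHERE SDUPMJ >= 107256')

so the WHERE clause is evaluated on the DB2 side rather than after pulling the whole table across.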
Anyway, performance was acceptable after the modifications.
My first thought would go to the drivers. Years ago I had to link DB2 to SQL Server 2000 and it was extremely difficult to find the correct combination of drivers and setup parameters that would work...
So maybe I'm biased because of that, but I would try upgrading or downgrading the driver or changing the setup so that the DB2 driver can run INPROC (if it's not already doing so).
I've had several issues with DB2 as a linked server. I do not know if it will address your problems, but here is what fixed mine:
1) Enabled lazy close support and pre-fetch during EXECUTE in the ODBC settings
2) Add "FOR FETCH ONLY" on all selects
3) Query using the SELECT * FROM OPENQUERY(LinkedServerName, 'SQL Command') method; an example combining 2) and 3) follows
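Combining 2) and 3), a query through the linked server looks something like this (the linked server name is a placeholder):

SELECT * FROM OPENQUERY(DB2LINK, 'SELECT * FROM F42119 WHERE SDUPMJ >= 107256 FOR FETCH ONLY')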
