SQL Server Linked Server to Progress is slow with an openquery view - sql-server

We have a SQL Server database setup with a Linked Server setup connecting to a Progress OpenEdge database. We created a SQL Server view (for using with SSRS) of some of the OpenEdge tables using code similar to the following:
CREATE VIEW accounts AS SELECT * FROM OPENQUERY(myLinkedServerName,
'SELECT * FROM PUB.accounts')
GO
CREATE VIEW clients AS SELECT * FROM OPENQUERY(myLinkedServerName,
'SELECT * FROM PUB.clients')
For some reason, queries against these views seem to bring back the whole table and then filter on the SQL Server side, instead of pushing the filter down to the Progress side.
Anybody know why or how to remedy the situation?
Thanks

Is it any faster when executed as a native OpenEdge SQL query? (You can use the sqlexp command line tool to run the query from a proenv prompt.)
If it is not, then the issue may be that you need to run UPDATE STATISTICS on the database:
http://knowledgebase.progress.com/articles/Article/20992
You may also need to run dbtool to adjust field widths (OpenEdge fields are all variable width and can be over-stuffed -- which gives SQL clients fits.)
http://knowledgebase.progress.com/articles/Article/P24496
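As a rough sketch based on the KB article above, the statistics update can be run per table from sqlexp; the table name here is taken from the asker's example:

```sql
-- Run via sqlexp from a proenv prompt; repeat for each table the views use.
UPDATE TABLE STATISTICS AND INDEX STATISTICS AND ALL COLUMN STATISTICS
    FOR PUB.accounts;
COMMIT;
```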

Related

Oracle and SQL Server DBLink Performance Issue (Direct select from table#dblink vs create view)

I've successfully created a dblink to access SQL Server 2000 from Oracle 12, and I'm querying it from Oracle PL/SQL. Running
select * from table1@dblink where id=1
produces output in the PL/SQL window almost instantly. But if I create a view first and then run the select statement, the result is significantly slower:
create view view1 as select * from table1@dblink;
select * from view1 where id=1
From my understanding, it selects from the same table, and I'm creating the view just to simplify the name.
Thank you.
It looks like the problem you are facing is the site on which the query runs. When a query references tables accessed via a dblink, Oracle has two options: retrieve all the data over the dblink and apply the filter and join conditions on the Oracle server, or push those conditions to the remote site (SQL Server in your case) and retrieve already-filtered rows. You can check the query plans to see the difference. You can also control this behavior with the DRIVING_SITE hint; create the view with this hint and your query should run as fast as the first one:
create view view1 as select /*+ DRIVING_SITE(t) */ * from table1@dblink t
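To see where the work is actually done, you can compare the plans of the two statements, e.g.:

```sql
EXPLAIN PLAN FOR SELECT * FROM view1 WHERE id = 1;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

If the filter appears inside the remote SQL of a REMOTE plan step, the condition is being pushed to SQL Server rather than applied after the fact on the Oracle side.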

Inserting results from a very large OPENQUERY without distributed transactions

I am trying to insert rows into a Microsoft SQL Server 2014 table from a query that hits a linked Oracle 11g server. I have read-only access rights on the linked server. I have traditionally used OPENQUERY to do something like the following:
INSERT INTO <TABLE> SELECT * FROM OPENQUERY(LINKED_SERVER, <SQL>)
Over time the SQL queries I run have been getting progressively more complex and recently surpassed the OPENQUERY limit of 8000 characters. The general consensus on the web appears to be to switch to something like the following:
INSERT INTO <TABLE> EXECUTE(<SQL>) AT LINKED_SERVER
However, this seems to require that distributed transactions are enabled on the linked server, which isn't an option for this project. Are there any other possible solutions I am missing?
Can you get your second method to work if you disable the "remote proc transaction promotion" linked server option?
EXEC master.dbo.sp_serveroption
@server = 'YourLinkedServerName',
@optname = 'remote proc transaction promotion',
@optvalue = 'false'
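With promotion disabled, the EXECUTE ... AT method should run without enlisting a distributed transaction. As a sketch, with placeholder table and column names:

```sql
-- Hypothetical names; the remote SELECT string is not subject
-- to OPENQUERY's 8000-character limit.
INSERT INTO dbo.LocalTable (Col1, Col2)
EXECUTE ('SELECT col1, col2 FROM remote_schema.remote_table') AT LINKED_SERVER;
```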
If SQL Server Integration Services is installed/available, you could do this with an SSIS package. SQL Server Import/Export Wizard can automate a lot of the package configuration/setup for you.
Here's a previous question with some useful links on SSIS to Oracle:
Connecting to Oracle Database using Sql Server Integration Services
If you're interested in running it via T-SQL, here's an article on executing SSIS packages from a stored proc:
http://www.databasejournal.com/features/mssql/executing-a-ssis-package-from-stored-procedure-in-sql-server.html
I've been in a similar situation before, what worked for me was to decompose the large query string while still using the query method below. (I did not have the luxury of SSIS).
FROM OPENQUERY(LINKED_SERVER, < SQL >)
Instead of inserting directly into your table, move your main result set into a local temporary landing table first (this can be a physical or a temp table).
Decompose your < SQL > query by moving transformation and business-logic code out of the < SQL > string and into the SQL Server boundary.
If you have joins in your < SQL > query, bring those result sets across to SQL Server as well, then join locally to your main result set.
Finally, perform your insert locally.
Clear your staging area.
There are various approaches (like wrapping your open queries in views), but I like the flexibility; reducing my open queries to the minimum and storing and transforming locally yielded better results.
Hope this helps.
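A minimal sketch of that staging pattern, with hypothetical table and column names:

```sql
-- 1. Land the minimal remote result set locally.
SELECT *
INTO #staging
FROM OPENQUERY(LINKED_SERVER, 'SELECT id, amount FROM remote_table');

-- 2. Join and transform locally, then insert.
INSERT INTO dbo.TargetTable (id, amount, client_name)
SELECT s.id, s.amount, c.name
FROM #staging s
JOIN dbo.Clients c ON c.id = s.id;

-- 3. Clear the staging area.
DROP TABLE #staging;
```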

Creating a snapshot database to another SQL Server

I'm trying to save the values of several columns of one table to another table on a different server. I am using SQL Server. I would like to do this without running external programs that query from this database and insert the results into the new database. Is there any way to do this from within the SQL Server Management Studio?
This is a recurring event that occurs every hour. I have tried scheduling maintenance tasks that execute custom T-SQL scripts but I'm having trouble getting the connection to the remote server.
Any help would be appreciated.
If you can set up the remote server as a linked server you should be able to configure the SQL Server Agent to execute jobs that contain queries that access tables on both the local and linked server. Remember that you might have to configure the access rights for the account used to run SQL Server Agent so that it has permissions to read/write tables on both servers.
This practice might not be without issues though as this article discusses.
You can use a four-part name like:
INSERT [InstanceName].[DatabaseName].[SchemaName].[TableName]
SELECT * FROM [SourceInstanceName].[SourceDatabaseName].[SourceSchemaName].[SourceTableName]
But first you will have to set the remote server up as a linked server, like so:
https://msdn.microsoft.com/en-us/library/aa560998.aspx
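For example, something along these lines (the server name is a placeholder; see the article above for the full options):

```sql
-- Register the remote SQL Server instance as a linked server.
EXEC master.dbo.sp_addlinkedserver
    @server = N'RemoteServerName',
    @srvproduct = N'SQL Server';

-- Map the current login to the remote server using its own credentials.
EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname = N'RemoteServerName',
    @useself = N'TRUE';
```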

Alternative SQL Server SMO methods

I use SQL Server SMO methods to get SQL Server metadata, such as the list of databases.
Is there a way to execute the above action using ADO.NET without sending a T-SQL parameter to SQL Server?
You can also inspect the system catalog views, e.g.
SELECT * FROM sys.databases
to get a list of databases, or
SELECT * FROM sys.tables
to get a list of tables inside the current database.
Read all about the system catalog views on MSDN.
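For example, to list the tables together with their schemas:

```sql
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
ORDER BY s.name, t.name;
```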
But I don't understand what you mean here:
Is there a way to execute the above action using ADO.NET without sending a T-SQL parameter to SQL Server?
What exactly (and why) do you not want to send to ADO.NET or SQL Server? Please elaborate.

Can't find table object name

I have an application in classic ASP and a database in SQL Server 2005.
I transferred the database to SQL Server Express Edition and ran into a strange problem: I can only see the tables in the database this way:
information_Schema.dbo.test, so when I execute the SQL command
select * From test
I get an error that it can't find the table.
When I execute
select * From information_Schema.dbo.test
I do get results.
The problem is that my application consists of many, many files and I can't rewrite all the SQL commands.
Is there any way to find a solution in SQL without changing anything in my application?
I would guess you are not connecting to the information_Schema database but to some other database that does not contain the table. Did you put the table in the wrong place (information_Schema doesn't sound like a typical application database location to me), or is your connection string wrong?
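One quick check, sketched here, is to confirm which database the connection is actually using and whether the table exists there:

```sql
SELECT DB_NAME();                                 -- database the current connection is using
SELECT name FROM sys.tables WHERE name = 'test';  -- does the table exist in this database?
```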