Using the results of a query in OPENQUERY (SQL Server)

I have a SQL Server 2005 database that is linked to an Oracle database. What I want to do is run a query to pull some ID numbers out of it, then find out which ones are in Oracle.
So I want to take the results of this query:
SELECT pidm
FROM sql_server_table
And do something like this to query the Oracle database (assuming that the results of the previous query are stored in #pidms):
OPENQUERY(oracledb,
'
SELECT pidm
FROM table
WHERE pidm IN (' +
#pidms + ')')
GO
But I'm having trouble thinking of a good way to do this. I suppose I could do an inner join of queries similar to these two. Unfortunately, there are a lot of records to pull within a limited timeframe, so I don't think that would perform well.
Any suggestions? I'd ideally like to do this with as little Dynamic SQL as possible.

Ahhhh, pidms. Brings back bad memories! :)
You could do the join, but you would do it like this:
select sql.pidm, sql.field2
from sqltable as sql
inner join (select pidm, field2 from oracledb..schema.table) as orcl
    on sql.pidm = orcl.pidm
I'm not sure whether you could write a PL/SQL procedure that would accept a table variable from SQL Server... maybe, but I doubt it.

Store the OPENQUERY results in a temp table, then do an inner join between the SQL Server table and the temp table. For example:
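A minimal sketch of that approach, assuming the linked server name from the question and a placeholder Oracle schema/table name:
-- Pull the Oracle IDs into a local temp table via OPENQUERY
-- (some_schema.some_table is a placeholder for the real Oracle table).
SELECT pidm
INTO #oracle_pidms
FROM OPENQUERY(oracledb, 'SELECT pidm FROM some_schema.some_table');

-- Join locally against the SQL Server table.
SELECT s.pidm
FROM sql_server_table AS s
INNER JOIN #oracle_pidms AS o ON s.pidm = o.pidm;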

I don't think you can do that join, since OPENQUERY requires a literal string (as you wrote above).

BG: Actually, a JOIN from SQL Server to Oracle via OPENQUERY does work, avoiding a #temp table and allowing a join back to the SQL Server side without parameters. For example:
[SQL SP] LEFT JOIN OPENQUERY(ORADB,
    'SELECT COUNT(DISTINCT O.ORD_NUM) LCNT,
            O.ORD_MAIN_NUM
     FROM CUSTOMER.CUST_FILE C
     JOIN CUSTOMER.ORDER_NEW O
         ON C.ID = O.ORD_ID
     WHERE C.CUS_ID NOT IN (''2'',''3'')
     GROUP BY O.ORD_MAIN_NUM') LC
    ON T.ID = LC.ORD_MAIN_NUM
Cheers, Bill Gibbs

Related

Is there a way to identify if a column is not being used in a SQL Server database?

I have a big database with a lot of tables, and I would like to identify which columns are not referenced by any stored procedure or query, i.e. are not in use.
I'm not sure if this is what you're looking for, but under Views in your database there is a folder of System Views; three of them are shown below. Look up the table name in sys.all_objects, then use that table's object_id to filter the other queries. There may be other metadata there that is appropriate for your need.
SELECT *
FROM sys.all_objects
SELECT *
FROM sys.all_columns
WHERE object_id = 981578535
SELECT *
FROM sys.all_views
WHERE object_id = 981578535
After some investigation I found a SQL Server feature called Query Store, which persists query executions and related information to disk. If you have a SQL Server 2016 instance, you can enable it in the database properties, where you can also set how many days of history to capture; then you can, in theory, search it for the columns in question. The goal of this feature is to give you more options for performance tuning.
You can find the information with this query:
SELECT TOP 10 qt.query_sql_text, q.query_id,
qt.query_text_id, p.plan_id, rs.last_execution_time
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs
ON p.plan_id = rs.plan_id
WHERE qt.query_sql_text LIKE '%ColumnToFind%'
ORDER BY rs.last_execution_time;
Credits: Query Store

How to handle performance issue of update query inside of the SQL stored procedure?

I created a SQL query to update a table, but it has a performance problem. My ASP.NET application gets a timeout from the SQL database when I call the update below from a stored procedure. Can you help me make it faster and fix the timeout problem?
UPDATE e SET e.X=tablo.X,E.Y=ISNULL(tablo.Y,0),e.V=tablo.V,e.ADRES=tablo.ADRES,e.A=tablo.A
FROM tbltabloor e
JOIN tablo_NOT tablo ON e.Z=tablo.Z AND E.T=convert(datetime,CONVERT(varchar(8),tablo.T,112)) AND E.U=tablo.U
WHERE NOT exists (select * from tablo_NOT t where t.Z =e.Z AND E.T=convert(datetime,CONVERT(varchar(8),t.T,112)) AND E.U=t.U
AND E.X=t.X AND ISNULL(t.Y,0)=ISNULL(E.Y,0) AND E.V=t.V AND E.ADRES=t.ADRES
)
Did you create indexes on all of the join fields between the tables?
To speed this up without changing the structure of your current query, you should create those indexes.
Note that for computed expressions such as convert(datetime, CONVERT(varchar(8), tablo.T, 112)) you need the SQL Server equivalent of a function-based index, i.e. an index on a computed column; here is a useful link that can help: http://stackoverflow.com/questions/22168213/function-based-indexes-in-sql-server
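For reference, a minimal sketch of the computed-column approach (the table and column names are taken from the question; the column name T_DATE and the index name are made up, and an explicit style is added to the outer CONVERT so the expression is deterministic and can be persisted):
-- Add a persisted computed column holding the date-truncated value, then index it.
ALTER TABLE tablo_NOT
    ADD T_DATE AS CONVERT(datetime, CONVERT(varchar(8), T, 112), 112) PERSISTED;

CREATE INDEX IX_tablo_NOT_Z_TDATE_U ON tablo_NOT (Z, T_DATE, U);

-- The join can then reference the indexed column directly:
--   ... ON e.Z = tablo.Z AND e.T = tablo.T_DATE AND e.U = tablo.U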
With those indexes in place, the query should normally no longer time out.
Hope this helps.
Take a look for any applicable missing indexes using this:
SELECT
statement AS [database.schema.table],
column_id , column_name, column_usage,
migs.user_seeks, migs.user_scans,
migs.last_user_seek, migs.avg_total_user_cost,
migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
CROSS APPLY sys.dm_db_missing_index_columns (mid.index_handle)
INNER JOIN sys.dm_db_missing_index_groups AS mig
ON mig.index_handle = mid.index_handle
INNER JOIN sys.dm_db_missing_index_group_stats AS migs
ON mig.index_group_handle=migs.group_handle
ORDER BY mig.index_group_handle, mig.index_handle, column_id
Read more about this here: Missing Index Help
Also, try to remove the date CONVERT calls from your WHERE clause if at all possible; wrapping columns in functions makes those predicates non-sargable and will definitely slow the query down.

How to get a list of all tables in two different databases

I'm trying to create a little SQL script (in SQL Server Management Studio) to get a list of all tables in two different databases. The goal is to find out which tables exist in both databases and which ones only exist in one of them.
I have found various scripts on SO to list all the tables of one database, but so far I wasn't able to get a list of tables of multiple databases.
So: is there a way to query SQL Server for all tables in a specific database, e.g. SELECT * FROM ... WHERE databaseName='first_db' so that I can join this with the result for another database?
SELECT * FROM database1.INFORMATION_SCHEMA.TABLES
UNION ALL
SELECT * FROM database2.INFORMATION_SCHEMA.TABLES
UPDATE
In order to compare the two lists, you can use FULL OUTER JOIN, which will show you the tables that are present in both databases as well as those that are only present in one of them:
SELECT *
FROM database1.INFORMATION_SCHEMA.TABLES db1
FULL JOIN database2.INFORMATION_SCHEMA.TABLES db2
ON db1.TABLE_NAME = db2.TABLE_NAME
ORDER BY COALESCE(db1.TABLE_NAME, db2.TABLE_NAME)
You can also add WHERE db1.TABLE_NAME IS NULL OR db2.TABLE_NAME IS NULL to see only the differences between the databases.
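For example (the column aliases are just for readability):
SELECT db1.TABLE_NAME AS in_database1, db2.TABLE_NAME AS in_database2
FROM database1.INFORMATION_SCHEMA.TABLES db1
FULL JOIN database2.INFORMATION_SCHEMA.TABLES db2
    ON db1.TABLE_NAME = db2.TABLE_NAME
WHERE db1.TABLE_NAME IS NULL OR db2.TABLE_NAME IS NULL
ORDER BY COALESCE(db1.TABLE_NAME, db2.TABLE_NAME)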
As far as I know, you can only query tables for the active database. But you could store them in a temporary table, and join the result:
use db1;
select name into #TableList from sys.tables;
use db2;
select name into #TableList2 from sys.tables;
select * from #TableList tl1 join #TableList2 tl2 on tl1.name = tl2.name;
Just for completeness, this is the query I finally used (based on Andriy M's answer):
SELECT * FROM DB1.INFORMATION_SCHEMA.Tables db1
LEFT OUTER JOIN DB2.INFORMATION_SCHEMA.Tables db2
ON db1.TABLE_NAME = db2.TABLE_NAME
ORDER BY db1.TABLE_NAME
To find out which tables exist in db2, but not in db1, replace the LEFT OUTER JOIN with a RIGHT OUTER JOIN.

SQL Server uncorrelated subquery very slow

I have a simple, uncorrelated subquery that performs very poorly on SQL Server. I'm not very experienced at reading execution plans, but it looks like the inner query is being executed once for every row in the outer query, even though the results are the same each time. What can I do to tell SQL Server to execute the inner query only once?
The query looks like this:
select *
from Record record0_
where record0_.RecordTypeFK='c2a0ffa5-d23b-11db-9ea3-000e7f30d6a2'
and (
record0_.EntityFK in (
select record1_.EntityFK
from Record record1_
join RecordTextValue textvalues2_ on record1_.PK=textvalues2_.RecordFK
and textvalues2_.FieldFK = '0d323c22-0ec2-11e0-a148-0018f3dde540'
and (textvalues2_.Value like 'O%' escape '~')
)
)
Analyze your SQL statement in the Database Engine Tuning Advisor (2005+) and see what indexes it suggests.
I don't think you're giving SQL Server enough credit on determining the best way to run a query. Make sure you have indexes on the fields in your joins and where clauses.
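For example, indexes along these lines might help (the key and included columns are assumptions based on the query shown; verify them against the actual execution plan):
-- Supports the RecordTypeFK filter and returns EntityFK without a lookup.
CREATE INDEX IX_Record_RecordTypeFK ON Record (RecordTypeFK) INCLUDE (EntityFK);

-- Supports the FieldFK filter and the LIKE 'O%' prefix search.
CREATE INDEX IX_RecordTextValue_FieldFK_Value ON RecordTextValue (FieldFK, Value) INCLUDE (RecordFK);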
This may be an alternative to your query, but will probably run the same:
select record0_.*
from Record record0_
inner join Record record1_
on record0_.EntityFK = record1_.EntityFK
inner join RecordTextValue textvalues2_
on record1_.PK=textvalues2_.RecordFK
and textvalues2_.FieldFK = '0d323c22-0ec2-11e0-a148-0018f3dde540'
and (textvalues2_.Value like 'O%' escape '~')
where record0_.RecordTypeFK='c2a0ffa5-d23b-11db-9ea3-000e7f30d6a2'
You should be able to change this to a straightforward join; does that help?
select r.*
from Record r
join ( select record1_.EntityFK
from Record record1_
join RecordTextValue textvalues2_ on record1_.PK=textvalues2_.RecordFK
and textvalues2_.FieldFK = '0d323c22-0ec2-11e0-a148-0018f3dde540'
and (textvalues2_.Value like 'O%' escape '~')
) s on s.EntityFK = r.EntityFK
where r.RecordTypeFK='c2a0ffa5-d23b-11db-9ea3-000e7f30d6a2'
This looks a lot more sensible (but it's pretty much the same query):
select r.*
from Record r
join ( select ri.EntityFK
from Record ri
join RecordTextValue t on ri.PK=t.RecordFK
where
t.FieldFK = '0d323c22-0ec2-11e0-a148-0018f3dde540'
and t.Value like 'O%'
) s on s.EntityFK = r.EntityFK
where r.RecordTypeFK='c2a0ffa5-d23b-11db-9ea3-000e7f30d6a2'

How to fetch data from two different sql servers?

I have an inline query in which table1 is on server1 and table2 is on server2.
I need to join these two tables and fetch data.
I could connect to one server, fetch its data, then connect to the other server, fetch its data, and join the two result sets myself.
But is there a better way? I have heard about linked servers. Would that help here?
Thanks in advance!
Yes, set up a linked server on one server to the other. Then you can just do a normal query with a join. It would look something like this:
SELECT t1.Col1
, t2.ColA
FROM server1Table t1
INNER JOIN SERVER2.dbname.dbo.tableName t2 ON t1.TheId = t2.TheId
This assumes you're running the query on Server1. You can also have two linked servers and reference both of them using [servername].[dbname].[schema].[table], then use them in SQL as normal.
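For example (the linked server, database, and table names here are placeholders):
SELECT a.Col1, b.ColA
FROM SERVER1.db1.dbo.table1 a
INNER JOIN SERVER2.db2.dbo.table2 b ON a.TheId = b.TheId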
Alternatively, you can use OPENROWSET (but a linked server is easiest if you're able to set one up). An OPENROWSET query looks like this:
SELECT t1.Col1
, t2.ColA
FROM server1Table t1
INNER JOIN OPENROWSET('SQLNCLI', 'Server=Server2;Trusted_Connection=yes;',
'SELECT t2.ColA, t2.TheId FROM dbname.dbo.tableName') AS t2
ON t1.TheId = t2.TheId
and then you can join to it as if it were a local table. Under the hood it's probably pulling all the data down to your local server, so you should consider adding a WHERE clause to the inner query to restrict rows, and only select the columns you need.
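For example, the same query with a filter pushed into the inner query (the SomeDate column and the cutoff value are placeholders):
SELECT t1.Col1, t2.ColA
FROM server1Table t1
INNER JOIN OPENROWSET('SQLNCLI', 'Server=Server2;Trusted_Connection=yes;',
    'SELECT ColA, TheId FROM dbname.dbo.tableName WHERE SomeDate >= ''20200101''') AS t2
    ON t1.TheId = t2.TheId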
