I have a connection pool for database A and another for database B. I am moving some Node.js code over to Go (I'm using SQL Server, if that matters), and some of the queries are doing this:
db.A.Query(`
select ... from some_table;
select ... from B..other_table;
`)
Is it better to do it that way, or like:
db.A.Query(...)
db.B.Query(...)
I read this line:
create one sql.DB object for each distinct datastore you need to access
from here. And only now do I realize I read 'datastore' as 'database', so now I'm not even sure if it's efficient to have these two database connection pools!
Thank you for any help!
For most scenarios and SQL Server client programs, sending multiple SELECT queries in a single batch is not materially more efficient. Perhaps if the queries returned very small result sets and you ran them at very high frequency, you could see a material difference. But in the typical case, whether you send the queries in one batch or two won't matter much.
It won't matter to SQL Server at all, so the only difference will be in the client/server network traffic.
SSMS will let you compare the client statistics between running the queries as a single batch and as separate batches. E.g. running
select top 10 * from sys.objects
select top 5 * from sys.columns
and then
select top 10 * from sys.objects
GO
select top 5 * from sys.columns
in SSMS and comparing the Client Statistics for the two runs shows the difference, which comes down to the extra client/server round trip.
We just migrated our database environment from server 1 to server 2.
Both the old and new servers are on SQL Server 2014.
In the previous environment, we had DATABASE_1 with a Table_a (which has a clustered index) and DATABASE_2 containing a synonym_a referencing DATABASE_1.dbo.table_a. The query using this synonym (a SELECT with a JOIN) was running fine (select top 10000 in 1 s).
Now we have one server with DATABASE_1 containing Table_a, and another server (a linked server) with DATABASE_2 containing synonym_a.
The same query now runs very slowly. I can see that the execution plan is different between the two environments: the index on table_a is not used in the new environment.
We tried to add WITH INDEX, but it is not possible to specify an index hint for a remote data source. We need the synonym (because the same code is deployed automatically to different sites and can't have server/database names hard-coded in our queries and stored procedures), and we can't replace the view with a stored procedure.
Does anyone have a solution for this problem?
When you use a linked server, the local execution plan can't see the indexes on the server you link to. So when you join across servers, the local server sends the request to the remote server according to its own plan, and the remote index can be ignored along the way.
Try your luck with a query shaped like this:
select *
from (
    select * from Table_a                      -- local table
) a
inner join (
    select * from server2.db.owner.table_b    -- table on the linked server
) b
    on a.id = b.fkid
-- note: ORDER BY is not allowed inside a derived table without TOP/OFFSET,
-- so the sub-selects are joined as-is
We have an external table and we need to select all of the records from it, but the SELECT runs very, very slowly. It doesn't complete even after 30 minutes, and the table contains around 2 million records.
We also need to query this table from another DB, and that is just as slow; it doesn't return even after 30 minutes.
Select is of the form:
select col1, col2,...col3 from ext_table;
Need help in:
1. Any suggestions on reducing the time taken for execution?
Note: we need to select the entire content of the table, so a WHERE condition can't really be used.
Thanks in advance.
If you are not using the WHERE clause to push parameters to the remote database, then there is no way to optimize the performance of the query. You are returning the whole table.
My suggestion is to use SQL Data Sync to keep a local copy of the table on this SQL database that synchronizes with the remote Azure SQL Database at whatever interval you choose.
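If a periodically refreshed copy is acceptable, a one-off local snapshot is the manual version of the same idea (object and column names below are placeholders; this assumes SELECT ... INTO is permitted from this external table, otherwise create the local table first and use INSERT ... SELECT):

-- one-off local snapshot of the external table (hypothetical names)
select col1, col2, col3
into dbo.ext_table_local      -- regular local table
from dbo.ext_table;           -- external table pointing at the remote source

-- subsequent reads then hit the local copy instead of the remote data source
select col1, col2, col3
from dbo.ext_table_local;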
I've received a database that was previously on SQL Server 2008 R2 but has just been put on a SQL Server 2014 instance. No maintenance tasks of any kind had been run on the database since 2014 (e.g. rebuilding indexes, updating statistics, etc.).
Once we ran UPDATE STATISTICS as part of our regularly scheduled maintenance, the performance of some queries took a massive hit, to the point where some SELECT statements seem to never finish.
The queries have some CASE...WHEN expressions in them, but I wouldn't expect those to cause such a performance hit. Does anybody have any thoughts on what might cause such issues?
I've tried updating the compatibility level to 120, since it was on 100 when the database first came in, but that didn't make any difference to performance.
If you have only just moved the database, give the system some time to build up its execution plans and cache. Also, do your index maintenance and then something like this for the stats. Don't use sp_updatestats, though, as it just uses a sample of the data rather than a full scan.
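As a rough sketch of the kind of stats update meant here (the table name is hypothetical; FULLSCAN avoids the sampling issue mentioned above):

-- full-scan statistics update for one table (hypothetical name)
UPDATE STATISTICS dbo.YourBigTable WITH FULLSCAN;

-- or, to cover every user table (sp_MSforeachtable is undocumented but widely used)
EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN';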
Also, what results do you get for this:
SELECT
[sch].[name] + '.' + [so].[name] AS [TableName] ,
[ss].[name] AS [Statistic],
[sp].[last_updated] AS [StatsLastUpdated] ,
[sp].[rows] AS [RowsInTable] ,
[sp].[rows_sampled] AS [RowsSampled] ,
[sp].[modification_counter] AS [RowModifications],
Convert (decimal(18,2),(convert(numeric,[sp].[modification_counter]) / convert(numeric,[sp].[rows]) * 100)) as [Percent_changed]
FROM [sys].[stats] [ss]
JOIN [sys].[objects] [so] ON [ss].[object_id] = [so].[object_id]
JOIN [sys].[schemas] [sch] ON [so].[schema_id] = [sch].[schema_id]
OUTER APPLY [sys].[dm_db_stats_properties]([so].[object_id],
[ss].[stats_id]) sp
WHERE [so].[type] = 'U'
AND [sp].[modification_counter] > 0
And [sp].[last_updated] < getdate()-1
ORDER BY [Percent_changed] DESC
While in Management Studio, I am trying to run a query/do a join between two linked servers.
Is this the correct syntax for querying across linked DB servers:
select foo.id
from databaseserver1.db1.table1 foo,
databaseserver2.db1.table1 bar
where foo.name=bar.name
Basically, do you just prefix the DB server name to the db.table?
The format should probably be:
<server>.<database>.<schema>.<table>
For example:
DatabaseServer1.db1.dbo.table1
Update: I know this is an old question and the answer I have is correct; however, I think anyone else stumbling upon this should know a few things.
Namely, when querying against a linked server in a join situation, the ENTIRE table from the linked server will likely be downloaded to the server the query is executing from in order to do the join operation. In the OP's case, both table1 from DB1 and table1 from DB2 will be transferred in their entirety to the server executing the query, presumably named DB3.
If you have large tables, this may result in an operation that takes a long time to execute. After all, it is now constrained by network speed, which is orders of magnitude slower than memory or even disk transfer speeds.
If possible, perform a single query against the remote server, without joining to a local table, to pull the data you need into a temp table. Then query off of that.
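A minimal sketch of that approach for the OP's query (column lists trimmed to what the join and final select actually need; add a WHERE clause on the remote pulls if you possibly can):

-- pull just the needed columns from each linked server into local temp tables...
select id, name
into #t1
from databaseserver1.db1.dbo.table1;

select name
into #t2
from databaseserver2.db1.dbo.table1;

-- ...then do the join locally, where it is no longer constrained by the network
select foo.id
from #t1 foo
inner join #t2 bar
    on foo.name = bar.name;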
If that's not possible, then you need to look at the various things that would cause SQL Server to have to load the entire table locally, for example using GETDATE() or even certain joins. Other performance killers include not granting appropriate rights.
See http://thomaslarock.com/2013/05/top-3-performance-killers-for-linked-server-queries/ for some more info.
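One of those killers, calling GETDATE() directly in a predicate against the linked server, can often be worked around by capturing the value in a local variable first, which may allow the filter to be sent to the remote server as a parameter. A rough sketch (the date column is hypothetical):

declare @now datetime2 = sysdatetime();        -- capture the value once, locally

select bar.id
from databaseserver2.db1.dbo.table1 bar
where bar.created_date <= @now;                -- hypothetical column; @now instead of GETDATE() inline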
SELECT * FROM OPENQUERY([SERVER_NAME], 'SELECT * FROM DATABASE_NAME..TABLENAME')
This may help you.
For those having trouble with the other answers, try OPENQUERY.
Example:
SELECT * FROM OPENQUERY([LinkedServer], 'select * from [DBName].[schema].[tablename]')
If you still have issues with <server>.<database>.<schema>.<table>, enclose the server name in [].
You need to specify the schema/owner (dbo by default) as part of the reference. Also, it would be preferable to use the newer (ANSI-92) join style.
select foo.id
from databaseserver1.db1.dbo.table1 foo
inner join databaseserver2.db1.dbo.table1 bar
on foo.name = bar.name
select * from [Server].[database].[schema].[tablename]
This is the correct way to call.
Be sure to verify that the servers are linked before executing the query!
To check for linked servers call:
EXEC sys.sp_linkedservers
Right-click on a table and choose Script Table as > SELECT To.
select name from drsql01.test.dbo.employee
drsql01 is the server name (the linked server)
test is the database name
dbo is the schema (the default schema)
employee is the table name
I hope this helps you understand how to execute a query against a linked server.
Usually direct queries should not be used against a linked server, because they make heavy use of SQL Server's tempdb: as a first step the data is retrieved into tempdb, and only then is the filtering applied. There are many threads about this. It is better to use OPENQUERY, because it passes the SQL to the source linked server and returns already-filtered results, e.g.
SELECT *
FROM OPENQUERY(Linked_Server_Name , 'select * from TableName where ID = 500')
For what it's worth, I found the following syntax to work the best:
SELECT * FROM [LINKED_SERVER]...[TABLE]
I couldn't get the others' recommendations, which use the database name, to work; additionally, this data source has no schema.
In SQL Server (local) there are two ways to query data from a linked server (remote).
Distributed query (four-part notation):
Might not work with all remote servers. If your remote server is MySQL, then a distributed query will not work.
Filters and joins might not work efficiently. If you have a simple query with a WHERE clause, SQL Server (local) might first fetch the entire table from the remote server and then apply the WHERE clause locally. With large tables this is very inefficient, since a lot of data is moved from remote to local. However, this is not always the case: if the local server has access to the remote server's table statistics, it can be as efficient as using OPENQUERY.
On the positive side, T-SQL syntax will work.
SELECT * FROM [SERVER_NAME].[DATABASE_NAME].[SCHEMA_NAME].[TABLE_NAME]
OPENQUERY
This is basically a pass-through. The query is fully processed on the remote server and thus will make use of indexes and any other optimizations on the remote server, effectively reducing the amount of data transferred from the remote server to the local SQL Server.
A minor drawback of this approach is that T-SQL syntax will not work if the remote server is anything other than SQL Server.
SELECT * FROM OPENQUERY([SERVER_NAME], 'SELECT * FROM DATABASE_NAME.SCHEMA_NAME.TABLENAME')
Overall, OPENQUERY seems like a much better option to use in the majority of cases.
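To make the trade-off concrete, here is the same filtered lookup written both ways (object names are placeholders and the ID = 500 filter is just an example). With OPENQUERY the whole statement, including the WHERE clause, runs on the remote server; with the four-part name the filter may or may not be remoted:

-- distributed query: the filter might only be applied after rows arrive locally
SELECT * FROM [SERVER_NAME].[DATABASE_NAME].[SCHEMA_NAME].[TABLE_NAME] WHERE ID = 500

-- OPENQUERY: the filter is evaluated on the remote server, so only matching rows cross the wire
SELECT * FROM OPENQUERY([SERVER_NAME], 'SELECT * FROM DATABASE_NAME.SCHEMA_NAME.TABLENAME WHERE ID = 500')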
I used OPENQUERY to find out the data types of the columns in a table on the linked server, and the results were successful.
SELECT * FROM OPENQUERY (LINKSERVERNAME, '
SELECT DATA_TYPE, COLUMN_NAME
FROM [DATABASENAME].INFORMATION_SCHEMA.COLUMNS
WHERE
TABLE_NAME =''TABLENAME''
')
It works for me.
The following query works best. Try this query:
SELECT * FROM OPENQUERY([LINKED_SERVER_NAME], 'SELECT * FROM [DATABASE_NAME].[SCHEMA].[TABLE_NAME]')
It is very helpful for linking MySQL to MS SQL.
PostgreSQL:
You must provide a database name in the Data Source DSN.
Run Management Studio as Administrator
You must omit the DBName from the query:
SELECT * FROM OPENQUERY([LinkedServer], 'select * from schema."tablename"')
For MariaDB (and so probably MySQL), attempting to specify the schema using the three-dot syntax did not work, resulting in the error "invalid use of schema or catalog". The following solution worked:
In SSMS, go to Server Objects > Linked Servers > Providers > MSDASQL
Ensure that "Dynamic parameter", "Level zero only", and "Allow inprocess" are all checked
You can then query any schema and table using the following syntax:
SELECT TOP 10 *
FROM LinkedServerName...[SchemaName.TableName]
Source: SELECT * FROM MySQL Linked Server using SQL Server without OpenQuery
Have you tried adding " around the first name?
like:
select foo.id
from "databaseserver1".db1.table1 foo,
"databaseserver2".db1.table1 bar
where foo.name=bar.name