I just deployed a new stored proc to our test environment, only to have it fail on execution because the test system didn't contain a table that the stored proc relied on. I believe this is due to deferred name resolution.
The thing is, I feel that at times in the past, I have attempted to create stored procs that failed due to missing dependencies. I could be wrong though.
Anyway, is it possible to somehow enforce name resolution during creation of a stored proc? If so, is there any way to get this working with sqlcmd as well as SSMS?
This way we could find out about missing dependencies when the scripts are rolled out rather than on their first execution.
On a side note, I was interested to read about this apparent deviation from the MSDN doco regarding how deferred resolution works.
Edit: We have a mix of 2005/2008 (out of my control), so I'd need a 2005 solution to work on both instances.
You could investigate WITH SCHEMABINDING, though that may not work for you for the reasons indicated in the comments on the Connect item linked to by Damien.
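As a hedged illustration with made-up names: SCHEMABINDING applies to views and functions rather than plain stored procedures, and a schema-bound module fails at CREATE time if a referenced table is missing:
-- Hypothetical names: a scalar function would normally defer name resolution,
-- but WITH SCHEMABINDING forces the missing table to be caught at CREATE time.
CREATE FUNCTION dbo.fn_CheckedExample()
RETURNS INT
WITH SCHEMABINDING
AS
BEGIN
    -- Fails immediately because dbo.MissingTable does not exist
    RETURN (SELECT COUNT(*) FROM dbo.MissingTable)
END
GO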
If you are on SQL Server 2008, you could also look at sys.sql_expression_dependencies:
CREATE PROC bar
AS
SELECT *
FROM DoesNotExist
JOIN AlsoDoesNotExist ON 1=1
GO
CREATE TABLE DoesNotExist
(
X INT
)
GO
SELECT OBJECT_NAME(referencing_id) AS referencing_entity_name,
referenced_entity_name
FROM sys.sql_expression_dependencies
WHERE referenced_id IS NULL
Returns
referencing_entity_name referenced_entity_name
------------------------------ ------------------------------
bar AlsoDoesNotExist
In my SQL Server 2012 database, I have a linked server reference to a second SQL Server database that I need to pull records from and update accordingly.
I have the following update statement that I am trying to run:
UPDATE
Linked_Tbl
SET
Transferred = 1
FROM
MyLinkedServer.dbo.MyTable Linked_Tbl
JOIN
MyTable Local_Tbl ON Local_Tbl.LinkedId = Linked_Tbl.Id
JOIN
MyOtherTable Local_Tbl2 ON Local_Tbl.LocalId = Local_Tbl2.LocalId
I had to stop this after an hour as it was still executing.
I've read around online and found suggestions that the best solution is to create a stored procedure on the linked server itself to execute the update statement there rather than run it over the wire.
The problems I have are:
I don't have the ability to create any procedures on the other server.
Even if I could create that procedure, I would need to pass all the Ids through to it for the update, and I'm not sure how to do that efficiently with thousands of Ids (though this is the smaller of the two issues, since I can't create that procedure in the first place).
I'm hoping people have come up with other solutions, given that it's often the case that you don't have permission to make changes on a different server.
Any ideas??
I am not sure whether it will give better performance, but you can try:
UPDATE
Linked_Tbl
SET
Transferred = 1
-- OPENQUERY runs the inner query on the linked server and returns an updatable rowset
FROM OPENQUERY([MyLinkedServer], 'select Id, LocalId, Transferred from remotedb.dbo.MyTable') AS Linked_Tbl
JOIN MyTable Local_Tbl
ON Local_Tbl.LinkedId = Linked_Tbl.Id
JOIN MyOtherTable Local_Tbl2
ON Local_Tbl.LocalId = Local_Tbl2.LocalId
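Another option to test, sketched with the names from the question: stage the matching Ids into a local temp table first, so the distributed statement only joins the linked table against a small key list instead of two local tables. Whether this is faster depends on how the optimizer handles the distributed join, so treat it as something to try rather than a guaranteed fix.
-- Names taken from the question; adjust the linked-server reference to your environment.
SELECT Local_Tbl.LinkedId AS Id
INTO #IdsToFlag
FROM MyTable Local_Tbl
JOIN MyOtherTable Local_Tbl2 ON Local_Tbl.LocalId = Local_Tbl2.LocalId;

UPDATE Linked_Tbl
SET Transferred = 1
FROM MyLinkedServer.dbo.MyTable Linked_Tbl
JOIN #IdsToFlag ids ON ids.Id = Linked_Tbl.Id;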
SQL Server has a Deferred Name Resolution feature; read here for details:
https://msdn.microsoft.com/en-us/library/ms190686(v=sql.105).aspx
That page only talks about stored procedures, so it seems Deferred Name Resolution only works for stored procedures and not for functions, and I did some testing:
create or alter function f2(@i int)
returns table
as
return (select fff from xxx)
go
Note that the table xxx does not exist. When I execute the above CREATE statement, I get the following message:
Msg 208, Level 16, State 1, Procedure f2, Line 4 [Batch Start Line 22]
Invalid object name 'xxx'.
It seems SQL Server immediately noticed the non-existent table xxx, which suggests that Deferred Name Resolution doesn't work for functions. However, when I change it slightly as follows:
create or alter function f1(@i int)
returns int
as
begin
declare @x int;
select @x = fff from xxx;
return @x
end
go
I can successfully execute it:
Commands completed successfully.
When executing the following statement:
select dbo.f1(3)
I got this error:
Msg 208, Level 16, State 1, Line 34
Invalid object name 'xxx'.
So here it seems the resolution of the table xxx was deferred. The most important difference between these two cases is the return type. However, I can't explain when Deferred Name Resolution will work for functions and when it won't. Can anyone help me understand this? Thanks in advance.
It sounds like you were looking to understand why your particular example didn't work. Quassnoi's answer is correct but didn't offer a reason, so I went searching and found this MSDN Social answer by Erland Sommarskog. The interesting part:
However, it does not extend to views and inline-table functions. For
stored procedures and scalar functions, all SQL Server stores in the
database is the text of the module. But for views and inline-table
functions (which are parameterised view by another name) SQL Server
stores metadata about the columns etc. And that is not possible if the
table is missing.
Hope that helps with understanding why :-)
EDIT:
I did take some time to confirm Quassnoi's comment that sys.columns, as well as several other catalog views, does contain some metadata about the inline function, so I am unsure whether there is other metadata that is not written. However, I thought I would add a few other notes I was able to find that may help explain things in conjunction.
First a quote from Wayne Sheffield's blog:
In the MTVF, you see only an operation called “Table Valued Function”. Everything that it is doing is essentially a black box – something is happening, and data gets returned. For MTVFs, SQL can’t “see” what it is that the MTVF is doing since it is being run in a separate context. What this means is that SQL has to run the MTVF as it is written, without being able to make any optimizations in the query plan to optimize it.
Then from the SQL Server 2016 Exam 70-761 by Itzik Ben-Gan (Skill 3.1):
The reason that it's called an inline function is because SQL Server inlines, or expands, the inner query definition, and constructs an internal query directly against the underlying tables.
So it seems the inline function essentially returns a query and is able to optimize it with the outer query, not allowing the black-box approach and thus not allowing deferred name resolution.
What you have in your first example is an inline function (it does not have BEGIN/END).
Inline functions can only be table-valued.
If you used a multi-statement table-valued function for your first example, like this:
CREATE OR ALTER FUNCTION
fn_test(@a INT)
RETURNS @ret TABLE
(
a INT
)
AS
BEGIN
INSERT
INTO @ret
SELECT a
FROM xxx
RETURN
END
it would compile fine and fail at run time (if xxx did not exist), just as a stored procedure or a scalar UDF would.
So yes, DNR does work for all multi-statement functions (those with BEGIN/END), regardless of their return type.
So I have a stored procedure in SQL Server. I've simplified its code (for this question) to just this:
CREATE PROCEDURE dbo.DimensionLookup as
BEGIN
select DimensionID, DimensionField from DimensionTable
inner join Reference on Reference.ID = DimensionTable.ReferenceID
END
In SSIS on SQL Server 2012, I have a Lookup component with the following source command:
EXECUTE dbo.DimensionLookup WITH RESULT SETS (
(DimensionID int, DimensionField nvarchar(700) )
)
When I run this procedure in Preview mode in BIDS, it returns the two columns correctly. When I run the package in BIDS, it runs correctly.
But when I deploy it out to the SSIS catalog (the same server the database is on), point it to the same data sources, etc. - it fails with the message:
EXECUTE statement failed because its WITH RESULT SETS clause specified 2 column(s) for result set number 1, but the statement sent
3 column(s) at run time.
Steps Tried So Far:
Adding a third column to the result set - I get a different error, VS_NEEDSNEWMETADATA - which makes sense, kind of proof there's no third column.
SQL Profiler - I see this:
exec sp_prepare @p1 output,NULL,N'EXECUTE dbo.DimensionLookup WITH RESULT SETS ((
DimensionID int, DimensionField nvarchar(700)))',1
SET FMTONLY ON exec sp_execute 1 SET FMTONLY OFF
So it's trying to use FMTONLY to get the result set data ... needless to say, running SET FMTONLY ON and then running the command in SSMS myself yields .. just the two columns.
SET NOCOUNT ON - nothing changed.
So, two other interesting things:
I deployed it out to my local SQL 2012 install and it worked fine, same connections, etc. So it may be a server/database configuration issue. I'm not sure what, if anything, it is; I didn't install the dev server, and my own install was pretty much click-through vanilla.
Perhaps the most interesting thing: if I remove the join from the procedure's statement so it just becomes
select DimensionID, DimensionField from DimensionTable
It goes back to sending just 2 columns in the result set! So adding a join, without adding any additional output columns, ups the result set to 3 columns. Even if I add 6 more joins, it's still just 3 columns. So one guess is it's some sort of metadata column that only gets activated when there's a join.
Anyway, as you can imagine, it's driving me kind of mad. I have a workaround to load the data into a temp table and just return that, but why won't this work? What extra column is being sent back? Why only when I add a join?
Gah!
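For reference, here is a minimal sketch of the temp-table workaround mentioned above, using the simplified proc's names; materializing the join into a temp table first means the final SELECT that SSIS sees has no join in it:
-- Sketch only; table and column names are from the simplified example above.
ALTER PROCEDURE dbo.DimensionLookup AS
BEGIN
    SELECT DimensionID, DimensionField
    INTO #DimensionLookup
    FROM DimensionTable
    INNER JOIN Reference ON Reference.ID = DimensionTable.ReferenceID;

    SELECT DimensionID, DimensionField
    FROM #DimensionLookup;
END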
So all credit to billinkc: the reason is a patch.
In Version 11.0.2100.60, SSIS Lookup SQL command metadata is gathered using the old SET FMTONLY method. Unfortunately, this doesn't work in 2012, as the Books Online entry on SET FMTONLY helpfully notes:
Do not use this feature. This feature has been replaced by sp_describe_first_result_set.
Too bad they didn't follow their own advice!
This has been patched as of version 11.0.2218.0. Metadata is correctly gathered using the sp_describe_first_result_set system stored procedure.
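If you want to see what that newer code path returns, you can call sp_describe_first_result_set (available from SQL Server 2012) against the proc yourself; a quick illustration:
-- Shows the result-set metadata SQL Server 2012 derives for the procedure.
EXEC sys.sp_describe_first_result_set
    @tsql = N'EXECUTE dbo.DimensionLookup';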
This can happen if the WITH RESULT SETS clause specified in SSIS declares more columns than the stored proc being called actually returns. Check your stored proc and make sure the number of output columns matches the WITH RESULT SETS definition.
While in Management Studio, I am trying to run a query/do a join between two linked servers.
Is this the correct syntax for using linked DB servers?
select foo.id
from databaseserver1.db1.table1 foo,
databaseserver2.db1.table1 bar
where foo.name=bar.name
Basically, do you just prefix the db.table with the database server name?
The format should probably be:
<server>.<database>.<schema>.<table>
For example:
DatabaseServer1.db1.dbo.table1
Update: I know this is an old question and the answer I have is correct; however, I think anyone else stumbling upon this should know a few things.
Namely, when querying against a linked server in a join, the ENTIRE table from the linked server will likely be downloaded to the server the query is executing on in order to do the join operation. In the OP's case, both table1 from DB1 and table1 from DB2 will be transferred in their entirety to the server executing the query, presumably named DB3.
If you have large tables, this may result in an operation that takes a long time to execute. After all, it is now constrained by network transfer speed, which is orders of magnitude slower than memory or even disk transfer speeds.
If possible, perform a single query against the remote server, without joining to a local table, to pull the data you need into a temp table. Then query off of that.
If that's not possible, then you need to look at the various things that would cause SQL Server to have to load the entire table locally, for example using GETDATE() or even certain joins. Other performance killers include not granting appropriate rights.
See http://thomaslarock.com/2013/05/top-3-performance-killers-for-linked-server-queries/ for some more info.
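A rough sketch of that temp-table approach, reusing the server, database, and column names from the question (adjust them for your environment):
-- Pull only the needed remote columns across once, then join locally.
SELECT id, name
INTO #bar
FROM OPENQUERY(databaseserver2, 'SELECT id, name FROM db1.dbo.table1');

SELECT foo.id
FROM databaseserver1.db1.dbo.table1 foo
JOIN #bar bar ON foo.name = bar.name;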
SELECT * FROM OPENQUERY([SERVER_NAME], 'SELECT * FROM DATABASE_NAME..TABLENAME')
This may help you.
For those having trouble with the other answers, try OPENQUERY.
Example:
SELECT * FROM OPENQUERY([LinkedServer], 'select * from [DBName].[schema].[tablename]')
If you still have issues with <server>.<database>.<schema>.<table>, enclose the server name in [].
You need to specify the schema/owner (dbo by default) as part of the reference. Also, it would be preferable to use the newer (ANSI-92) join style.
select foo.id
from databaseserver1.db1.dbo.table1 foo
inner join databaseserver2.db1.dbo.table1 bar
on foo.name = bar.name
select * from [Server].[database].[schema].[tablename]
This is the correct way to call it.
Be sure to verify that the servers are linked before executing the query!
To check for linked servers call:
EXEC sys.sp_linkedservers
Right-click on a table and choose Script Table as > SELECT To.
select name from drsql01.test.dbo.employee
drsql01 is the server name -- the linked server
test is the database name
dbo is the schema (the default schema)
employee is the table name
I hope this helps you understand how to execute a query against a linked server.
Usually, direct queries should not be used with a linked server because they make heavy use of SQL Server's tempdb: in the first step the data is retrieved into tempdb, and then the filtering occurs. There are many threads about this. It is better to use OPENQUERY, because it passes the SQL to the source linked server and returns only the filtered results, e.g.
SELECT *
FROM OPENQUERY(Linked_Server_Name , 'select * from TableName where ID = 500')
For what it's worth, I found the following syntax to work the best:
SELECT * FROM [LINKED_SERVER]...[TABLE]
I couldn't get the recommendations of others to work, using the database name. Additionally, this data source has no schema.
In SQL Server (local) there are two ways to query data from a linked server (remote).
Distributed query (four-part notation):
This might not work with all remote servers. If your remote server is MySQL, then distributed queries will not work.
Filters and joins might not be handled efficiently. If you have a simple query with a WHERE clause, SQL Server (local) might first fetch the entire table from the remote server and then apply the WHERE clause locally. With large tables this is very inefficient, since a lot of data has to be moved from remote to local. However, this is not always the case: if the local server has access to the remote server's table statistics, it can be as efficient as using OPENQUERY. More details
On the positive side, T-SQL syntax will work.
SELECT * FROM [SERVER_NAME].[DATABASE_NAME].[SCHEMA_NAME].[TABLE_NAME]
OPENQUERY
This is basically a pass-through. The query is processed entirely on the remote server, and thus makes use of any indexes or other optimizations there, effectively reducing the amount of data transferred from the remote server to the local one.
A minor drawback of this approach is that T-SQL syntax will not work if the remote server is anything other than SQL Server.
SELECT * FROM OPENQUERY([SERVER_NAME], 'SELECT * FROM DATABASE_NAME.SCHEMA_NAME.TABLENAME')
Overall, OPENQUERY seems like a much better option to use in the majority of cases.
I used OPENQUERY to find out the data types of the columns in a table on the linked server, and the results were successful.
SELECT * FROM OPENQUERY (LINKSERVERNAME, '
SELECT DATA_TYPE, COLUMN_NAME
FROM [DATABASENAME].INFORMATION_SCHEMA.COLUMNS
WHERE
TABLE_NAME =''TABLENAME''
')
It works for me.
The following query works best.
Try this query:
SELECT * FROM OPENQUERY([LINKED_SERVER_NAME], 'SELECT * FROM [DATABASE_NAME].[SCHEMA].[TABLE_NAME]')
It is very helpful for linking MySQL to MS SQL.
PostgreSQL:
You must provide a database name in the Data Source DSN.
Run Management Studio as Administrator
You must omit the DBName from the query:
SELECT * FROM OPENQUERY([LinkedServer], 'select * from schema."tablename"')
For MariaDB (and so probably MySQL), attempting to specify the schema using the three-dot syntax did not work, resulting in the error "invalid use of schema or catalog". The following solution worked:
In SSMS, go to Server Objects > Linked Servers > Providers > MSDASQL
Ensure that "Dynamic parameter", "Level zero only", and "Allow inprocess" are all checked
You can then query any schema and table using the following syntax:
SELECT TOP 10 *
FROM LinkedServerName...[SchemaName.TableName]
Source: SELECT * FROM MySQL Linked Server using SQL Server without OpenQuery
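If you would rather script those provider options than tick them in the SSMS dialog, there is an undocumented helper that appears to set the same flags; treat this as a hedged sketch, since the property names are assumed to correspond to the checkboxes named above:
-- sp_MSset_oledb_prop is undocumented; property names assumed to map to the
-- "Dynamic parameter", "Level zero only" and "Allow inprocess" checkboxes.
EXEC master.dbo.sp_MSset_oledb_prop N'MSDASQL', N'DynamicParameters', 1;
EXEC master.dbo.sp_MSset_oledb_prop N'MSDASQL', N'LevelZeroOnly', 1;
EXEC master.dbo.sp_MSset_oledb_prop N'MSDASQL', N'AllowInProcess', 1;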
Have you tried putting quotation marks around the first name?
Like this:
select foo.id
from "databaseserver1".db1.table1 foo,
"databaseserver2".db1.table1 bar
where foo.name=bar.name
SQL Server 2000 here.
I'm trying to be an interim DBA, but don't know much about the mechanics of a database server, so I'm getting a little stuck. There's a client process that hits three views simultaneously. These three views query a remote server to pull back data.
What it looks like is that one of these queries will work, but the other two fail (the client process says it times out, so I'm guessing a lock can cause that). The querying process holds a lock that sticks around until the SQL Server process is restarted (I got gutsy and tried to kill the spid once, but it wouldn't let go). Any queries to this database after the lock appears just hang, and blame the first process for blocking them.
The process reports these locks... (apologies for the formatting, the preview functionality shows it as fully lined up).
spid dbid ObjId IndId Type Resource Mode Status
53 17 0 0 DB S GRANT
53 17 1445580188 0 TAB Sch-S GRANT
53 17 1445580188 0 TAB [COMPILE] X GRANT
I can't analyze that too well. Object 1445580188 is sp_bindefault, a system stored procedure in master. What's it hanging on to an exclusive lock for?
View code follows (names changed to protect the proprietary; they stay consistent with aliases and whatnot, and I tried to keep everything else exactly the same).
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER OFF
GO
ALTER view [dbo].[theView]
as
select
a.[column1] column_1
,b.[column2] column_2
,[column3]
,[column4]
,[column5]
,[column6]
,[column7]
,[column8]
,[column9]
,[column10]
,p.[column11]
,p.[column12]
FROM
[remoteServer].db1.dbo.[tableP] p
join [remoteServer].db2.dbo.tableA a on p.id2 = a.id
join [remoteServer].db2.dbo.tableB b on p.id = b.p_id
WHERE
isnumeric(b.code) = 1
GO
SET ANSI_NULLS OFF
GO
SET QUOTED_IDENTIFIER ON
GO
Take a look at this link. Are you sure it's views that are blocking and not stored procedures? To find out, run the query below with the ObjId from your table above. There are things you can do to mitigate stored procedure recompiles. The biggest one is to avoid naming your stored procedures with an "sp_" prefix (see this article, page 10). Also avoid if/else branches in the code; use WHERE clauses with CASE expressions instead. I hope this helps.
[Edit]:
I believe sp_bindefault/sp_bindrule are used in conjunction with user-defined types (UDTs). Does your view make reference to any UDTs?
-- On SQL Server 2000 use sysobjects; sys.objects only exists from SQL Server 2005 onwards
SELECT * FROM sysobjects WHERE id = 1445580188
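And as a hedged check against the SQL Server 2000 catalog, you can list any user-defined types in the database whose tables the view touches; on SQL 2000, UDTs show up in systypes with xusertype values above 256:
-- Lists user-defined types; run in the database that owns the tables the view references.
SELECT name, xusertype
FROM systypes
WHERE xusertype > 256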
Object 1445580188 is sp_bindefault in the master database, no? Also, the Type column shows "TAB", i.e. a table lock.
USE master
SELECT OBJECT_NAME(1445580188), OBJECT_ID('sp_bindefault')
USE mydb
SELECT OBJECT_NAME(1445580188)
If the 2nd query returns NULL, then the object is a work table.
I'm guessing it's a work table being generated to deal with the results locally.
The JOIN will happen locally and all data must be pulled across.
Now, I can't shed light on the compile lock: the view should be compiled already. This is complicated by the remote server access, and my experience of compile locks is all related to stored procs.