I need to pass a linked server name as a variable to a stored procedure. After testing and research, every suggestion points to dynamic SQL with OPENQUERY, which is what I'm using now. However, I'm not comfortable with it (SQL injection risk), and I also need to call other user-defined functions in the query. I'm looking for a more secure, direct call. Here is my SP:
ALTER PROCEDURE [dbo].[GetBackUpStatus]
-- Add the parameters for the stored procedure here
@linkedServerName AS VARCHAR(100),
@exemptDB AS VARCHAR(100)
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
DECLARE @Sql varchar(8000)
SET NOCOUNT ON;
set @Sql = 'select * from openquery (' + @linkedServerName + ' , ''SELECT SERVERPROPERTY(''''SERVERNAME'''') AS "SERVERNAME",
T1.NAME AS DatabaseName,
MAX(T2.backup_finish_date) AS last_db_backup_date,
CAST(COALESCE(DATEDIFF(second, MAX(T2.backup_finish_date) , CURRENT_TIMESTAMP)/ 3600.0, 9999.0) as NUMERIC(6,2)) AS [Hours Since Backup]
FROM master.sys.databases T1
LEFT OUTER JOIN msdb.dbo.backupset T2 ON T2.database_name = T1.NAME
WHERE T1.NAME NOT IN (''''tempdb'''')
GROUP BY T1.NAME
ORDER BY T1.NAME'')'
Exec (@Sql)
END
The purpose of this query is to get the server status and its databases. I don't like this approach because of the confusing nested single quotes, and the query will keep growing as I develop and add function calls.
I tried this, and something like this is what I wanted, since it is a direct query and is cleaner without all those quotes. That's how I typically use linked servers:
Select * from [' + @linkedServerName + '].[schema].table
thanks
A solution for a large-scale data model with hundreds of tables/objects:
Dynamically modify and clone a stored procedure for every linked server.
This effectively hides the dynamic SQL under the hood.
How to
Create a stored procedure which interacts with an existing linked-server.
During a database deployment process:
Obtain the source code of the stored procedure.
Replace the name of the linked-server in the code.
If you want to create a new stored procedure (cloned), replace the name of the initial stored procedure in the code.
Create a cloned stored procedure or modify the current.
Repeat all steps for each required linked-server.
There are other variations on this.
Now any external logic can decide which procedure to use.
You can check for the existence of a linked server or of its related stored procedure.
For the modification and cloning you can use SQL Server itself or external tools, such as C#, CMD, etc.
For creation under SQL Server, see the sketch below.
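A minimal sketch, assuming a template procedure dbo.GetBackUpStatus_Template whose body references a placeholder linked-server name [LinkedServerPlaceholder]; all names here are hypothetical:
DECLARE @def nvarchar(max) =
    OBJECT_DEFINITION(OBJECT_ID(N'dbo.GetBackUpStatus_Template'));
-- Point the code at the real linked server and give the clone its own name.
SET @def = REPLACE(@def, N'[LinkedServerPlaceholder]', N'[ProdServer01]');
SET @def = REPLACE(@def, N'GetBackUpStatus_Template', N'GetBackUpStatus_ProdServer01');
EXEC (@def); -- creates dbo.GetBackUpStatus_ProdServer01; repeat per linked server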
For decades I've used VIEWs as a synonym:
CREATE VIEW dbo.Banks AS
SELECT *
FROM OtherDatabase.dbo.Banks
I do this so I can abstract away where the "real" table is. And when it changes, it's as simple as altering the view:
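A sketch, assuming the table has moved to a hypothetical NewDatabase:
ALTER VIEW dbo.Banks AS
SELECT *
FROM NewDatabase.dbo.Banks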
And this works well. It doesn't cause the optimizer any issues, and I have been able to edit the view as required.
Synonyms
Starting with SQL Server 2005, Microsoft introduced synonyms:
CREATE SYNONYM dbo.Banks FOR OtherDatabase.dbo.Banks
It seems to work identically to the VIEW approach. Every execution plan I've looked at behaves identically.
Unfortunately, it seems that synonyms are unable to provide one of their basic functions, functionality I need:
Provides a layer of abstraction that protects a client application from changes made to the name or location of the base object
You are not able to change where a synonym points. Because there is no ALTER SYNONYM statement, you first have to drop the synonym and then re-create the synonym with the same name, but point the synonym to the new location.
Do they have any redeeming quality?
Practically speaking, this isn't going to happen. I will just never do it. I won't use a mechanism that requires me to drop objects from a database in order to change a setting. I'm certainly not going to delete all the easily alterable VIEWs, replace them with SYNONYMs, and have to explain to everyone why making everything harder is "better".
So my question is: is there anything I am losing by using views?
Every execution plan looks identical to the synonym version.
I can easily change the "view synonym" at any time.
Is there a virtue to a table or view synonym that I'm missing?
(Aside from having to call RefreshAllViews in case I forgot that I made a table change somewhere.)
Even stored procedures
I don't even use synonyms for stored procedures:
CREATE PROCEDURE dbo.GetUSDNoonRateAsOf @tradeDate datetime AS
EXECUTE OtherDatabase.dbo.GetUSDNoonRateAsOf @tradeDate
Is there a value in synonyms that I am missing?
Update: RefreshAllViews procedure
We have a standard procedure in every database. Reordering or inserting columns wreaks havoc on views, so they have to be "refreshed".
CREATE PROCEDURE [dbo].[RefreshAllViews] AS
-- This sp will refresh all views in the catalog.
-- It enumerates all views, and runs sp_refreshview for each of them
SET NOCOUNT ON
DECLARE abc CURSOR FOR
SELECT TABLE_NAME AS ViewName
FROM INFORMATION_SCHEMA.VIEWS
ORDER BY newid()
OPEN abc
DECLARE @ViewName varchar(128)
--DECLARE @ParmDefinition NVARCHAR(500)
-- Build select string once
DECLARE @SQLString nvarchar(2048)
--SET @SQLString = N'EXECUTE sp_RefreshView @View'
--SET @ParmDefinition = N'@View nvarchar(128)'
FETCH NEXT FROM abc
INTO @ViewName
WHILE @@FETCH_STATUS = 0
BEGIN
IF @ViewName <> 'IndexServerNodes'
BEGIN
SET @SQLString = 'EXECUTE sp_RefreshView '+@ViewName
PRINT @SQLString
EXECUTE sp_ExecuteSQL @SQLString--, @ParmDefinition, @View = @ViewName
END
FETCH NEXT FROM abc
INTO @ViewName
END
CLOSE abc
DEALLOCATE abc
God knows why SQL Server doesn't do it for me.
A synonym is a much more transparent redirect. I prefer them over views because views need to be maintained, especially when you use SELECT *.
I'm not sure I buy that the lack of ALTER SYNONYM is a real blocker. The drop/create of a synonym is a very simple metadata operation, and will be very fast. Omitting error handling for brevity:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
DROP SYNONYM ...
CREATE SYNONYM ...
COMMIT TRANSACTION;
Similarly, for stored procedures, if your base stored procedure interface changes (say, you add a parameter), you have to also change the wrapper procedure - not so with a synonym.
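For example, a sketch of replacing the question's wrapper procedure with a synonym (same hypothetical objects); any parameter list just passes through:
CREATE SYNONYM dbo.GetUSDNoonRateAsOf FOR OtherDatabase.dbo.GetUSDNoonRateAsOf;
GO
EXECUTE dbo.GetUSDNoonRateAsOf @tradeDate = '20240102';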
One downside is that you can create, say, an INSTEAD OF trigger on a view, but you can't on a synonym. There are other operations you can't perform via a synonym (mostly DDL). And of course IntelliSense may not function correctly, depending on version.
Not being able to memorize the syntax seems like a made-up excuse to me. There are no fancy options or with clauses; just a 2-part name for the synonym, and a 2-, 3- or 4-part name for the object it refers to:
CREATE SYNONYM dbo.Something FOR Server.Database.dbo.SomethingElse;
If you can't memorize that, how did you create the synonym in the first place?
I also have a suggestion to thoroughly simplify your stored procedure. It also prevents the procedure from failing when a view is not in the dbo schema, when it is executed by someone whose default schema is not the same as the view's schema, when the view has an apostrophe or space in its name, or when the name otherwise breaks any of the rules for identifiers (you can find them on this page):
CREATE PROCEDURE [dbo].[RefreshAllViews]
AS
BEGIN
SET NOCOUNT ON;
DECLARE @sql NVARCHAR(MAX) = N'';
SELECT @sql += '
EXEC sp_refreshview ' + CHAR(39)
+ QUOTENAME(REPLACE(s.name,'''',''''''))
+ '.' + QUOTENAME(REPLACE(v.name,'''','''''')) + CHAR(39) + ';'
FROM sys.views AS v
INNER JOIN sys.schemas AS s
ON v.[schema_id] = s.[schema_id];
PRINT @sql;
EXEC sp_executesql @sql;
END
GO
At the very least, if you're going to keep the cursor, stop using the terrible default options (declare the cursor as LOCAL FAST_FORWARD), and use sys.views instead of INFORMATION_SCHEMA.
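For reference, a sketch of that cursor declaration with the suggested options:
DECLARE abc CURSOR LOCAL FAST_FORWARD FOR
SELECT name FROM sys.views;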
God knows why SQL Server doesn't do it for me.
Because SQL Server is software, and it isn't perfect - especially when it comes to dependencies. The main problem is that you are violating a best practice by using SELECT * in your views in the first place. shrug If you would get over your hang-ups about synonyms, you wouldn't have to worry about that.
If a view references a table, and you subsequently add columns to that table, you must modify the view in order to “pick up” the new column—even if you use SELECT *. Synonyms will “pick up” those columns automatically. Here’s a sample script:
-- Set things up
CREATE TABLE Foo
(
Id int not null
,data varchar(10) not null
)
GO
INSERT Foo values (1,'one'),(2,'Two')
GO
CREATE SYNONYM synFoo for Foo
GO
CREATE VIEW vFooDelim as select Id, Data from Foo
GO
CREATE VIEW vFooStar as select * from Foo
GO
select * from Foo
select * from synFoo
select * from vFooDelim
select * from vFooStar
then,
-- Add a column
ALTER TABLE Foo
add MoreData datetime default getdate()
GO
select * from Foo
select * from synFoo
select * from vFooDelim
select * from vFooStar
GO
(don’t forget to)
-- Clean things up
DROP Synonym synFoo
DROP VIEW vFooDelim
DROP VIEW vFooStar
DROP TABLE Foo
A significantly more obscure situation (one we deal with all the time here): if you have to set up a reference in a database to an object in another database, you don't necessarily know what columns that table has or will have (dynamic denormalization), and you don't know the name of the database at the time you write your code (one database per client, created only once they sign the contract), then synonyms can be a godsend. At database-creation time, just dynamically build and run CREATE SYNONYM myTable FOR <DatabaseName>.<schema>.MyTable, and you are done, no matter what columns get added for which client in the future.
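A sketch of that deployment-time step (names are hypothetical; QUOTENAME guards the injected database name):
DECLARE @DatabaseName sysname = N'ClientDb_Acme'; -- known only at deployment time
DECLARE @sql nvarchar(max) =
    N'CREATE SYNONYM dbo.myTable FOR ' + QUOTENAME(@DatabaseName) + N'.dbo.MyTable;';
EXEC sp_executesql @sql;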
Synonyms are useful for situations where you're working with lots of disparate data sources/multiple databases etc, or doing data migrations.
I've never really found cause to use them in new, greenfield developments.
I have two different SQL Server databases (on the same server - if it helps) that need to share the same stored procedure logic. The solution I'm trying to achieve looks like this:
Database1
Table: TestTable
Synonym: sp_MyProc pointing at SharedDatabase.dbo.sp_MyProc
Database2
Table: TestTable
Synonym: sp_MyProc pointing at SharedDatabase.dbo.sp_MyProc
SharedDatabase
Proc: sp_MyProc which runs queries against TestTable
My hope was to use the synonyms so that if I execute sp_MyProc while in the context of Database1, it would use Database1.TestTable, and if I execute sp_MyProc while in the context of Database2, it would go against Database2.TestTable. However, when I execute sp_MyProc through either of the synonyms, it ignores the context of the synonym and looks for a local copy of TestTable, which is not found.
Is there a way to implement a shared stored procedure that executes against different copies of tables in different databases, either through synonyms or some other mechanism?
Edit
I should mention that in my case I am looking to do this with a large set of existing tables and procs, so any solution that requires modifying the procs or tables themselves is not ideal.
Something like this would work for the definition of the procedure. Be sure to guard against SQL injection since this is built dynamically.
CREATE PROCEDURE [dbo].dosomething
@databaseName sysname,
@schema sysname,
@tableName sysname
as
declare @cmd as nvarchar(max)
set @cmd = N'select * from ' + quotename(@databaseName) + N'.' + quotename(@schema) + N'.' + quotename(@tableName)
exec sp_executesql @cmd
Then use it like this:
dosomething 'SampleDb', 'dbo', 'sampleTable'
If the stored proc is in SharedDatabase, then it will always run in the context of SharedDatabase. To centralize the code as you're trying to do, I would pass in a parameter designating which database the call is coming from, so you can execute the query against that specific TestTable. Basically, you will need to refer to each table using its fully qualified name, e.g. Database1.dbo.TestTable:
USE SharedDatabase
GO
CREATE PROCEDURE [dbo].sp_MyProc
@dbsource varchar(50)
as
if(@dbsource = 'DB1')
begin
select * from Database1.dbo.TestTable
end
else
begin
select * from Database2.dbo.TestTable
end
GO
The other alternative is to make a view in SharedDatabase, called TestTableComposite, with an extra column to identify where the source data is. Then pass that in as the parameter, and your SP on SharedDatabase will always be in the context of that DB; see the sketch below.
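A sketch of that composite view, assuming both databases live on the same server; Source is the discriminator column the shared procedure would filter on (WHERE Source = @dbsource):
CREATE VIEW dbo.TestTableComposite
AS
SELECT 'DB1' AS Source, t.* FROM Database1.dbo.TestTable AS t
UNION ALL
SELECT 'DB2' AS Source, t.* FROM Database2.dbo.TestTable AS t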
I'm using SQL Server 2008.
How can I pass a table-valued parameter to a stored procedure across different databases, but on the same server?
Should I create the same table type in both databases?
Please, give an example or a link according to the problem.
Thanks for any kind of help.
In response to this comment (if I'm correct that using TVPs between databases isn't possible):
What choice do I have in this situation? Using XML type?
The purist approach would be to say that if both databases are working with the same data, they ought to be merged into a single database. The pragmatist realizes that this isn't always possible - but since you can obviously change both the caller and callee, maybe just use a temp table that both stored procs know about.
I don't believe it's possible - you can't reference a table type from another database, and even with identical type definitions in both DBs, a value of one type isn't assignable to the other.
You don't pass the temp table between databases. A temp table is always stored in tempdb, and is accessible to your connection, so long as the connection is open and the temp table isn't dropped.
So, you create the temp table in the caller:
CREATE TABLE #Values (ID int not null,ColA varchar(10) not null)
INSERT INTO #Values (ID,ColA)
/* Whatever you do to populate the table */
EXEC OtherDB..OtherProc
And then in the callee:
CREATE PROCEDURE OtherProc
/* No parameter passed */
AS
SELECT * from #Values
Table UDTs are only valid for stored procs within the same database.
So yes you would have to create the type on each server and reference it in the stored procs - e.g. just run the first part of this example in both DBs http://msdn.microsoft.com/en-us/library/bb510489.aspx.
If you don't need the efficiency you can always use other methods - i.e. pass an XML document parameter (a sketch follows) or have the s.p. expect a temp table with the input data.
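A minimal sketch of the XML alternative (procedure and element names are made up):
create proc InsertPersonalMessageXml @Messages xml AS
select m.value('.', 'varchar(50)') as Message
from @Messages.nodes('/messages/message') as t(m)
go
exec InsertPersonalMessageXml '<messages><message>oh noes</message></messages>'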
Edit: added example
create database Test1
create database Test2
go
use Test1
create type PersonalMessage as TABLE
(Message varchar(50))
go
create proc InsertPersonalMessage @Message PersonalMessage READONLY AS
select * from @Message
go
use Test2
create type PersonalMessage as TABLE
(Message varchar(50))
go
create proc InsertPersonalMessage @Message PersonalMessage READONLY AS
select * from @Message
go
use Test1
declare @mymsg PersonalMessage
insert @mymsg select 'oh noes'
exec InsertPersonalMessage @mymsg
go
use Test2
declare @mymsg2 PersonalMessage
insert @mymsg2 select 'oh noes'
exec InsertPersonalMessage @mymsg2
Disadvantage is that there are two copies of the data.
But you would be able to run the batch against each database simultaneously.
Whether this is any better than using a temp table really comes down to what processing/data sizes you have - by the way, to use a temp table from an s.p. you just access it from the s.p. code (and it fails if it doesn't exist).
Another way to solve this (though not necessarily the correct way) is to only utilize the UDT as a part of a dynamic SQL call.
USE [db1]
GO
CREATE PROCEDURE [dbo].[sp_Db2Data_Sync]
AS
BEGIN
/*
*
* Presumably, you have some other logic here that requires this sproc to live in db1.
* Maybe it's how you get your identifier?
*
*/
DECLARE @SQL VARCHAR(MAX) = '
USE [db2]
DECLARE @db2tvp tableType
INSERT INTO @db2tvp
SELECT dataColumn1
FROM db2.dbo.tblData td
WHERE td.Id = ' + CAST(@YourIdentifierHere AS VARCHAR) + '
EXEC db2.dbo.sp_BulkData_Sync @db2tvp
'
EXEC(@SQL)
END
It's definitely not a purist approach, and it doesn't work for every use case, but it is technically an option.
I have query in a stored procedure that calls some linked servers with some dynamic SQL. I understand that EF doesn't like that, so I specifically listed all the columns that would be returned. Yet, it still doesn't like that. What am I doing wrong here? I just want EF to be able to detect the columns returned from the stored procedure so I can create the classes I need.
Please see the following code that makes up the last lines of my stored procedure:
SELECT
#TempMain.ID,
#TempMain.Class_Data,
#TempMain.Web_Store_Class1,
#TempMain.Web_Store_Class2,
#TempMain.Web_Store_Status,
#TempMain.Cur_1pc_Cat51_Price,
#TempMain.Cur_1pc_Cat52_Price,
#TempMain.Cur_1pc_Cat61_Price,
#TempMain.Cur_1pc_Cat62_Price,
#TempMain.Cur_1pc_Cat63_Price,
#TempMain.Flat_Length,
#TempMain.Flat_Width,
#TempMain.Item_Height,
#TempMain.Item_Weight,
#TempMain.Um,
#TempMain.Lead_Time_Code,
#TempMain.Wp_Image_Nme,
#TempMain.Wp_Mod_Dte,
#TempMain.Catalog_Price_Chg_Dt,
#TempMain.Description,
#TempMain.Supersede_Ctl,
#TempMain.Supersede_Pn,
TempDesc.Cust_Desc,
TempMfgr.Mfgr_Item_Nbr,
TempMfgr.Mfgr_Name,
TempMfgr.Vendor_ID
FROM
#TempMain
LEFT JOIN TempDesc ON #TempMain.ID = TempDesc.ID
LEFT JOIN TempMfgr ON #TempMain.ID = TempMfgr.ID
EF doesn't support importing stored procedures which build their result set from:
Dynamic queries
Temporary tables
The reason is that to import the procedure, EF must execute it. Such an operation can be dangerous because it can trigger changes in the database. That's why EF uses a special SQL command before it executes the stored procedure:
SET FMTONLY ON
With this command in effect, the stored procedure returns only "metadata" about the columns in its result set; its logic is not executed. But because the logic wasn't executed, there is no temporary table (or dynamic query), so the metadata contains nothing.
You have two choices (besides rewriting your stored procedure to avoid these features):
Define the returned complex type manually (I guess it should work)
Use a hack: just for adding the stored procedure, put SET FMTONLY OFF at its beginning. This will allow the rest of your SP's code to execute normally. Just make sure that your SP doesn't modify any data, because those modifications will be executed during the import! After a successful import, remove the hack.
Adding this non-logical block of code solved the problem, even though it will never be hit:
IF 1=0 BEGIN
SET FMTONLY OFF
END
Why does my typed dataset not like temporary tables?
http://social.msdn.microsoft.com/Forums/en-US/adodotnetdataset/thread/fe76d511-64a8-436d-9c16-6d09ecf436ea/
Or you can create a User-Defined Table Type and return that.
CREATE TYPE T1 AS TABLE
( ID bigint NOT NULL
,Field1 varchar(max) COLLATE Latin1_General_CI_AI NOT NULL
,Field2 bit NOT NULL
,Field3 varchar(500) NOT NULL
);
GO
Then in the procedure:
DECLARE @tempTable dbo.T1
INSERT @tempTable (ID, Field1, Field2, Field3)
SELECT .....
....
SELECT * FROM @tempTable
Now EF should be able to recognize the returned columns type.
As some others have noted, make sure the procedure actually runs. In particular, in my case, I was running the procedure happily without error in SQL Server Management Studio completely forgetting that I was logged in with admin rights. As soon as I tried running the procedure using my application's principal user I found there was a table in the query that that user did not have permission to access.
Interesting side note: Had the same problem which I first solved by using Table Variables, rather than Temp Tables (just for the import). That wasn't particularly intuitive to me, and threw me off when initially observing my two SProcs: one using Temp tables and one with Table Variables.
(SET FMTONLY OFF never worked for me, so I just changed my SProcs temporarily to get the column info, rather than bothering with the hack on the EF side just as an FYI.)
My best option was really just manually creating the complex type and mapping the function import to it. Worked great, and the only difference ended up being that an additional FactoryMethod to create the properties was included in the Designer.
What I would add is:
The import also fails if the stored procedure has parameters and returns no result set for the default parameter values.
My stored procedure had two float parameters and would not return anything when both parameters were 0.
So, in order to add this stored procedure to the entity model, I set the values of these parameters inside the stored procedure so that it was guaranteed to return some rows, no matter what the actual parameter values were; a sketch follows.
Then, after adding the stored procedure to the entity model, I undid the changes.
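In other words, something like this temporary change at the top of the procedure (procedure and parameter names are hypothetical; revert it once the import succeeds):
ALTER PROCEDURE dbo.GetRates -- hypothetical procedure
    @rate1 float = 0,
    @rate2 float = 0
AS
BEGIN
    -- Import-time hack: pin the parameters to values known to return rows,
    -- then revert after the EF import succeeds.
    SET @rate1 = 1.5;
    SET @rate2 = 2.5;
    -- ... original body, unchanged ...
END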
Both solutions:
1- Define the returned complex type manually (I guess it should work)
2- Use a hack and, just for adding the stored procedure, put SET FMTONLY OFF at its beginning.
did not work for me in one procedure, although they worked for another!
My procedure ends with this line:
SELECT machineId, production [AProduction]
, (select production FROM #ShiftBFinalProd WHERE machineId = #ShiftAFinalProd.machineId) [BProduction]
, (select production FROM #ShiftCFinalProd WHERE machineId = #ShiftAFinalProd.machineId) [CProduction]
FROM #ShiftAFinalProd
ORDER BY machineId
Thanks
In addition to what @tmanthley said, be sure that your stored procedure actually works by running it first in SSMS. I had imported some stored procedures and forgotten about a couple of dependent scalar functions, which caused EF to determine that the procedure returned no columns. Seems like a mistake I should have caught earlier on, but EF doesn't give you an error message in that case.
Entity Framework will try to get the columns by executing your stored procedure, passing NULL for every argument.
Please make sure that the stored procedure will return something under all the circumstances. Note it may have been smarter for Entity Framework to execute the stored proc with default values for the arguments, as opposed to NULLs.
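One defensive pattern (a sketch; parameter and column names are made up) is to return a zero-row result with the correct shape when the arguments are NULL:
IF @arg1 IS NULL AND @arg2 IS NULL
BEGIN
    -- Shape-only result: EF's NULL-argument probe still sees the columns.
    SELECT CAST(NULL AS int) AS Id, CAST(NULL AS varchar(50)) AS Name
    WHERE 1 = 0;
    RETURN;
END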
EF does the following to get the metadata of the table:
SET FMTONLY ON
This will break your stored procedure in various circumstances, in particular, if it uses a temporary table.
So, to get the result as a complex type, try adding
SET FMTONLY OFF;
This worked for me - hope it works for you too.
Referred from https://social.msdn.microsoft.com/Forums/en-US/e7f598a2-6827-4b27-a09d-aefe733b48e6/entity-model-add-function-import-stored-procedure-returns-no-columns?forum=adodotnetentityframework
In my case adding SET NOCOUNT ON; at the top of the procedure fixed the problem. It's best practice anyway.
In my case SET FMTONLY OFF did not work. The method I followed was to take a backup of the original stored procedure and replace its body with a query returning just the column names, like the one below:
SELECT CONVERT(varchar(max), '') AS Id, CONVERT(varchar(max), '') AS Name
After this change, create the new function import and complex type in Entity Framework.
Once the function import and complex type are created, replace the above query with your original stored procedure.
SET FMTONLY OFF
worked for me for one procedure but failed for another. The following steps helped me resolve the problem:
Within the stored procedure, I created a temporary table with the same column types, inserted into it all the data returned by the dynamic query, and selected from the temp table:
Create table #temp
(
-- columns with same types as dynamic query
)
insert into #temp
EXEC sp_executesql @sql
Select * from #temp
drop table #temp
Then I deleted the existing complex type, function import, and stored procedure instance for the old stored procedure, and updated the entity model with the new procedure.
Edit the imported function in the entity model to use the desired complex type; you will get all the column information there, which you weren't getting for the previous stored procedure.
Once the type has been created, you can delete the temporary table from the stored procedure and refresh the Entity Framework model.
While getting column information, Entity Framework executes the procedure passing NULL for every parameter. So I handled the NULL case separately, by creating a temp table with all the required columns and returning all of those columns with no values when NULL is passed to the procedure.
In my procedure there was a dynamic query, something like:
declare @category_id int
set @category_id = (SELECT CATEGORY_ID FROM CORE_USER where USER_ID = @USER_ID)
declare @tableName varchar(15)
declare @sql VARCHAR(max)
declare @USER_IDT varchar(100)
declare @SESSION_IDT varchar(10)
IF (@category_id = 3)
set @tableName = 'STUD_STUDENT'
else if(@category_id = 4)
set @tableName = 'STUD_GUARDIAN'
if isnull(@tableName,'')<>''
begin
set @sql = 'SELECT [USER_ID], [FIRST_NAME], SCHOOL_NAME, SOCIETY_NAME, SCHOOL_ID,
SESSION_ID, [START_DATE], [END_DATE]
from ' + @tableName
....
EXECUTE (@sql)
END
ELSE
BEGIN
SELECT * from #UserPrfTemp
END
In my case I was not getting the column information even after using the SET FMTONLY OFF trick.
Below is the temp table I created to get the blank data. Now I am getting the column info:
Create table #UserPrfTemp
(
[USER_ID] bigint,
[FIRST_NAME] nvarchar(60),
SCHOOL_NAME nvarchar(60),
SOCIETY_NAME nvarchar(200)
.....
)
I solved this problem by creating a table variable and then returning from it.
DECLARE @VarTable TABLE (
NeededColumn1 VARCHAR(100),
NeededColumn2 INT,
NeededColumn3 VARCHAR(100)
)
...
--Fetch Data from Linked server here
...
INSERT INTO @VarTable (NeededColumn1,NeededColumn2,NeededColumn3)
SELECT Column1, Column2, Column3
FROM #TempTable
SELECT * FROM @VarTable
In that manner, the SP result will be bound to the table variable, which EF has access to.
I discovered a method that should help most people out, whatever's happening.
Pull up your favourite SQL client and run the proc that you're trying to import with every parameter set to NULL. Visual Studio is literally trying to do this when SET FMTONLY ON is in effect. Run a trace; you'll see.
You'll probably get an error, or unexpected data out. Fix that, and your issue is fixed.
In my case the function read in JSON and failed because the JSON string was empty.
I just put something like
IF(@FooJSON IS NULL)
BEGIN
SELECT 1 VAR1, 2 VAR2;
END
ELSE
--OTHER LOGIC
That's probably an ugly solution, but I inherited this mess and we don't go into Ravenholm.
Another fix: replace the #Temp tables with WITH common table expressions (CTEs).
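A minimal before/after sketch of that change, using a hypothetical dbo.Orders query; the CTE form lets the metadata probe resolve the columns without executing the procedure's logic:
-- Before (breaks the import):
-- SELECT OrderId, CustomerId INTO #Recent FROM dbo.Orders WHERE OrderDate >= DATEADD(day, -7, GETDATE());
-- SELECT OrderId, CustomerId FROM #Recent;
-- After:
WITH Recent AS
(
SELECT OrderId, CustomerId
FROM dbo.Orders
WHERE OrderDate >= DATEADD(day, -7, GETDATE())
)
SELECT OrderId, CustomerId
FROM Recent;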