I've checked similar posts and I hope this hasn't been covered before.
I'm looking into ways to abstract the DB name and schema when calling a stored procedure that lives in a different database from the current one. Something along these lines:
execute [DatabaseName].[schema].[storedProcedureName]
The idea would be to allow the DB and schema to be supplied based on a runtime requirement. The obvious way to do this is dynamic SQL, but for the system I'm using that's a big no-no. Does anyone know of any way to abstract the DB name and schema in this case? The only options I can think of are storing the names in variables and somehow incorporating them into the EXECUTE call, or being able to store an instance of the DB, but I don't know if that's even possible.
Any feedback would be useful. Thanks.
I'm not sure about abstracting it as you describe (like a parameter or something?), but one idea might be to do something like this:
IF @db = 'database1'
BEGIN
    IF @schema = 'schema1'
    BEGIN
        EXECUTE database1.schema1.procedure
    END
    IF @schema = 'schema2'
    BEGIN
        EXECUTE database1.schema2.procedure
    END
    ...
END
IF @db = 'database2'
BEGIN
    ...
You'd have to know at design time what all of the possible databases and schemas were though...
It's a bit hard to understand the why behind this approach. What is calling your stored procedures? That might be the better place to switch databases, by using a different connection string.
If you want to switch DBs inside a T-SQL script, look at the USE command; to run as a different user with access to another schema, look at EXECUTE AS.
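For the schema half, a minimal sketch (the user name schema2_user is hypothetical, and assumes a user whose default schema is schema2):

EXECUTE AS USER = 'schema2_user';  -- hypothetical user whose default schema is schema2
EXECUTE storedProcedureName;       -- unqualified names now resolve against schema2 first
REVERT;                            -- switch back to the original security context

Note the database half can't be handled this way inside a proc, since USE isn't allowed in a stored procedure; it only works in scripts and batches.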
I think you're maybe looking for a construct like SYNONYM? This should allow that level of abstraction, I believe:
CREATE SYNONYM MySP FOR MyDatabase1.dbo.MySP
http://blog.sqlauthority.com/2008/01/07/sql-server-2005-introduction-and-explanation-to-synonym-helpful-t-sql-feature-for-developer/
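For example, a sketch of repointing the synonym at run time (the names are illustrative). Note the repoint itself is DDL and CREATE SYNONYM won't accept variables, so a fully variable-driven target would still need dynamic SQL somewhere, e.g. in a deployment step:

IF OBJECT_ID('dbo.MySP', 'SN') IS NOT NULL  -- 'SN' = synonym
    DROP SYNONYM dbo.MySP;
GO
CREATE SYNONYM dbo.MySP FOR MyDatabase2.schema2.MySP;
GO
EXECUTE dbo.MySP;  -- now resolves to MyDatabase2.schema2.MySP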
I am trying to hunt down a certain stored procedure that writes to a certain table (it needs to be changed); however, going through every single stored procedure is not a route I really want to take. So I was hoping there might be a way to find out which stored procedures INSERT or UPDATE a certain table.
I have tried using this method (pinal_daves_blog), but it is not giving me any results.
NOTICE: The stored procedure might not be in the same DB!
Is there another way, or can I somehow check what procedure/function has made the last insert or update to the table?
One brute-force method would be to download an add-in from RedGate called SQL Search (free), then do a stored-procedure search for the table name. I'm not affiliated with RedGate at all; this is just a method I have used to find similar things, and it has served me well.
http://www.red-gate.com/products/sql-development/sql-search/
If you go this route, you just type in the table name, change the 'object types' drop-down to 'Procedures', and select 'All databases' in the DB drop-down.
Hope this helps! I know it isn't the most technical solution, but it should work.
There is no built-in way to tell what function, procedure, or executed batch has made the last change to a table. There just isn't. Some databases have this as part of their transaction logging but SQL Server isn't one of them.
I have wondered in the past whether transactional replication might provide that information, if you already have that set up, but I don't know whether that's true.
If you know the change has to be taking place in a stored procedure (as opposed to someone using SSMS or executing lines of SQL via ADO.NET), then @koppinjo's suggestion is a good one, as is this one from Pinal Dave's blog:
USE AdventureWorks
GO
--Searching for Employee table
SELECT Name
FROM sys.procedures
WHERE OBJECT_DEFINITION(OBJECT_ID) LIKE '%Employee%'
There are also dependency functions, though they can be outdated or incomplete:
SELECT * FROM sys.dm_sql_referencing_entities('dbo.Employee', 'OBJECT');
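Since the procedure might not be in the same DB, here is a sketch that searches every database on the instance. sp_MSforeachdb is undocumented but widely used; replace it with a cursor over sys.databases if you prefer:

-- Search every database's module definitions for the table name.
EXEC sp_MSforeachdb '
USE [?];
SELECT DB_NAME() AS database_name, OBJECT_NAME(object_id) AS procedure_name
FROM sys.sql_modules
WHERE definition LIKE ''%Employee%'';';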
You could run a trace in Profiler. The procedure would have to write to the table while the trace is running for you to catch it.
We are architecting a new database that will make heavy use of schemas to separate its logical parts.
An example could be employee and client. We will have a schema for each, and the web services that connect to one will not be allowed in the other.
Where we are hitting problems/concerns is where the data appears very similar between the two schemas. For example, both employees and clients have addresses.
We could do something like common.Address. But the mandate to keep the services data access separate is fairly strong.
So it is looking like we will go with employee.Address and client.Address.
However, it would be nice if there was a way to enforce a global Address table definition. Something to prevent the definition of these two Address tables from drifting apart during development. (Note: there will actually be more than two.)
Is there anything like that in SQL Server? Some kind of table "type" or "class" that can be "instantiated" into different schemas? (I am not hopeful here, but I thought I would ask.)
Thoughts, rather than a hard answer...
We have:
a common Data schema
Views, Procs, etc. in a schema per client
an internal "Helper" schema for shared code
Would this work for you?
My other thought is a database per client. It is easier to set permissions per database than per schema, especially for direct DB access, support, or power-user types.
I think your best bet is a DDL trigger, where you can cause a failure when altering any of your "common" tables.
Something like:
CREATE TRIGGER [Dont_Change_CommonTables]
ON DATABASE
FOR DDL_TABLE_EVENTS, GRANT_DATABASE
AS
DECLARE @EventData xml
DECLARE @Message varchar(1000)
SET @EventData = EVENTDATA()
IF (@EventData.value('(/EVENT_INSTANCE/ObjectType)[1]', 'varchar(50)') = 'TABLE'
    AND @EventData.value('(/EVENT_INSTANCE/ObjectName)[1]', 'varchar(50)') IN ('Address'
        ,'etc...'
        --place your table list here
        )
   )
BEGIN
    ROLLBACK
    SET @Message = 'Error! You cannot make changes to '
        + ISNULL(LOWER(@EventData.value('(/EVENT_INSTANCE/ObjectType)[1]', 'varchar(50)')), '')
        + ': ' + ISNULL(@EventData.value('(/EVENT_INSTANCE/ObjectName)[1]', 'varchar(50)'), '')
    RAISERROR(@Message, 16, 1)
    RETURN
END
GO
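A quick illustrative check with the trigger in place (the schema and column names here are hypothetical):

ALTER TABLE employee.Address ADD TestColumn int;
-- fails and rolls back with:
-- Error! You cannot make changes to table: Address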
Does anyone know of a way to verify the correctness of the queries in all stored procedures in a database?
I'm thinking of the scenario where, if you modify something in a code file, simply doing a rebuild shows you compilation errors that point you to the places you need to fix. In a database scenario, if you modify a table and remove a column that is used in a stored procedure, you won't know anything about the problem until the first time that procedure runs.
What you describe is what unit testing is for. Stored procedures and functions often require parameters to be set, and if the stored procedure or function encapsulates dynamic SQL, there's a chance that a corner case is missed.
Also, all you mention is checking for basic errors, nothing about validating the data returned. For example, I can change the precision on a numeric column...
This also gets into the basic testing that should occur for the immediate issue, and regression testing to ensure there aren't unforeseen issues.
You could create all of your objects with SCHEMABINDING, which would prevent you from changing any underlying tables without dropping and recreating the views and procedures built on top of them.
Depending on your development process, this could be pretty cumbersome. I offer it as a solution though, because if you want to ensure the correctness of all procedures in the db, this would do it.
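A minimal sketch, with illustrative table and column names:

-- Two-part names are required inside a schema-bound object, and SELECT * is not allowed.
CREATE VIEW dbo.EmployeeNames
WITH SCHEMABINDING
AS
SELECT EmployeeID, LastName
FROM dbo.Employee;
GO
-- With the view in place, this now fails until the view is dropped or altered:
-- ALTER TABLE dbo.Employee DROP COLUMN LastName;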
I found this example on MSDN (SQL Server 2012). I guess it can be used in some scenarios:
USE AdventureWorks2012;
GO
SELECT p.name, r.*
FROM sys.procedures AS p
CROSS APPLY sys.dm_exec_describe_first_result_set_for_object(p.object_id, 0) AS r;
Source: sys.dm_exec_describe_first_result_set_for_object
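A variation on that query (a sketch): the DMF also returns error columns, so you can list only the procedures whose first result set can no longer be described, e.g. because a referenced column was dropped:

SELECT p.name, r.error_message
FROM sys.procedures AS p
CROSS APPLY sys.dm_exec_describe_first_result_set_for_object(p.object_id, 0) AS r
WHERE r.error_number IS NOT NULL;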
If Proc A executes Proc B, is there a way for Proc B to look up that it was called by A, instead of having a parameter where A passes B its ID?
Per request: the reason I'm interested in this is multi-fold.
1) General knowledge. I'm sure that if it can be done, it would involve clever use of some system tables/variables that may help me do other things down the road.
2) As others have mentioned, logging/auditing. I'd like to make a procedure that logs a begin, an end, and a message entry, requires no parameters, and accepts one optional parameter for a user-specified message. This would allow one to simply drop an EXEC at the top and bottom of a proc to make it work, and the audit procedure would figure out the rest on its own.
I know this info is available in the log files, but parsing those and giving them to users is not all that straightforward, whereas this would give easy access to that basic info.
3) Used in conjunction with a semaphore, such a generalized procedure could ensure that related processes are not executed simultaneously, regardless of sessions/transactions, etc.
Use a parameter like this:
CREATE PROCEDURE ParentProcedure
AS
DECLARE @ProcID int
SET @ProcID = @@PROCID  -- @@PROCID returns the object id of the currently executing procedure
EXEC ChildProcedure @ProcID
RETURN 0
GO
and this...
CREATE PROCEDURE ChildProcedure
(
@ProcID int = NULL --optional: object id of the caller
)
AS
IF @ProcID IS NOT NULL
BEGIN
    PRINT 'called by '+OBJECT_NAME(@ProcID)
END
RETURN 0
GO
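Illustrative usage, assuming both procedures have been created:

EXEC ChildProcedure;   -- @ProcID is NULL, so nothing is printed
EXEC ParentProcedure;  -- prints: called by ParentProcedure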
No, it can't tell which sproc called it. You'd need to add an extra parameter to tell it who the caller is.
In MS SQL Server 2008 you can use sys.dm_exec_procedure_stats; this dynamic management view can show you when a stored procedure was last executed, and so on (see also sys.procedures to obtain the name of the procedure).
SELECT s.*, d.*
FROM sys.procedures s
INNER JOIN sys.dm_exec_procedure_stats d
ON s.object_id = d.object_id
ORDER BY d.last_execution_time DESC;
The parent procedure will show up close by in this result set, because it was executed just before the child.
Of course, this is not a complete solution to your problem, but you may obtain some info.
And yes, if there is concurrency, this solution doesn't work. It can help only in development or debugging.
If a stored procedure needs to behave differently based on who calls it, then it needs to add a parameter. That way, if you add stored procedure "Z", the code will still work - "Z" can pass the parameter the way that "C" passed it, or the way "D" passed it. If that's not good enough, then new logic needs to be added in "B".
Why would you want to do that?
AFAIK, there is no way for proc B to know who called it.
EDIT: As KM's code shows, it is possible. I am interested in understanding the reason for doing this; can you post that as well, by adding it to your question?
I am using a trigger in PostgreSQL 8.2 to audit changes to a table:
CREATE OR REPLACE FUNCTION update_issue_history() RETURNS trigger as $trig$
BEGIN
INSERT INTO issue_history (username, issueid)
VALUES ('fixed-username', OLD.issueid);
RETURN NULL;
END;
$trig$ LANGUAGE plpgsql;
CREATE TRIGGER update_issue_history_trigger
AFTER UPDATE ON issue
FOR EACH ROW EXECUTE PROCEDURE update_issue_history();
What I want to do is have some way to provide the value of fixed-username at the time that I execute the update. Is this possible? If so, how do I accomplish it?
Try something like this:
CREATE OR REPLACE FUNCTION update_issue_history()
RETURNS trigger as $trig$
DECLARE
arg_username varchar;
BEGIN
arg_username := TG_ARGV[0];
INSERT INTO issue_history (username, issueid)
VALUES (arg_username, OLD.issueid);
RETURN NULL;
END;
$trig$ LANGUAGE plpgsql;
CREATE TRIGGER update_issue_history_trigger
AFTER UPDATE ON issue
FOR EACH ROW EXECUTE PROCEDURE update_issue_history('my username value');
I don't see a way to do this other than to create temp tables and use EXECUTE in your trigger. This will have performance consequences, though. A better option might be to tie into another table somewhere, log who logs in/out by session ID and back-end PID, and reference that.
Note that you don't have any other way of getting the information into the UPDATE statement. Keep in mind that the trigger can only see what is available in the API or in the database. If you want a trigger to work transparently, you can't expect to pass it information at runtime that it would not otherwise have access to.
The basic question you have to ask is: how does the db know what to put there? Once you decide on a method, the answer should be straightforward, but there are no free lunches.
Update
In the past, when I have had to do something like this with an application-role login to the db, the way I have done it is to create a temporary table and then access that table from the stored procedures. Stored procedures can now handle temporary tables this way directly, but in the past we had to use EXECUTE.
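A minimal sketch of that temp-table approach, with illustrative names (on old versions the SELECT inside the function had to be wrapped in EXECUTE, because cached plans referencing temp tables went stale):

-- The application sets the value once per session:
CREATE TEMP TABLE trigger_context (username varchar);
INSERT INTO trigger_context (username) VALUES ('alice');

-- The trigger function reads it when it fires:
CREATE OR REPLACE FUNCTION update_issue_history() RETURNS trigger AS $trig$
DECLARE
    v_username varchar;
BEGIN
    SELECT username INTO v_username FROM trigger_context;
    INSERT INTO issue_history (username, issueid)
    VALUES (v_username, OLD.issueid);
    RETURN NULL;
END;
$trig$ LANGUAGE plpgsql;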
There are two huge limitations with this approach. The first is that it creates a lot of tables, and this eventually leads to the possibility of OID wraparound.
These days I much prefer the logins to the db to be user logins. This makes things far easier to manage, and you can just use the value of SESSION_USER (a newbie mistake is to use CURRENT_USER, which shows you the current security context rather than the login of the user).
Neither of these approaches works well with connection pooling. In the first case you can't pool connections because your temporary tables would get misinterpreted or clobbered; in the second you can't because the login roles are different.