Database name while referencing tables - Sybase - database

How can I get away from hard-coding the database name when referencing a table within a stored procedure? For example, there are two databases, db1 and db2. I am writing a stored procedure in db2 which references two tables, one from db1 and one from db2. Both are on the same Sybase server.

If I understand your question correctly, on the one hand, in your stored procedure you can refer to the table in the same database directly by name:
SELECT ...
FROM table_in_db2
You can refer to a table in database db1 by prefixing the database name and an empty owner:
SELECT ...
FROM db1..table_in_db1
On the other hand, if you want to avoid hard-coding database names in the procedure you might create a view in database db2 that references the db1 table:
CREATE VIEW view_in_db2
AS
SELECT *
FROM db1..table_in_db1
and use that in the procedure:
SELECT ...
FROM view_in_db2

If you need to keep the code portable and work across two databases, but want to avoid referencing databases by name, you can create proxy tables (or proxy views, if those exist in ASE 12.5) and refer to the proxy tables as local objects.
This will work, but requires some extra care every time you move or change databases. Still, the separation of concerns you are after can be achieved.
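A minimal sketch of the proxy-table approach in ASE, assuming Component Integration Services is enabled; LOOPBACK, MY_ASE_SERVER and the object names are illustrative assumptions, not anything predefined:
-- Illustrative sketch only; server and object names are assumptions.
-- Map a loopback server entry that points back at this same ASE instance:
exec sp_addserver LOOPBACK, null, 'MY_ASE_SERVER'
go
use db2
go
-- Create a proxy table in db2 that resolves to the table in db1:
create proxy_table table_in_db1
    at "LOOPBACK.db1.dbo.table_in_db1"
go
The stored procedure can then reference table_in_db1 as if it were a local object; only the proxy definition needs to change when the underlying database moves.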

Related

SQL Server Stored Procedure, View and Table Unique Identifier

Where does SQL Server save its unique identifier for stored procedures, views and tables? When I rename a stored procedure, how does SQL Server know what stored procedure to rename?
I'm hoping it's something like a row number that I can select in a query. By looking at the INFORMATION_SCHEMA, I'm able to get a table of objects, but I can't figure out how SQL Server keeps track of any changes.
I would guess you are talking about object_id. SELECT * FROM sys.objects lists all objects and their IDs.
You can also do:
select OBJECT_ID('your_proc_name_here')
to see what the object_id is.
As for tracking changes, there is no table that keeps what your proc was prior to an ALTER statement or tells you what the view definition was two weeks ago. You would have to create a user-defined table and write logic to handle that, or use a version control system (VCS).
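If you do go the user-defined-table route, a minimal sketch might look like the following; the table and trigger names here are illustrative, not anything built in:
-- Illustrative sketch: log each ALTER of a proc or view so older
-- definitions can be looked up later.
CREATE TABLE dbo.ObjectChangeLog (
    LogId      int IDENTITY(1,1) PRIMARY KEY,
    ObjectName sysname       NOT NULL,
    EventType  nvarchar(100) NOT NULL,
    ChangedAt  datetime      NOT NULL DEFAULT GETDATE(),
    Command    nvarchar(max) NULL
);
GO
CREATE TRIGGER trg_LogObjectChanges
ON DATABASE
FOR ALTER_PROCEDURE, ALTER_VIEW
AS
BEGIN
    DECLARE @e xml = EVENTDATA();
    INSERT INTO dbo.ObjectChangeLog (ObjectName, EventType, Command)
    VALUES (@e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'sysname'),
            @e.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)'),
            @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)'));
END;
GO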

Neater way of dynamically selecting a database other than sp_executesql

I am looking to set up a high-availability architecture whereby two mirror databases exist (DB1 & DB2) that serve another database with views (DBV) on it. DB1 has the overnight ETL on it, whilst DBV looks at DB2 until the ETL is complete on DB1, at which point its views switch to the underlying tables on DB1. Once the ETL is complete on DB1, DB2 is restored with DB1's data before the next day's ETL. The next day, DB1 and DB2 switch roles.
I am looking for a neater/more secure way of switching between the two views than running sp_executesql to run a dynamically built string. I will be looking to also do this on stored procedures from a staging database which need to have their scripts dynamically altered to use the correct database to run the ETL on. Essentially, I am looking to pass the USE statement dynamically and then execute the rest of the script outside of any dynamic statement.
I want to avoid sp_executesql for support reasons for other developers and also to get around any possible extensive concatenation of strings if the stored procedure/view gets particularly lengthy.
Any ideas / different approaches to high availability in this context would be welcome.
One option might be to create a copy of each view in DBV for both target databases - i.e.
some_schema.DB1_myview
some_schema.DB2_myview
and then use a synonym to expose the views under their final names.
CREATE SYNONYM some_schema.myview FOR some_schema.DB1_myview;
Your switch process would then need only to drop and recreate the synonyms, rather than the views themselves. This would still need to be done with a dynamic SQL statement, but the complexity would be much lower.
A downside would be that there would be a risk of the definitions of the underlying views getting out of sync.
Edit
At the cost of more risk of getting out of sync, it would be possible to avoid dynamic SQL altogether by creating (for instance) a pair of stored procedures each of which generated the synonyms for one database or the other. Your switch code would then only need to work out which procedure to call.
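A rough sketch of one of that pair of procedures, using the illustrative names from above (the matching procedure for DB2 would be the mirror image); this assumes running CREATE/DROP SYNONYM inside a procedure is acceptable in your environment:
-- Illustrative sketch only; object names are assumptions.
CREATE PROCEDURE some_schema.SwitchToDB1
AS
BEGIN
    IF OBJECT_ID('some_schema.myview', 'SN') IS NOT NULL
        DROP SYNONYM some_schema.myview;
    CREATE SYNONYM some_schema.myview FOR some_schema.DB1_myview;
END;
The switch job then only needs to call some_schema.SwitchToDB1 or its DB2 counterpart as appropriate.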
Have you considered renaming the databases as you switch things around? That is, the following prints 1 followed by 2, and nothing in DBV has to be modified:
create database DB1
go
use DB1
go
create table T (ID int not null);
insert into T(ID) values (1);
go
create database DB2
go
use DB2
go
create table T (ID int not null);
insert into T(ID) values (2);
go
create database DBV
go
use DBV
go
create view V
as
select ID
from DB1..T
go
select * from V
go
-- swap DB1 and DB2 by way of the temporary name DBt
alter database DB1 modify name = DBt
go
alter database DB2 modify name = DB1
go
alter database DBt modify name = DB2
go
select * from V
Obviously better names than 1 and 2 may be used. This way, DB1 is always the one used for live and DB2 is used for any staging work.

Entire DB find replace in all procedures

I have inherited a database that contains approximately 150 stored procedures with references to database objects on that same server. It worked fine until the database was moved to another server; now the procedures cannot find those objects.
For example, references are commonly written as simply [database].[dbo].[object], when what should have been used is [server].[database].[dbo].[object] for these references not to break.
I am presently using
select [definition]
into #test
from sys.sql_modules
where definition like '%(db name)%'
to locate the procs that reference the database, and am considering doing a REPLACE on each match.
But is there a simpler way?
You may consider using a SYNONYM:
USE [database];
GO
CREATE SYNONYM [object] FOR [server].[database].[dbo].[object];
GO
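If many objects are involved, you could also generate the synonym statements rather than writing them by hand. A rough sketch, assuming the linked server [server] is already set up and that the references are to tables (the bracketed names are the same placeholders as above):
-- Illustrative sketch: build one CREATE SYNONYM statement per remote table,
-- to be run in the local [database].
SELECT 'CREATE SYNONYM [dbo].[' + name + '] FOR [server].[database].[dbo].[' + name + '];'
FROM [server].[database].sys.tables;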

Temporary Tables in SQL Server created and used inside a stored procedure

I'm creating a temporary table in a stored procedure. I understand that they get created and destroyed for each session; however, something is not clear. Let's say two users access the web page where I call the stored procedure that creates the temporary table: would there be a conflict when the two users create the same temp table?
Thanks
If you create a local temp table (like #temp) then there is no problem. A global temp table (##Temp), however, can be accessed by other sessions, and as such I never use them unless I have no choice. From Books Online:
Local temporary tables are visible only to their creators during the same connection to an instance of SQL Server as when the tables were first created or referenced. Local temporary tables are deleted after the user disconnects from the instance of SQL Server. Global temporary tables are visible to any user and any connection after they are created, and are deleted when all users that are referencing the table disconnect from the instance of SQL Server.
Temporary tables are created per SQL connection so two users calling the same stored procedure would create an individual instance of the table in question.
The simplest way to demonstrate this is to run the following query in 2 separate query windows:
select 1 as someid
into #temp
Each window will have its own connection, so each will create a unique temp table.
If you look in System Databases > TempDB > Temporary Tables (you may have to refresh the table list), you will see 2 tables uniquely named, something like:
#temp________xxx1
#temp________xxx2
If you then close one of the query windows and refresh the temp table list, you will see one table has been dropped.

How to prevent the renaming of a column and table in SQL Server 2008 R2

I want to protect a database in the following way:
Tables should be protected against drops and renames
Columns should be protected against drops and renames
Adding tables and columns is permitted
To put this into context: it's a vocabularies database (look-up tables) that will be the central master copy, to be distributed to other databases (on the same server and in the future to other servers (even non SQL server systems).
Adding tables and columns can be handled in a way that lets the client databases continue to work and pick up the updates when they appear. Dropping tables and columns, on the other hand, will have to be handled in a different, orchestrated way.
I created a DDL trigger to block drops of tables and columns; that was the easy part.
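For reference, a minimal sketch of that kind of drop-blocking trigger (the names here are illustrative) might look like this:
-- Illustrative sketch: block DROP TABLE and ALTER TABLE ... DROP COLUMN.
CREATE TRIGGER trg_BlockDrops
ON DATABASE
FOR DROP_TABLE, ALTER_TABLE
AS
BEGIN
    DECLARE @e xml = EVENTDATA();
    DECLARE @cmd nvarchar(max) =
        @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)');
    IF @e.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)') = 'DROP_TABLE'
       OR @cmd LIKE '%DROP%COLUMN%'
    BEGIN
        RAISERROR('Dropping tables or columns is not allowed.', 16, 1);
        ROLLBACK;
    END
END;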
Unfortunately, renaming tables and columns is handled, it seems, by the sp_rename procedure, which uses the following construct:
EXEC %%Object(ID = @objid).SetName(Name = @newname)
This isn't picked up by the DDL trigger. Is there any way of making sure tables and columns are NOT renamed?
For each table, create a view WITH SCHEMABINDING, as the following code demonstrates (note that schema binding requires two-part names for the referenced table):
CREATE VIEW PreventDropOnT1
WITH SCHEMABINDING
AS
SELECT Col1, Col2
FROM dbo.T1
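Assuming the view above, attempts to rename the referenced columns (or drop the table) should then fail because of the enforced dependency; for example:
-- With the schema-bound view in place, this rename is rejected with an
-- "object participates in enforced dependencies" error.
EXEC sp_rename 'dbo.T1.Col1', 'Col1_renamed', 'COLUMN';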
