Database availability during database update - sql-server

I have a database from a 3rd party. They supply a tool to update the database data weekly. The tool is pretty old and uses ODBC. Updates can either be incremental, or can delete all database data and then recreate it. The update can take several hours. In order to have high availability, it was suggested to have two SQL databases and store an "active database" setting in another database to determine which of the two databases applications should use (while the other is being updated).
One issue we are running into is: how do we reference the active database in stored procedures in other databases?
Is this the right approach? Is there a simple, perhaps infrastructure-based, approach? (Should this be posted on ServerFault?)
Note: Databases are read-only besides the update tool.

If the databases are on different servers, you can create an alias for the server in SQL Server Configuration Manager that redirects to the other server. Under SQL Native Client 10.0 Configuration (or 9.0 if you're on SQL Server 2005) you can add a new alias.
Otherwise, you can always rename the databases using sp_renamedb so that your client applications are always using database1 while you are updating database2.
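A minimal sketch of such a swap, assuming the update tool has just finished refreshing database2 and a brief maintenance window is acceptable (the database names and the dbTemp staging name are placeholders; on newer versions, ALTER DATABASE ... MODIFY NAME does the same job as the deprecated sp_renamedb):
ALTER DATABASE database1 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;  -- kick out readers briefly
EXEC sp_renamedb 'database1', 'dbTemp';
EXEC sp_renamedb 'database2', 'database1';  -- the freshly updated copy becomes the active one
EXEC sp_renamedb 'dbTemp', 'database2';     -- the stale copy becomes next week's update target
ALTER DATABASE database2 SET MULTI_USER;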

If you want to use different databases inside a stored procedure you either need to:
Duplicate all the calls. Ugly. You would end up with a lot of:
if @firstDatabase = 1
    select * from database1..ExampleTable where ...
else
    select * from database2..ExampleTable where ...
Use dynamic queries. Less ugly:
declare @sqlQuery nvarchar(max)
set @sqlQuery = N'select * from ' + quotename(@currentDatabase) + N'..ExampleTable where ...'
exec sp_executesql @sqlQuery
I admit that neither solution is perfect...

I'd take the approach of having the stored procedures in both databases, with some sort of automatic trigger that updates the stored procedures in the other database whenever a stored procedure is changed.
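A hedged sketch of that automation, assuming both copies live on the same instance and the sibling copy is named database2: a database-scoped DDL trigger that replays procedure DDL against the other copy. Treat it as a starting point, not production code; for instance, replaying a CREATE will fail if the procedure already exists on the other side.
CREATE TRIGGER trgSyncProcedures ON DATABASE
FOR CREATE_PROCEDURE, ALTER_PROCEDURE
AS
BEGIN
    -- capture the DDL statement that fired this trigger
    DECLARE @ddl nvarchar(max) =
        EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)');
    -- replay it in the sibling database (the name is a placeholder)
    EXEC database2.sys.sp_executesql @ddl;
END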

Related

How to execute a stored procedure present in database A/Server A and query database B/Server B as well?

I have a configurations database with a lot of stored procedures and then I have a large number of databases, all are part of one system and on the same database server.
When the stored procedures in the config database are executing, they often query other databases as well and it is possible to do so because the configuration and all the other database are on the same database server.
But over time, as the data grows, the customer base grows, and the databases grow, this one database server is slowing down. So now we want to move some of our databases to a different database server, but we are unable to do so because these databases and the configuration database are tightly coupled: many of the stored procedures in the config database query the other databases as well.
Is there some way I can execute a stored procedure present in config database / Server A, but this stored procedure is also querying database 2 on Server B?
If not, what would be the best approach to decouple all the other databases from the configurations database? I know getting rid of the stored procedures by implementing an ORM or something could be an option, but that would be very time-consuming as we have 1000+ stored procedures.
Let's say your configuration database and all your user databases are on Server A, and your stored procedures use three-part naming to query multiple databases.
Now you want to migrate one of the databases (DB1) to Server B, so DB1 no longer exists on Server A. You can then create a placeholder database on Server A called DB1, create a linked server pointing to Server B, and, inside the placeholder DB1, create synonyms over the linked server objects with the same table names. This way, no changes are required to your thousands of stored procedures or their configuration.
However, linked servers may introduce performance problems, as joining and indexing across them is less efficient.
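A compact sketch of the placeholder idea, assuming a linked server named ServerB and a table dbo.Customer in DB1 (all names are placeholders; a fuller variant of this technique appears in a later answer below):
EXEC sp_addlinkedserver @server = N'ServerB', @srvproduct = N'', @provider = N'SQLNCLI', @datasrc = N'ServerB';
GO
CREATE DATABASE DB1;  -- placeholder database, holds only synonyms
GO
USE DB1;
GO
CREATE SYNONYM dbo.Customer FOR ServerB.DB1.dbo.Customer;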

Trying to understand stored procedure behavior

I'm tired of searching for this, but I couldn't find anything.
I have three databases in SQL Server and although all stored procedures are in the Main database, they work with tables from the other databases.
My question is: if you have the query
select name
from SecondDatabase.dbo.SomeTable
where id = 56
and this query is stored in the main database, will it run in the main database and go all the way to the second database and return the data, or will it run in the second database so that you get the select result directly?
(hope you understand my question)
I think you are misunderstanding the difference between a Database and an Instance.
An instance is the software running the SQL service. Each instance can have multiple databases. For example, there is a master database and a tempdb database for each instance of SQL Server, these are system databases. You can create any number of user databases. All these databases will be handled by the same SQL Server instance (on the same machine).
A particular client session is connected first to an instance and then to a particular database; that's why you specify which database to connect to by default in connection strings (or via the login's default database). When you write select name from SecondDatabase.dbo.SomeTable, you are telling the SQL service to retrieve data from SecondDatabase, even if your session is linked to another database. The engine will then use your login credentials to find a matching user in the other database (since users belong to a database and logins to the instance) and validate that it has enough privileges to query that table, before searching for the data.
A completely different story would be trying to access data from another instance (machine), for which you would need a linked server, an OPENROWSET call, or the like.
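To make the login-versus-user distinction concrete, a minimal sketch (the names and password are placeholders):
CREATE LOGIN AppLogin WITH PASSWORD = 'Str0ngP@ssw0rd!';  -- one login at the instance level
GO
USE FirstDatabase;
GO
CREATE USER AppLogin FOR LOGIN AppLogin;  -- mapped to a user in each database it must reach
GO
USE SecondDatabase;
GO
CREATE USER AppLogin FOR LOGIN AppLogin;
GRANT SELECT ON dbo.SomeTable TO AppLogin;  -- now the cross-database SELECT above can succeed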
use FirstDatabase
select name
from SecondDatabase.dbo.SomeTable
where id = 56
Question:
will it run in the main database and go all the way to the second
database and returns the data, or will it run in the second database
and you have the select result directly?
Your first assumption is correct:
This query will run in the first database; it will use the context and all settings (ANSI options, query optimizer and statistics related) of the first database, but will get data from a table in the second database.
Just a real-life example: if a database has to stay at an old compatibility level, but new T-SQL features occasionally need to be used, a query can switch context to tempdb (which is normally set to the latest compatibility level) and run queries referencing data from any other database where access is granted. Usage of those new features will not raise an exception.
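A small sketch of that trick, assuming a database OldDb pinned at compatibility level 100 and a made-up column tags; STRING_SPLIT needs level 130+, and what matters is the compatibility level of the current database, not of the database that owns the data:
USE tempdb;  -- tempdb normally sits at the instance's latest compatibility level
GO
SELECT t.id, s.value
FROM OldDb.dbo.SomeTable AS t
CROSS APPLY STRING_SPLIT(t.tags, ',') AS s;  -- works even though OldDb is at level 100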
The (now edited) query above will always execute against SecondDatabase.dbo.SomeTable, even if the active database context is another database and even if the active user has a different default schema. This is because the object SomeTable is fully qualified with the database name and schema.
A test to illustrate that the following still returns the expected results (assuming the objects exist and the active user context has access to them):
USE [OtherDatabaseSchema]
GO
SELECT TOP 10 *
FROM [SecondDatabase].[dbo].[SomeTable]

SQL Server How to add a linked server to the same instance without performance impact

In my company, we have several environments with MS SQL database servers (SQL 2008 R2, SQL 2014). For the sake of simplicity, let us consider just a TEST environment and a PROD environment, with two SQL servers in each. Let the servers be called srTest1, srTest2, srProd1, srProd2, each running a default MS SQL Server instance. We work with multiple databases, say DataDb, ReportDb, DWHDb.
We want to keep the same source code in T-SQL for both TEST and PROD, but the problem is the architecture or distribution of the above mentioned databases in each environment:
TEST:
srTest1 - DataDb
srTest2 - DWHDb, ReportDb
PROD:
srProd1 - DataDb, ReportDb
srProd2 - DWHDb
Now, say, in ReportDb, we write stored procedures with many SELECTs referencing tables and other objects in DataDb and DWHDb. In order to have source code as universal as possible, we decided to create linked servers for each database on each db server in each environment and name them with respect to the database they're created for. Therefore, there'll be these linked servers:
lnkDataDb, lnkReportDb and lnkDWHDb on srTest1,
lnkDataDb, lnkReportDb and lnkDWHDb on srTest2,
lnkDataDb, lnkReportDb and lnkDWHDb on srProd1,
lnkDataDb, lnkReportDb and lnkDWHDb on srProd2.
And we'll adjust the source in the stored procs accordingly. For instance:
Instead of
SELECT * FROM DataDb.dbo.Contact
We'll write
SELECT * FROM lnkDataDb.DataDb.dbo.Contact
The example above is reasonable for a situation where the database from which you execute the query (ReportDb) lies on a different server than the one with the referenced table (DataDb). That is the case in the TEST environment, but not in PROD. It is performance I'm concerned about here: SQL Server will treat that SELECT as a "remote query" no matter whether, in fact, it references a local object or not.
Now, it comes the most important part:
If you check these 3 queries for their actual execution plans, you'll see an interesting thing:
(1) SELECT * FROM DataDb.dbo.Contact
(2) SELECT * FROM srProd1.DataDb.dbo.Contact
(3) SELECT * FROM lnkDataDb.DataDb.dbo.Contact
The first two (query #1 and #2) have the same execution plan (the fastest possible) even if you use the four-part name manner of referencing the table Contact in #2.
The last query has a different plan (remote query, thus slower).
The question is:
Can you somehow create a linked server to self (the same SQL Server instance, the default instance actually) as an "alias" for the host name (srProd1), so that SQL Server is forced to understand it as local and does not produce "remote query" plans?
Thanks a lot for any hints
Pavel
Recently I found a workaround which seems to solve this kind of issue more efficiently and more elegantly than the solution with self-pointing linked servers.
If you work (making reports, for example) with multiple databases on multiple SQL servers and the physical distribution of the databases on the servers is a challenge since it may differ from one environment to another (e.g. TEST vs PROD), I suggest this:
Use three-part db object names whenever possible. If the objects are local, then the execution plans are also local, and thus efficient.
Example:
SELECT * FROM DataDb.dbo.Contact
If you happen to run the above query from within a different SQL Server instance (residing on a different physical machine, for example, though not necessarily; the other instance could even be installed on the same machine), in short, if you're about to use a four-part name:
SELECT * FROM lnkDataDb.DataDb.dbo.Contact
Then you can circumvent that using the following trick:
Let's assume lnkDataDb points to srTest2 and you're executing your queries from srTest1. Now, you'll create a "fake" database DataDb on your local server (srTest1). This fake DataDb shall contain no real db objects (no tables, no views, no stored procedures, no UDFs, etc.); only synonyms shall be defined in it, and it shall contain the same schemas as the real DataDb on srTest2. These synonyms shall be named exactly the same way as their real db-object counterparts in DataDb on srTest2. Example:
-- To be executed on srTest1.
EXEC sp_addlinkedserver
    @server = N'lnkDataDb',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'srTest2';
GO
-- The "fake" database hosts nothing but synonyms.
CREATE DATABASE [DataDb];
GO
USE [DataDb];
GO
CREATE SYNONYM dbo.Contact FOR lnkDataDb.DataDb.dbo.Contact;
GO
Now, if you want to SELECT rows from the table dbo.Contact residing in the database DataDb on srTest2 and you're executing your query from srTest1, you'll use a simple three-part table name:
SELECT * FROM DataDb.dbo.Contact
Of course, on srTest1 this is not a table; it's just a synonym referencing the same-named table on srTest2. But that's the trick: you use the same query syntax as if you were executing it on srTest2, where the real db object resides.
There are disadvantages of this approach:
On the local server, there must not already be a database with the same name as the remote one, because you're about to create a "fake" database with that name to reflect the names of the remote db objects.
You're creating one database that is almost empty, thus increasing the clutter of databases residing on your local SQL server. This might provoke reluctance from your database admin if they prefer having as few databases as possible.
If you're developing your T-SQL scripts in SQL Server Management Studio, for example, using synonyms cuts you off from the convenience of the IntelliSense feature.
The advantages outweigh the above-mentioned disadvantages, though:
Your scripts work in any environment (DEV, TEST, PROD) without the need to change any part of the source code.
If the other database you're querying resides on the same SQL Server instance as your script, you also use the three-part name convention, and SQL Server evaluates the query in the execution plan as local, which is OK. (This is what the original question of this post was trying to solve.)
If the other database you're querying resides on another SQL Server instance, you still use a "local syntax manner" of SQL query (via the synonym), which only at runtime evaluates to a remote execution plan. That is also fine, because the db object actually is remote.
To summarize
The query executes as local if the referenced object is local, the query executes as remote if the referenced object is remote, but the T-SQL script is always the same. You don't have to change a letter in it.

Copy access database to SQL server periodically

I have an Access 2003 database that holds all of my business data. This Access database gets updated every few hours during the day.
We're currently writing a website that will need to use the data from the Access database. This website (for the time being) will have read-only capabilities, meaning there will only need to be a one-way transfer of data (Access -> SQL).
I imagine there's a way to perform this data migration from Access to SQL Server programmatically. Does anyone have any links to something I can read about?
If this practice sounds odd and you'd like to suggest another way to do this (or a situation where data can go both ways: Access -> SQL, SQL -> Access), that's perfectly fine.
The company is going to continue using Access 2003 for their business functionality; there's no way around that. But I'd like to build the (read-only) website on top of SQL Server.
The strategy you outlined can be very challenging. You could use INSERT queries to copy new Access rows to SQL Server, as described in another answer.
However, if you have changes to existing Access rows, and you also want those changes propagated to SQL Server, it won't be so simple. And it will be more complicated still if you want deleted Access rows deleted from SQL Server, too.
It seems more reasonable to me to use a different approach. Migrate the data to SQL Server once. Then replace the tables in your Access database with ODBC links to the SQL Server tables. Thereafter, changes to the data from within your Access application will not require a separate synchronization step ... they will already be in SQL Server. And you won't need to write any code to synchronize them.
If your concern is that the connections between the web server and SQL Server be read-only, just set them up that way. You can still independently allow read-write permissions for your Access application.
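A minimal sketch of such a read-only account for the web server (login name, database name, and password are placeholders):
CREATE LOGIN WebReader WITH PASSWORD = 'Str0ngP@ssw0rd!';
GO
USE BusinessDb;
GO
CREATE USER WebReader FOR LOGIN WebReader;
-- db_datareader grants SELECT on every table, and nothing more
EXEC sp_addrolemember 'db_datareader', 'WebReader';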
To do the initial data migration and set up the SQL Server side automatically, I would use the SQL Server Migration Assistant. The only thing you should definitely change that I can think of would be to turn off the Identity property on any columns that have it, to be explained below (MS Access calls Identity AutoNumber). Once you have your tables loaded, you can set up a DSN-less connection to the database (and tables) you just created.
I haven't used the method just linked, but I believe it allows you to use SQL Server authentication to connect to the db. The benefit of using this method is that you can easily change which SQL Server instance and/or database you are connecting to for development and testing.
There might be a better, automated way, but you can create several insert queries doing left joins from the primary key of the Access table to the linked SQL Server table, with a WHERE clause specifying that the SQL Server primary key must be null (i.e., rows that don't yet exist on the server). This is why you need to turn off the Identity property in the SQL Server tables: so that you can insert the new key values.
Finally, put the name of each query in one function, then run the function periodically.
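A sketch of one such append query in Access SQL, where Customers is the local Access table and dbo_Customers the ODBC-linked SQL Server table (both names are made up):
INSERT INTO dbo_Customers (CustomerID, CustomerName)
SELECT a.CustomerID, a.CustomerName
FROM Customers AS a
LEFT JOIN dbo_Customers AS s ON a.CustomerID = s.CustomerID
WHERE s.CustomerID IS NULL;  -- only rows that don't exist on the server yet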
I have used Microsoft's free SQL Server Migration Assistant (SSMA) to migrate Access to SQL Server. The tool is very simple to use. The only problem I have encountered with the tool was overloaded data types when migrating. What I mean by this is a small string will get converted to a NVARCHAR(MAX) in some instances. Otherwise, the tool is very handy and can be reused after setting up a 'profile'.

SQL Server database remote transfer - best method

I have two databases, one on a remote server the other local. (SQL Server 2008)
The database on my local server has the entire structure setup but no data. I would like to copy the data from the remote server to my server and I am wondering the best method in which to do this.
The main issue I am experiencing is that the user I have for the remote database has limited permissions. I cannot read the stored procedures or user-defined functions, so when I use the Import/Export wizard I do not get the schema etc. So a regular dump/restore is not working for me, as it restores the tables without the primary keys/foreign keys and the stored procedures.
I'd like to do this,
INSERT INTO localtable SELECT * FROM remotedb.table
I was having issues because of the IDENTITY fields, and I had to explicitly name all of the columns. Also, I am not sure whether SQL Server Management Studio allows you to work with two different databases, remote and local, so I was looking for any advice.
I have also tried applications like SQL FTP and Backup, and it fails because it runs out of memory (I have 16GB of memory on the machine and the DB is about 4GB). I can also use the SQL Server import/export wizard, but then I don't get the schema information. I also tried SQL Compare from Red Gate, and it runs into issues with the permissions. Unfortunately I do not have the time to request and gain access to a new user, so I was hoping someone had a creative idea.
You can definitely use SQL Server backups for this. It will not run out of memory; if it does, please tell us the message (because you are likely misinterpreting it). This is the fastest possible and the most complete solution.
You can tell the export wizard to also script the schema. It is hidden under "Advanced" somewhere (terrible UI). But the script will be extremely big and I know of no way to execute it.
You can drop all schema objects except PKs in the target database. Then you can use remote queries to copy all the data over. You will not get any problems with foreign keys and identity columns if you drop them beforehand. After you are done, you can recreate all those objects. It is probably best to use a transaction for all of this, because that way you get consistent source data from a point in time.
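A minimal sketch of the remote-query copy, assuming a linked server named RemoteSrv already points at the remote instance (server, database, table, and column names are placeholders):
BEGIN TRANSACTION;
SET IDENTITY_INSERT dbo.Customers ON;  -- only needed for tables that kept an identity column
INSERT INTO dbo.Customers (CustomerID, CustomerName)
SELECT CustomerID, CustomerName
FROM RemoteSrv.RemoteDb.dbo.Customers;  -- four-part name: runs as a remote query
SET IDENTITY_INSERT dbo.Customers OFF;
COMMIT;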
