I'm tired of searching for this; I couldn't find anything anywhere.
I have three databases in SQL Server and although all stored procedures are in the Main database, they work with tables from the other databases.
My question is: if you have the query
select name
from SecondDatabase.dbo.SomeTable
where id = 56
and this query is stored in the main database, will it run in the main database and go all the way to the second database and return the data, or will it run in the second database so that you get the select result directly?
(hope you understand my question)
I think you are misunderstanding the difference between a Database and an Instance.
An instance is the software running the SQL service. Each instance can have multiple databases. For example, there is a master database and a tempdb database for each instance of SQL Server; these are system databases. You can create any number of user databases. All of these databases are handled by the same SQL Server instance (on the same machine).
A client session connects first to an instance and then to a particular database; that's why you specify which database to connect to by default in connection strings (or via the login's default database). When you write select name from SecondDatabase.dbo.SomeTable, you are telling the SQL service to retrieve data from SecondDatabase, even if your session is connected to another database. The engine will then use your login to find a matching user in the other database (since users belong to databases and logins to the instance) and check whether it has enough privileges to query that table, before searching for the data.
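You can see that login-to-user mapping for yourself; a minimal sketch (run it in SecondDatabase):
-- Users are per database, logins are per instance; the SID ties them together
SELECT dp.name AS database_user, sp.name AS instance_login
FROM sys.database_principals dp
JOIN sys.server_principals sp ON dp.sid = sp.sid;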
A completely different story would be trying to access data from another instance (another machine), for which you would need a linked server, an OPENROWSET, or similar.
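If you ever do need to reach another instance ad hoc, something like this works (a hedged sketch; 'OtherServer' is a placeholder, and the instance must have the 'Ad Hoc Distributed Queries' option enabled):
-- Query a table on another instance without a permanent linked server
SELECT r.name
FROM OPENROWSET('SQLNCLI',
    'Server=OtherServer;Trusted_Connection=yes;',
    'SELECT name FROM SecondDatabase.dbo.SomeTable WHERE id = 56') AS r;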
use FirstDatabase
select name
from SecondDatabase.dbo.SomeTable
where id = 56
Question:
will it run in the main database and go all the way to the second database and return the data, or will it run in the second database so that you get the select result directly?
Your first assumption is correct:
This query will run in the first database. It will use the context and all settings (ANSI options, query optimizer and statistics related) of the first database, but will get data from a table in the second database.
Just a real-life example: if a database has to stay at an old compatibility level, but new T-SQL features occasionally need to be used, a query can switch context to tempdb (which is normally set to the latest compatibility level) and run queries referencing data from any other database where access is granted. Usage of those new features will not raise an exception.
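A minimal sketch of that trick (OldDb and its table are made-up names; STRING_SPLIT needs compatibility level 130, which OldDb is assumed not to have):
USE tempdb;  -- tempdb normally sits at the latest compatibility level
GO
SELECT s.value
FROM OldDb.dbo.SomeTable t
CROSS APPLY STRING_SPLIT(t.TagsCsv, ',') s;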
The (now edited) query above will always execute against SecondDatabase.dbo.SomeTable even if the active database context is another database and even if the active user has a different default schema. This is because the object SomeTable is fully qualified with the database name and the schema.
To illustrate, the following still returns the expected results (assuming the objects exist and the active user context has access to them):
USE [OtherDatabaseSchema]
GO
SELECT TOP 10 *
FROM [SecondDatabase].[dbo].[SomeTable]
I am using SQL Server 2017. I am in the role of sa for the server in question. I have two databases that are used in an ETL process. The ETL is coded in one database, and the raw imported tables are located in the staging database. All ETL is handled in SQL stored procedures that follow a pattern. The first step in each ETL SP is a call to a diagnostics table in the staging database.
My current ETL job is a wrapper around two of these ETL sps; the wrapper itself contains only code that accesses the main db.
The first SP can be called and successfully selects the data from the staging db. However, the second SP, whose code is identical to the first one's up to the point of failure, fails on accessing the diagnostics table and tells me:
The server principal "sa" is not able to access the database "staging" under the current security context.
The problem persists if I comment out the first SP call, so something must be different in the definitions of the two SPs, but I cannot spot it.
There are plenty of SPs that use the diagnostics staging table, so it is not a general problem (as stated in answers to similar questions that suggest changing security options in the staging database), but must be related to the new SP somehow.
Any suggestions?
There are three things to check/do.
First of all, the login associated with the user in database DB1 must also be associated with a user in DB2. This provides the login with a security context in database DB2. The sa login will map to dbo in both databases, so this should already be fine.
Second, the security context of the code being executed in DB1 must be "trustworthy". In other words, when the user context goes from DB1 back up to the server level and then down into DB2 via the cross-database call, the new user context has to trust the original login. There are two ways to do this, the quick and dirty and opens-up-possible-security-holes way, and the more complicated but safer way:
Quick and not entirely safe: alter database DB1 set trustworthy on.
Safe: Use signed modules
Third, in the general case you should check that the owner of DB1 and the owner of DB2 are the same (otherwise cross-database ownership chaining will not work): select owner_sid from sys.databases where name in ('DB1', 'DB2'). But as with the first point, as a sysadmin you can take ownership of anything.
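A sketch of the corresponding checks (DB1 and DB2 are placeholders):
-- 1. Does the current login map to a user in DB2?
USE DB2;
SELECT dp.name
FROM sys.database_principals dp
JOIN sys.server_principals sp ON dp.sid = sp.sid
WHERE sp.name = SUSER_SNAME();
-- 2. and 3. TRUSTWORTHY flag and matching owners in one go:
SELECT name, owner_sid, is_trustworthy_on
FROM sys.databases
WHERE name IN ('DB1', 'DB2');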
As too often happens, I failed to recognize a subtle difference between the two stored procedures: they both call a logging stored procedure, but that logging procedure has two variants, one with the prefix sp_ and another with the prefix usp_. (Someone reacted to the Microsoft warning not to use sp_ as a prefix.) The old variant had an EXECUTE AS OWNER inside, which caused the error.
Replacing the call with the new version fixed the error.
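For anyone hunting a similar subtle difference: the offending clause can be located by searching the module definitions, e.g.:
SELECT OBJECT_NAME(object_id) AS module_name
FROM sys.sql_modules
WHERE definition LIKE '%EXECUTE AS OWNER%';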
Sometimes the error is on the other side of the screen.
We have a SQL Server 2008 R2 database whose tables are used by stored procedures, themselves called by dedicated application code (VBA).
Until now all the end users were accessing the same data, but for regulatory compliance they will be split into 2 legal entities and we'll have to ensure each user only accesses his entity's records.
Implementing this restriction at the application level is quite simple but not safe (AFAIK any XLA is easily broken).
So to be safe we must implement it at the database level.
My first idea was to simply change the stored procedures to join on the current caller's entity to transparently filter the records retrieved by the SQL queries.
Unfortunately the access is made via a generic SQL Server user, and, from what I've seen on SO and elsewhere, although we are on a full Microsoft infrastructure, there is no way to get the Windows user name.
And indeed all the functions I have tested return the SQL Server user name:
SELECT SUSER_NAME();
SELECT ORIGINAL_LOGIN();
EXEC sp_who
EXEC sp_who2
So, unless I've missed something, we'll have to switch the authentication mode to Windows.
Then, either join as described above, or:
create 2 database roles, one per legal entity, and manually assign users to each one,
create views dedicated to each legal entity and restrict their access with the roles (see the sketch below).
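A minimal sketch of what I have in mind for the second option (role, view, table and column names are all made up):
CREATE ROLE EntityA_Users;
GO
CREATE VIEW dbo.ContactEntityA AS
SELECT * FROM dbo.Contact WHERE LegalEntity = 'A';
GO
GRANT SELECT ON dbo.ContactEntityA TO EntityA_Users;
DENY SELECT ON dbo.Contact TO EntityA_Users;
(A single view filtering on IS_MEMBER('EntityA_Users') would be an alternative to one view per entity.)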
Is there any other option?
In my company, we have several environments with MS SQL database servers (SQL 2008 R2, SQL 2014). For the sake of simplicity, let us consider just a TEST environment and a PROD environment, with two SQL servers in each. Let the servers be called srTest1, srTest2, srProd1 and srProd2, each running a default MS SQL Server instance. We work with multiple databases, say DataDb, ReportDb, DWHDb.
We want to keep the same source code in T-SQL for both TEST and PROD, but the problem is the architecture or distribution of the above mentioned databases in each environment:
TEST:
srTest1 - DataDb
srTest2 - DWHDb, ReportDb
PROD:
srProd1 - DataDb, ReportDb
srProd2 - DWHDb
Now, say, in ReportDb we write stored procedures with many SELECTs referencing tables and other objects in DataDb and DWHDb. In order to keep the source code as universal as possible, we decided to create linked servers for each database on each db server in each environment, named after the database they're created for. Therefore, there'll be these linked servers:
lnkDataDb, lnkReportDb and lnkDWHDb on srTest1,
lnkDataDb, lnkReportDb and lnkDWHDb on srTest2,
lnkDataDb, lnkReportDb and lnkDWHDb on srProd1,
lnkDataDb, lnkReportDb and lnkDWHDb on srProd2.
And we'll adjust the source in the stored procs accordingly. For instance:
Instead of
SELECT * FROM DataDb.dbo.Contact
We'll write
SELECT * FROM lnkDataDb.DataDb.dbo.Contact
The example above is reasonable for a situation where the database from which you execute the query (ReportDb) lies on a different server than the one holding the referenced table (DataDb). That is the case in the TEST environment, but not in PROD. It is performance I'm concerned about here: SQL Server will treat that SELECT as a "remote query" regardless of whether it actually references a local object or not.
Now comes the most important part:
If you check these 3 queries for their actual execution plans, you'll see an interesting thing:
(1) SELECT * FROM DataDb.dbo.Contact
(2) SELECT * FROM srProd1.DataDb.dbo.Contact
(3) SELECT * FROM lnkDataDb.DataDb.dbo.Contact
The first two (query #1 and #2) have the same execution plan (the fastest possible), even though #2 references the table Contact with a four-part name.
The last query has a different plan (a remote query, thus slower).
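The likely reason #2 behaves like #1 is that srProd1 matches the instance's own entry in sys.servers (server_id 0), so the engine resolves the four-part name locally. You can check what the instance considers its own name:
SELECT @@SERVERNAME;
SELECT server_id, name, is_linked FROM sys.servers;  -- server_id 0 is the local server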
The question is:
Can you somehow create a linked server to self (the same SQL Server instance, the default instance actually) as an "alias" for the host name (srProd1), so that SQL Server is forced to treat it as local and not produce "remote query" plans?
Thanks a lot for any hints
Pavel
Recently I found a workaround which seems to solve this kind of issue more efficiently and more elegantly than the solution with self-pointing linked servers.
If you work (making reports, for example) with multiple databases on multiple SQL servers and the physical distribution of the databases on the servers is a challenge since it may differ from one environment to another (e.g. TEST vs PROD), I suggest this:
Use three-part db object names whenever possible. If the objects are local, the execution plans are also local, and thus efficient.
Example:
SELECT * FROM DataDb.dbo.Contact
If you happen to run the above query from within a different SQL Server instance (residing on a different physical machine, for example, though not necessarily: the other instance could even be installed on the same machine), in short, if you would otherwise have to use a four-part name:
SELECT * FROM lnkDataDb.DataDb.dbo.Contact
Then you can circumvent that using the following trick:
Let's assume lnkDataDb points to srTest2 and you're executing your queries from srTest1. Now you create a "fake" database DataDb on your local server (srTest1). This fake DataDb contains no real db objects (no tables, no views, no stored procedures, no UDFs, etc.), only synonyms. (It also contains the same schemas as the real DataDb on srTest2.) These synonyms are named exactly the same as their real db-object counterparts in DataDb on srTest2. Example:
-- To be executed on srTest1.
EXEC sp_addlinkedserver
    @server = N'lnkDataDb',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'srTest2';
GO
CREATE DATABASE [DataDb];
GO
USE [DataDb];
GO
CREATE SYNONYM dbo.Contact FOR lnkDataDb.DataDb.dbo.Contact;
GO
Now, if you want to SELECT rows from the table dbo.Contact residing in the database DataDb on srTest2 and you're executing your query from srTest1, you'll use a simple three-part table name:
SELECT * FROM DataDb.dbo.Contact
Of course, on srTest1 this is not a table; it's just a synonym referencing the same-named table on srTest2. But that's the trick: you use the same query syntax as if you were executing it on srTest2, where the real db object resides.
There are disadvantages to this approach:
At the beginning, there must not be a database on the local server with the same name as the remote one, because you're about to create a "fake" database with that name to mirror the names of the remote db objects.
You're creating one database that is almost empty, thus adding to the clutter of databases residing on your local SQL server. This might provoke reluctance from your database admin if they prefer having as few databases as possible.
If you're developing your T-SQL scripts in SQL Server Management Studio, for example, using synonyms cuts you off from the convenience of IntelliSense.
The advantages outweigh the above-mentioned disadvantages, though:
Your scripts work in any environment (DEV, TEST, PROD) without the need to change any part of the source code.
If the other database you're querying data from resides on the same SQL Server instance as your script, you also use the three-part name convention, and SQL Server evaluates the query as local in the execution plan, which is OK. (This is what the original question of this post was trying to solve.)
If the other database you're querying data from resides on another SQL Server instance, you still use the "local" query syntax (via the synonym), which only at runtime resolves to a remote execution plan. That is also fine, because the db object actually is remote.
To summarize
The query executes as local if the referenced object is local, and as remote if the referenced object is remote, but the T-SQL script is always the same. You don't have to change a letter in it.
We're running SQL Server 2012 / .Net Framework 4.5.1
We have an application that does the following:
Extract all table data from a source database using an instance of .Net's SqlBulkCopy.
Delete all data in a target database using regular SQL statements.
Deploy the data from the source database to the target database using an instance of .Net's SqlBulkCopy.
The third step is successful when the SQL connection uses my Active Directory account, but fails with the following error message when using a SQL Server account created for this purpose: Cannot find the object "[SchemaName].[TableName]" because it does not exist or you do not have permissions.
Interestingly, the process runs through about a dozen tables before hitting one that causes this error. Manual verification proves that a) The table exists on the target, b) The problem user can select from the table, and c) the problem user can manually insert into the table with the standard INSERT INTO [SchemaName].[TableName] ([Columns]) VALUES ([Values]) format. BCP also works for that user, but using SqlBulkCopy from a .Net application fails for the same user.
Our DBA (a pretty seasoned guy, so far as I can tell) says that the database permissions on the target database are IDENTICAL between the two users, but reality would seem to suggest this is not the case.
Googling the problem shows that the user should have the db_owner or db_ddladmin roles. The user actually belongs to both.
Anyway, solving the local problem is of secondary concern, since I can get done what I need done with my AD account. What I'd really like to know is whether there is a baked-in way to compare the differences in permissions between two users. If not, can this be done with a T-SQL query of some kind?
Thanks, guys and gals!
Here's my permissions script that I use. It's generally the approach that everyone uses, unless they have a schema compare product via Visual Studio, Red Gate, etc. http://www.csvreader.com/posts/permissions_list.php
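In case the link ever goes stale, a minimal sketch of the same idea in plain T-SQL ('UserA' and 'UserB' are placeholders; run it in the target database):
-- List every explicit permission granted or denied to the two users, side by side
SELECT pr.name AS grantee,
    pe.class_desc,
    pe.permission_name,
    pe.state_desc,
    OBJECT_SCHEMA_NAME(pe.major_id) AS schema_name,
    OBJECT_NAME(pe.major_id) AS object_name
FROM sys.database_permissions pe
JOIN sys.database_principals pr ON pe.grantee_principal_id = pr.principal_id
WHERE pr.name IN ('UserA', 'UserB')
ORDER BY pe.permission_name, pr.name;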
Are you specifying the schema on the destination table with SqlBulkCopy? Is it possible that you're running into a user-owned schema?
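If you want to rule that out, a quick sketch (run in the target database):
-- Who owns each schema, and what default schema does each user have?
SELECT s.name AS schema_name, p.name AS schema_owner
FROM sys.schemas s
JOIN sys.database_principals p ON s.principal_id = p.principal_id;
SELECT name, default_schema_name
FROM sys.database_principals
WHERE type IN ('S', 'U');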
It's also been my experience that SqlBulkCopy only requires SELECT and INSERT on the destination table. BCP requires the escalated permissions that you described, which is another benefit of SqlBulkCopy.
I have an Access 2003 database that holds all of my business data. This Access database gets updated every few hours during the day.
We're currently writing a website that will need to use the data from the Access database. This website (for the time being) will have only read-only capabilities, meaning there will only need to be a one-way transfer of data (Access -> SQL).
I imagine there's a way to perform this data migration from Access to SQL Server programmatically. Does anyone have any links to something I can read about?
If this practice sounds odd and you'd like to suggest another way to do this (or a situation where data can go both ways: Access -> SQL, SQL -> Access), that's perfectly fine.
The company is going to continue using Access 2003 for their business functionality; there's no way around that. But I'd like to build the (read-only) website on top of SQL Server.
The strategy you outlined can be very challenging. You could use INSERT queries to copy new Access rows to SQL Server, as described in another answer.
However, if you have changes to existing Access rows, and you also want those changes propagated to SQL Server, it won't be so simple. And it will be more complicated still if you want deleted Access rows deleted from SQL Server, too.
It seems more reasonable to me to use a different approach. Migrate the data to SQL Server once. Then replace the tables in your Access database with ODBC links to the SQL Server tables. Thereafter, changes to the data from within your Access application will not require a separate synchronization step ... they will already be in SQL Server. And you won't need to write any code to synchronize them.
If your concern is that the connections between the web server and SQL Server be read-only, just set them up that way. You can still independently allow read-write permissions for your Access application.
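Setting that up is a one-off task; a hedged sketch (login, database and password are placeholders, and sp_addrolemember keeps it compatible with older versions):
CREATE LOGIN WebSiteReader WITH PASSWORD = 'UseAStrongPasswordHere';
GO
USE YourDatabase;
GO
CREATE USER WebSiteReader FOR LOGIN WebSiteReader;
-- read-only: member of db_datareader and nothing else
EXEC sp_addrolemember 'db_datareader', 'WebSiteReader';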
To do the initial data migration and set up the SQL Server database automatically, I would use the SQL Server Migration Assistant. The only thing I can think of that you should definitely change would be to turn off the Identity property on any columns that have it, for reasons explained below (MS Access calls Identity AutoNumber). Once you have your tables loaded, you can set up a DSN-less connection to the database (and tables) you just created.
I haven't used the method just linked, but I believe it allows you to use SQL Server authentication to connect to the db. The benefit of using this method is that you can easily change which SQL Server instance and/or database you are connecting to for development and testing.
There might be a better, automated way, but you can create several INSERT queries doing left joins from the primary key of the Access table to the SQL Server table, with a WHERE clause specifying that the SQL Server primary key must be null. This is why you need to turn off the Identity property in the SQL Server tables, so that you can insert the new data.
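A sketch of one such append query (table and column names are made up; Contact_Access is the local Access table and dbo_Contact_SQL is the ODBC-linked SQL Server table):
INSERT INTO dbo_Contact_SQL (ContactID, FirstName, LastName)
SELECT a.ContactID, a.FirstName, a.LastName
FROM Contact_Access AS a
LEFT JOIN dbo_Contact_SQL AS s ON a.ContactID = s.ContactID
WHERE s.ContactID IS NULL;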
Finally, put the name of each query in one function, then run the function periodically.
I have used Microsoft's free SQL Server Migration Assistant (SSMA) to migrate Access to SQL Server. The tool is very simple to use. The only problem I have encountered with it was overly wide data types after migration; what I mean by this is that a small string would get converted to NVARCHAR(MAX) in some instances. Otherwise, the tool is very handy and can be reused after setting up a 'profile'.