I am using MSSQL 2014 and I am having trouble fulfilling a request from the DB admin regarding security.
Our solution contains several databases (on the same server) with stored procedures that join tables from two or more of those databases. We have one user that has the same rights on all databases.
Very simplified example: one database contains Articles while the other one contains Prices, and queries need to get Articles with their Prices. The queries use INNER JOINs between tables in those two databases.
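For illustration, such a cross-database join might look like this (the database, schema, and table names here are made up):
SELECT a.ArticleId, a.Name, p.Price
FROM ArticlesDb.dbo.Articles AS a
INNER JOIN PricesDb.dbo.Prices AS p
    ON p.ArticleId = a.ArticleId;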
After deployment of the solution in our client's test environment, I talked to the client's DB admin, and he asked me to modify the databases and users to match some of their standards. Those standards include different database names in different environments, as well as one separate user per database.
If I changed the database names and users in my current solution, the stored procedures would return errors due to invalid database names or invalid credentials. I asked the admin how they solve this problem in their environment, and his answer was that they create database links. After googling for a solution, I found out that Oracle has a CREATE DATABASE LINK option, but there is nothing similar in MSSQL (except maybe the linked servers feature, but that does not solve my problem).
What I am looking for is something similar to Oracle's CREATE DATABASE LINK in MSSQL 2014: some solution that would allow execution of the stored procedures without the need to change the queries, but rather by creating an 'alias' for the databases that need to be renamed.
Does anyone have an idea how I could do this?
I just searched MSDN:
CREATE SYNONYM (Transact-SQL)
I know link-only answers are frowned upon, but that is the answer, and MSDN is not going away.
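A minimal sketch of what that page describes (the database, schema, and table names here are hypothetical): a synonym in the current database points at an object in the differently named database, so code that references the synonym keeps working.
CREATE SYNONYM dbo.Prices FOR ClientPricesDb.dbo.Prices;
-- Queries can now refer to dbo.Prices regardless of the real database name:
SELECT TOP (10) * FROM dbo.Prices;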
Not sure if this can be an answer, but it is too long to fit in a comment...
There is no technical detail in your question, so I'm making a huge wild guess, but maybe the problem could be avoided by using multiple schemas instead of multiple databases.
That way all your objects would belong to the same database and there would be no need to make cross-database calls.
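As a rough sketch of that idea (the schema and table names are made up):
-- Both areas live in one database, separated by schema rather than by database.
CREATE SCHEMA Articles;
GO
CREATE SCHEMA Prices;
GO
CREATE TABLE Articles.Article (ArticleId INT PRIMARY KEY, Name NVARCHAR(100));
CREATE TABLE Prices.Price (ArticleId INT PRIMARY KEY, Amount DECIMAL(10, 2));
GO
-- The join now stays entirely inside one database:
SELECT a.ArticleId, a.Name, p.Amount
FROM Articles.Article AS a
INNER JOIN Prices.Price AS p ON p.ArticleId = a.ArticleId;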
Obviously this solution may not fit your situation, may not be applicable because of [your long list of reasons here], and would also require changes whose impact cannot be estimated by reading a post on SO... ^^
As a side note, the customer is not always right; if they have a policy that relies upon a specific Oracle feature, they cannot expect to enforce that very same policy on a different RDBMS that lacks the feature.
Actually they may expect to do so, but it is your job to 'educate' them (or at least try to!).
Anyway, if the policy is mandatory, then they will be happy to pay for the extra effort required to comply, won't they?
Starting with the obvious: I think you should explain to the client that although there is more than one physical database, it is in fact the very same database split apart and used by the very same product. Make it clear that this shouldn't be considered a breach of their standard.
If you cannot convince them, then you have to deal with two problems: a different user/login and a different database name.
Different user/login: You can solve this with a linked server. Linked servers aren't only for different servers or different instances; you can create a "local" linked server that will use a different login.
EXEC sp_addlinkedserver
    @server = 'yourAlias',                 -- the name you will use in four-part queries
    @srvproduct = '',                      -- must be empty when a provider is specified
    @provider = 'SQLNCLI',                 -- SQL Server Native Client OLE DB provider
    @datasrc = 'yourServer\yourInstance',  -- the (local) instance to connect to
    @catalog = 'yourDatabaseXYZ';          -- default database for the linked server
GO
You can easily modify the credentials used by the linked server you've just created:
In SQL Server Management Studio, open Object Explorer, expand Server Objects and right-click the linked server you just created;
On the Security page, specify the login you want to use.
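The same mapping can also be scripted; a minimal sketch, where the remote user name and password are placeholders:
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = 'yourAlias',        -- the linked server created above
    @useself = 'FALSE',               -- do not forward the caller's own credentials
    @locallogin = NULL,               -- apply this mapping to all local logins
    @rmtuser = 'userForDatabaseXYZ',
    @rmtpassword = 'yourStrongPassword';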
Different database name: The only solution I can come up with is to create a SYNONYM for every object in the database. This is something other people seem to have done before, and although it is not fun, it seems very doable. You can have a look here for a complete solution.
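One way to avoid writing those synonyms by hand is to generate the statements from the catalog views; a hedged sketch, where yourDatabaseXYZ and the object-type filter are assumptions:
SELECT 'CREATE SYNONYM ' + QUOTENAME(s.name) + '.' + QUOTENAME(o.name)
     + ' FOR yourDatabaseXYZ.' + QUOTENAME(s.name) + '.' + QUOTENAME(o.name) + ';'
FROM yourDatabaseXYZ.sys.objects AS o
JOIN yourDatabaseXYZ.sys.schemas AS s ON s.schema_id = o.schema_id
WHERE o.is_ms_shipped = 0
  AND o.type IN ('U', 'V', 'P', 'FN', 'IF', 'TF');  -- tables, views, procedures, functions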
Of course you can use a SYNONYM on a linked server resource. Something like:
CREATE SYNONYM yourTable FOR [yourAlias].yourDatabaseXYZ.schemaABC.yourTable
And then you will be able to do:
-- Transparent usage of a table from another database with different credentials
SELECT * FROM yourTable;
By the way, there is a feature request to Microsoft to Allow CREATE SYNONYM for database. Maybe you'd like to upvote it.
Final note:
To avoid problems like this, and as Blam also mentioned, you should consider not hardcoding the database name in your application.
I know that this is most probably not possible. But I will still detail my problem here, and if anyone has something similar to what I need, that would be great.
I have a file validation system which is hosted on a Windows server. This system holds a metadata table which is used by the front-end file validation application to validate various types of data. Right now it caters to a single application which is hosted on the same database as the metadata table.
The problem is that I want this system to be scalable so that it can validate files for a variety of applications, some of which live in different databases. Since some of the checks in my metadata table are based on PL/SQL, I cannot run these checks unless the file validation system and the application use the same database. Is there a way for a table to be shared across multiple databases at once? If not, what could be the possible workarounds?
If you want to access several tables in several databases as if they were one table, then you have to use a view which uses database links. You should first learn about database links, and then creating the view should not be a problem.
A database link allows you to access a table in another database just like a table in the same database. All you do is append @mydblink to the table name once you have created the db link. Example:
CREATE DATABASE LINK mydblink
  CONNECT TO remote_user IDENTIFIED BY remote_password  -- credentials on the other database
  USING 'name_of_other_db';                             -- TNS alias of the other database

SELECT * FROM sometable@mydblink;
It works well, and there is even a two-phase commit for updates spanning the remote and local database. There is more to know about it depending on your set-up, but you can read all about it in the Oracle documentation. I have worked extensively with database links in a larger project, and this is the right approach for what you want to do.
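To tie this back to the question, a minimal sketch of a view that exposes remote rows next to local ones over such a link (the table, column, and link names are hypothetical):
CREATE OR REPLACE VIEW all_metadata_checks AS
SELECT check_id, check_rule, 'LOCAL'  AS source FROM metadata_checks
UNION ALL
SELECT check_id, check_rule, 'REMOTE' AS source FROM metadata_checks@mydblink;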
http://docs.oracle.com/cd/E11882_01/server.112/e25494/ds_concepts002.htm#ADMIN12083
http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_5005.htm
Here is a link that I found after some googling that explains how to build a view accessing data in several databases, and also gives some information about the expected performance:
http://www.dba-oracle.com/t_sql_dblink_performance.htm
Oracle has DBLinks. Check out the docs.
http://docs.oracle.com/cd/B28359_01/server.111/b28310/ds_concepts002.htm
Is there any way to obscure the schema of a database on SQL Server?
If I have SQL Server Express installed on a client site, is there a way to obscure the schema and data so that someone else cannot come along and learn the schema in order to extract data out of it and into another product?
The best way to obscure your database schema is to not let it leave your servers.
Even if you encrypt the schema you still will have to provide the key somewhere, and if the client is determined to get it, they'll spend time and money to do so.
So you're better off either offering your product as a service or keeping your client loyal by doing a good job.
AFAIK, "no".
The best way to "lock down" your database is:
1) Install with appropriate roles and users (ideally, SQL roles and SQL users you create)
2) Explicitly restrict object permissions in SQL Server
3) Code your application to use SQL Server stored procedures (instead of raw T-SQL) as much as possible
4) Encrypt your stored procedures
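As a minimal sketch of item 4 (the procedure and table names are hypothetical), note that WITH ENCRYPTION only obfuscates the stored definition and is not strong protection:
CREATE PROCEDURE dbo.GetCustomerOrders
    @CustomerId INT
WITH ENCRYPTION   -- the procedure text is stored obfuscated instead of as plain text
AS
BEGIN
    SELECT OrderId, OrderDate, Total
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
END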
Here's a good link on "SQL Server Best Practices" that might be of interest. It discusses security issues and a (relatively) new feature, "User Schema Separation":
http://msdn.microsoft.com/en-us/library/dd283095%28v=sql.100%29.aspx
This is a tricky one and may not even be 100% possible. However, there are a few tricks to setting it up:
Install a new named instance of SQL Server with a custom SA account (both name and password). There is an installation method for SQL Server called "Unattended Installation" which allows you to specify all the installation parameters for SQL Server in an ini file and then run the install silently. Check out the documentation here: Unattended Installation of SQL Server 2008 R2
Create your database, tables, procedures, etc. with your magic SQL install script (use encrypted stored procs if you want, but they too are crackable)
Add/verify the schema permissions for the custom SA account and drop all schema permissions for all administrator roles. The goal here is that no role has any schema permissions on your database and only your custom SA user has permission (not assigned via a role, but directly to the user).
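As a hedged sketch of that last step (the user and schema names are hypothetical, and keep in mind that members of the sysadmin server role bypass permission checks entirely):
-- Only your own application user gets control of the schema...
GRANT CONTROL ON SCHEMA::dbo TO yourCustomSaUser;
-- ...while ordinary logins are kept from reading data or definitions.
DENY SELECT ON SCHEMA::dbo TO someOtherUser;
DENY VIEW DEFINITION ON SCHEMA::dbo TO someOtherUser;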
There are several commercial applications that I know of that don't even tell you they are installing an instance of MS SQL express. They too will create their own named instance with a named SA account. I can't say I like that as a customer (as SQL takes a hit on the CPU and I don't want "secret" instances running on my workstation). But so long as you disclose this to your customers upfront, they may understand.
Keep in mind a skilled DBA may have the knowledge to mess with system tables and whatnot to manually grant access to your database. These techniques really are just "obfuscation" and won't be 100% bulletproof.
As a side note: with the plethora of available third-party data layers and web service technologies, I think many companies are finding their database schema alone isn't so proprietary or valuable anymore. There was a time when the database schema alone could have represented hundreds of hours of coding. But today tools like Entity Framework, NHibernate, Linq-to-SQL, XPO, etc. all create your database schema for you based on your software class definitions and in-code attributes. So just seeing a DB table isn't really very valuable. Plus you might write a bunch of business logic, statistical analysis or other helper methods in your software that aren't in your database schema. In my opinion, this is where today's "value add" is found: in the business logic, analysis and reporting functionality of your software, not in the raw data tables.
This is also why another poster recommended obfuscating stored procedures, because these could be many times the work of the database schema itself if you have some nice analysis and reporting procedures written up. It's also what customers would most likely want to customize for their own reporting needs. You may be inclined to have a policy that custom reporting can only be done by your company (hey, even the big guys like SAP are sticky about who can modify what).
There is a way, it's convoluted and ugly but it works.
You have a master table that acts as a lookup table for your other tables. This master table would look sort of like this:
id, guid, entityname, parent_id
Then all of your table names and column names get renamed to GUIDs. After that you put an entry in the lookup table for each of them. When you want to select data, you have to do so by pulling the GUIDs out of the lookup table by their entity names, which then gives you the obscured table and column names.
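A hedged sketch of that lookup table and how a name gets resolved (all names here are hypothetical):
CREATE TABLE dbo.EntityLookup (
    id         INT IDENTITY(1, 1) PRIMARY KEY,
    guid       UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(),
    entityname SYSNAME NOT NULL,                          -- the meaningful name only you know
    parent_id  INT NULL REFERENCES dbo.EntityLookup(id)   -- column entries point at their table entry
);
-- The real table is named after its GUID, so its purpose is not obvious, e.g.
-- CREATE TABLE [dbo].[B7A6C1D2-3E4F-4A5B-8C9D-0E1F2A3B4C5D] ( ... );
-- To query it, first resolve the obscured name:
SELECT guid FROM dbo.EntityLookup WHERE entityname = 'Customers' AND parent_id IS NULL;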
There is a major software vendor out there that does something very similar to this, so it has been done before.
I bought Oracle 11g recently and I want all my developers to use it. Obviously I can't buy a different license for each. So is it possible for me to create one database for each of the developers? By inference I know it is possible.
However, I couldn't find out how to do it. I googled, and there was no definite guide for this particular case. Can you point me to the right resource?
Or could you list the steps to achieve this?
I would be ever grateful.
- Sheldon
When you create a user in Oracle, you're creating a schema. A schema is a collection of tables and related objects (views, functions, stored procedures, etc) specific to that schema. So each developer could have their own user/schema, and work independently of one another. Access to other users can be granted, and public synonyms can be created to ensure that YOUR_TABLE points to a YOUR_TABLE in a specific schema, without the need to specify that schema. But this can eat space...
If there is shared development, might be best to have a single schema so everyone is working on the same copy.
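A hedged sketch of creating one such user/schema (the user name, password, and tablespace are hypothetical):
CREATE USER dev_alice IDENTIFIED BY ChangeMe123
    DEFAULT TABLESPACE users
    QUOTA UNLIMITED ON users;

GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW,
      CREATE SEQUENCE, CREATE PROCEDURE TO dev_alice;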
Create one database and give each developer their own schema (username/password).
As long as all your database instances are on the same server you can build as many as you want without paying any more. Performance might become an issue with more instances depending on how heavily used they are.
You don't mention your platform.
On windows, here's how to use the Database Configuration Assistant (DBCA). I think it's pretty similar on *nix as well.
Each database so created has a different name. To access them it's simply a matter of using a tnsnames.ora file with different entries for each instance on the server.
You can buy Oracle personal edition for each developer and install it on their desktop/laptop. According to shop.oracle.com it's $460 per user. This way you can give everyone full access to Oracle and save a lot of trouble. Developers can learn Oracle more quickly and be more productive, and DBAs won't have to worry about them bringing down the server.
Or possibly you could even use it for free if your program is not in production yet. The Oracle Developer license lets you:
... use the Programs, subject to the restrictions stated in this Agreement, only for the purpose of developing, testing, prototyping, and demonstrating Your application and only as long as Your application has not been used for any data processing, business, commercial, or production purposes, and not for any other purpose.
I have been googling a lot and I couldn't find if this even exists or I'm asking for some magic =P
Ok, so here's the deal.
I need to have a way to create a "master-structured" database which will only contain the schemas, structures, tables, stored procedures, UDFs, etc., everything but real data, in SQL Server 2005 (if this is available in 2008 let me know, I could try to convince my client to pay for it =P)
Then I want to have several "children" of that master db which implement those schemas, tables, etc but each one has different data.
So when I need to create a new stored procedure or something like that, I just create it on the master database (and of course it's available on its children).
Actually I have several different databases with the same schema and different data. But the problem is maintaining consistency between them. Every time I create a script to create some SP or add some index or whatever, I have to execute it in every database, and sometimes I could miss one =P
So let's say you have a UNIVERSE (would be the master db) and the universe has SPACES (each one represented by a child db). So the application I'm working on needs to dynamically "clone" SPACES. To do that, we have to create a new database. Nowadays I'm creating a backup of the db being cloned, restoring it as a new one and truncate the tables.
I want to be able to create a new "child" of the "master" db, which will maintain the schemas and everything, but will start with empty data.
Hope it's clear... My English is not perfect, sorry about that =P
Thanks to all!
What you really need is to version-control your database schema.
See do-you-source-control-your-databases
If you use SQL Server, I would recommend dbGhost - not expensive and does a great job of:
synchronizing 2 databases
diff-ing 2 databases
creating a database from a set of scripts (I would recommend this version).
batch support, so that you can upgrade all your databases using a single batch
You can use this infrastructure for both:
rolling development versions to test, integration and production systems
rolling your 'updated' system to multiple production deployments (especially in a hosted environment)
I would write my changes as a SQL file and use OSQL or SQLCMD via a batch file, so that the same script is executed against all the databases without having to think about it.
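As a hedged illustration of that idea (the server name, database-name pattern, and script file are made up), the per-database command lines can even be generated from T-SQL and pasted into the batch file:
SELECT 'sqlcmd -S myServer\myInstance -E -d ' + QUOTENAME(name) + ' -i change_script.sql'
FROM sys.databases
WHERE name LIKE 'Space%';   -- all the "child" databases sharing the master schema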
As an alternative, I would use the Visual Studio Database Pro tools or the Red Gate SQL Compare tools to compare and propagate the changes.
There are kludges, but the mainstream way to handle this is still to use source code control (with all its other attendant benefits). And SQL Server is increasingly SCC-friendly.
Also, for many (most robust) sites it's a per-server issue as much as a per-database issue.
You can put things like SPs in master and call them from anywhere. As for other objects like tables, you can put them in model, and new databases will get them when they are created.
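A hedged sketch of the model-database behaviour (the table and database names are made up):
USE model;
GO
-- Every database created from now on starts with a copy of this table.
CREATE TABLE dbo.SchemaVersion (Version INT NOT NULL, AppliedAt DATETIME NOT NULL DEFAULT GETDATE());
GO
CREATE DATABASE NewSpaceDb;   -- NewSpaceDb already contains dbo.SchemaVersion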
However, there is nothing that will make new tables simply pop up in the child databases after they are added to the parent.
It would be possible to create something to look through the databases and script them from a template database, and there are also commercial tools which can help discover differences between databases. You could also have a DDL trigger in the "master" database which went out and did this when you created a new table.
If you kept a nice SPACES template, you could script it out (without data) and create the new database - so there would be no need to TRUNCATE. You can script it out from SQL or an external tool.
Little trivia here: the mssqlsystemresource database works as you describe: it is defined once and 'appears' in every database as the special sys schema. Unfortunately the special 'magic' needed to get this working is not available to user databases. You'll have to use deployment techniques to keep your schema in sync, that is, apply the changes to every database as the other answers already suggested.
In theory, you could put a trigger on your UNIVERSE.sysobjects table (assuming SQL Server), and then you could enumerate master.dbo.sysdatabases to find all the child databases. If you have a special table that indicates it's a child database, you can reference child.dbo.sysobjects to find it.
Make no mistake, it would be difficult to implement. But it's one way you could do it.
Can anyone tell me if there are RDBMSs that allow me to create a separate database for every user so that there is full separation of users' data?
Are there any?
I know I can add a UID column to every table, but this solution has its own problems (for example, per-user database schema changes are impossible).
Don't MySQL, PostgreSQL, Oracle and so on allow you to do that? There are GRANT statements to control ACLs.
I would imagine most (all?) databases allow you to create a user which you could then grant database-level access to. SQL Server certainly does.
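A hedged sketch for SQL Server (the login, password, and database names are made up):
CREATE LOGIN alice WITH PASSWORD = 'Str0ng!Passw0rd';
CREATE DATABASE alice_db;
GO
USE alice_db;
GO
CREATE USER alice FOR LOGIN alice;
EXEC sp_addrolemember 'db_owner', 'alice';   -- full rights, but only inside alice_db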
Another simple solution, if you don't need the databases to be massive or scalable (say, for teaching SQL to students, or for having many testers work against their own database to isolate problems), is SQLite. That way the whole database is a single file per user, and each user cannot possibly screw up or interfere with other users.
They can even mail you the databases, or install them anywhere, say at home and at work with no internet required.
MS SQL Server 2005 is one that can be used for multiple users. An instance can be created per user if you have any; set up the privileges and use one user per instance.
Oracle lets you create a separate schema (a set of tables, indexes, functions, etc.) for individual users. This is good if they should have separate, different tables. Creating a new user could be a very expensive operation, as you would be making new tables. Updating is a nightmare as well, since you need to update the model for each user.
If you want everyone to have the same set of tables, but only be able to view their own records, then you could use the Fine-Grained Access Control or Virtual Private Database features to do this.
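A hedged sketch of the Virtual Private Database approach (the schema, table, and column names are hypothetical, and this assumes you are connected as the APP schema owner): a policy function returns a predicate that Oracle silently appends to every query against the table.
CREATE OR REPLACE FUNCTION user_rows_only (
    p_schema IN VARCHAR2,
    p_table  IN VARCHAR2
) RETURN VARCHAR2
IS
BEGIN
    -- Each session only sees the rows it owns.
    RETURN 'owner_name = SYS_CONTEXT(''USERENV'', ''SESSION_USER'')';
END;
/

BEGIN
    DBMS_RLS.ADD_POLICY(
        object_schema   => 'APP',
        object_name     => 'ORDERS',
        policy_name     => 'orders_per_user',
        function_schema => 'APP',
        policy_function => 'USER_ROWS_ONLY'
    );
END;
/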