Which database implementations allow sandboxing users in separate databases?

Can anyone tell me if there are RDBMSs that allow me to create a separate database for every user so that there is full separation of users' data?
Are there any?
I know I can add a UID to every table, but this solution has its own problems (for example, per-user database schema changes are impossible).

Don't MySQL, PostgreSQL, Oracle and so on allow you to do that? There are GRANT statements to control ACLs.
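For instance, a rough sketch of this in MySQL syntax (the user, password and database names here are made up):

CREATE DATABASE alice_db;
CREATE USER 'alice'@'%' IDENTIFIED BY 'change_me';
GRANT ALL PRIVILEGES ON alice_db.* TO 'alice'@'%';
-- alice now has full control of her own database and no access to anyone else's.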

I would imagine most (all?) databases allow you to create a user which you could then grant database-level access to? SQL Server certainly does.

Another simple solution, if you don't need the databases to be massive or scalable (say, for teaching SQL to students or having many testers work against their own database to isolate problems), is SQLite. That way the whole database is a single file per user, and each user cannot possibly screw up or interfere with other users.
They can even mail you the databases, or install them anywhere, say at home and at work with no internet required.

MS SQL Server 2005 is one that can be used for multiple users. A separate instance can be created; if you have one, set up the privileges and use one user per instance.

Oracle lets you create a separate schema (a set of tables, indexes, functions, etc.) for individual users. This is good if they should have separate tables. Creating a new user can be a very expensive operation, as you would be making new tables, and updating is a nightmare as well, since you need to update the model for each user.
If you want everyone to have the same set of tables, but only able to view their own records, then you could use the Fine-Grained Access Control or Virtual Private Database features to do this.
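A rough sketch of the schema-per-user approach in Oracle (the user name, password and tablespace below are illustrative):

CREATE USER app_user1 IDENTIFIED BY change_me
  DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, CREATE TABLE TO app_user1;
-- Tables created by app_user1 live in its own schema and are invisible to
-- other users unless access is explicitly granted.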

Related

Is there a way to create a constraint that prevents updates in a SQL Server 2012 database table?

I am creating an application that will track hours for employees. Ideally, HR has asked that certain tables not be modified once data is committed. This is done easily enough from the front-end and stored procedures. However, it would be great to be able to prevent it from the server itself through constraints, so that folks who have access to the back-end data can't change any values in the selected tables (unless they are sneaky enough to know how to disable the constraints).
If you trust your SQL Server admins then it's possible. Have your admin create users that don't have data-writer permissions for those tables or that schema.
That way, the application would write the data into the database, and users who have access to those tables would only be able to read the data.
If you don't want admins to have the ability to modify data, that's not possible. There is no way to prevent it, but there is a way to detect it if it happens. Check out this article for details on how this is done in a third-party application and see if it helps.
Use server-side security roles to give only the HR group data-write privileges.
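One way to approximate this with explicit permissions is a role that can insert and read but is denied updates and deletes (the table and role names below are made up, and a db_owner or sysadmin can still bypass it):

CREATE ROLE TimesheetEntry;
GRANT SELECT, INSERT ON dbo.EmployeeHours TO TimesheetEntry;
DENY UPDATE, DELETE ON dbo.EmployeeHours TO TimesheetEntry;
-- Members of this role can add and read rows but cannot change or remove them.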

Multiple companies in same database

I'm working on a system in which every "company" has its own "users" and its own "bills". Which scenario is better for performance and management: handling all companies in the same database and linking everything to an idempresa, or a separate database for each client?
This is called multi-tenancy architecture, and each customer is a tenant. There are various strategies to deal with it, and each one brings potential problems.
Having a separate database for each tenant provides data separation and does not require you to add a column identifying the tenant to your tables and queries, but it has the downside of keeping multiple databases up to date.
Having a column in each table of a single database to identify your tenants is also a good strategy (see the sketch below), but it brings problems when scaling and when managing different features for different customers, for example.
You need to study all the available strategies and decide which one is best based on your requirements and pain points.
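A minimal sketch of the single-database option with an idempresa column (PostgreSQL syntax; the table definitions are hypothetical):

CREATE TABLE empresas (
    id    serial PRIMARY KEY,
    name  text NOT NULL
);

CREATE TABLE bills (
    id         serial PRIMARY KEY,
    idempresa  integer NOT NULL REFERENCES empresas(id),
    amount     numeric(12,2) NOT NULL
);

-- Every query has to filter on the company identifier:
SELECT * FROM bills WHERE idempresa = 42;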
Putting each tenant's data in a separate database is a straightforward and less painful option at first, but in the long run, when your product gets wildly successful, maintaining all those databases will become a nightmare.
On the other hand, keeping all the tenants' data in a single database could also make your application less scalable and less performant. The better approach could be a combination of both; the choice between the two depends entirely on the type, usage and size of your customers.
In certain cases, you may need to provision a separate database for a particular module or feature of your application may be for security or to isolate the specific data alone. I have written an article on these lines; kindly have a look at http://blog.techcello.com/2012/07/database-sharding-scaling-data-in-a-multi-tenant-environment/
I think the scaling problem of multi-tenancy in a single database can be overcome by proper planning up front. Plan to make it easy to migrate a tenant and their data to another database any time they become big enough to justify it.
If you can automate this migration, based on tenant ID, in each table, then it should be easy and safe. I'd just make sure I tested it often as development of new features goes on.
You can mitigate the risks of multi-tenant on one database. You can't really do much when there are multiple databases. You can only be diligent and disciplined to make sure all the databases stay in sync.
Good luck!!!
This is an old thread, but it's worth mentioning this for others with this question who may come across this post in the future.
I've had great success on projects in the past by using PostgreSQL and putting the global tables in the "public" schema (like users, groups, etc.) and the same set of tables for each tenant in their own separate schemas.
For example:
For every tenant that's added to the system, a new schema is created with a standard set of tables for the application:
CREATE SCHEMA tenant1;
CREATE TABLE tenant1.products (...);
CREATE TABLE tenant1.orders (...);
etc.
Each tenant's schema would have its own isolated section within the database with the same set of tables that every other tenant has but filled with their own data.
In the default "public" schema you'd have global "users" and "tenants" tables (along with tables for things like groups and access control lists). Every user belongs only to a single tenant. Upon login, the tenant for that user is looked up and from that point forward any time you connect to the database you set it to use that tenant's schema:
SET search_path TO tenant1, public;
Once the schema search_path is set, all your SQL queries can be written as if you're working with a single database with tables named "products", "orders", and so forth (along with the tables in the "public" schema). So you can just use something like "SELECT * FROM products" and it would get the products belonging to this user's tenant.
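To make this concrete, a hypothetical provisioning script for a new tenant might look like the following (the column definitions are illustrative only):

CREATE SCHEMA tenant2;

CREATE TABLE tenant2.products (
    id     serial PRIMARY KEY,
    name   text NOT NULL,
    price  numeric(10,2)
);

CREATE TABLE tenant2.orders (
    id          serial PRIMARY KEY,
    product_id  integer NOT NULL REFERENCES tenant2.products(id),
    quantity    integer NOT NULL
);

-- After the user logs in and their tenant is resolved:
SET search_path TO tenant2, public;
SELECT * FROM products;   -- resolves to tenant2.products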

Is it safe to keep all databases on one SQL server?

I'm creating a Multi-Tenant application that uses separate databases for each 'client'.
Is it safe to keep all the clients databases on one SQL server? Assuming I give each db its own user account?
Thanks
There was an excellent blog post by Brent Ozar last week on this exact subject.
How To Design Multi-Client Databases
Yes, it's basically a good idea to manage the tenants from one SQL Server (better in terms of resources, etc.), but you need to create one separate database for storing the connection strings of the tenant databases, roles, etc.
One SQL Server would be fine even if you decide later on to place all the data in the cloud. It's basically easy to manage. Also, if you want to update any procedure, you can do it easily for all the tenants.
I'd normally use one database per client on the same instance.
From a security perspective, you then have only to deal with logins and users: not permissions per schema or whatever in one big database.
Note that SQL Server will balance resources across all databases per instance fairly effectively: not all databases will be in use at the same time, so memory etc. will be allocated according to need. You lose this advantage with multiple instances.
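A rough sketch of the one-database-plus-one-login-per-client setup (the names and password are placeholders; ALTER ROLE ... ADD MEMBER needs SQL Server 2012 or later, older versions use sp_addrolemember):

CREATE DATABASE Client1;
CREATE LOGIN Client1Login WITH PASSWORD = 'ChangeThis!1';
GO
USE Client1;
GO
CREATE USER Client1User FOR LOGIN Client1Login;
ALTER ROLE db_datareader ADD MEMBER Client1User;
ALTER ROLE db_datawriter ADD MEMBER Client1User;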

Database flexibility and privacy, hidden structure, software compatibility and 'public' permissions

I'm one of those that recently decided to migrate from MySQL to PostgreSQL and with it a lot of old habits are being torn apart. However there is functionality from MySQL I would like to preserve in PostgreSQL.
So... topics:
Users should have the ability to create tables under a restricted namespace.
Tables of one user should not be visible to other users by default (data, structure, stored procedures and whatnot).
Optionally the user should be given the right to GRANT permissions to other users.
Default permission to new users is to have no permission (read nothing, write even less)
Maintain compatibility with applications that are not schema aware.
Point 1:
Under MySQL the solution in place was to allow the user to create databases matching the pattern 'username_%'. Under PostgreSQL I thought of having one database per user, so that they can create as many schemas as they want. However, there is the limitation of not being able to do joins across databases, only across schemas in the same database.
The possibility of having everything as PostgreSQL schemas under the same database is not completely discarded. But then it suffers from the next point...
Point 2:
After reading this question I was inclined to think that the only way to make data completely private was to use different databases. Still, I can't seem to figure out how to do it, and on the other hand it conflicts with the ability to do the joins mentioned in the previous point.
Point 3:
Is this even possible, or do you need the 'Create roles' privilege and to create a new role for the given table/schema?
Point 4:
Again, is this possible? From what I've read it feels like I'm fighting the default 'public' behavior, but I would still like the users to see nothing unless an admin gives them access to the information.
Point 5:
Some of the programs I use with MySQL, over whose actions on the database I have no direct control, are not schema aware. This means they simply ignore the schema layer. For this, PostgreSQL provides the 'public' schema as the default. However, this is still a bit awkward in some cases.
It also means that by default I need one independent database per software/tool, or else I need to trick the system by setting search_path to some predefined schema on a per-user (role) basis.
So those are the options/solutions I've found so far. I'm fine with having to use the search_path for point 5 and sacrificing joins between tables/schemas in different databases for the sake of privacy (points 1 and 2), but I would still like to know what is the best solution to the above problems and what are the best ways to put them in practice.
With that said, I'm all ears.
PS: Links to information on how to accomplish the mentioned above are also welcome.
The solution we ended up taking is the following:
Point 1:
One database per user. Users can create as many tables and schemas as they want. Joins across databases are not possible; the alternative is to retrieve subsets and combine the results on the client, which is obviously not the most efficient way.
Point 2:
This can be accomplished by defining specific ownership and permissions for a given database and removing the default "public" behavior. With this, only users who belong to the allowed groups, or the owner itself, can access the content.
Note: PostgreSQL uses multiple levels of permissions, which means that even if the database is owned by someone, individual tables can be owned by someone else.
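A minimal sketch of what removing the default "public" behavior can look like (the database, owner and group names are hypothetical):

REVOKE ALL ON DATABASE alice_db FROM PUBLIC;       -- nobody can connect by default
GRANT CONNECT ON DATABASE alice_db TO alice_group; -- only the allowed group can connect
REVOKE ALL ON SCHEMA public FROM PUBLIC;           -- run inside alice_db
GRANT ALL ON SCHEMA public TO alice;               -- the owner keeps full access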
Point 3:
Can be done with WITH GRANT OPTION.
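For example (hypothetical schema, table and role names), the privilege and the right to re-grant it can be given together:

GRANT SELECT ON alice_schema.reports TO bob WITH GRANT OPTION;
-- bob may now run: GRANT SELECT ON alice_schema.reports TO carol;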
Point 4:
There is no automated way to do this. The only way to ensure this is by restricting "public" access to all existing databases.
Point 5:
Using search_path on a per-user basis is the only way to do it, using multiple users to access different schemas (when needed). There is obviously the issue that a schema-unaware application cannot "reach" another schema if no user with the appropriate search_path exists.
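A sketch of that per-user setup, assuming a role legacy_tool and a schema tool_schema that are made up for illustration:

ALTER ROLE legacy_tool SET search_path = tool_schema, public;
-- Every new session opened by legacy_tool resolves unqualified table names
-- against tool_schema first, so a schema-unaware application keeps working.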

SQL Server Database Schemas

I use schemas in my databases, but other than the security benefits and the fact that my OCD is happy, I don't really know why it is good practice to use them. Besides the more granular security, are there other reasons for using schemas when building a database?
The primary purpose of schemas is indeed security. A secondary benefit is that they act like namespaces for your application's tables and objects, thus allowing conflict-free side-by-side deployment with other applications that may use the same names for their objects.
Schemas arose because the original SQL Server didn't have them, which meant that every single object in the database had to be owned by someone. If Jill from accounting left the company, you had to manually reassign all her objects to someone else, and so on. Schemas now own objects and users belong to schemas, which makes all the DB admins very happy people :).
Basically, when users leave you can remove their privileges by removing them from schemas and deleting the user. Adding privileges to a user is now as simple as adding the user to the schema.
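As a hypothetical illustration of the granular-security point (names are made up), permissions can be granted at the schema level rather than object by object:

CREATE SCHEMA Accounting;
GO
CREATE TABLE Accounting.Invoices (InvoiceId int PRIMARY KEY, Total money NOT NULL);
CREATE ROLE Accountants;
GRANT SELECT, INSERT ON SCHEMA::Accounting TO Accountants;
-- Any table later added to the Accounting schema is automatically covered by this grant.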
