Database flexibility and privacy, hidden structure, software compatibility and 'public' permissions

I'm one of those who recently decided to migrate from MySQL to PostgreSQL, and with that move a lot of old habits are being torn apart. However, there is functionality from MySQL that I would like to preserve in PostgreSQL.
So... topics:
1. Users should have the ability to create tables under a restricted namespace.
2. Tables of one user should not be visible to other users by default (data, structure, stored procedures and whatnot).
3. Optionally, a user should be given the right to GRANT permissions to other users.
4. New users should have no permissions by default (read nothing, write even less).
5. Maintain compatibility with applications that are not schema aware.
Point 1:
Under MySQL the solution in place was to allow the user to create databases matching the pattern 'username_%'. Under PostgreSQL I thought of having one database per user so that they can create as many schemas as they want. However, there is the limitation of not being able to do joins across databases, only across schemas in the same database.
The possibility of having everything as PostgreSQL schemas within the same database is not completely discarded, but then it suffers from the next point...
Point 2:
After reading this question I was inclined to think that the only way to make data completely private is to use different databases. Still, I can't seem to figure out how to do it, and on the other hand it conflicts with the ability to do the joins mentioned in the previous point.
Point 3:
Is this even possible, or do you need the 'Create roles' (CREATEROLE) privilege and a new role for the given table/schema?
Point 4:
Again, is this possible? From what I've read it feels like I'm fighting the default 'public' behavior, but I would still like users to see nothing unless an admin explicitly gives them access to the information.
Point 5:
Some of the programs I use with MySQL, over which I have no direct control of the actions they perform on the database, are not schema aware. This means they simply ignore the schema layer. For this, PostgreSQL provides the 'public' schema as the default. However, this is still a bit awkward in some cases.
It also means that by default I need one independent database per software/tool, or else I need to trick the system by setting search_path to some predefined schema on a per-user (role) basis.
So those are the options/solutions I've found so far. I'm fine with having to use the search_path for point 5 and sacrificing joins between tables/schemas in different databases for the sake of privacy (points 1 and 2), but I would still like to know what is the best solution to the above problems and what are the best ways to put them in practice.
With that said, I'm all ears.
PS: Links to information on how to accomplish the mentioned above are also welcome.

The solution we ended up taking is the following:
Point 1:
One database per user. Each user can create as many tables and schemas as they want. Joins across databases are not possible; the alternative is to retrieve subsets and merge the results on the client, obviously not the most efficient way.
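A minimal sketch of that per-user layout, with made-up role and database names:
-- one login role and one database per user; the user owns the database
CREATE ROLE alice LOGIN PASSWORD 'change_me';
CREATE DATABASE alice_db OWNER alice;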
Point 2:
This can be accomplished by defining specific ownership and permissions for a given database and removing the default "public" behavior. With this, only users that belong to the allowed groups, or the owner, can access the content.
Note: PostgreSQL uses multiple levels of permissions, which means that even if the database is owned by someone, individual tables can be owned by someone else.
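For Point 2, a hedged sketch of locking a database down this way (the database, user and group names are placeholders): revoke the default PUBLIC privileges, then grant access back only to the intended roles.
REVOKE CONNECT ON DATABASE alice_db FROM PUBLIC;         -- nobody can connect by default
GRANT CONNECT ON DATABASE alice_db TO alice, admin_group; -- only the owner and an admin group
-- run inside alice_db to close the default schema as well:
REVOKE ALL ON SCHEMA public FROM PUBLIC;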
Point 3:
Can be done with WITH GRANT OPTION.
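For example (the table and role names here are hypothetical), the grantee can then pass the privilege on:
GRANT SELECT ON TABLE reports TO bob WITH GRANT OPTION;
-- bob may now run: GRANT SELECT ON TABLE reports TO carol;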
Point 4:
There is no automated way to do this. The only way to ensure this is by restricting "public" access to all existing databases.
Point 5:
Setting search_path on a per-user basis is the only way to do it, using multiple users to access different schemas when needed. There is obviously the issue that a schema-unaware application cannot "reach" another schema if no user with the appropriate search_path exists.
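A small sketch of the per-role search_path trick, with hypothetical names; the schema-unaware application simply logs in as this role and sees its tables unqualified:
ALTER ROLE legacy_app SET search_path = legacy_schema, public;
-- on the next connection as legacy_app, "SELECT * FROM customers"
-- resolves to legacy_schema.customers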

Related

Multiple companies in same database

I'm working on a system in which every "company" has its own "users" and its own "bills". Which scenario is better in terms of performance and management: handling all companies in the same database and linking everything to an idempresa (company ID), or one database for each client?
This is called a multi-tenancy architecture, and each customer is a tenant. There are various strategies to deal with it, and each one brings its own potential problems.
Having a separate database for each tenant provides data separation and does not require you to add a column to identify the tenant in your tables and queries, but it has the downside of having to keep multiple databases up to date.
Having a column in each table of a single database to identify your tenants is also a good strategy (see the sketch after this answer), but it brings problems when scaling and when managing different features for different customers, for example.
You need to study all available strategies and decide which one is best based on your requirements and pain points.
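A minimal sketch of the shared-database option mentioned above, keeping the question's idempresa column (the table and other column names are made up):
CREATE TABLE bills (
  id        BIGINT PRIMARY KEY,
  idempresa INTEGER NOT NULL REFERENCES companies(id),  -- tenant discriminator; companies is a hypothetical table
  amount    NUMERIC(12,2) NOT NULL
);
-- every query must then filter by tenant:
SELECT * FROM bills WHERE idempresa = 42;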
Putting each tenant's data in a separate database is a straightforward and less painful option, but in the long run, when your product gets wildly successful, maintaining all those databases will become a nightmare.
On the other hand, keeping all the tenants' data in a single database could make your application less scalable and less performant. A better approach may be a combination of both; the choice between the two depends entirely on the type, usage and size of your customers.
In certain cases you may need to provision a separate database for a particular module or feature of your application, perhaps for security or to isolate that specific data. I have written an article along these lines; have a look at http://blog.techcello.com/2012/07/database-sharding-scaling-data-in-a-multi-tenant-environment/
I think the scaling problem of multi-tenancy in a single database can be overcome by proper planning up front. Plan to make it easy to migrate a tenant and their data to another database any time they become big enough to justify it.
If you can automate this migration, based on the tenant ID in each table, then it should be easy and safe. I'd just make sure I tested it often as development of new features goes on.
You can mitigate the risks of multi-tenant on one database. You can't really do much when there are multiple databases. You can only be diligent and disciplined to make sure all the databases stay in sync.
Good luck!!!
This is an old thread, but it's worth mentioning this for others with this question who may come across this post in the future.
I've had great success on projects in the past by using PostgreSQL and putting the global tables in the "public" schema (like users, groups, etc.) and the same set of tables for each tenant in their own separate schemas.
For example:
For every tenant that's added to the system, a new schema is created with a standard set of tables for the application:
CREATE SCHEMA tenant1;
CREATE TABLE tenant1.products (...);
CREATE TABLE tenant1.orders (...);
etc.
Each tenant's schema would have its own isolated section within the database with the same set of tables that every other tenant has but filled with their own data.
In the default "public" schema you'd have global "users" and "tenants" tables (along with tables for things like groups and access control lists). Every user belongs only to a single tenant. Upon login, the tenant for that user is looked up and from that point forward any time you connect to the database you set it to use that tenant's schema:
SET search_path TO tenant1, public;
Once the schema search_path is set, all your SQL queries can be written as if you're working with a single database with tables named "products", "orders", and so forth (along with the tables in the "public" schema). So you can just use something like "SELECT * FROM products" and it would get the products belonging to this user's tenant.
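As a hedged illustration of what that buys you (the column names below are made up), the same query text works for every tenant once the search_path is set, and you can still join against the global tables in "public":
SET search_path TO tenant1, public;
-- "orders" resolves to tenant1.orders; "users" lives in the public schema
SELECT o.id, o.total, u.email
FROM orders o
JOIN public.users u ON u.id = o.user_id
WHERE o.created_at > now() - interval '30 days';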

Database schema clarification

Unfortunately, the term "schema" has come to take on different definitions for different databases. We're using SQL Server 2008 R2, and with that in mind, I have a better understanding thanks to some other questions here with people asking similar questions. However, before I begin making the database, I want to be sure I have this right for my specific scenario.
Basically it's a database for various departments of the company. For example, Administration will manage employees with a bunch of tables related to employee management. Marketing will have a lot of marketing related tables. And tech support will have a lot of tech support related tables. These "groups" will probably never interact with one another, but they're all part of the same project, so I'm putting them all in one database, rather than three separate databases.
Am I correct in understanding that this means I would want three different schemas? So that for Administration, for example, the tables would be named:
Administration.Employees
Administration.VacationDays
Administration.EmployeeAddresses
etc.
and then for tech support, for example:
Techsupport.Clients
Techsupport.OpenIssues
Techsupport.ClosedIssues
etc.
And then am I correct in understanding that the PURPOSE of this, instead of just having every table in the dbo schema, is A) organization, and B) permissions (users with Techsupport schema access shouldn't be able to access the Administration schema, for instance)? The idea I've formed is that a schema, in the SQL Server sense, is just like a virtual folder that groups related tables together.
I think this is right, after all the similar questions that I've read, but I just really want to be sure I'm on the right path before I get too far in and realize I'm doing it completely wrong.
Is throwing everything into the dbo schema and calling it a day discouraged / not intended? Should you use schemas even for small databases that don't necessarily need multiple schemas?
Thanks.
Schemas support two primary purposes:
security container. Permissions can be granted on schemas, and such permissions apply to all objects in the schema. E.g. GRANT SELECT ON SCHEMA::Administration TO [foo\bar]; grants the SELECT permission on every table in the schema, including tables added later.
namespace. You can deploy your application in the schema [CptSupermarkt] and know that your app has a very low probability of a name conflict with other applications.
The prevalent use is the first one, because most apps are not concerned with side-by-side deployment alongside other applications and usually assume ownership of an entire database (if not an entire instance). However, there are types of applications (e.g. audit tools and monitoring apps) that use the namespace aspect of schemas (or, at least, most should use it...).
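Applied to the Administration/Techsupport split from the question, a hedged sketch of the security-container use might look like this (the role name is made up; in T-SQL each CREATE SCHEMA must start its own batch):
CREATE SCHEMA Administration;
GO
CREATE SCHEMA Techsupport;
GO
CREATE ROLE TechsupportUsers;
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::Techsupport TO TechsupportUsers;
DENY SELECT ON SCHEMA::Administration TO TechsupportUsers;  -- keeps tech support out of HR data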

SQL Server 2008 Schema Naming Conventions

I'm creating a brand new database with no legacy constraints, so I'm curious as to what the schema best practices are.
The database will be called "SecurityData". It stores information about bonds.
The schema I have already identified are:
import - Views and procs that are really linked server calls to other databases
export - Views and procs meant to be used by other databases
staging - Tables used for bulk inserts so we can verify and scrub the data.
??? - The real tables containing useful data
history - Change logs for the real tables
Questions:
Am I going schema crazy or does this make sense?
Should I use dbo for my "real tables" or should I avoid that schema as it tends to become a garbage dump?
Schemas serve a dual purpose:
security containers. Grants/denies/revokes on a schema apply to all objects in the schema. Separating related objects into a schema allows for easy maintenance and control of access.
namespaces. Qualifying object names with schemas reduces the probability of conflicts with names used by other applications, and even by other modules within your own application.
So my question to you is: why do you want to use schemas in the first place? I'm not saying you shouldn't, but I want to understand which advantage of schemas appeals to you most. If you know the answer to that, then you'll know how many schemas you need and what those schemas are. Of course, the answer can be a mixture of the two reasons I gave at the start, and that is OK. In that case you may find that what makes sense from a namespace point of view is a disaster from a security point of view, or vice versa.
I myself have used separate schemas just as you plan to, solely for the programming namespace benefits. During development it helped me to see, just from the name of an object, where it belongs logically in the app.
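For what it's worth, a sketch of the layout from the question in DDL form (the "core" name for the real tables is just a placeholder, and the reader role is hypothetical):
CREATE SCHEMA import;
GO
CREATE SCHEMA export;
GO
CREATE SCHEMA staging;
GO
CREATE SCHEMA history;
GO
CREATE SCHEMA core;   -- the "real" tables, if you decide not to use dbo for them
GO
CREATE ROLE SecurityDataReaders;
GRANT SELECT ON SCHEMA::export TO SecurityDataReaders;  -- other databases only ever see export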

SQL Server Database Schemas

I use schemas in my databases, but other than the security benefits and the fact that my OCD is happy, I don't really know why it is good practice to use them. Besides the more granular security, are there other reasons for using schemas when building a database?
The primary purpose of schemas is indeed security. A secondary benefit is that they act like namespaces for your application's tables and objects, allowing conflict-free side-by-side deployment with other applications that may use the same names for their objects.
Schemas arose because the original SQL Server didn't have them, which meant that every single object in the database had to be owned by a user. If Jill from accounting left the company, you had to manually reassign all her objects to someone else, and so on. Now schemas own objects and users are granted access to schemas, which makes all the DB admins very happy people :).
Basically, when users leave you remove their privileges by revoking their access to schemas and deleting the user. Granting privileges to a user is now as simple as giving the user access to the schema.
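A hedged sketch of that pattern in T-SQL (the names are made up; ALTER ROLE ... ADD MEMBER is SQL Server 2012+ syntax, older versions use sp_addrolemember):
CREATE ROLE AccountingRole;
GO
CREATE SCHEMA Accounting AUTHORIZATION AccountingRole;  -- the role, not a person, owns the schema
GO
CREATE USER Jill FOR LOGIN Jill;                        -- assumes a login named Jill already exists
ALTER ROLE AccountingRole ADD MEMBER Jill;
-- when Jill leaves, nothing needs to be reassigned:
ALTER ROLE AccountingRole DROP MEMBER Jill;
DROP USER Jill;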

Which database implementations allow sandboxing users in separate databases?

Can anyone tell me if there are RDBMSs that allow me to create a separate database for every user so that there is full separation of users' data?
Are there any?
I know I can add a UID column to every table, but this solution has its own problems (for example, per-user database schema changes are impossible).
Don't MySQL, PostgreSQL, Oracle and so on allow you to do that? There are GRANT statements to control ACLs.
I would imagine most (all?) databases allow you to create a user which you could then grant database-level access to. SQL Server certainly does.
Another simple solution, if you don't need the databases to be massive or scalable (say, for teaching SQL to students or having many testers work against their own database to isolate problems), is SQLite. That way the whole database is a single file per user, and each user cannot possibly screw up or interfere with other users.
They can even mail you the databases or install them anywhere, say at home and at work, with no internet required.
MS SQL Server 2005 is one that can be used for multiple users. A separate instance can be created and, if you have the privileges, you can use one user per instance.
Oracle lets you create a separate schema (a set of tables, indexes, functions, etc.) for individual users. This is good if they should each have their own distinct tables. Creating a new user can be an expensive operation, since you would be creating new tables, and updating is a nightmare as well because you need to update the model for each user.
If you want everyone to have the same set of tables but only be able to view their own records, you could use the Fine-Grained Access Control or Virtual Private Database features to do this.
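A rough sketch of the Virtual Private Database approach (the owner_id column, the APP schema and the object names are all assumptions): a policy function returns a predicate that Oracle appends to every query against the table.
-- policy function: only rows owned by the current session user are visible
CREATE OR REPLACE FUNCTION tenant_predicate (
  p_schema IN VARCHAR2,
  p_table  IN VARCHAR2
) RETURN VARCHAR2 IS
BEGIN
  RETURN 'owner_id = SYS_CONTEXT(''USERENV'', ''SESSION_USER'')';
END;
/

BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => 'APP',
    object_name     => 'ORDERS',
    policy_name     => 'orders_per_user',
    function_schema => 'APP',
    policy_function => 'tenant_predicate',
    statement_types => 'SELECT, INSERT, UPDATE, DELETE');
END;
/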
