How to use an application database without touching its schema

If another application uses data from an existing database and needs to store some additional data, and you don't want to change the schema of that existing database, how do you do it?
Background to my question: we use an IBM product (Connections) to store user profiles. But we have lots of custom requirements (lots of custom fields and logic), so we currently create a few extra tables, views and functions in the Connections backend database to store the custom data. However, since it is IBM's internal database and we are not supposed to touch it, all our custom tables, views and functions are gone when we upgrade Connections.
So we have decided to move our custom objects out. But the problem is that we still need to join with the data from Connections (or, if not a database join, some other way to integrate the data before presenting it to users).
If we create a federated table in our own database, we can create tables and views like we used to. But would it have performance issues? And we would still be heavily dependent on IBM's schema and have to assume they don't change it. Is it a good approach?
What are the other options we could consider?

If we create a federated table in our own database, we can create tables and views like we used to. But would it have performance issues?
Probably. Your application code would have to do joins between the IBM database tables and your database tables.
I'm assuming that Connections uses DB2. If you bring up your own DB2 database, I think you can do SQL joins between two separate DB2 databases.
Either way, this code should reside in a separate data access package made up of data access objects. The rest of your applications would use the data access package.
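For example, here is a minimal sketch of what the federated approach could look like in DB2 (the server options, credentials, and the Connections table and column names below are assumptions, and the instance has to be created with federation enabled):
CREATE WRAPPER DRDA;
CREATE SERVER conn_srv TYPE DB2/UDB VERSION '10.5' WRAPPER DRDA
  AUTHORIZATION "lcadmin" PASSWORD "secret"
  OPTIONS (DBNAME 'PEOPLEDB');                                -- hypothetical Connections profiles database
CREATE USER MAPPING FOR USER SERVER conn_srv
  OPTIONS (REMOTE_AUTHID 'lcadmin', REMOTE_PASSWORD 'secret');
CREATE NICKNAME lc.employee FOR conn_srv.EMPINST.EMPLOYEE;    -- assumed remote schema and table
-- Join IBM's data (through the nickname) with your own custom table:
SELECT e.PROF_DISPLAY_NAME, x.BADGE_NUMBER
FROM lc.employee e
JOIN custom.profile_ext x ON x.PROF_KEY = e.PROF_KEY;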
And we would still be heavily dependent on IBM's schema and have to assume they don't change it.
IBM will change their schema, and you have to plan on making corresponding changes to your database and/or application.
What are the other options we could consider?
You could copy the IBM data from their database to your database. You would still have to change the copy process whenever IBM's table definitions change.
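If you go the copy route, the refresh can be as simple as a scheduled MERGE; a rough sketch, reusing the hypothetical nickname from the example above:
MERGE INTO custom.employee_copy AS c
USING (SELECT PROF_KEY, PROF_DISPLAY_NAME FROM lc.employee) AS e
ON (c.PROF_KEY = e.PROF_KEY)
WHEN MATCHED THEN
  UPDATE SET c.PROF_DISPLAY_NAME = e.PROF_DISPLAY_NAME
WHEN NOT MATCHED THEN
  INSERT (PROF_KEY, PROF_DISPLAY_NAME) VALUES (e.PROF_KEY, e.PROF_DISPLAY_NAME);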

Related

Can a table be shared across multiple databases

I know that this is most probably not possible. But I will still detail my problem here, and if anyone has something similar to what I need, that would be great.
I have a file validation system which is hosted on a Windows server. This system holds a metadata table which is used by the front-end file validation application to validate various types of data. Right now it caters to a single application which is hosted on the same database as the metadata table.
The problem is that I want this system to be scalable so that it can validate files for a variety of applications, some of which live on different databases. Since some of the checks in my metadata table are based on PL/SQL, I cannot run these checks unless the file validation system and the application are on the same database. Is there a way for a table to be shared across multiple databases at once? If not, what are the possible workarounds?
If you want to access several tables on several databases as if they were one table, then you have to use a view that uses database links. You should first learn about database links; after that, creating the view should not be a problem.
A database link allows you to access a table in another database just as if it were a table in the same database. All you do is append @mydblink to the table name once you have created the database link. Example:
CREATE DATABASE LINK mydblink
CONNECT TO user IDENTIFIED BY password
USING 'name_of_other_db';
SELECT * FROM sometable@mydblink;
It works well, and there is even two-phase commit for updates across the remote and local databases. There is more to know about it depending on your set-up, but you can read all about it in the Oracle documentation. I have worked extensively with database links in a larger project, and that is the right approach for what you want to do.
http://docs.oracle.com/cd/E11882_01/server.112/e25494/ds_concepts002.htm#ADMIN12083
http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_5005.htm
Here is a link I found after some googling that explains how to build a view that accesses data in several databases and also gives some information about the expected performance:
http://www.dba-oracle.com/t_sql_dblink_performance.htm
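For the metadata table case, such a view could look something like this rough sketch (assuming a hypothetical validation_rules table that exists both locally and in the remote database reached through mydblink):
CREATE OR REPLACE VIEW all_validation_rules AS
SELECT rule_id, rule_name, check_sql, 'LOCAL' AS source_db
  FROM validation_rules
UNION ALL
SELECT rule_id, rule_name, check_sql, 'REMOTE' AS source_db
  FROM validation_rules@mydblink;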
Oracle has DBLinks. Check out the docs.
http://docs.oracle.com/cd/B28359_01/server.111/b28310/ds_concepts002.htm

CakePHP multiple tenants - single DB versus multiple DBs

We are working on an application in CakePHP that will be used by multiple companies. We want to ensure performance, scalability, code manageability and security for our application.
Our current beta version creates a new database for each customer. When a new company joins the site we run a SQL script to create a blank database.
This has the following advantages:
- Better security (companies' users are separated from each other)
- We can set the database via the subdomain (e.g. monkey.site.com uses the site_monkey database)
- Single code base.
- Performance for SQL queries is generally quite good as data is split across smaller databases.
Unfortunately, this also has many disadvantages:
- Manageability: changes to database have to happen across all existing databases
- The SQL script method of creation is clunky and not as reliable as we would like
- We want to allow users to log in from the home page (e.g. www.site.com), but we can't currently do this as the subdomain determines which database to use.
- We would like a central place to keep metrics/customer usage.
So we are torn/undecided as to what is the best solution to our database structure for our application.
Currently we see three options:
- Keep multiple database design
- Merge all companies into one DB and identify each by a 'companyId'
- Some kind of split model, where certain tables are in a 'core database' and others are in a customer specific database.
Can you guys offer some of your precious advice on how you think we should best do this?
Any feedback / info would be greatly appreciated.
Many thanks,
kSeudo
Just my suggestion:
I think it is better to keep the customer-related data in different databases and the authentication-related data in a common database. So when a user logs in, you have an entry recording which domain that user belongs to, redirect them to that domain, and access the corresponding database and data.
Regarding your concern about changes to the database: yes, you need to apply the changes to each database separately. But I think there are some advantages to this as well. Some customers may ask for a few changes to fit their own processes, and that is easier to manage if you keep separate databases for different customers.
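To make the login part concrete, here is a minimal sketch of what the shared authentication database could hold (table and column names are purely illustrative):
CREATE TABLE companies (
    company_id INT PRIMARY KEY,
    subdomain  VARCHAR(63) NOT NULL UNIQUE,   -- e.g. 'monkey' for monkey.site.com
    db_name    VARCHAR(64) NOT NULL           -- e.g. 'site_monkey'
);
CREATE TABLE users (
    user_id       INT PRIMARY KEY,
    company_id    INT NOT NULL REFERENCES companies (company_id),
    email         VARCHAR(255) NOT NULL UNIQUE,
    password_hash VARCHAR(255) NOT NULL
);
-- At login on www.site.com, resolve which tenant database to connect to:
SELECT c.subdomain, c.db_name
  FROM users u
  JOIN companies c ON c.company_id = u.company_id
 WHERE u.email = 'someone@example.com';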

Can I store any custom tables in SharePoint system database?

Can I store any custom tables in SharePoint's own database?
Is this supported behavior or not?
(I mean tables in MS SQL database, not SharePoint lists.)
If I can, how well does this play with backup/restore functionality?
What are possible caveats?
For anyone wondering why I'm asking: there's an app which is bound to a SharePoint server and needs to store some purely relational internal information that doesn't make sense apart from that SharePoint instance. I would like to narrow data storage down to one place, but I'm not sure whether SharePoint likes its database being used for other purposes.
I'm using SharePoint 2007.
Is it possible? Sure. Should you? Nope.
The SharePoint content/configuration databases are subject to change with any update Microsoft releases; any changes you make will very likely be destroyed, and if your farm depends on them, it will be left non-functional.
If you want to store purely relational data in a set of tables, just create another database. There's nothing stopping you from using the same SQL Server instance that houses your SharePoint content and/or configuration databases to store other relational databases as well.
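A rough sketch of that approach (database, table, and column names are made up; the T-SQL sticks to syntax available on SQL Server 2005, which typically backs SharePoint 2007):
CREATE DATABASE MyAppCustomData;
GO
USE MyAppCustomData;
GO
-- Purely relational data that points back at SharePoint without living in its databases
CREATE TABLE dbo.SiteAudit (
    AuditId    INT IDENTITY(1,1) PRIMARY KEY,
    SiteUrl    NVARCHAR(400) NOT NULL,        -- reference to the SharePoint site or item
    Note       NVARCHAR(MAX) NULL,
    CreatedUtc DATETIME NOT NULL DEFAULT GETUTCDATE()
);
One caveat: a separate database like this is not covered by SharePoint's own farm backup, so it needs to be included in your regular SQL Server backup plan.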
Not a good idea: Support for changes to the databases used by Windows SharePoint Services
...
- Making any modification to the database schema
- Adding tables to any of the databases
...
If an unsupported database modification is discovered during a support call, the customer must perform one of the following procedures at a minimum:
- Perform a database restoration from the last known good backup that did not include the database modifications
- Roll back all the database modifications
It is even worse than the above: it is likely that future upgrades will notice your changes to the content database schema and refuse to upgrade the database at all.

Best-Practices for using schemas in SQL Server (2008)

I can see in the AdventureWorks database that different schemas are used to group tables. Why is this done (security, ...?) and are there best-practices I can find?
thx, Lieven Cardoen
As a manager of Business Intelligence, I can tell you we rely on schemas for logical grouping and for managing security. Here are some examples of how we use them:
LOGICAL ORGANIZATION
We have a general database that is loaded by SSIS packages solely for staging data before we load our operational data store (ODS). In this database, with the exception of the schema, all objects are identical in structure (table names, column names, data types, nullability, etc.) to their original source. We use the schema to indicate the original source system of each table. In some rare instances, two different source databases have tables with the same name, and the schema allows us to keep using the original name in the staging database.
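A tiny sketch of that layout (the source-system and table names here are invented):
CREATE SCHEMA erp AUTHORIZATION dbo;
GO
CREATE SCHEMA crm AUTHORIZATION dbo;
GO
-- Both sources happen to have a CUSTOMER table; schemas let each keep its original name
CREATE TABLE erp.CUSTOMER (CustomerId INT NOT NULL, CustomerName VARCHAR(100) NULL);
CREATE TABLE crm.CUSTOMER (CustomerId INT NOT NULL, CustomerName VARCHAR(100) NULL);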
In every database on our BI servers each team member has a test_username schema. When we create test objects in a database, this makes it easy to keep track of who made the object. It also makes it a lot easier to purge the test objects later since everyone knows who made what. Frankly, just knowing that we made it is usually enough to know it can be deleted safely, especially when we can't remember when or why we made it!
In our data controller database, we rely on schema to separate different types of processes between reports, etl, and generic resources.
In our star schema data warehouse, all objects are divided into dimension and fact schemas.
When we push data to other departmental servers, we make all BI objects on their servers use the schema bi. This makes it REALLY easy to know bi loads and maintains the table even though it isn't on our server. If the target server isn't a 2008/2005 SQL Server box, then we prefix the table with bi_.
When it gets down to it, we use schema for logical organization any time we WOULD have appended a prefix or suffix to an object to help organize it in the absence of schemas. Having said this, there are a few instances where we don't use schemas on our BI servers. In our WorkingDB, everything is dbo. Our WorkingDB is used like TempDB to create temporary tables, but these are temporary tables that we know we will create every time an ETL process runs. The special property of WorkingDB is that we never back up the database, and all ETL processes that use it must be able to recreate their objects from scratch if the tables are missing. In this instance, we felt using schemas didn't add ANY organizational value since we don't actually use the objects outside of their temporary ETL process.
SECURITY
Since we are a BI group, we don't generally build and support our own applications. We almost exclusively use other people's applications and bring data from their back-end databases to our server. However, we do have one database called bi_applications that is the back-end for a variety of small CRUD applications. These applications are usually data entry forms that we provide to the business so that they can capture data we would otherwise have to maintain in BI. It is a way of getting data that should be in production applications into BI while we wait for our low priority application enhancements to gather dust in the future development lists. Each application has a separate schema and the application account used to update the underlying tables ONLY has access to objects of the associated schema. This makes it really easy to understand, secure, and maintain the separate applications.
In a few instances, I have let power users have direct database access to our tables or stored procedures. We rely on using schema combined with roles to secure the objects. We grant permissions to the schema and users are added to roles. This allows us to easily understand which objects are used by whom without having to dig through roles to figure it out.
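As a sketch of that schema-plus-role pattern on SQL Server 2008 (the schema, role, and account names are made up):
CREATE SCHEMA reporting AUTHORIZATION dbo;
GO
CREATE ROLE reporting_readers;
GRANT SELECT, EXECUTE ON SCHEMA::reporting TO reporting_readers;
-- SQL Server 2008-era way to put a power user's database user into the role:
EXEC sp_addrolemember 'reporting_readers', 'DOMAIN\PowerUser';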
In short, we use schema for security purposes when we probably would have considered separating the objects out into their own databases and when we expect an application or user outside of BI to access our databases.
Although these aren't best business practices for application developers, I hope my bi use-cases may help you think of some of the ways to use schema in your end of the business.

Which database implementations allow sandboxing users in separate databases?

Can anyone tell me if there are RDBMSs that allow me to create a separate database for every user so that there is full separation of users' data?
Are there any?
I know I can add a UID column to every table, but this solution has its own problems (for example, per-user database schema changes are impossible).
Don't MySQL, PostgreSQL, Oracle and so on allow you to do that? There are GRANT statements to control ACLs.
I would imagine most (all?) databases allow you to create a user which you could then grant database-level access to. SQL Server certainly does.
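A minimal sketch of that idea in PostgreSQL syntax (the names are illustrative; other engines have equivalent CREATE USER/GRANT statements):
CREATE USER alice WITH PASSWORD 'secret';
CREATE DATABASE alice_db OWNER alice;
-- Keep other users out so the sandbox really is private:
REVOKE CONNECT ON DATABASE alice_db FROM PUBLIC;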
Another simple solution, if you don't need the databases to be massive or scalable (say, for teaching SQL to students or having many testers work against their own databases to isolate problems), is SQLite: the whole database is a single file per user, and each user cannot possibly corrupt or interfere with other users' databases.
They can even mail you the databases, or install them anywhere, say at home and at work, with no internet connection required.
MS SQL Server 2005 is one product that can be used for multiple users: an instance can be created for each. If you have one, set up the privileges and use one user per instance.
Oracle lets you create a separate schema (set of tables, indexes, functions, etc) for individual users. This is good if they should have separate different tables. Creating a new user could be a very expensive operation as you would be making new tables. Updating is a nightmare as well, as you need to update the model for each user.
If you want everyone to have the same set of tables, but want each user to be able to view only their own records, then you could use Fine-Grained Access Control or the Virtual Private Database feature to do this.

Resources