I know that this is most probably not possible. But I will still detail my problem here, and if anyone has something similar to what I need, that would be great.
I have a file validation system which is hosted on a Windows server. This system holds a metadata table which is used by the front-end file validation application to validate various types of data. Right now it caters to a single application which is hosted on the same database as the metadata table.
The problem is that I want this system to be scalable so that it can validate files for a variety of applications, some of which live in different databases. Since some of the checks in my metadata table are based on PL/SQL, I cannot run them unless the file validation system and the application share the same database. Is there a way for a table to be shared across multiple databases at once? If not, what are the possible workarounds?
If you want to access several tables in several databases as if they were one table, then you have to use a view that uses database links. You should first learn about database links; after that, creating the view should not be a problem.
A database link allows you to access a table in another database just like a table in the same database. Once you have created the link, all you do is append @mydblink to the table name. Example:
-- Create the link once, then reference remote tables with @mydblink
CREATE DATABASE LINK mydblink
  CONNECT TO user IDENTIFIED BY password
  USING 'name_of_other_db';

SELECT * FROM sometable@mydblink;
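Once the link exists, a view can present local and remote tables as if they were one. A minimal sketch, assuming the same table exists in both databases (all names are made up):

-- One logical table combining the local rows and the remote rows
CREATE OR REPLACE VIEW all_sometable AS
SELECT * FROM sometable
UNION ALL
SELECT * FROM sometable@mydblink;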
It works well, and there is even two-phase commit for updates that span the remote and local databases. There is more to know about it depending on your set-up, but you can read all about it in the Oracle documentation. I have worked extensively with database links in a larger project, and this is the right approach for what you want to do.
http://docs.oracle.com/cd/E11882_01/server.112/e25494/ds_concepts002.htm#ADMIN12083
http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_5005.htm
Here is a link I found after some googling that shows how to build a view accessing data in several databases, and also gives some information about the expected performance:
http://www.dba-oracle.com/t_sql_dblink_performance.htm
Oracle has DBLinks. Check out the docs.
http://docs.oracle.com/cd/B28359_01/server.111/b28310/ds_concepts002.htm
I am using MSSQL 2014 and I am having trouble fulfilling one request from the DB admin regarding security.
Our solution contains several databases (on the same server) with stored procedures that join tables across two or more of those databases. We have one user that has the same rights on all databases.
Very simplified example: one database contains Articles while the other contains Prices, and queries need to return Articles with their Prices, so they INNER JOIN tables across the two databases.
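For illustration, such a procedure body might contain something like this (database, schema and table names are made up):

-- Both databases live on the same server instance
SELECT a.ArticleId, a.Name, p.Amount
FROM ArticlesDb.dbo.Articles AS a
INNER JOIN PricesDb.dbo.Prices AS p
    ON p.ArticleId = a.ArticleId;
-- Renaming ArticlesDb or PricesDb per environment breaks this join.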
After deploying the solution on our client's test environment, I talked to the client's DB admin, and he asked me to modify the databases and users to match some of their standards. Those standards include different database names on different environments, as well as one separate user per database.
If I changed the database names and users in my current solution, the stored procedures would fail with errors due to invalid database names or invalid credentials. I asked the admin how they solved this problem in their environment, and his answer was that they create database links. After googling for a solution, I found out that Oracle has a CREATE DATABASE LINK option, but there is nothing similar in MSSQL (except maybe the linked servers feature, but that does not solve my problem).
What I am looking for is something similar to Oracle's CREATE DATABASE LINK in MSSQL 2014: some solution that would allow the stored procedures to run without changing the queries, but rather by creating an 'alias' for the databases that need to be renamed.
Does anyone have an idea how I could do this?
I just searched MSDN:
CREATE SYNONYM (Transact-SQL)
I know link-only answers are frowned upon, but that is the answer, and MSDN is not going away.
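For completeness, a short hedged example (all names invented): the procedures reference the synonym, and only the synonym definition changes per environment:

-- Points the fixed name dbo.Prices at whatever the database is called here
CREATE SYNONYM dbo.Prices FOR ClientPricesDb.dbo.Prices;

-- Queries keep working unchanged:
SELECT * FROM dbo.Prices;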
Not sure if this can be an answer, but it is too long to fit in a comment...
There is no tech detail in your question, so I'm making a huge wild guess, but maybe the problem could be avoided by using many schemas instead of many databases.
That way all your objects would belong to the same database and there would be no need to make cross-database calls.
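A minimal sketch of the idea (all names invented):

-- One database, one schema per former database
CREATE SCHEMA articles;
GO
CREATE SCHEMA prices;
GO

-- Former cross-database joins become plain cross-schema joins
SELECT a.ArticleId, p.Amount
FROM articles.Articles AS a
INNER JOIN prices.Prices AS p ON p.ArticleId = a.ArticleId;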
Obviously this solution may not fit your situation, may not be applicable because of [your long list of reasons here], and would also require changes whose impact cannot be estimated by reading a post on SO... ^^
As a side note, the customer is not always right; if they have a policy that relies upon a specific Oracle feature, they cannot expect to enforce that very same policy on a different RDBMS that lacks the feature.
Actually, they may expect to do so, but it is your job to 'educate' them (or at least try to!).
Anyway, if the policy is mandatory, then they will be happy to pay for the extra effort required to comply, won't they?
Starting with the obvious: I think you should explain to the client that although there is more than one physical database, it is in fact the very same database, split apart and used by the very same product. Make it clear that this shouldn't be considered a breach of their standards.
If you cannot convince them, then you have to deal with two problems: different users/logins and different database names.
Different user/login: You can solve this with a linked server. Linked servers aren't only for different servers or different instances; you can create a "local" linked server that will use a different login.
EXEC sp_addlinkedserver
    @server = 'yourAlias',                 -- name you will use in queries
    @srvproduct = '',
    @provider = 'SQLNCLI',                 -- SQL Server Native Client provider
    @datasrc = 'yourServer\yourInstance',  -- the actual server/instance
    @catalog = 'yourDatabaseXYZ';          -- default database for the link
GO
You can easily modify the credentials used by the linked server you've just created:
In SQL Server Management Studio, open Object Explorer, expand Server Objects, right-click the linked server you just created, and select Properties;
On the Security page, specify the login you want to use.
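If you prefer to script it, the same login mapping can be set with sp_addlinkedsrvlogin (the remote user and password below are placeholders):

EXEC sp_addlinkedsrvlogin
    @rmtsrvname = 'yourAlias',
    @useself = 'FALSE',
    @locallogin = NULL,            -- NULL = applies to all local logins
    @rmtuser = 'databaseXyzUser',  -- the per-database login the client requires
    @rmtpassword = 'itsPassword';
GO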
Different database name: The only solution I can come up with is to create a SYNONYM for every object in the database. This is something other people seem to have done before, and although it is not much fun, it seems very doable. You can have a look here for a complete solution.
Of course you can use a SYNONYM on a linked server resource. Something like:
CREATE SYNONYM yourTable FOR [yourAlias].yourDatabaseXYZ.schemaABC.yourTable
And then you will be able to do :
-- Transparent usage of a table from another database with different credentials
SELECT * FROM yourTable;
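Writing those synonyms by hand is tedious; a rough sketch of a generator (assuming the linked server from above, and that the matching local schemas already exist) could look like this:

-- Builds one CREATE SYNONYM statement per remote table; review, then run the output
SELECT 'CREATE SYNONYM [' + s.name + '].[' + t.name + '] FOR [yourAlias].[yourDatabaseXYZ].[' + s.name + '].[' + t.name + '];'
FROM [yourAlias].yourDatabaseXYZ.sys.tables AS t
INNER JOIN [yourAlias].yourDatabaseXYZ.sys.schemas AS s
    ON s.schema_id = t.schema_id;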
By the way, there is a feature request to Microsoft to allow CREATE SYNONYM for a database. Maybe you'd like to upvote it.
Final note:
To avoid problems like this, and as Blam also mentioned, you should consider not hardcoding the database name in your application.
If you have another application that uses the data of an existing database and needs some more, and you don't want to change the schema of the existing database, how do you do that?
Background of my question: we use an IBM product (Connections) to store user profiles, but we have lots of custom requirements (lots of custom fields and logic), so currently we create a few more tables, views and functions in the backend database of Connections to store the custom data. However, as it is IBM's internal database and we are not supposed to touch it, all our custom tables, views and functions are gone when we upgrade Connections.
So we decided to move our custom things out. But the problem is that we still need to join with the data from Connections. (Or, if not a database join, some other way to integrate the data before presenting it to the users.)
If we create a federated table in our own database, we can create tables and views like we used to. But would it have performance issues? And we would still depend heavily on IBM's schema and have to assume they don't change it. Is it a good approach?
What are the other options we could consider?
If we create a federated table in our own database, we can create tables and views like we used to. But would it have performance issues?
Probably. Your application code would have to do joins between the IBM database tables and your database tables.
I'm assuming that Connections uses DB2. If you bring up your own DB2 database, I think you can do SQL joins between two separate DB2 databases.
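For illustration only, a hedged sketch of DB2-to-DB2 federation (the server, schema, table and column names are assumptions, and federation must be enabled on your instance):

-- One-time setup in your own database:
CREATE WRAPPER DRDA;
CREATE SERVER connsrv TYPE DB2/UDB VERSION '10.5' WRAPPER DRDA
    AUTHORIZATION "ibmuser" PASSWORD "secret"
    OPTIONS (DBNAME 'PEOPLEDB');
CREATE USER MAPPING FOR USER SERVER connsrv
    OPTIONS (REMOTE_AUTHID 'ibmuser', REMOTE_PASSWORD 'secret');
CREATE NICKNAME conn_profiles FOR connsrv.EMPINST.EMPLOYEE;

-- Your views can then join local custom tables against the nickname:
SELECT p.PROF_UID, c.custom_field
FROM conn_profiles AS p
INNER JOIN my_custom_data AS c ON c.prof_uid = p.PROF_UID;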
Either way, this code should reside in a separate data access package made up of data access objects. The rest of your applications would use the data access package.
And we would still depend heavily on IBM's schema and have to assume they don't change it.
IBM will change their schema, and you have to plan on making corresponding changes to your database and/or application.
What are the other options we could consider?
You could copy the IBM data from their database to your database. You would still have to change the copy process when IBM's table definitions change.
I have been assigned to develop a sync application for my company. We have SQL Server on our database server, which will be synced with client databases. The client databases are not known; they can be SQLite or MySQL or whatever.
What this sync app does is detect changes that occur in the server and client databases, save these changes, and sync them. If changes occur in the server database, they will be synced to the client database, and vice versa.
I did some research on it and came across many solutions. One of them is the Microsoft Sync Framework, but I hardly found a good implementation example of it for syncing with remote databases.
Then I came across Change Data Capture (CDC) in SQL Server 2008. CDC detects changes on the source table (by reading the transaction log) and puts them in a separate change table, which is then used for syncing.
Since I cannot use the CDC feature because I don't have sufficient database rights on my machine, I have started to develop my own solution that works the way CDC does: I create a separate sync_table for each source table and triggers that detect data changes and put them in the sync_table.
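For illustration, a minimal sketch of such a shadow table and trigger (the Customer table and its columns are made up):

-- Shadow table recording what changed and how
CREATE TABLE sync_Customer (
    Id        INT       NOT NULL,
    Operation CHAR(1)   NOT NULL,   -- 'I', 'U' or 'D'
    ChangedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

CREATE TRIGGER trg_Customer_sync
ON Customer
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Inserts and updates: changed rows appear in the "inserted" pseudo-table
    INSERT INTO sync_Customer (Id, Operation)
    SELECT i.Id,
           CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'U' ELSE 'I' END
    FROM inserted AS i;

    -- Deletes: rows appear only in the "deleted" pseudo-table
    INSERT INTO sync_Customer (Id, Operation)
    SELECT d.Id, 'D'
    FROM deleted AS d
    WHERE NOT EXISTS (SELECT 1 FROM inserted);
END;
GO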
However, I have been advised to do some more research before choosing the best implementation methodology.
I need to keep the following things in mind:
Databases may/may not be on the same network.
On server side, the user must be able to select which tables will take part in the sync process.
Devices that will sync with the server database need to be registered first, meaning that all client devices must be registered by the user before they can start syncing.
As usual any help will be appreciated :)
There is an open source project called SymmetricDS with many of the same goals. Take a look at the documentation and data model to see how the problem was solved, and maybe you will get some ideas.

Instead of a separate shadow table for each source table, there is a single sym_data table where all the data is captured in comma-separated-value format. The advantage is one place to look for captured data and to retrieve changes that were part of the same transaction. The table is kept small by purging it often after data is transferred successfully.

It uses web protocols (HTTP) for data transfer. The advantage is leveraging existing web servers for performance, administration, and known filtering through firewalls. There is also a registration protocol used before clients are allowed to sync: the server admin "opens registration" for a client ID, which allows the client to connect for the first time.

It supports many different databases, so you'll find examples of how to write triggers and retrieve unique transaction IDs on those systems.
I know that SQLite doesn't support setting up different users. I have a requirement where I need to prevent a certain set of users from doing INSERTs/UPDATEs on the SQLite DB. As SQLite doesn't have any GRANT/REVOKE commands, is there any other way to set up different access levels to the DB file, apart from changing the file permissions?
Thanks.
Not built into SQLite, no.
You could write your own access control, however. Many web projects are an example of this model: they regularly authenticate against user records contained within the database itself.
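If your application controls how connections are opened, one SQL-level trick (a sketch, assuming SQLite 3.8.0+ and that restricted users only ever reach the file through your app) is to mark their connections read-only:

-- Run immediately after opening a connection for a restricted user
PRAGMA query_only = ON;

-- Any subsequent write on this connection now fails:
INSERT INTO accounts (name) VALUES ('blocked');
-- => error: attempt to write a readonly database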
We are working on an application in CakePHP that will be used by multiple companies. We want to ensure performance, scalability, code manageability and security for our application.
Our current beta version creates a new database for each customer. When a new company joins the site we run a SQL script to create a blank database.
This has the following advantages:
- Better security (each company's users are separated from the others)
- We can select the database via the subdomain (e.g. monkey.site.com uses the site_monkey database)
- Single code base.
- Performance for SQL queries is generally quite good as data is split across smaller databases.
Now unfortunately this has many disadvantages:
- Manageability: changes to the database have to happen across all existing databases
- The SQL script method of creation is clunky and not as reliable as we would like
- We want to allow users to log in from the home page (e.g. www.site.com), but we can't currently do this as the subdomain determines which database to use
- We would like a central place to keep metrics/customer usage.
So we are torn/undecided as to the best solution for our application's database structure.
Currently we see three options:
- Keep multiple database design
- Merge all companies into one DB and identify each by a 'companyId'
- Some kind of split model, where certain tables are in a 'core database' and others are in a customer specific database.
Can you guys offer some of your precious advice on how you think we should best do this?
Any feedback / info would be greatly appreciated.
Many thanks,
kSeudo
Just my suggestion:
Just my suggestion: I think it is better to keep the customer-related data in different databases and the authentication-related data in a common database. So when a user logs in, you should have an entry with the domain that user belongs to, redirect to that domain, and access the corresponding database and data.
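A minimal sketch of what that common database might contain (table and column names are invented):

-- Common authentication database
CREATE TABLE companies (
    id        INT PRIMARY KEY,
    subdomain VARCHAR(63)  NOT NULL UNIQUE,  -- e.g. 'monkey' -> monkey.site.com
    db_name   VARCHAR(128) NOT NULL          -- e.g. 'site_monkey'
);

CREATE TABLE users (
    id         INT PRIMARY KEY,
    email      VARCHAR(255) NOT NULL UNIQUE,
    pwd_hash   VARCHAR(255) NOT NULL,
    company_id INT NOT NULL REFERENCES companies(id)
);

-- On login from www.site.com: find the user's company, then redirect
-- to <subdomain>.site.com, which selects the right customer database.
SELECT c.subdomain, c.db_name
FROM users AS u
INNER JOIN companies AS c ON c.id = u.company_id
WHERE u.email = 'person@example.com';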
Regarding your concern about changes to the database: yes, you would need to implement the changes in each database separately. But I think there are some advantages to this as well: some customers may ask for a few changes to suit their own processes, and that is easily managed if you keep separate databases for different customers.