I'm developing a web application using PHP and an RDBMS. Some of the data my application needs are stored in a remote database owned by another entity. I have limited read-only access to this other database. Is there an RDBMS capable of executing a query to the remote database and using the result as if it were a local table (i.e. satisfying foreign key relationships, JOINing, etc)? I would prefer FOSS, but it's not a requirement.
MySQL has the FEDERATED storage engine. You can create a table that references a table on a remote MySQL instance, and then use that table as if it were a local table.
This does have some limitations (e.g., no transaction support, if I remember correctly), but it should work as you described.
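For illustration, here is a minimal sketch of a FEDERATED table; the host, credentials, database, and column names are placeholders, not anything from your setup:

    -- Local "proxy" table backed by a table on a remote MySQL instance.
    -- The local definition must match the remote table's structure.
    CREATE TABLE remote_customers (
        id INT NOT NULL,
        name VARCHAR(100),
        PRIMARY KEY (id)
    )
    ENGINE=FEDERATED
    CONNECTION='mysql://readonly_user:pass@remote.example.com:3306/theirdb/customers';

    -- It can then be joined like any local table:
    SELECT o.order_id, c.name
    FROM orders o
    JOIN remote_customers c ON c.id = o.customer_id;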
Is it possible to create an external table in Snowflake referring to an on-premises Oracle database?
No, Snowflake does not presently support query federation to other DBMS software.
External tables in Snowflake exist only to expose a collection of data files (commonly found in data-lake architectures) as a qualified table without requiring a load first.
Querying your Oracle tables currently requires an explicit export of their data to a cloud storage location that Snowflake can access.
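As a rough sketch of that workflow, assuming the Oracle data has been exported as Parquet files to an S3 bucket (the stage, bucket, and column names below are hypothetical):

    -- Stage pointing at the exported files.
    CREATE STAGE oracle_export_stage
      URL = 's3://my-bucket/oracle-export/'
      CREDENTIALS = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...');

    -- External table exposing the files as a queryable table,
    -- without loading them into Snowflake first.
    CREATE EXTERNAL TABLE customers_ext (
      id   NUMBER  AS (VALUE:id::NUMBER),
      name VARCHAR AS (VALUE:name::VARCHAR)
    )
    LOCATION = @oracle_export_stage
    FILE_FORMAT = (TYPE = PARQUET);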
When our IT department converts Access databases to SQL Server the relationships do not transfer over. In the past, I have provided ERDs that they can use to build the relationships. In this case, I didn't.
What are the possible consequences of defining the table relationships in the MS Access Front End versus on the SQL Server itself?
It would be ideal if I could just create the relationships in Access and avoid submitting a request to IT, but I don't want to risk performance issues now or in the future.
There may be some misconceptions.
A relationship in SQL Server enforces referential integrity (an order cannot have a customer ID that doesn't exist). It does not automatically create an index on the foreign key, so by itself it has no impact on performance.
But in most cases it is a good idea to define an index on a foreign key to improve performance.
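For example (table and column names are hypothetical), the constraint and the index are two separate statements in SQL Server:

    -- The constraint enforces referential integrity...
    ALTER TABLE dbo.Orders
      ADD CONSTRAINT FK_Orders_Customers
      FOREIGN KEY (CustomerID) REFERENCES dbo.Customers (CustomerID);

    -- ...but the supporting index must be created explicitly.
    CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID);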
A relationship that you define in Access on linked tables does neither. It cannot enforce referential integrity (that's the server's job).
It is merely a "hint" that the tables are related via the specified fields, e.g., so that the Query Builder can automatically join the tables when they are added to the query design.
So you should
Create the relationships in SQL Server to avoid inconsistent data. ("But my application logic prevents that!", I hear you say. Well, applications have bugs.)
Create indexes on foreign keys where appropriate to avoid performance problems.
If you are working with queries in the Access frontend, additionally define the relationships there.
Ideally you should have a test server where you can yourself define the relationships, and just send the finished SQL script to IT.
Documentation on Azure Search says I can use Azure SQL Database as a data source (https://azure.microsoft.com/en-us/documentation/articles/search-howto-connecting-azure-sql-database-to-azure-search-using-indexers-2015-02-28/). Can I do the same with an on-premises SQL Server?
I have a typical relational structure like
User Table -> Address Table
User Table -> UserDetails table, etc.
All linked to each other via foreign keys. My search should end up with a UserId, so I can link to my UserDetailsPage.aspx?UserId=xxx
What would be the best way to build the data source? Should I create a view and apply change tracking on it, or should I create a different data source for each table and sync the corresponding index?
Please shed some light on best practices in a typical relational database scenario.
You would need to allow the IP address of your search service to connect to your on-premises DB.
In terms of view vs. multiple indexers targeting the same index - both approaches might work. What info will your users be searching on - address, details, or both? If it's only one of those, then you wouldn't have to index both tables.
Keep in mind that if you decide to index a view joining both tables, you won't be able to use SQL integrated change tracking, and will have to rely on a rowversion or timestamp column in the view.
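A minimal sketch of such a view, assuming hypothetical Users/Addresses tables where Users carries a rowversion column:

    -- Azure Search's high water mark change detection policy can track
    -- the HighWaterMark column. Note: this only reflects changes to
    -- Users; an update to Addresses alone would not bump the value.
    CREATE VIEW dbo.UserSearchView
    AS
    SELECT u.UserId,
           u.UserName,
           a.City,
           u.RowVersion AS HighWaterMark
    FROM dbo.Users u
    JOIN dbo.Addresses a ON a.UserId = u.UserId;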
HTH
I am currently developing one project of many to come which will use its own database and also data from a central database.
Example:
the database "accountancy" with all tables specific to the accountancy package.
the database "personelladministration" with its specific tables
But we also use data which is general and will be used in all projects like "countries", "cities", ...
So we have put these tables in a separate database called "general"
We come from a db2 environment where we could create foreign keys between databases.
However, we are switching to MS SQL server where it is not possible to put foreign keys between databases.
I have seen that a workaround would be to use triggers, but I'm not convinced that is a clean solution.
Are we doing something wrong in our setup? It seems right to me to put tables with general data in a separate database instead of having a "countries" table in every database, which seems difficult to maintain and inefficient.
What could be a good approach to overcome this?
I would say that "countries" is not a terrible table to reproduce in multiple databases. I would rather duplicate static data like that than use more elaborate techniques. A schema is physically tied to a single database in SQL Server and cannot be shared across databases. That is why people use replication or triggers for shared data.
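If you do go the trigger route for a small static table, it can be as simple as re-copying the table on change. A sketch, with hypothetical database and column names:

    -- Fires in the central database and refreshes the duplicated copy.
    CREATE TRIGGER trg_Countries_Sync
    ON dbo.Countries
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- The table is small and static, so a full re-copy is acceptable.
        DELETE FROM Accountancy.dbo.Countries;
        INSERT INTO Accountancy.dbo.Countries (CountryCode, CountryName)
        SELECT CountryCode, CountryName FROM dbo.Countries;
    END;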
I came across this problem a while back. We have one database for authentication; however, those users have to be shared across multiple applications, some of which have their own databases.
We resorted to replication and a custom Authentication/Registration service agent to keep the data up to date.
Using views, as Sourav_Agasti suggested in his answer, would be the most straightforward approach for static data. You can create views and indexed views and join data from databases on linked servers.
Create a loopback linked server and then create a view (if required, on each database) which accesses the table in this "central database" through the linked server. There will be a minor performance impact, but it is more than compensated for by the simplicity of the approach.
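A sketch of that setup, with placeholder names (the "General" database and the Countries columns are assumptions):

    -- Loopback linked server pointing back at the local instance.
    DECLARE @self sysname = @@SERVERNAME;
    EXEC sp_addlinkedserver
         @server = N'LOOPBACK',
         @srvproduct = N'',
         @provider = N'SQLNCLI',
         @datasrc = @self;
    GO

    -- In each application database, expose the shared table via a view.
    CREATE VIEW dbo.Countries
    AS
    SELECT CountryCode, CountryName
    FROM LOOPBACK.General.dbo.Countries;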
I am writing a module that translates one SQL query into another. When users send SQL queries to the database engine, the engine should first forward these queries to my module before processing the SQL syntax.
How can I integrate my module into the database engine of SQL Server?
You can redirect queries for certain data to different tables using a partitioned view:
http://technet.microsoft.com/en-US/library/ms188299(v=SQL.105).aspx
In a nutshell, you tell the server some rules as to which values reside in which tables (usually based on primary or foreign key ranges, for example). When you query using the partitioning column, the database can direct your query to the correct remote table. But you can still run queries over all the tables as if they were held locally (except more slowly).
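A sketch of the idea, with hypothetical member tables; for a distributed partitioned view these live on linked servers, and each member table needs a CHECK constraint on the partitioning column so the server can route queries:

    -- Each member table holds one range, e.g. CHECK (OrderYear = 2022)
    -- on Orders_2022 and CHECK (OrderYear = 2023) on Orders_2023.
    CREATE VIEW dbo.AllOrders
    AS
    SELECT * FROM Server1.Sales.dbo.Orders_2022
    UNION ALL
    SELECT * FROM Server2.Sales.dbo.Orders_2023;

    -- A query filtered on the partitioning column touches only the
    -- matching member table:
    SELECT * FROM dbo.AllOrders WHERE OrderYear = 2023;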