I'm looking for a strategy to allow automatic updates for a number of databases at customer sites through a publish-subscribe kind of mechanism. Right now there is a datacenter which has all the master data, fed through extractions from hundreds of databases out there. The problem is that whenever I need to create a new view in the remote customer databases, I have to manually roll out an installation patch and ask the users to run it (their sites are behind firewalls, so I can't do that remotely from my end). Ideally, I would like to have a "DDL image" of the customer database schema at the datacenter, and whenever any change happens to it, all the subscribing customer databases would update their view definitions. The target databases are mostly SQL Server 2005 and Oracle.
I heard that MS SQL Server replication services can do such a thing? What about Oracle? Has anybody had experience with this?
Thanks!
Not sure about existing solutions, but how about writing your own auto-update mechanism that would run on a timer on the client machines and pull the latest schemas and views from some service table in your master database? Your change wouldn't get propagated straight away to all sites and some sites would update before others, but they would all eventually see the changes.
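For illustration, here is a minimal T-SQL sketch of the idea. The object names (SchemaChangeLog, AppliedChanges) are invented, and since the client sites are behind firewalls, the clients would first pull the new log rows down through whatever channel is available before applying them:

```sql
-- Hypothetical service table in the master database (names are made up):
CREATE TABLE dbo.SchemaChangeLog (
    ChangeId  INT IDENTITY(1,1) PRIMARY KEY,
    DdlScript NVARCHAR(MAX) NOT NULL,       -- e.g. 'CREATE VIEW ...'
    CreatedAt DATETIME NOT NULL DEFAULT GETDATE()
);

-- Each client keeps track of what it has already applied.
CREATE TABLE dbo.AppliedChanges (ChangeId INT PRIMARY KEY);

-- Client-side job, run on a timer after pulling down new log rows:
DECLARE @LastApplied INT, @Id INT, @Sql NVARCHAR(MAX);
SELECT @LastApplied = ISNULL(MAX(ChangeId), 0) FROM dbo.AppliedChanges;

DECLARE change_cursor CURSOR FOR
    SELECT ChangeId, DdlScript
    FROM dbo.SchemaChangeLog
    WHERE ChangeId > @LastApplied
    ORDER BY ChangeId;

OPEN change_cursor;
FETCH NEXT FROM change_cursor INTO @Id, @Sql;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @Sql;                                 -- apply the DDL locally
    INSERT INTO dbo.AppliedChanges (ChangeId) VALUES (@Id);  -- record progress
    FETCH NEXT FROM change_cursor INTO @Id, @Sql;
END;
CLOSE change_cursor;
DEALLOCATE change_cursor;
```

Because each client records the last change it applied, a site that was offline for a while simply catches up on its next timer tick.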
Oracle GoldenGate might fit your needs.
I am a new SQL developer (not DBA or Architect) and I'm working on a new system for the directors of a company.
This company has around 50 branch offices, and each one uses the same desktop system for its inventory. The database for this system is not centralized; every branch office has its own individual database, and it is not shared.
Now the directors of the company want to supervise all the inventory of the company, so they want the data from each branch transferred to a centralized database on the company server at the end of every day.
These are all SQL Server 2008 R2 databases. I am tasked with figuring out how to:
Transfer that data (it is not a complete replication of those databases, because they only want some of it) to the centralized database, without direct data access between the database of each branch and the database at the company server (they don't want that)
I have read a number of articles on the internet, but almost every one of them talks about transactional replication and the use of SSIS (this was my first option when they assigned me this task) to transfer the data between the SQL Servers. I can't do that: they don't want a direct connection between the branch databases and the company database, due to technical limitations imposed by the network administrators (also, I can only use ports 80 and 443).
I am hoping some of you with more experience with SQL and integration can help me with that.
I have been thinking about using web services, but I have no idea where to start. Sorry if this is a trivial question, but I have been working as an Android developer and all this is new to me.
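To make this concrete, here is the kind of branch-side extraction I have been imagining (all table and column names are invented); a small agent would then POST the XML payload over HTTPS (port 443) to a web service at the company server, which loads it into the centralized database:

```sql
-- Hypothetical nightly extraction at each branch; names are invented.
DECLARE @payload XML =
(
    SELECT BranchId, ProductId, Quantity, MovementDate
    FROM dbo.InventoryMovements
    WHERE MovementDate >= CAST(GETDATE() - 1 AS DATE)   -- from yesterday
      AND MovementDate <  CAST(GETDATE() AS DATE)       -- up to midnight
    FOR XML PATH('Movement'), ROOT('DailyExtract'), TYPE
);

SELECT @payload;  -- handed to the upload agent for the HTTPS POST
```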
We have a requirement to move data between different database instances on a regular basis (for example, some customers are willing to pay more for better performance), so this is not going to be a one-off.
The database tables have referential integrity. Is there a way to do this without rewriting a SQL script (or using some other method) every time we migrate a customer's data?
I came across this: How to move data between multiple database's table while maintaining foreign-key relationships/referential integrity?. However, it appears that we have to write a script every time we migrate data (please correct me if I misunderstood the answer on that thread).
Thanks
Edit:
Both servers are using SQL Server 2012 (same version). It's an Azure SQL Database.
They are not necessarily linked (no firewall between them)
We are only transferring some data, not the whole database. This is only for certain customers who opted to pay more.
The schemas are exactly the same in both databases.
Preyash - please see the documentation on the Split-Merge tool. The Split-Merge tool enables you to move data between databases, as you have described, based on a sharding key (e.g., customer ID). One modification that you will need for your application is to add a shard map (i.e., a database that understands the global state of which customers reside in which databases).
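As a conceptual illustration only (the Split-Merge tooling maintains its own shard map schema, so this is not its actual format), a shard map boils down to a lookup from the sharding key to a database:

```sql
-- Illustrative shard map; the real Elastic Database tools manage their own schema.
CREATE TABLE dbo.ShardMap (
    CustomerId   INT PRIMARY KEY,          -- the sharding key
    ServerName   NVARCHAR(128) NOT NULL,   -- e.g. 'myserver.database.windows.net'
    DatabaseName NVARCHAR(128) NOT NULL
);

-- The application asks where a customer lives before connecting:
SELECT ServerName, DatabaseName
FROM dbo.ShardMap
WHERE CustomerId = 42;
```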
Have a look into Azure Data Sync. It is much more aligned with your requirements, though you may end up maintaining another Azure SQL DB as a hub: Azure Data Sync follows a hub-and-spoke pattern and lets you configure flexible directional syncs with a sync gap of a few minutes. It is simpler and can be set up very quickly, without any scripts, as you wanted.
I have a client/server application that currently has an Oracle 10g database. The company that I purchased the application from is not providing support. When I purchased the application, the company provided me a SQL tool with read-only access to approximately 30-40 views.
Based on my analysis, the views provide some but not all of the data, and I want access to data which may be in other tables.
I am not a developer but the business owner so excuse my naivety in some of the questions below.
Can I export/duplicate/replicate the Oracle DB to another Oracle DB, and will an Oracle DBA be able to view/access all the tables and understand the relationships?
What is the best way to create a duplicate DB that stays in sync with the application DB we currently have? We would like to use the duplicate DB as a backend for a website.
Thanks a lot!
ML
Assuming that the Oracle database resides on a server in your organization, it seems premature to be talking about replicating the data to a different database. It is certainly possible to do so. But you can also run many, many different applications against the same database. Unless you know that the current database server would not be able to cope with the additional workload of the new application, or you are planning to invest the time and effort to transform the data into a better data model as part of replicating it (which is extremely unlikely if you don't already know what the underlying data model is, and whether it will work well for the new application), you probably want to start with the assumption that you can build the new application against the existing database.
A database developer or a DBA should be able (again, assuming that you own the server) to determine what underlying tables exist. That person should be able to at least get some idea of how the tables relate to each other based on the existing view definitions. If the original company did a good job building the database, a new developer/ DBA should have a relatively easy time understanding the relationships. If the original company did shoddy work or was intentionally secretive, it will be a more challenging undertaking.
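For example, a DBA could start with standard Oracle data dictionary queries like these (the APP_OWNER and SOME_VIEW names are placeholders):

```sql
-- List the tables owned by the application schema:
SELECT table_name FROM all_tables WHERE owner = 'APP_OWNER';

-- Read the definition of one of the read-only views you were given;
-- it reveals which base tables and joins it is built on:
SELECT text FROM all_views
WHERE owner = 'APP_OWNER' AND view_name = 'SOME_VIEW';

-- See what each view depends on:
SELECT name, referenced_owner, referenced_name, referenced_type
FROM all_dependencies
WHERE owner = 'APP_OWNER' AND type = 'VIEW';
```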
I have recently been assigned to develop a sync application for my company. We have SQL Server on our database server, which will be synced with the client databases. The client databases are not known in advance; they can be SQLite, MySQL, or whatever.
What this sync app does is detect changes that occur in the server and client databases, save these changes, and sync them. If changes occur in the server database, they will be synced to the client database, and vice versa.
I did some research on it and found many solutions. One of them is the Microsoft Sync Framework, but I could hardly find a good implementation example for syncing with remote databases.
Then I came across Change Data Capture (CDC) in SQL Server 2008. CDC works by reading changes to the source tables from the transaction log (via a capture job, not triggers) and writing them to separate change tables, which are then used for syncing.
Since I cannot use the CDC feature (I don't have sufficient database rights on my machine), I have started to develop my own solution that works in a similar spirit: I create a separate sync_table for each source table, create triggers to detect data changes, and put the changed data into the sync_table.
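A stripped-down sketch of what I mean, with invented table and column names:

```sql
-- Shadow table that records which rows changed and how.
CREATE TABLE dbo.Customers_SyncTable (
    SyncId     INT IDENTITY(1,1) PRIMARY KEY,
    CustomerId INT NOT NULL,
    Operation  CHAR(1) NOT NULL,              -- 'I', 'U' or 'D'
    ChangedAt  DATETIME NOT NULL DEFAULT GETDATE(),
    IsSynced   BIT NOT NULL DEFAULT 0
);
GO

CREATE TRIGGER trg_Customers_Capture
ON dbo.Customers
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Rows in inserted: 'U' if deleted also has rows (an update), else 'I'.
    INSERT INTO dbo.Customers_SyncTable (CustomerId, Operation)
    SELECT i.CustomerId,
           CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'U' ELSE 'I' END
    FROM inserted AS i;

    -- Rows only in deleted: a delete.
    INSERT INTO dbo.Customers_SyncTable (CustomerId, Operation)
    SELECT d.CustomerId, 'D'
    FROM deleted AS d
    WHERE NOT EXISTS (SELECT 1 FROM inserted);
END;
```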
However, I am advised to do some more research on it for choosing the best implementation methodology.
I need to keep the following things in mind,
Databases may/may not be on the same network.
On server side, the user must be able to select which tables will take part in the sync process.
Devices that will sync with the server database need to be registered first. Meaning that all client devices will be registered by the user before they can start syncing.
As usual any help will be appreciated :)
There is an open source project called SymmetricDS with many of the same goals. Take a look at the documentation and data model to see how the problem was solved, and maybe you will get some ideas. Instead of a separate shadow table for each source table, there is a single sym_data table where all the data is captured in comma separated value format. The advantage is one place to look for captured data and retrieve changes that were part of the same transaction. The table is kept small by purging it often after data is transferred successfully. It uses web protocols (HTTP) for data transfer. The advantage is leveraging existing web servers for performance, administration, and known filtering through firewalls. There is also a registration protocol used before clients are allowed to sync. The server admin "opens registration" for a client ID, which allows the client to connect for the first time. It supports many different databases, so you'll find examples of how to write triggers and retrieve unique transaction IDs on those systems.
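As a rough illustration of that single capture-table idea (the real sym_data schema has more columns than this simplified version):

```sql
-- Simplified sketch of a single change-capture table; the actual
-- SymmetricDS sym_data table has additional columns.
CREATE TABLE dbo.sym_data_sketch (
    data_id    BIGINT IDENTITY(1,1) PRIMARY KEY,
    table_name NVARCHAR(128) NOT NULL,  -- which source table changed
    event_type CHAR(1) NOT NULL,        -- I / U / D
    row_data   NVARCHAR(MAX) NULL,      -- the changed row, comma-separated
    created_at DATETIME NOT NULL DEFAULT GETDATE()
);

-- A trigger on any source table writes its changes here as CSV, e.g.:
-- INSERT INTO dbo.sym_data_sketch (table_name, event_type, row_data)
-- SELECT 'Customers', 'I',
--        CAST(CustomerId AS NVARCHAR(20)) + ',' + Name + ',' + City
-- FROM inserted;
```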
Can I store any custom tables in SharePoint's own database?
Is this supported behavior or not?
(I mean tables in MS SQL database, not SharePoint lists.)
If I can, how well does this play with backup/restore functionality?
What are possible caveats?
For anyone wondering why I'm asking: there's an app which is bound to SharePoint server and needs to store some purely relational internal information that doesn't make sense apart from that SharePoint instance. I would like to narrow down data storage to one place but I'm not sure if SharePoint likes its database being used for other purposes.
I'm using SharePoint 2007.
Is it possible? Sure. Should you? Nope.
The SharePoint content/configuration databases are subject to change with any update Microsoft releases; any changes you make will very likely be destroyed, and if your farm depends on them, it will be left non-functional.
If you want to store purely relational data in a set of tables, just create another database. There's nothing stopping you from using the same SQL Server instance that houses your SharePoint content and/or configuration databases to store other relational databases as well.
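For example (the database and table names here are made up):

```sql
-- A separate database on the same instance; SharePoint's databases stay untouched.
CREATE DATABASE MyAppData;
GO
USE MyAppData;
GO
CREATE TABLE dbo.AppSettings (
    SettingKey   NVARCHAR(100) PRIMARY KEY,
    SettingValue NVARCHAR(400) NOT NULL
);
```

Backing it up on the same schedule as the SharePoint databases keeps the backup/restore story simple.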
Not a good idea: Support for changes to the databases used by Windows SharePoint Services
...
Making any modification to the database schema
Adding tables to any of the databases
...
If an unsupported database modification is discovered during a support call, the customer must perform one of the following procedures at a minimum:
Perform a database restoration from the last known good backup that did not include the database modifications
Roll back all the database modifications
It is even worse than the above. It is likely that future upgrades will notice your changes to the content database schema and refuse to upgrade the database, period.