I'm exploring options for one-way syncing a table available via an API to an SQL database. Does anyone have any suggestions on how to achieve this?
The data from the "Source" is often updated and should be copied to the "Destination" as the changes happen (live).
Source
A read-only table from an ERP, available via an API. Webhooks on the source are not possible. Entries in this table may be created, updated or deleted. There would be approximately 150,000 entries in the table, with about 1,000 changes per day.
Destination
An Azure MS SQL database which I have full control over.
I'm looking for best practices or any ideas on how to achieve this. There seem to be very few articles that I can find with anything helpful.
I'm open to using any tool on Azure including Logic Apps and Azure Functions but want to stay away from using 3rd party tools.
If you are trying to achieve this through Logic Apps, a flow like the following should work: a Recurrence trigger to poll on a schedule, an HTTP action to call the ERP API, a Parse JSON action to shape the response, and then SQL Server actions (or a stored procedure call) to apply the rows to the database.
Note: Make sure you preprocess the data before sending it to the SQL database, using appropriate actions based on the type of data you are receiving.
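If the last step of the flow lands the API response in a staging table, a stored procedure along these lines can apply the changes to the live table. This is only a minimal sketch; every table and column name below is made up for illustration:

    -- Assumes the Logic App has just loaded the full API extract into
    -- dbo.ErpEntries_Staging. All names here are illustrative.
    MERGE dbo.ErpEntries AS t
    USING dbo.ErpEntries_Staging AS s
        ON t.EntryId = s.EntryId
    WHEN MATCHED THEN
        UPDATE SET t.Name = s.Name,
                   t.Quantity = s.Quantity,
                   t.ModifiedOn = s.ModifiedOn
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (EntryId, Name, Quantity, ModifiedOn)
        VALUES (s.EntryId, s.Name, s.Quantity, s.ModifiedOn)
    WHEN NOT MATCHED BY SOURCE THEN
        DELETE;  -- picks up rows deleted in the ERP

    TRUNCATE TABLE dbo.ErpEntries_Staging;  -- clear for the next run

Note that the NOT MATCHED BY SOURCE branch is only safe if each run stages the complete 150,000-row extract; if you only stage changed rows, deletes have to be detected some other way.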
I was hoping to get help to find a solution.
I have an Excel spreadsheet that connects to my SQL Server and populates a sheet based on a query I have written.
I want to share this spreadsheet with 10 users.
This is where the problem comes in. In order to share the spreadsheet I have to remove the link, meaning the connection to the SQL data is lost.
The database is frequently updated, so I need to keep the connection live. All 10 users might be in the spreadsheet at the same time working on cases.
Is there a way for me to do this? I've searched high and low and can't find a solution. I am almost tempted to create a C# app that will allow me to do this instead of using a spreadsheet. Any suggestions will be very much welcomed.
Thanks
Take a look at https://cirkulate.com. We are working on such a service that enables developers to go from SQL to automatically refreshing spreadsheets.
You'll only need to specify the SQL that populates the cells, the refresh schedule (optional) and the recipient email addresses.
When the spreadsheet refreshes, all the recipients get the latest copy of the spreadsheet along with a snapshot of the spreadsheet in the email body, so they consume a quick snapshot within the email client.
Disclosure: Just to be clear about affiliations, I am the founder of Cirkulate.
We have a requirement to move data between different database instances on a regular basis (e.g. some customers are willing to pay more for better performance), so this is not going to be a one-off.
The database tables have referential integrity. Is there a way this can be done without rewriting an SQL script (or using some other method) every time we migrate a customer's data?
I came across this: How to move data between multiple database's table while maintaining foreign-key relationships/referential integrity?. However, it appears that we have to write a script every time we migrate data (please correct me if I misunderstood the answer on that thread).
Thanks
Edit:
Both servers are using SQL Server 2012 (same version). It's an Azure SQL Server database.
They are not necessarily linked (no firewall between them)
We are only transferring some data, not the whole database. This is only for certain customers who opted to pay more.
The schemas are exactly the same in both databases.
Preyash - please see the documentation on the Split-Merge tool. The Split-Merge tool enables you to move data between databases, as you have described, based on a sharding key (e.g., customer ID). One modification that you will need for your application is to add a shard map (i.e., a database that understands the global state of which customers reside in which databases).
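To make the shard map idea concrete (the Split-Merge/Elastic Database tooling creates and manages the real one for you, so this is only a conceptual sketch with made-up names):

    -- Conceptual shard map: which database holds which customer's data.
    CREATE TABLE dbo.ShardMap (
        CustomerId   INT           NOT NULL PRIMARY KEY,
        ServerName   NVARCHAR(128) NOT NULL,
        DatabaseName NVARCHAR(128) NOT NULL
    );

    -- The application resolves the shard before opening a connection:
    DECLARE @CustomerId INT = 42;
    SELECT ServerName, DatabaseName
    FROM dbo.ShardMap
    WHERE CustomerId = @CustomerId;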
Have a look into Azure Data Sync. It is much more aligned with your requirements, though you may end up having another SQL Azure DB to maintain as a hub. Azure Data Sync follows a hub-and-spoke pattern and will let you do flexible directional syncs with a syncing gap of a few minutes. It is simpler, and you can set it up very quickly without any scripts, as you wanted.
I have a Dynamics CRM 2011 instance where the database has become corrupted. The corrupted data appears to be isolated to a few tables (e.g. PrincipalObjectAccess) and the instance still functions normally to all appearances. The data is irretrievable (all forms of DBCC CHECKDB, etc. have been run) and a backup is not available (preaching on backups will not help resolve the issue).
I've tried using schema and data synchronization tools like those offered by dbForge and Red-Gate; the schema sync works, but the data sync always seems to come up inconsistent.
At this juncture I think my best route is probably to export all data from Dynamics CRM 2011 and then import it into a new instance of Dynamics CRM 2011. Any thoughts on the best way to accomplish this? Or alternative methods of rectifying the situation?
Exporting all data and importing it into a new organization will likely create more errors, and I wouldn't really go with that option unless everything else fails.
You said data synchronization failed: have you tried deleting all data from the new instance first and then running the data synchronization? It should be simpler than synchronizing when data already exists there.
Have you tried synchronizing data using ApexSQL Data Diff?
Another option you can try that doesn't require you to create a new organization is reading your SQL Server transaction logs and checking if the corrupted data can be found there. If you can retrieve the data then you can just re-create the tables with valid data and you'll be all good. Unfortunately this is only possible using 3rd party tools such as ApexSQL Log.
I would recommend looking into the CRM 2011 Instance Adapter
Unlike Scribe, it's free.
Microsoft blog post: http://blogs.msdn.com/b/crm/archive/2012/10/24/the-microsoft-dynamics-crm-2011-instance-adapter-has-released.aspx
PowerObjects wrote an article about it as well:
http://www.powerobjects.com/blog/2012/10/26/introduction-microsoft-dynamics-crm-2011-instance-adapter/
Peter
If you can, export to Excel and import from there.
Advantage: easy, fast, graspable
If you can't, design a console application that connects to the server, queries it, fetches data and shoves it into the other instance.
Advantage: full control, repeatability, configurability, coolness factor and you get to type some code
This really depends on the scope of your data. Are you talking about millions of records with a huge list of entities or are you talking a couple entities with a thousand or so records?
If it's something small, you could always try exporting via Excel and then importing into the new org.
SSIS, CozyRoc or Scribe will do the trick. I'd opt for Scribe and go entity by entity if it is a mission critical situation.
I have currently been assigned to develop a sync application for my company. We have SQL Server on our database server, which will be synced with client databases. The client databases are not known; they can be SQLite, MySQL or whatever.
What this sync app does is detect changes that occur in the server and client databases, save these changes, and sync them. If changes occur in the server database they are synced to the client database, and vice versa.
I did some research and came across many solutions. One of them is to use the Microsoft Sync Framework, but I could hardly find a good implementation example of it syncing with remote databases.
Then I came across Change Data Capture (CDC) in SQL Server 2008. CDC works by reading changes to the source table from the transaction log and putting them in a separate change table, which is then used for syncing.
Since I cannot use the CDC feature (I don't have sufficient database rights on my machine), I have started to develop my own solution that achieves a similar effect: I create a separate sync_table for each source table, plus triggers that detect data changes and write them to the sync_table.
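For reference, a minimal sketch of one of these trigger/sync_table pairs (all names are made up):

    -- Shadow table holding captured changes for one source table.
    CREATE TABLE dbo.Customer_SyncLog (
        SyncId     BIGINT IDENTITY PRIMARY KEY,
        CustomerId INT       NOT NULL,
        Operation  CHAR(1)   NOT NULL,          -- 'I', 'U' or 'D'
        ChangedAt  DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
    );
    GO
    CREATE TRIGGER trg_Customer_Sync
    ON dbo.Customer
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Rows in "inserted" are inserts or updates.
        INSERT INTO dbo.Customer_SyncLog (CustomerId, Operation)
        SELECT i.CustomerId,
               CASE WHEN EXISTS (SELECT 1 FROM deleted d
                                 WHERE d.CustomerId = i.CustomerId)
                    THEN 'U' ELSE 'I' END
        FROM inserted i;
        -- Rows only in "deleted" are deletes.
        INSERT INTO dbo.Customer_SyncLog (CustomerId, Operation)
        SELECT d.CustomerId, 'D'
        FROM deleted d
        WHERE NOT EXISTS (SELECT 1 FROM inserted i
                          WHERE i.CustomerId = d.CustomerId);
    END;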
However, I have been advised to do some more research before settling on the best implementation methodology.
I need to keep the following things in mind,
Databases may/may not be on the same network.
On server side, the user must be able to select which tables will take part in the sync process.
Devices that will sync with the server database need to be registered first. Meaning that all client devices will be registered by the user before they can start syncing.
As usual any help will be appreciated :)
There is an open source project called SymmetricDS with many of the same goals. Take a look at the documentation and data model to see how the problem was solved, and maybe you will get some ideas. Instead of a separate shadow table for each source table, there is a single sym_data table where all the data is captured in comma separated value format. The advantage is one place to look for captured data and retrieve changes that were part of the same transaction. The table is kept small by purging it often after data is transferred successfully. It uses web protocols (HTTP) for data transfer. The advantage is leveraging existing web servers for performance, administration, and known filtering through firewalls. There is also a registration protocol used before clients are allowed to sync. The server admin "opens registration" for a client ID, which allows the client to connect for the first time. It supports many different databases, so you'll find examples of how to write triggers and retrieve unique transaction IDs on those systems.
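As a rough illustration of that single capture-table design (this is not SymmetricDS's actual schema, just the shape of the idea):

    -- One shared capture table instead of a shadow table per source table.
    CREATE TABLE dbo.sync_data (
        data_id    BIGINT IDENTITY PRIMARY KEY,
        table_name SYSNAME       NOT NULL,
        event_type CHAR(1)       NOT NULL,  -- 'I', 'U' or 'D'
        row_data   NVARCHAR(MAX) NULL,      -- column values as CSV
        txn_id     NVARCHAR(50)  NULL,      -- groups rows from one transaction
        created_at DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME()
    );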
I have two applications, each with its own database.
1.) A desktop application with a VB.NET WinForms interface; it runs on an offline enterprise network and stores data in a central database [SQL Server].
**All data entry and other office operations are carried out and stored in the central database.
2.) The second application is built on PHP. It has HTML pages and runs as a website in an online environment. It stores all its data in a MySQL database.
**This application is accessed by registered members only, and it provides them with various reports on the data processed by the 1st application.
Now I have to synchronize data between the online and offline database servers. I am planning the following:
1.) Write a small program to export all the data from SQL Server [the offline server] to a file in CSV format.
2.) Log in to the admin section of the live server.
3.) Upload the exported CSV file to the server.
4.) Import the data from the CSV file into the MySQL database.
Is the method I am planning good, or can it be tuned to perform better? I would also appreciate other good approaches to data synchronization, short of changing the applications (i.e. porting the networked application to something else that uses the MySQL database).
What you are asking for does not actually sound like bidirectional sync (movement of data both ways, from SQL Server to MySQL and from MySQL to SQL Server), which is a good thing as it really simplifies things for you. Although I suspect your method of using CSVs (which I assume you would produce with something like BCP) would work, one issue is that you are moving ALL of the data every time you run the process and basically overwriting the whole MySQL db each time. This is obviously somewhat inefficient, not to mention that during that time the MySQL db would not be in a usable state.
One alternative (assuming you have SQL Server 2008 or higher) would be to look into using this technique along with Change Tracking or Change Data Capture. These are capabilities within SQL Server that allow you to determine what data has changed since a certain point in time. You could create a process that extracts just the changes since the last time you checked to a CSV file and then applies those to MySQL. If you do this, don't forget to also apply the deletes.
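A minimal sketch of the Change Tracking variant (table and column names are made up; the tracked table needs a primary key):

    -- One-time setup.
    ALTER DATABASE MyDb SET CHANGE_TRACKING = ON
        (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);
    ALTER TABLE dbo.Orders ENABLE CHANGE_TRACKING;

    -- Each run: pull everything changed since the last sync version,
    -- including deletes, and export the result for MySQL.
    DECLARE @last_sync BIGINT = 0;   -- persist this value between runs
    SELECT ct.SYS_CHANGE_OPERATION,  -- 'I', 'U' or 'D'
           ct.OrderId,
           o.*                       -- NULL columns for deleted rows
    FROM CHANGETABLE(CHANGES dbo.Orders, @last_sync) AS ct
    LEFT JOIN dbo.Orders AS o ON o.OrderId = ct.OrderId;

    SELECT CHANGE_TRACKING_CURRENT_VERSION();  -- save as next @last_sync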
I don't think there's an off-the-shelf solution for what you want that you can use without customization - but the MS Sync Framework (http://msdn.microsoft.com/en-us/sync/default) sounds close.
You will probably need to write a provider for MySQL to make it go - which may well be less work than writing the whole data synchronization logic from scratch. Voclare is right about the challenges you could face with writing your own synchronization mechanism...
Do look into SQL Server Integration Services as a good alternative.